Flexible expressions could lift 3D-generated faces out of the uncanny valley

3D-rendered faces are a major part of big movies and games these days, but capturing and naturally animating them is no easy task. Disney Research is working on smoothing out this process with a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley.

Of course, this technology has come a long way from the wooden expressions and limited detail of earlier days. High-resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are not just limitless in variety; they're easy to get wrong.

Think of how someone’s entire face changes when they smile — it’s different for everyone, but there are enough similarities that we fancy we can tell when someone is “really” smiling or just faking it. How can you achieve that level of detail in an artificial face?

Existing "linear" models simplify the subtlety of expression, making "happiness" or "anger" minutely adjustable, but at the cost of accuracy: they can't express every possible face, yet can easily result in impossible ones. Newer neural models learn complexity from watching the interconnectedness of expressions, but like other such models their workings are obscure and difficult to control, and they may not generalize beyond the faces they learned from. They don't enable the level of control an artist working on a movie or game needs, and they can result in faces that (humans are remarkably good at detecting this) are just off somehow.
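To make "linear" concrete, here is a minimal sketch of a blendshape-style model, the standard linear technique in facial animation (the variable names and toy data are illustrative, not taken from Disney's paper):

```python
import numpy as np

# A linear ("blendshape"-style) face model: the mesh is a neutral face
# plus a weighted sum of per-expression vertex offsets.
neutral = np.zeros((5000, 3))                    # 5,000 vertices, xyz
smile_delta = np.random.randn(5000, 3) * 0.01    # stand-in offset data
jaw_open_delta = np.random.randn(5000, 3) * 0.01

def linear_face(weights):
    """Blend expressions by sliding each weight from 0.0 to 1.0."""
    return (neutral
            + weights["smile"] * smile_delta
            + weights["jaw_open"] * jaw_open_delta)

# Each slider is simple and art-directable...
face = linear_face({"smile": 0.7, "jaw_open": 0.2})

# ...but nothing constrains the weights to combinations a real face
# could produce, which is how linear models yield "impossible" faces.
impossible = linear_face({"smile": 1.0, "jaw_open": -1.5})
```

The sliders are what make linear models easy for artists to work with; the lack of any learned constraint between them is what makes the output unreliable.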

A team at Disney Research proposes a new model with the best of both worlds: what it calls a "semantic deep face model." Without getting into the exact technical execution, the basic improvement is that it's a neural model that learns how a facial expression affects the whole face, but is not specific to a single face, and is nonlinear, allowing flexibility in how expressions interact with a face's geometry and with each other.

Think of it this way: A linear model lets you take an expression (a smile, or a kiss, say) from 0-100 on any 3D face, but the results may be unrealistic. A neural model lets you take a learned expression from 0-100 realistically, but only on the face it learned it from. This model can take an expression from 0-100 smoothly on any 3D face. That's something of an oversimplification, but you get the idea.
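In code, the difference comes down to what the model takes as input. Below is a hedged sketch of that kind of interface, with identity and expression fed separately into a nonlinear decoder; the architecture and dimensions here are assumptions for illustration, not details from the paper:

```python
import numpy as np

def decode(identity_code, expression_code):
    """Stand-in for a trained nonlinear (neural) decoder mapping an
    identity latent plus an expression latent to mesh vertex positions.
    A real model would be a learned network; this is a placeholder."""
    rng = np.random.default_rng(0)   # fixed seed = fixed "weights"
    W_id = rng.standard_normal((5000 * 3, identity_code.size))
    W_ex = rng.standard_normal((5000 * 3, expression_code.size))
    mixed = W_id @ identity_code + np.tanh(W_ex @ expression_code)
    return mixed.reshape(5000, 3)    # 5,000 vertices, xyz

# Because identity and expression are separate inputs, the *same*
# expression code can be dialed from 0 to 100% on *any* face:
alice, bob = np.random.randn(64), np.random.randn(64)
smile = np.random.randn(32)

for strength in (0.0, 0.5, 1.0):     # the "0-100" slider
    face_a = decode(alice, strength * smile)
    face_b = decode(bob, strength * smile)
```

Under that setup, animating a whole crowd is just a loop over many identity codes with one shared set of expression codes, which is the "thousand faces" scenario described below.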


The results are powerful: You could generate a thousand faces with different shapes and tones, then animate all of them with the same expressions without any extra work. Think about how that could produce diverse CG crowds you can summon with a couple of clicks, or game characters with realistic facial expressions regardless of whether they were hand-crafted or not.

It's not a silver bullet, and it's only part of a huge set of improvements artists and engineers are making across the industries where this technology is employed: markerless face tracking, better skin deformation, realistic eye movements, and dozens more areas of interest are also important parts of this process.

The Disney Research paper was presented at the International Conference on 3D Vision; you can read the full thing here.


