From rigs to textures via mocap, the animation business is in a state of flux. Develop investigates

The changing face of game animation

As player expectations continue to rise, and static photorealism becomes a standard in triple-A gaming, the industry and public’s attention is now zeroing in on character animation.

At a time when stunning footage of L.A. Noire has made facial animation a talking point across the industry, the onus is on character animators of all kinds to leap the uncanny valley and take gaming with them.

The democratisation of technology means numerous techniques now compete for the attention of studios large and small, from audio-driven facial animation tools to desktop motion capture solutions used for, among other things, pre-visualisation and rapid prototyping.

But what trends today are shaping the in-game characters of tomorrow, and how will approaches to motion-capture, rigging, modelling and facial animation change?

HIGH FIDELITY

“The biggest development is getting a high enough fidelity on in-game characters to where nothing hinders an animator, whether that person is using performance capture or keyframing,” says Epic’s lead animator Jay Hosfelt, who is quick to highlight that performance capture is an approach that offers both pros and cons.

“Performance capture will always need the final touches of an animator, so it’s important to have a pipeline that makes it very easy for an animator to go in and polish.”

Certainly, integration of techniques seems to be important to most of those working with animated characters, and a model is emerging that sees studios harnessing multiple methods.

“The most exciting development is the tight integration of procedural animation with more traditional performance capture and key-framing,” suggests Torsten Reil, CEO of run-time animation engine specialist NaturalMotion.

“We’re now starting to see systems that seamlessly combine the two approaches. In addition to that, we can now use animation to instruct or influence the style of procedural motion, which is extremely powerful.”
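
To make that concrete, here is a minimal Python sketch of the general idea – authored keyframes blended with, and used to ‘style’, a procedural result. It is an illustration only, not NaturalMotion’s actual technology:

```python
import math

# Illustrative only: a per-joint lerp between an authored pose and a
# procedurally generated one. Real systems blend full skeletal transforms.

def authored_pose(t):
    """Keyframed joint angles (radians) for a toy three-joint chain."""
    return [0.4 * math.sin(t), 0.2, -0.3]

def procedural_pose(t, target):
    """Stand-in for run-time procedural motion, e.g. a reach behaviour
    easing each joint toward a target angle."""
    ease = 1.0 - math.exp(-2.0 * t)
    return [angle * ease for angle in target]

def blended_pose(t, target, weight):
    """weight in [0, 1]: 0 = pure authored animation, 1 = pure procedural.
    Intermediate weights let authored motion 'style' the procedural result."""
    a = authored_pose(t)
    p = procedural_pose(t, target)
    return [(1.0 - weight) * ai + weight * pi for ai, pi in zip(a, p)]

if __name__ == "__main__":
    for step in range(5):
        t = step * 0.25
        print([round(j, 3) for j in blended_pose(t, [0.8, -0.5, 0.1], 0.5)])
```

In a production engine the blend would operate on full per-bone transforms with individual weights, but the principle – animation data steering a procedural system – is the same.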

FORWARD THINKING

A mood of refinement, however, doesn’t mean innovation is stifled, and progress in the character animation space remains rapid, most notably in Team Bondi’s much-hyped ‘game noir’.

“Undoubtedly the successful implementation of surface-based animation by L.A. Noire’s engine is important,” says Nataska Statham, producer at art outsourcing and animation specialist Imagination Studios.

“Surface animation has been used by other engines before, but always on a limited scale. Being able to achieve good compression to successfully implement it on a large scale will bring games even closer to the level of quality of pre-rendered movies.”

As ever, Hollywood is one step ahead in the photorealism race, but the expectation is that within the next generation of consoles, real-time support for the likes of muscle deformation, cheaper cloth deformation, blendshapes and surface animation will become standard.

Another trend defining the future of character animation is one that sees the specialty cross paths with artificial intelligence, and already tool companies are gearing up to support such a convergence.

“One of the most interesting areas out there is creating characters that interact believably with, and navigate, their environment,” states Havok’s head of product management Andrew Bowell.

“We see a lot of cool problems when it comes to informing and responding to a pathfinder based on a character’s animation set. This only gets more challenging when you start working with dynamic environments and multiple characters.”

STEP OUTSIDE

Elsewhere, even ‘traditional’ optical mocap is changing, as actors and artists are beginning to escape the confines of the studio. “The recent development of shooting performance capture outdoors is particularly exciting for me,” reveals Ninja Theory’s visual art director Stuart Adcock.

“Not only is it good for the soul to be outdoors rather than in a dark room, but it also opens up the opportunity for actors to perform within the context of an outdoor environment.”

An example of the benefits of exterior mocap work would be actors squinting in the sun while performing a scene set in bright daylight, with the developer matching the position of the in-game sun to maximise realism.

The trends sculpting in-game characters today, however, are not all positive. Developers are striving to implement new technologies and techniques on five-year-old platforms, meaning hardware and engine limitations significantly constrain what can be implemented in games.

And the rush to keep up with progress doesn’t falter on hardware alone.

“We are at an odd plateau right now where the technology is ahead of our understanding of it,” says Imagination’s Statham.

“It will take some time before the animators and art directors catch up with the full potential that techniques such as performance capture have to offer.”

PERFORMING ARTS

Developers aren’t the only ones suffering as a result of industry pace. All too often, some suggest, mocap performers are still not given time to prepare for their roles, spend time with the director and fully understand the vision of the game.

“Lack of rehearsal time is a killer,” insists Audiomotion’s Mick Morris.

“If the performer knows his lines he can then use this as a base from which to improvise or play with different aspects of his character. However, if the studio time is eaten up by fudging lines due to lack of preparation then the director is going to have a hard time getting that outstanding performance.”

Morris certainly isn’t alone in advocating a more methodical approach to character animation, and over at Cubic Motion, taking time to prepare for motion capture work is a matter of principle.

“We custom build every solution based around the client’s character rigs,” confirms the company’s director Gareth Edwards.

“This means we choose to invest several days of set-up time to build a very customised ‘solver’ for every rig before we start producing animation.”

This set-up follows a strict protocol which Cubic Motion developed specifically with project scaling in mind, and is conducted by the art team and the technology team working in unison.

“The extra effort pays great dividends down the line, because it allows us to then produce very large amounts of animation cost effectively.”

Fortunately for Edwards and his contemporaries, across the huge pool of developers specialising in character animation there has been a move towards a more measured approach to progress: “What’s great for us is we are seeing much longer periods of preproduction. Clients are taking longer to plan for their performance capture,” says Morris.

“There’s a willingness to embrace both writers and directors who have amazing experience in narrative, in direction and in storytelling. Time invested in casting, finding the right director and planning properly for sessions is time truly well spent.”

Indeed, the preparation that led to the shooting of the famous and remarkably striking Dead Island trailer not once but twice is testament to the benefits of pre-visualisation and planning.

STRIKE A BALANCE

Looking to the future, specifically considering the realm of motion capture and body animation, optimism absolutely abounds. Several companies delivering distinct techniques are predictably convinced that their offering has the potential to reshape the entire discipline.

There is a growing sentiment that as mocap is increasingly used by animators as a key animation tool, the industry needs to strike a balance between cost, capture time, ease of use and what Stuart Brown, lead animator at inertial capture specialist Animazoo, highlights as the most important factor – quality of data.

“Optical [mocap] has been the weapon of choice for a number of years in the animation industry, but as inertial systems and pipelines get increasingly easy to use and cost-effective, we’ll definitely see the playing field open up as animators have a lot more choice when they choose their motion capture tools.”

Elsewhere, inverse kinematics specialist IKinema is convinced a move to procedurally generated animation during gameplay is set to emerge.

“The biggest challenge, in my opinion, is in how to provide an input to the artist in this process,” adds IKinema’s CEO Alexandre Pechev.

“The animator must and will play a central role in this by specifying the behaviour and ‘constraining’ the motion. This will inevitably require new tools and environments.”
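
To illustrate what specifying and ‘constraining’ motion can look like at the smallest scale, here is a textbook two-bone IK solver in Python with a crude joint limit bolted on – a generic law-of-cosines sketch, not IKinema’s solver:

```python
import math

# Generic two-bone IK in 2D via the law of cosines, with a simple joint
# limit standing in for an animator-specified constraint. Illustrative
# only; this is not IKinema's technology.

def two_bone_ik(tx, ty, l1=1.0, l2=1.0, elbow_min=0.1):
    """Return (shoulder, elbow) angles in radians so that a chain of
    bone lengths l1, l2 reaches toward (tx, ty). elbow_min keeps the
    elbow slightly bent, the way a rig constraint might."""
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-6)  # clamp to reachable range
    # Law of cosines: elbow = 0 means the chain is fully extended.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    elbow = max(elbow, elbow_min)  # apply the 'constraint'
    # Aim the shoulder at the target, offset by the bend's interior angle.
    inner = math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    shoulder = math.atan2(ty, tx) - inner
    return shoulder, elbow

if __name__ == "__main__":
    print(two_bone_ik(1.2, 0.8))
```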

Another trend dominating character animation is the rise of solutions, techniques and services aimed specifically at more convincing facial expression.

Titles like Uncharted 2 and Heavenly Sword have placed a spotlight on key characters delivering convincing facial animation performances, and consumers are lapping up the results whilst having their expectations lifted significantly.

“As with any kind of motion capture, facial animation and lip syncing is all about the subtleties,” says Realtime UK’s CG director Ian Jones, who is keen to see the industry move on from processes that capture only an approximation of a given performance, requiring the actor to exaggerate and overact in order to communicate it.

“The results can feel rather hammy,” he says. “Even with the most extreme expressions, there are tiny details and nuances that need to be visible to really sell the emotion to the audience; otherwise the character can feel numb or dead.”

The ripple from the impact of high-quality facial animation is also far-reaching, piling pressure on those working in disciplines previously used to dealing with less convincing character realism.

“As the mocap fidelity of an actor’s performance increases, so does the emphasis on the quality of an actor’s performance and the material they are provided,” says James Comstock, VP of engineering and production at Captive Motion.

“Since we can now achieve such high fidelity with facial mocap, a great script and performance by an actor will make your scene shine while cheesy dialogue and a bad performance will make you cringe.”

SOWING THE SEEDS

That sea change has meant an increased focus on performance, with companies like Side leading the charge.

“We work with a number of different providers using a multitude of approaches, and whether that’s head-mounted or marker-based facial capture, full performance capture, or a combination, they all have advantages and disadvantages,” confirms Side’s creative director Andy Emery, who argues that whatever the approach, ensuring the process doesn’t get in the way of the performance is paramount. After all, it’s the performance that generates the character in character animation.

“For us, the key is casting and direction,” he adds. “We use fully filmed auditions and sometimes even cast for likeness. We want the best actors for the job and then we make sure we use professional directors to get the performance required.”

Of course, not every project has the budget – or even the need – for facial capture, meaning other, less data-heavy solutions flourish, with audio-driven technology being a prime example.

Projects that require translating or dubbing into multiple languages, for example, need a solution that won’t require multilingual performances under the costly glare of a full mocap studio. In instances like these, where a sound recording serves as the lowest common denominator, quick, affordable and scalable audio-driven solutions like that provided by FaceFX shine.
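
At its core, the audio-driven idea reduces to mapping speech sounds to mouth shapes over time. The toy Python sketch below shows the simplest possible version – a phoneme-to-viseme lookup producing sparse keyframes. Both the mapping table and the helper are invented for illustration; production tools such as FaceFX do far more, including deriving the phonemes from the audio itself and handling coarticulation:

```python
# Toy phoneme-to-viseme mapping. Both the table and the phoneme labels
# (ARPAbet-style) are illustrative; real audio-driven tools derive the
# phoneme stream from the recording and smooth the resulting curves.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "IY": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "dental", "V": "dental",
}

def visemes_from_phonemes(phonemes):
    """phonemes: list of (label, start_time_seconds) pairs.
    Returns sparse viseme keyframes, merging consecutive duplicates."""
    keys = []
    for label, start in phonemes:
        viseme = PHONEME_TO_VISEME.get(label, "neutral")
        if not keys or keys[-1][0] != viseme:
            keys.append((viseme, start))
    return keys

if __name__ == "__main__":
    # Roughly the word 'mama': M AA M AA
    print(visemes_from_phonemes(
        [("M", 0.00), ("AA", 0.12), ("M", 0.25), ("AA", 0.37)]))
```

Because the input is just a sound recording plus timing, the same pipeline works unchanged for every localised language track – the property that makes the approach so cheap to scale.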

Speaking with the FaceFX team, another trend becomes clear: something very exciting, and only possible as facial animation becomes convincing and practical without blowing the budget.

“In our real lives what we do is interact through facial expressions. If we can’t simulate that, then there’s a large percentage of life that games can’t be based on or tap into,” says FaceFX’s CEO and co-founder Doug Perkowski.

“Facial animation is key because if we can simulate human interaction realistically, we can create gameplay built around those interactions and make them fun.”

It’s an intriguing idea, and one almost within the public’s grasp as L.A. Noire introduces reading faces as a gameplay mechanic. But Perkowski is already looking forward, and is filled with ideas about how players will soon be able to assume control of facial expressions in-game. And he’s not alone.

“With new peripherals for games like Kinect, we now open an even wider door of possibilities,” suggests Statham. “For the first time we have a form of entertainment where we can truly be the protagonist of our own story, not only in our imagination, but with an actual sensory response.”

Fully immersing the player inside the game environment has certainly captivated the consumer, meaning an interesting phase of technological evolution lies ahead of us.

That’s all well and good, but what of the increasing complexity of character animation workflows as studios try to convey the subtlety required to deliver convincing performances?

“More and more use is being made of advanced graphics techniques such as per-frame normal and texture maps,” states Dr Colin Urquhart, CEO of Dimensional Imaging, which uses 3D scanning technology as a facial animation technique.

“However, the question is how to generate this type of data in a way that remains cost effective for video game projects?”

It’s an important question, and one the industry must address as the gap between technological progress and affordability widens. Still, there is a school of thought that suggests that, in fact, character animation is becoming both more affordable and more accessible as more studios and industries embrace the technology.

SPOILT FOR CHOICE

With that accessibility comes competition; a race between the various service and tech companies trying to woo studios looking to breathe life into their characters.

Choosing the right technique, or the right combination of techniques, is remarkably important, as Adcock points out: “There are now lots of different methods of making characters look very real, with advances in scanning actors and in modelling skin, so we’re really under pressure to match the lifelike looks with lifelike movement, expression and speech.”

All of the emotional engagement that companies like Ninja Theory attempt to build with the player can be lost in an instant if, for example, a limb moves in an unnatural, unbelievable way, or the expression on the character’s face doesn’t quite match what they’re saying. Getting it right is vital.

“In our games we combine hand-animated movements with those driven by performance capture, so our challenge is to seamlessly merge the two together in a realistic way,” confirms Ninja Theory’s Adcock.

“We prefer to utilise optical markers on the face for full performance capture,” offers Morris, who advocates adopting a traditional filmmaking methodology with a camera crew, sound crew, technicians and the director on set.

“Shooting on a soundstage recording full body, fingers, faces and final audio is the most effective way to capture all of the elements of a good performance.

“The actors are free to move around un-tethered on the stage – unhindered by the technology, free to get into character and focus on what they were hired to do: give a truly believable performance.”

Facial animation outfit Image Metrics is also an advocate of the importance of the actor’s performance.

“The key driving factor to successful performance capture or character animation is simply one thing: performance,” claims VP of product management Nick Ramsay, who asserts that whether performance is derived from an actor or defined by an animator, no technology can completely replace the need for performance in creating convincing and believable animation.

“Our tool Faceware perfectly utilises both the performance of an actor and the skilled hand of an animator to effortlessly produce highly believable facial animation driven by performance capture, allowing artists to have as much creative input into the final result as is needed.”

Elsewhere, mocap outfit Animazoo is – of course – an advocate of its own inertial motion capture system. “Inertial systems are portable and easy to use by design,” states lead animator Brown.

“This means that you can mocap in pretty much any environment, without the need for perfect lighting and dozens of optical sensor cameras – and there isn’t such a steep learning curve.”

This flexibility is designed to let animators create and direct their own animations, without having to rely on optical system experts.

“Also, inertial systems do not suffer from inherent problems such as marker swapping and occlusion – when the optical systems lose track of the body markers. Then there are skin artefacts. The list goes on.”

THE FABRIC OF LIFE

So far the emphasis here has been on capturing and animating the movements that make characters in games more convincing. But what of all the other elements that prevent the illusion being broken? Rigs and skins and textures are just as important as mocap techniques, but attract far less attention.

When it comes to providing real-time cloth, Havok is one of the most prominent companies, and it sees the practice defined by two key developments in technology.

“The first is a set of in-modeller tools that puts the authoring, rigging and tuning of character clothing in the hands of artists, both creative and technical,” says the company’s Bowell.

“Ultimately, adding cloth to characters dramatically enhances the visuals of the game. To that end, artists play a key role in rigging and tuning cloth and need powerful tools to make it look just right.”

The second technical development is a highly optimised runtime to simulate the cloth in-game. Here, optimising means not only developing algorithms that leverage today’s multi-core SIMD architectures, but also crafting algorithms that perform the computation cleverly, and only when needed.
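
One common family of techniques for this kind of runtime – though Bowell doesn’t specify Havok Cloth’s internals – is Verlet integration with iterative constraint relaxation, popularised in games by Thomas Jakobsen’s ‘Advanced Character Physics’. A minimal Python sketch:

```python
import math

# Verlet integration with iterative constraint relaxation: one common
# real-time cloth technique (not necessarily Havok's internals). Particles
# store current and previous positions; velocity is implicit.
GRAVITY = (0.0, -9.8)
DT = 1.0 / 60.0

def verlet_step(pos, prev, pinned):
    """Advance all unpinned particles one timestep."""
    for i in range(len(pos)):
        if i in pinned:
            continue
        x, y = pos[i]
        px, py = prev[i]
        prev[i] = (x, y)
        # New position = current + implicit velocity + gravity.
        pos[i] = (x + (x - px) + GRAVITY[0] * DT * DT,
                  y + (y - py) + GRAVITY[1] * DT * DT)

def satisfy_constraints(pos, constraints, pinned, iterations=4):
    """constraints: (i, j, rest_length) tuples. Nudge particle pairs toward
    their rest length; repeating the pass lets corrections propagate."""
    for _ in range(iterations):
        for i, j, rest in constraints:
            (x1, y1), (x2, y2) = pos[i], pos[j]
            dx, dy = x2 - x1, y2 - y1
            dist = math.hypot(dx, dy) or 1e-9
            corr = 0.5 * (dist - rest) / dist
            if i not in pinned:
                pos[i] = (x1 + dx * corr, y1 + dy * corr)
            if j not in pinned:
                pos[j] = (x2 - dx * corr, y2 - dy * corr)

if __name__ == "__main__":
    # A two-particle 'rope': particle 0 pinned, particle 1 hanging below.
    pos = [(0.0, 0.0), (0.0, -1.0)]
    prev = [p for p in pos]
    for _ in range(60):  # simulate one second at 60Hz
        verlet_step(pos, prev, pinned={0})
        satisfy_constraints(pos, [(0, 1, 1.0)], pinned={0})
    print(pos)
```

The appeal for games is that the integrator is cheap, stable and embarrassingly parallel across particles, which maps well onto the multi-core SIMD hardware Bowell mentions.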

“There has been a clear progression from the movie special effects space where cloth was a purely offline process, to games, where the algorithms have been specially modified to run in a considerably smaller time slice,” concludes Bowell.

A generic challenge, not directly related to cloth but one that impacts it, is that of content generation.

“Any modern character driven game has a number of options when it comes to clothing,” explains Bowell.

“The first option is to model the cloth as part of the character’s skin and have animators animate the cloth. This option adds a considerable number of new clothing animations for each character, the number of which scales up quickly. Often what happens is only a single garment gets animated – for example a tie or a cloak – and as the animations are cyclical, the results typically don’t look very realistic.”

The second option is to simulate the cloth with a real-time system such as Havok Cloth. In this case once an artist has rigged up the properties of Havok Cloth, the simulation can take care of the rest.

This option means that the artist still has a lot of control over how the cloth looks and feels but the burden of creating many animations is no longer necessary.

A sizable challenge in the cloth space is to create material that looks good at all times.

“The range and speed of movement that often goes on within a game can easily break the illusion. Add to this that you are often not animating to a set camera, and the challenge is to make a cloth solution that can be honed, constrained and posed by animators. Creating real-world physics will only get you so far,” offers Ninja Theory’s Adcock.

BIG RIG

Over in the rigging space, a paradigm shift is under way that is making the order of the process more dynamic.

“One of the biggest issues we have with rigging is that it’s traditionally a rather linear process. In the past, starting the rigging process pretty much meant the character model had to be complete,” says Jones, highlighting a fact that can cause substantial problems when a client needs to change or swap out a character at the last minute. It also meant that rigging tended to happen late in the schedule. Things are changing, however.

“By taking advantage of proxies and wrapping techniques, we’re able to work on a character’s rig entirely independently from the model and then apply the model to the rig at a much later date,” says Realtime UK’s Jones. “This allows us to invest a lot more time developing and refining each rig in tandem with the character’s development.”

Skinning is another area where new developments are helping developers offer greater realism with better efficiency.

“One very exciting recent development has been advances in the ability to model skin interaction,” suggests Adcock.

“For example, when you move your chin down to your chest the skin around your body is being stretched and folded in lots of different places around your neck, chin and the upper chest. We can now model all of this, making things just that little more lifelike and much more believable.”

PIPING UP

Character animation pipelines are also changing, responding to consumer expectations for a greater number of believable characters in-game and on-screen.

Perhaps nobody knows more about this than The Creative Assembly, whose famously busy battlefields have become a trademark of the Total War series.

“For Total War, it’s imperative that we keep the pipeline and workflow as simple as possible,” reveals the studio’s Alston.

“Because the series deals with thousands of characters on screen, very tight constraints are imposed on our characters, and the breadth and depth required for the games means we are dealing with thousands of animations.”

Not including cinematics, Total War: Shogun 2 featured just under 3,000 in-game character animations. Each was created in 3ds Max before being exported into MotionBuilder, in which the team created the animation control rig. Depending on the character and situation, Creative Assembly then either hand-keys the animation or imports the mocap into MotionBuilder.

“Keeping all the hand-keying and mocap editing in one package makes the animation process a lot more efficient for us. From MotionBuilder we import our animation data straight into the game.”

It’s an impressively succinct approach, and one that can still be improved, according to the company’s lead character artist Chris Waller: “The more DCC applications open up to working with scripting languages, the better the flexibility in the pipeline. From my perspective, gone are the days of endless reworking and tangles of pipeline problems relying on only the software’s shipped features to find a workaround.”

Now artists like Waller have the flexibility to do almost anything their pipeline requires, and to store any and all meaningful data from the skinning process for intelligent reuse on similar asset types.
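
As an illustration of the kind of glue scripting Waller describes, the hypothetical Python below serialises skin weights so they can be remapped onto a similar rig. The data layout and helper names are invented for this sketch; in practice the weights would come from the DCC’s own scripting API:

```python
import json

# Hypothetical pipeline helper: persist skin weights for reuse on similar
# assets. The weight dictionary would come from the DCC's scripting API;
# everything here is an invented sketch, not a shipping tool's interface.

def save_weights(mesh_weights, path):
    """mesh_weights: {vertex_id: {bone_name: weight}} (ids as strings,
    since JSON object keys are always strings)."""
    with open(path, "w") as f:
        json.dump(mesh_weights, f, indent=2)

def load_weights(path, bone_rename=None):
    """Reload stored weights, optionally remapping bone names so the data
    can drive a similar rig with a different naming convention."""
    with open(path) as f:
        weights = json.load(f)
    if bone_rename:
        weights = {
            vid: {bone_rename.get(bone, bone): w for bone, w in bones.items()}
            for vid, bones in weights.items()
        }
    return weights

if __name__ == "__main__":
    save_weights({"0": {"spine": 0.7, "pelvis": 0.3}}, "torso_weights.json")
    print(load_weights("torso_weights.json", bone_rename={"spine": "spine_01"}))
```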

As to the future of character animation pipelines, Creative Assembly is ever watchful. “Node-based workflows in applications such as Softimage allow bespoke skinning solutions to be prototyped and tested without soaking up programmer time, which is an exciting feature, though one we have not yet fully exploited,” says Waller.

It isn’t only studios that are having to focus attention on their character animation pipelines. Over on the other side, tech and service providers are having to ready their offerings for contemporary pipelines.

“Character animators don’t want to be stressed with pipeline issues; it is all about the creativity for them,” admits Hein Beute, product manager of inertial motion capture specialist Xsens. “The challenge for us is to make the product fit in the pipeline. If an animator has an idea in the morning, it needs to be in game by the afternoon, without too many technical issues and too much post-processing.”

LOOKING AHEAD

With so many options from rig right through to cloth, the character animation space is certainly one of the most intricate in the business. To this day new ideas and approaches abound, and as a result convergence has become a prominent buzzword in the sector.

“What works for mocap might be tricky for a keyframe animator, and vice-versa. We try to bridge the gap between rigs – and can animate pretty much anything – but any type of convergence, even in general philosophy, would be a good thing,” says Edwards of the future.

“The specific challenge for Cubic Motion and other specialist studios is to convince developers that letting a team of external experts take on this most demanding of jobs is nearly always a better plan than trying to put a large team together in-house.”

Undoubtedly, the aforementioned convergence of methods will let game developers achieve the best results possible, and save time and money through better integration into modern workflows. It will also help the collective effort needed to cross that ubiquitous chasm over which animators and roboticists obsess: the uncanny valley.

“Film will get there first and games will not be far behind,” predicts Hosfelt, who has a keen sense of which technological avenue leads most directly to the valley.

“There are some new technologies that digitally scan an actor’s face as he or she performs. It’s essentially 3D video capture.

“If you’re shooting for reality then that may be the best bet going forward. If you want to mimic reality, just record it. For now, the best CG human performances are the ones that are interpreted and filtered through an artist or animator.”

The problem with this greatest of realism milestones, inevitably, is that the closer one gets to crossing the uncanny valley, the harder the task in hand becomes; a fact that makes some doubt whether such an ambitious goal is even possible.

“I am not sure that it will ever be crossed completely in that it will be possible to create purely synthetic virtual characters that are indistinguishable from a real life person,” professes Urquhart.

Possible or not, expectation is high, and the fascination with realism is one that is not going away. “There is certainly a lot of pressure from within the industry and also from the audience to try and achieve this leap,” says Statham.

“Many have come very close, especially when it comes to still images. Whether or not we will be able to cross the uncanny valley, we can certainly create lifelike characters that evoke strong empathy and emotional response from the public.”

“I don’t think in-game animation has come out of the uncanny valley yet,” says Reil, adding: “Believable and interactive run-time animation is still an unsolved problem in most games.”

Fortunately the technology to change this is becoming available, and people across the industry are becoming more experienced in authoring run-time animation as opposed to animation clips.

“What we’re currently seeing is the beginning of much wider-scale adoption of new animation run-time techniques, whether it’s from us or one of our competitors. This is a really exciting trend,” concludes the NaturalMotion boss.

Overall, there is optimism about character animation’s future, and the discipline is driving forward with incredible pace. Defined by innovation, diversification and progress, the space’s challenges are largely about refinement and perfection.

Whether the industry can cross the valley or not, one thing is certain: today’s video game characters are becoming remarkably lifelike, and their capacity to change the way consumers interact with games now seems an absolute certainty.
