With the PS4 and Xbox One now upon us, Develop looks at what the new hardware can unlock for animators
Even over the longest console generation yet – Microsoft’s Xbox 360 launched as far back as 2005, eight years ago – progress in the character animation space has been steady at best, bordering on slow, in recent times.
Some advancements have come as devs got to grips with the ageing hardware, which can be seen over time in franchises such as the Gears of War and Mass Effect series, with each entry steadily upping the ante graphically.
Such improvements in character animation have arguably now peaked on the cusp of the next generation of consoles. The likes of Naughty Dog’s The Last of Us, Quantic Dream’s Beyond: Two Souls and Telltale Games’ artistically styled The Walking Dead show just how far the sector has come in the past eight years, and how important this space is to game narrative and player immersion.
The smoothness of these animations is critical. For players to feel immersed, they must believe in the world around them, and not be put off by strange facial animations or odd reactions to the physical world.
It’s no wonder then, with the PS4 and Xbox One launching this month, that a host of the world’s biggest developers and publishers have invested in new state-of-the-art game engines – EA’s Ignite and Frostbite 3 development platforms, Kojima Productions’ Fox Engine and Square Enix’s Luminous engine – as player expectations for visuals and character animation jump alongside the huge leap in hardware specs.
A plethora of enhancements in character and facial animation have also been made in the third-party middleware space. Epic’s Unreal Engine 4 features a new integrated toolset called Unreal Persona, built on top of the Blueprint visual scripting system, which offers blend spaces and state machines and also allows animators to create a playable character without the aid of a programmer.
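A 1D blend space of the kind such toolsets expose can be pictured as mapping a gameplay parameter, such as movement speed, onto a pair of clips and a blend weight. The sketch below is illustrative Python, not Persona’s actual API; the clip names and speed thresholds are invented.

```python
# Illustrative 1D blend space: given a movement-speed parameter, find the
# two neighbouring animation clips and the blend weight between them.
# Clip names and thresholds are made up for the example.

def blend_space_1d(samples, param):
    """samples: list of (param_value, clip_name), sorted ascending.
    Returns (clip_a, clip_b, weight_of_b)."""
    if param <= samples[0][0]:
        return samples[0][1], samples[0][1], 0.0
    if param >= samples[-1][0]:
        return samples[-1][1], samples[-1][1], 0.0
    for (p0, a), (p1, b) in zip(samples, samples[1:]):
        if p0 <= param <= p1:
            t = (param - p0) / (p1 - p0)   # normalised position in segment
            return a, b, t

locomotion = [(0.0, "idle"), (180.0, "walk"), (375.0, "run")]
print(blend_space_1d(locomotion, 90.0))  # halfway between idle and walk
```

A state machine then sits above this, choosing which blend space or clip is active (idle, locomotion, jump) and how to transition between them.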
Unity 4 meanwhile also includes character animation tool Mecanim, and version 4.3 will add BlendShapes, helping developers create realistic and emotional facial animations.
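Blend shapes (also called morph targets) deform a mesh by adding weighted per-vertex offsets to a neutral pose – a smile, for instance, is a set of vertex deltas dialled in by a weight between 0 and 1. A toy sketch, with the mesh and shape data invented for illustration:

```python
# Minimal blend shape (morph target) sketch: the deformed mesh is the
# neutral mesh plus a weighted sum of per-shape vertex deltas.
# Vertices, shapes and weights here are purely illustrative.

def apply_blendshapes(base_verts, shapes, weights):
    """base_verts: list of (x, y, z); shapes: {name: list of per-vertex
    deltas}; weights: {name: 0..1}. Returns the deformed vertex list."""
    out = []
    for i, (x, y, z) in enumerate(base_verts):
        for name, deltas in shapes.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 0.2, 0.0), (0.0, 0.3, 0.0)]}
print(apply_blendshapes(base, shapes, {"smile": 0.5}))  # half-strength smile
```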
Mixamo’s new free character creator Fuse allows developers to create new 3D models from scratch or select from an array of rigged 3D character models. The tech effectively acts as a ‘free-to-play’ character animation tool.
The company has also just launched Face Plus, a facial animation technology integrated into the Unity engine that allows developers to capture an actor’s facial performance with a webcam and apply it to a 3D character in real time.
A believable performance
Mixamo CEO and co-founder Stefano Corazza says progress in character and facial animation has become a key part of game development, particularly in narrative-driven titles, as developers look to drive the aforementioned immersive experience.
“With storytelling becoming an increasingly important part of games, it is crucial for developers to deliver high quality character animations, including facial animation. Games like The Last of Us, Tomb Raider, GTA V, Assassin's Creed III and Heavy Rain are setting player expectations and gamers now pay even more attention to story, voice acting and animations. They look more and more for immersion in their experience,” says Corazza.
Colin Urquhart, CEO of facial performance capture firm Dimensional Imaging, agrees that the need to convey strong and emotional performances in many games is a driving force behind the character and facial animation sector, with demand increasing for such tech as developers continually raise the bar to create more film-like experiences.
“The demand for more engaging and emotive performances is driving the need for more realistic facial animation,” he says. “We are also seeing much more use of ‘digital doubles’, with casting for likeness, than ever before. The use of head-mounted cameras to capture facial performance simultaneously with traditional body mo-cap is also increasing – and there is huge demand to improve the fidelity of capture from such devices.”
Unity software engineer Pierre Paul Giroux says this increased use of performance capture to gather body motion, face motion and audio at the same time is one of the most important recent trends in animation. Such a process was, until recently, incredibly expensive and affordable only to large studios; cheaper solutions have only just begun to emerge.
“Performance capture greatly enhances character integration into virtual worlds. Actors can adjust performances based on direct visualisation into the game. There’s no more loop time between capturing, cleaning and integrating animations into the game engine,” he explains.
But is this drive for complete performance capture and life-like character animation significantly driving up the costs of development? Achieving realism isn’t cheap, and with modern triple-A budgets in excess of tens of millions of pounds, there is a danger that costs will spiral out of control in the next generation with its powerful new hardware.
NaturalMotion lead technical animator Simon Mack says there is an ongoing drive to increase character fidelity, and the quality of characters and their animations needs to once again keep pace with the new systems. He believes interactions between characters and dynamic environments will become increasingly important.
Mack warns however that with such an increase in power and the new capabilities available to developers, keeping control of the budget and managing time is one of the key challenges that lies ahead.
“It may not sound terribly exciting, but one of the biggest issues is managing development time and cost, particularly as studios re-work their engines for next-gen. There’s just an awful lot to do. Naturally, we see middleware as a key solution. Used properly, it allows teams to focus on building the things that make their game unique rather than reinventing the wheel.”
Urquhart agrees that animation costs are continuing to rise, and notes that as graphical realism and fidelity advance, the quality of facial animation must also increase to match.
“One of the key challenges the industry is facing is the spiralling cost of continually improving the quality of facial animation. How can studios create hours of facial animation at ever-higher levels of detail, quality and realism within reasonable budgets?” says Urquhart.
“I think the answer has to be to use technology like Dimensional Imaging’s to accurately capture the performance from real life talent. However, this then introduces the challenge that as the realism and fidelity of the animation improves, so the quality of the performance must also improve to match. I see the secondary challenge of obtaining good acting performances becoming increasingly important to the industry.”
Iterating on animation
Some of the more obvious challenges lie in animating and rigging, which Brad Peebler, president of the Americas at Modo developer The Foundry, says suffer from a poor feedback process.
The contrasting natures of creatively driven animation and technically focused rigging create what Peebler calls a highly inefficient and oftentimes painfully slow workflow, one in which judging each change meaningfully can take minutes.
“Rigging is hard. Animation is hard. That’s the real throttle on the animation market and process,” he says.
“Both of these areas suffer from poor feedback. Unlike a model, where you can see the shape taking form in a millisecond, rigging and animation still require seconds and minutes to see a result in a meaningful way. Since animators tend to be less technically focused and rigging is incredibly technical, there is an implicit feedback loop required, yet the linear nature of these two processes makes it highly inefficient, painful and downright punishing.”
Unity’s Giroux adds: “Lowering the iteration time between animation authoring – mo-cap or other – and visualisation in-game is an important challenge.
“Also, game developers are animating more and more in-game, whether we’re talking about additional characters or simpler objects, which increases the complexity of the data. That means it’s important to have smarter workflows that allow animators to arrange and assemble animations efficiently with high-level tools.
"While neither of these challenges are new, the huge increase in the amount of data introduced with next-gen consoles – which will translate into more animation – will make them more important than ever.”
Another challenge facing developers is ensuring realistic interactions between the game’s characters and the physical world they inhabit. While some advancements have been made in this area, which can be seen in titles such as Naughty Dog’s Uncharted and The Last of Us, there are still difficulties ensuring characters correctly respond to the world around them when the user is in control.
Characters skating across different terrain without being realistically grounded, or animating oddly when climbing stairs, are just two examples of the difficulties facing developers in this area, alongside ensuring NPCs react believably to objects placed in their way.
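One widely used fix for such grounding problems is a short inverse kinematics pass per foot: sample the terrain height under the foot, then solve the leg’s joint angles so the foot lands there. A minimal two-bone IK sketch via the law of cosines, in 2D with invented bone lengths – not any specific engine’s implementation:

```python
import math

# Two-bone IK sketch for foot placement: given thigh and shin lengths and
# the desired hip-to-foot distance (from a terrain height sample), solve
# the interior hip and knee angles analytically. Numbers are illustrative.

def two_bone_ik(thigh, shin, target_dist):
    """Returns (hip_angle, knee_angle) in radians. The target distance is
    clamped to the reachable range so acos stays in its domain."""
    d = max(abs(thigh - shin) + 1e-6, min(target_dist, thigh + shin - 1e-6))
    cos_knee = (thigh**2 + shin**2 - d**2) / (2 * thigh * shin)
    cos_hip = (thigh**2 + d**2 - shin**2) / (2 * thigh * d)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    hip = math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip, knee

# Foot raised onto a step: the hip-to-foot distance shortens, the knee bends.
hip, knee = two_bone_ik(0.45, 0.42, 0.80)
print(math.degrees(knee))
```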
Mack says NaturalMotion has tried to solve this with its own runtime character animation and graphical authoring environment Morpheme, which it recently launched with simulation tech Euphoria. This, he hopes, will help developers to push character physics to the next level.
“The interaction of characters with a physically simulated environment is a problem we’ve been working on for over a decade,” states Mack. “I believe that Euphoria has long provided the most advanced and flexible solution for dynamic character animation. The major problem has been the level of expertise required in order to be able to integrate and use it to best effect. That’s why so much of our focus has been on simplifying the authoring experience. Now that we’ve launched Morpheme with Euphoria, we hope that we’ll see developers using our tools to push character physics further than ever before.
“Another key area is the interaction between AI planning and animation that’s required to have characters respond believably to their surroundings. We have been working hard on tools to assist with this, and Morpheme includes our Prediction Model system. This can provide a game’s AI code with the information it needs about animation capabilities to make smart control decisions at runtime.”
Animation on the move
Away from consoles, the booming mobile sector offers its own unique challenges in character animation. While the tech behind smartphones and tablets has improved rapidly, it has arguably not yet surpassed the current generation of consoles, and appears a long way off exceeding the next-gen machines.
Epic lead technical animator Jeremy Ernst says the challenges facing devs creating games for cutting-edge mobile devices are similar to those faced on the PS2 and original Xbox. He highlights potential performance issues with real-time physics and cloth for in-game characters that can prove a limiting factor for many titles.
“Bone count and influences per vertex are still limiting factors, though even the bone count was pushed quite far on Infinity Blade, similar to those we used in Gears of War 3,” he explains. “Real-time physics and cloth for characters is still a performance burden as well. These are limitations generally associated with the hardware, though.”
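The bone-count and influences-per-vertex limits Ernst mentions come from linear blend skinning: every vertex is transformed by each bone that influences it, and the results are averaged by weight, so capping influences (typically at four) bounds the per-vertex cost. An illustrative miniature with translation-only “bones”, invented for the example:

```python
# Linear blend skinning in miniature: a vertex follows a weighted average
# of the transforms of the bones that influence it. Real engines use full
# 4x4 bone matrices; translation-only offsets keep this sketch tiny.

def skin_vertex(pos, influences, bone_offsets):
    """influences: list of (bone_index, weight), weights summing to 1.
    bone_offsets: per-bone (dx, dy, dz) for the current frame."""
    x = y = z = 0.0
    for bone, w in influences[:4]:          # hard cap: 4 influences/vertex
        dx, dy, dz = bone_offsets[bone]
        x += w * (pos[0] + dx)
        y += w * (pos[1] + dy)
        z += w * (pos[2] + dz)
    return (x, y, z)

offsets = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]    # bone 1 moved up one unit
print(skin_vertex((1.0, 0.0, 0.0), [(0, 0.5), (1, 0.5)], offsets))
# vertex half-follows bone 1: (1.0, 0.5, 0.0)
```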
NaturalMotion’s Mack asserts however that, from a technical standpoint, mobile is not far away from the current generation of consoles. But he warns that many mobile developers are inexperienced in creating 3D characters and animations, and questions whether, at least for now, these studios are ready to take advantage of better hardware.
He also believes in the past mobile developers have stayed away from many external tools, but says as the hardware gets more powerful, developers may open up to third-party offerings and create more ambitious 3D titles.
“From a technical point of view, mobile is not so different to console now. Obviously the complexity and size of assets has to be managed, but at NaturalMotion Games we’re using Morpheme and Euphoria to great effect on mobile,” he says.
“A bigger issue in mobile is that many developers have little or no experience building 3D games and 3D characters. There’s a lot to learn, and development budgets have to accommodate the extra work involved.
“Mobile studios have also historically been reluctant to license middleware other than complete engine solutions.
“As the complexity of mobile games increases and budgets rise to match, we may see this change and open up greater possibilities for animation on mobile.”
The next-gen step
As hardware across the board gets more powerful with the PS4 and Xbox One, the future of character animation looks at once promising and extremely challenging.
As more games take a narrative-driven approach and expectations rise, so will the costs of character animation as developers strive to create believable characters.
Dimensional Imaging’s Urquhart claims the next-gen jump could require game assets and performances on a par with those in TV and film, with more complex rigs created for increasingly realistic facial animation.
“The levels of quality and realism that could previously only be achieved with offline rendering will be able to be achieved in real-time on next-gen consoles,” he says.
“I therefore think that we are truly entering an era of convergence, where the quality of both the assets and performances required in games will be much more on a par with those required for television and movies. In terms of facial animation, the additional resources available in new consoles and engines will allow much more complex rigs that will be able to deliver more realistic animation and convincing performances than ever before. Traditional rig based facial animation will also begin to be augmented or even replaced by vertex cache driven animation, animated detail maps and animated textures.”
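The vertex-cache approach Urquhart mentions trades runtime rig evaluation for playback of baked per-frame vertex positions, interpolating between frames at display time. A toy playback sketch, with the frame data and frame rate invented for illustration:

```python
# Vertex-cache playback sketch: vertex positions are baked per frame
# offline, then sampled with linear interpolation at runtime instead of
# evaluating a facial rig. All data here is illustrative.

def sample_vertex_cache(frames, t, fps=30.0):
    """frames: list of vertex lists, one per baked frame; t: seconds.
    Returns interpolated vertex positions at time t (looping)."""
    f = t * fps
    i = int(f) % len(frames)
    j = (i + 1) % len(frames)
    a = f - int(f)                           # blend factor within segment
    return [tuple((1 - a) * p + a * q for p, q in zip(v0, v1))
            for v0, v1 in zip(frames[i], frames[j])]

# One-vertex "mesh" baked at 2 fps for the toy example.
frames = [[(0.0, 0.0, 0.0)], [(0.0, 3.0, 0.0)]]
print(sample_vertex_cache(frames, 0.25, fps=2.0))  # halfway between frames
```

The cost is memory and streaming bandwidth rather than rig evaluation, which is why it pairs naturally with the larger budgets of next-gen hardware.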
On the subject of facial animation, Mixamo’s Corazza adds: “The next step for character animation is going to be about facial animation first and foremost. Cutscenes showing static, lifeless characters without facial expressions, or slapping a mask or helmet on characters to avoid dealing with facial animation, will not be accepted by players for much longer, as they are getting more and more familiar with great facial animations.”
Mack also says that next-gen games will be required to push the boundaries of physics simulation further than ever before, and characters will need to respond realistically and intelligently if this new wave of titles is to match up to the hardware and high consumer expectations.
“Characters are going to have to be able to respond intelligently to these dynamic environments,” he explains.
“Motion synthesis and other procedural and adaptive animation technologies are going to be increasingly important. The rise in graphical quality will also put a spotlight on animation quality as a whole, and many of the shortcuts used in games now may not hold up to the fidelity required.”