Shown off at SIGGRAPH and reported by Ars Technica, the technology uses neural networks to produce accurate facial animation results first time
Technology developer Nvidia and game developer Remedy Entertainment have been working together on a new approach to capturing accurate facial animations without the need for costly and time-consuming touch-ups post-capture, as reported by Ars Technica from SIGGRAPH in LA.
The solution, created in conjunction with researchers at the University of Southern California and Pinscreen, uses deep learning neural networks not only to capture and process the data but also to automatically produce accurate predictive animations, with the only further input being voice dialogue.
The idea is that the data captured from motion capture and facial recordings, provided by Remedy in this case, is fed into the neural network, all powered by Nvidia's expensive and rather meaty eight-GPU DGX-1 server system. The network ingests a large amount of data straight away and, after a short time (reportedly five to ten minutes), can predict and animate a face without the need for further visual reference data. Once an actor's dialogue is fed in, the system can then produce an accurate facial animation of the character talking straight away, without further work by animators or additional visual or motion capture.
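In broad strokes, the pipeline described above is a supervised learning problem: pair frames of recorded audio with the captured facial poses, fit a model on those pairs, then drive the face from new audio alone. Neither Nvidia nor Remedy has published the actual model details reported here, so the sketch below is purely illustrative, using a simple linear least-squares fit in NumPy and made-up feature and blendshape dimensions in place of a real deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row pairs an audio feature vector
# (stand-in for spectral features of one frame of dialogue) with the
# facial pose captured for that frame (stand-in blendshape weights).
n_frames, n_audio_features, n_blendshapes = 500, 13, 8
audio_features = rng.normal(size=(n_frames, n_audio_features))
true_mapping = rng.normal(size=(n_audio_features, n_blendshapes))
face_poses = (audio_features @ true_mapping
              + rng.normal(scale=0.01, size=(n_frames, n_blendshapes)))

# "Training": learn a linear map from audio features to face poses.
# A real system would use a deep neural network instead.
learned_mapping, *_ = np.linalg.lstsq(audio_features, face_poses, rcond=None)

# "Inference": predict facial animation frames from new dialogue audio
# alone, with no further visual capture required.
new_audio = rng.normal(size=(30, n_audio_features))
predicted_poses = new_audio @ learned_mapping

print(predicted_poses.shape)  # one predicted face pose per audio frame
```

The point of the sketch is the structure, not the model: once the mapping is learned from capture data, every subsequent line of dialogue needs only its audio to produce animation.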
The video below is a demonstration of the technology in use.
Remedy has often used new and impressive technology to better the results of its games, especially when it comes to storytelling and expression. Its most recent release, Quantum Break, used DI4D's motion capture software to create realistic in-engine characters that stood up to the dual television and game production element of the title.