Unity Focus: Monument Valley


By James Batchelor

August 1st 2014 at 11:13AM

Ustwo technical director Peter Pashley discusses the origins of the mobile hit

Why did you choose Unity when developing Monument Valley?
Our first game, Whale Trail, was built in our own C++ engine, but we decided to try out Unity while building the following game, Blip Blup. When we started prototyping ideas for the next game, it made sense to continue using Unity because it was so easy to chuck together a quick test for any idea. Our prototyping methodology is to spend up to one week on an idea before asking ourselves if it’s a go-er. With Monument Valley, we had a test level including impossibility and geometry manipulation up and running within the week.

We then spent another month or so properly prototyping the navigation tech and testing out gameplay ideas. We wanted the game to be ‘all killer, no filler’ so we knew that we would need to do a lot of iteration on both the gameplay and art for each level. We didn’t want the tools to get in between the developer and the game, so Unity’s WYSIWYG editor, rapid build-test loop and user-friendliness made it a very good fit.

How did the tech help you achieve the unique art style you were aiming for?
Our lead designer Ken Wong knew from his first piece of concept art that he wanted a very clean, simple look for the game. We ended up writing all our own shaders, using a custom directional lighting model instead of the standard Unity surface shaders. The final look combines that custom lighting with Unity’s standard lightmapping tech (for static ambient occlusion) and some hefty overdraw for vignetting and faked volumetric glows. Being able to rapidly iterate on shaders, textures and particles inside the Unity editor was invaluable to achieving the final result.
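To make the idea of a custom directional lighting model concrete, here is a minimal, language-agnostic sketch of the kind of shading such a model computes: a single directional light with Lambertian diffuse plus a flat ambient floor. The light direction, ambient strength and colours are illustrative assumptions, not values from Monument Valley’s actual shaders.

```python
def normalize(v):
    """Scale a vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def shade(surface_normal, light_dir, albedo, ambient=0.3):
    """Per-pixel colour from one directional light: a flat ambient term
    plus Lambertian diffuse (Lambert's cosine law)."""
    n = normalize(surface_normal)
    l = normalize(light_dir)
    diffuse = max(0.0, dot(n, l))                 # 0 when facing away
    intensity = min(1.0, ambient + (1.0 - ambient) * diffuse)
    return tuple(c * intensity for c in albedo)
```

A surface facing the light receives the full albedo; one facing away falls back to the ambient floor, which is what gives this style of shading its clean, flat-banded look.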

How did it cope with the optical-illusion, level-shifting puzzles in the gameplay?
Navigation was the biggest unknown when we started the project, for several reasons. Firstly, the game is built in 3D with most structures only making sense from the single (orthographic) camera angle. This meant that navigation made no sense in a 3D world space; navigable areas were often not ‘physically’ connected — they were only ever adjacent from the camera’s point of view.

Secondly, we wanted our structures to move, to reconfigure. This meant we couldn’t simply hardcode the ‘impossible’ connections – they had to support so many different geometry configurations that it was completely unrealistic to use manual mark-up.

Thirdly, our characters had to be able to locomote on these impossible connections whilst still appearing to be connected and properly occluded by other geometry. Any lapses would completely break the player’s suspension of disbelief.

Finally, we wanted this to work in real-time. We wanted someone designing a level to be able to move some geometry, hit play and navigation would just work. No extra steps, everything as WYSIWYG as possible.

All this meant that we had to create a new navigation mark-up, route-planning and locomotion system from scratch. We also had to fall back to the legacy animation system because we needed frame-accurate animation sampling to cope with all the teleportation that the character had to do.
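The screen-space navigation described above can be sketched in a few lines: walkable blocks are connected whenever their *projected* positions are neighbours, even if they are far apart in 3D, and routes are then planned over that graph. This is a hypothetical reconstruction of the general idea — the projection axes, grid representation and function names are all assumptions, not ustwo’s actual system.

```python
from collections import deque

def project(pos):
    """Orthographic projection of a 3D grid position onto integer 2D
    screen coordinates (one possible choice of axes). The depth axis
    collapses, so distant blocks can appear adjacent on screen."""
    x, y, z = pos
    return (x - z, y)

def build_graph(blocks):
    """Connect blocks whose screen projections are 4-neighbours.
    Rebuilding this each time geometry moves gives the 'move it,
    hit play, it just works' behaviour described above."""
    screen = {}
    for b in blocks:
        screen.setdefault(project(b), []).append(b)
    graph = {b: set() for b in blocks}
    for b in blocks:
        sx, sy = project(b)
        for cell in ((sx + 1, sy), (sx - 1, sy), (sx, sy + 1), (sx, sy - 1)):
            graph[b].update(screen.get(cell, []))
    return graph

def route(graph, start, goal):
    """Plain breadth-first route-planning over the adjacency graph."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None  # no route from this configuration
```

Because adjacency is computed in projected space, a block at `(2, 0, 0)` and one at `(4, 0, 1)` — nowhere near each other in 3D — still get an ‘impossible’ edge, since both project next to each other on screen.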

The second part of supporting the puzzle gameplay was being able to script the various sequences that the player triggers. We wanted this to be accessible to everyone who was working on level design, and to use standardised methods across different levels, so we created a system where a wide range of animation and state events could be scripted using a drag-and-drop system in-editor.
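A data-driven sequencing system of the sort described might look something like the sketch below: a trigger owns an ordered list of events which, in the real tool, a designer would wire up by drag-and-drop in the editor rather than in code. The class, event names and chaining API here are illustrative assumptions, not ustwo’s implementation.

```python
class EventSequence:
    """An ordered list of scripted events fired by a gameplay trigger."""

    def __init__(self):
        self.events = []          # (callback, args) pairs, run in order

    def add(self, callback, *args):
        """Register an event. In-editor this wiring would be done by
        drag-and-drop; here we register callbacks in code instead."""
        self.events.append((callback, args))
        return self               # allow fluent chaining

    def fire(self):
        """Run every scripted event in the order it was added."""
        for callback, args in self.events:
            callback(*args)

# Hypothetical usage: a trigger that opens a door, then raises a bridge.
log = []
seq = (EventSequence()
       .add(log.append, "play_animation:door_open")
       .add(log.append, "set_state:bridge_raised"))
seq.fire()
```

Keeping the sequence as plain data is what makes the approach standardisable across levels: every trigger runs through the same machinery, and only the event list differs.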

What single piece of advice would you give to games developers who are new to using Unity?
Take your time. Make something small first to properly understand how Unity works and play to its strengths. Be open-minded – even if it’s not how you are used to implementing something, find out how Unity expects it to be done and go with the flow.