Scott Downie from Tag Games takes us through the struggles and realisations of developing for VR in their recent game jam
The resurgence of VR hasn’t just brought with it new ways to experience games, it has also shifted the goalposts in the way games are created. It’s been a paradigm shift perhaps even greater than those that took place in the dawn of the 3D and touchscreen eras, completely turning some of the principles of game design on their heads.
Like most studios, we’ve been keen to explore the potential of VR and recently held a 24-hour Game Jam event where we experimented with Gear VR and Oculus platforms. Our winning game turned out to be an asynchronous multiplayer Gear VR project which mixed elements of Guess Who? with FPS and stealth gameplay. We’ve since expanded the game into a fuller prototype and here are some of the key learnings we’ve taken away so far:
There’s no perfect solution for movement
Movement is the issue that sits front and centre of VR’s well-documented nausea problem and so far, it’s an issue that has been tackled with workarounds rather than full solutions. VR-based sickness usually stems from the discrepancy between what your eyes see and what your body physically experiences, and the simplest way to avoid this disconnect is to remove movement altogether. The concept of a game without movement sounds highly limiting at first, but it’s an approach many developers are taking to avoid separating players from their lunch. It could even be argued that in-game movement breaks the illusion of being in the virtual world, as the player is always subconsciously aware that they are sitting or standing still.
Initially in our first-person prototype we cut movement altogether and stuck to a fixed viewpoint with a 360-degree view, but during development we felt that adding some movement could allow us to explore more varied gameplay, and so we began experimenting with a variety of methods.
We started by implementing standard FPS controls, which actually worked ok barring a couple of problems:
- We were developing for the Gear VR platform and therefore had no hardware that could tell us which direction the player’s body was facing relative to their head. This meant we had to implement “snake movement” where the player walks in the direction they are facing.
- As a result of the “snake movement” we got complaints from some testers that they felt nauseous, particularly when moving diagonally. This wasn’t surprising as it is not natural to walk diagonally in real life.
To combat this we dramatically reduced walking speed and further slowed down strafe and backwards speeds to better mirror real life, but this made movement too time-consuming and boring. We then considered condensing the environment to reduce walking distances, but this would have in turn reduced the impressive sense of scale players experienced when playing the game so we opted not to. Instead we took inspiration from several other games and implemented a teleportation mechanism.
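As a rough, engine-agnostic sketch of the movement tuning described above (the speed values and function names are illustrative assumptions, not the prototype’s actual code), “snake movement” walks the player along the head’s facing direction, with strafe and backwards speeds capped lower than forward speed:

```python
import math

# Assumed tuning values: forward is fastest; strafe and backwards
# movement are slowed further to better mirror real-life walking.
FORWARD_SPEED = 1.5   # metres per second
STRAFE_SPEED = 0.8
BACKWARD_SPEED = 0.6

def snake_move(head_yaw_rad, input_x, input_z, dt):
    """Move in the direction the player's head faces ("snake movement").

    input_x is the strafe axis (-1..1), input_z the forward axis (-1..1).
    Returns the (dx, dz) world-space displacement for this frame.
    """
    # Cap each local axis with its own speed limit.
    vz = input_z * (FORWARD_SPEED if input_z >= 0 else BACKWARD_SPEED)
    vx = input_x * STRAFE_SPEED
    # Rotate the local movement vector by the head yaw into world space.
    cos_y, sin_y = math.cos(head_yaw_rad), math.sin(head_yaw_rad)
    dx = (vx * cos_y + vz * sin_y) * dt
    dz = (-vx * sin_y + vz * cos_y) * dt
    return dx, dz
```

Because Gear VR gives you only head orientation, the yaw here has to come from the headset rather than a body tracker, which is exactly why diagonal movement feels unnatural: the body never turns.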
Initially we were sceptical of teleporting as we felt it would be a staccato, unintuitive solution and jarring for players; however, it turned out to be a revelation. The teleportation option allowed players to move around our virtual world quickly whilst markedly reducing the motion-sickness problems we’d seen in earlier testing. We also implemented the ability to snap rotate the camera by a fixed angle, allowing the player to turn around in the virtual world without turning around in the real world – this further reduced disorientation for most players, though some still opted not to use it.
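Snap rotation itself is only a few lines; the 30-degree step below is an assumed comfort setting for illustration, not the value used in the prototype:

```python
SNAP_ANGLE = 30.0  # degrees per snap turn (assumed comfort setting)

def snap_rotate(current_yaw_deg, direction):
    """Rotate the camera rig by a fixed angle rather than smoothly.

    direction is +1 for a right turn, -1 for a left turn. The rotation
    is applied as an instantaneous jump: because the player never sees
    the in-between frames, there is no smooth visual motion for the
    inner ear to disagree with.
    """
    return (current_yaw_deg + direction * SNAP_ANGLE) % 360.0
```

The key design point is the absence of interpolation – a smooth tween over the same angle reintroduces the very motion the snap is meant to avoid.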
I think the most important lesson we learned about movement is that there is no single solution. Some people prefer one scheme over another, some feel disoriented while others don’t, and you have to be prepared to accommodate multiple different movement schemes. The bigger and more complex VR titles get going forward, the more this will prove to be one of the major challenges in VR development.
The player is in control
This sounds obvious and applicable to most games but one of the most challenging and frustrating aspects of creating a VR game is surrendering full control of the camera to the player.
In traditional games, developers often force the player to focus on something important by moving the camera to look at it e.g. “Hey look the target has just arrived” or “Look over here there is a lever to pull”. One of the fundamental rules of VR development is that you should never prevent the player from being able to look around (if they move their head and the world doesn’t move get a mop and bucket ready) – this means that your cool set piece might be completely missed by the player who happened to be staring vacantly at a wall at the time.
Instead of forcing the player to look at something, you can employ various level design principles to encourage the player to look in the direction you want. For example, you can use positional audio to attract the player’s attention, or design the environment and lighting to lead the player’s gaze in a particular direction (we found that players tend to go towards well-lit areas and into open spaces). You can also take advantage of the fact that humans perceive movement particularly well and have something move across their field of vision in the direction you want them to look.
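A simple way to decide when to fire one of these attention cues is a view-cone test: if the set piece falls outside the player’s field of view, trigger the positional audio or moving object to draw their gaze. A minimal 2D sketch, where the half-angle and function names are assumptions for illustration:

```python
import math

FOV_HALF_ANGLE_DEG = 45.0  # assumed horizontal half field of view

def is_in_view(eye_pos, gaze_dir, target_pos,
               half_angle_deg=FOV_HALF_ANGLE_DEG):
    """Return True if target_pos falls inside the player's view cone.

    gaze_dir is a unit 2D vector; positions are (x, z) tuples. When the
    target is out of view, the game can trigger a positional audio cue
    or a movement cue rather than wrenching the camera around.
    """
    to_target = (target_pos[0] - eye_pos[0], target_pos[1] - eye_pos[1])
    dist = math.hypot(*to_target)
    if dist == 0:
        return True  # target coincides with the eye; trivially "seen"
    # Compare the angle to the target against the cone's half-angle
    # via the dot product of unit vectors.
    dot = (to_target[0] * gaze_dir[0] + to_target[1] * gaze_dir[1]) / dist
    return dot >= math.cos(math.radians(half_angle_deg))
```

In a real engine you would also raycast for occlusion – a set piece “in view” behind a wall still needs a cue.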
If that fails you can “force” the player to look in a certain direction as long as they don’t see the movement. One gameplay feature we developed required the player to avoid being caught by an enemy agent. Part of the design was that the enemy agent could sneak up behind the player and catch them unaware, but we found that even with audio cues, players were confused as to why the game had ended and didn’t realise they had been caught. We didn’t want to cut the feature as it added a real sense of jeopardy but were struggling to think of a natural and effective way to message to the player that they had been caught without spinning the camera around. In the end we darkened the screen to simulate the player’s eyes closing, then had the player fall to the ground offscreen before opening their “eyes” again to look at the enemy looming over them. As a result we were able to get the player to look where we wanted, without removing their control of the camera or creating a movement disconnect.
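That “eyes closing” trick boils down to a short, strictly ordered script. The engine hooks below are hypothetical stand-ins, but the ordering is the point: all forced repositioning happens while the screen is dark.

```python
def caught_sequence(fade_out, fall_to_ground, look_at_enemy, fade_in):
    """Message 'you were caught' without visibly wrenching the camera.

    Each argument is a callable hooking into the engine (names are
    illustrative). Because the reorientation happens between fade_out
    and fade_in, the player never sees the camera move without their
    own head motion, avoiding the movement disconnect.
    """
    fade_out()        # darken the screen: the player's eyes 'close'
    fall_to_ground()  # drop the viewpoint while the screen is dark
    look_at_enemy()   # reorient the camera toward the looming enemy
    fade_in()         # 'open eyes' looking up at the enemy
```

The same fade-then-move pattern generalises to any situation where a game needs to relocate or reorient a VR player, such as scene transitions or respawns.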
Immersion and realism are different things
One of the things we discovered early on in our VR journey is that the virtual world doesn’t have to look photo-realistic to be immersive. This is hugely liberating, and it’s a realisation that has so far proven popular in VR, with many developers experimenting with stylised rather than realistic worlds. It means that VR can offer escapism in the way many traditional games do and, from a more pragmatic point of view, it also helps with the vital process of keeping framerates as high and smooth as possible.
We underestimated how much players would enjoy just exploring the environment – especially those pulling on a VR headset for the first time. It was important to get the balancing act right between showing the player how the game worked and allowing them to indulge their natural urge to explore. Our prototype did involve quite a bit of exploration, but in hindsight it would have been nice to have given players a little more impetus to really immerse themselves in the environment early on, saving the missions until the initial VR honeymoon period was over.
Where we did manage to succeed with the environment was in creating a sense of scale and contrast – certain areas had high atriums and were well lit, while others were more cramped, crowded and dingy. We found that players will spend more time looking up than they would in a traditional game so make sure you consider the vertical plane when designing environments.
Don’t fight real world interactions
Where realism becomes more important is with regards to how the player interacts with the environment. We’ve already mentioned how difficult it can be to force the player to look at something and this makes traditional tutorials difficult to implement. Where possible, developers should ditch tutorials and instead rely on real world interactions that players are already familiar with. Players shouldn’t need to be taught how to open doors, pick up objects or, in our case, stab someone in the back!
Controller limitations, however, can make this type of interaction difficult, and this is something we discovered while developing for Gear VR, where due to the lack of motion controller support we had to develop for standard gamepads. This was less than ideal, as players had to remember the various button functions and would often look down, unsuccessfully, to try and see where the buttons were. To solve this we projected the appropriate button and its position relative to the other buttons onto the surface of objects that players were attempting to interact with. Embedding these messages into the virtual world felt a lot more natural than doing it via clunky UI overlays, but even though this solved some of our interaction issues, we never really came up with a good solution for non-context-sensitive actions. When it came to reminding players of the buttons used to teleport and fire, our testers would often have to call out to be reminded of the controls, especially in their first few games.
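Projecting prompts onto objects reduces to a small lookup gated by gaze and range. The button mapping, interaction names and range below are invented for illustration:

```python
# Hypothetical mapping from interaction type to gamepad button. In the
# prototype the label was rendered onto the surface of the object
# itself rather than shown as a UI overlay.
INTERACTION_BUTTONS = {
    "door": "A",
    "pickup": "X",
    "stab": "B",
}

INTERACT_RANGE = 2.0  # metres (assumed)

def prompt_for(gazed_object, distance):
    """Return the button label to project onto the object the player
    is currently looking at, or None if no prompt should be shown.

    gazed_object is the interaction type of the object under the
    player's gaze (None if they aren't looking at anything usable).
    """
    if gazed_object is None or distance > INTERACT_RANGE:
        return None
    return INTERACTION_BUTTONS.get(gazed_object)
```

Note that this only covers context-sensitive actions – it is exactly the always-available actions like teleport and fire, which have no object to project onto, that this approach cannot help with.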
Players inevitably become more comfortable with the controls over time, but we’d say if your VR game can’t support more intuitive motion control then it’s best to keep things simple, tailoring controls to the lowest common denominator.
VR is not a single platform
We quickly learned that there is no such thing as simply developing “for VR”. The differences between platforms are pretty extensive, not only in terms of specs but in target audience. During our Game Jam we had teams working on Oculus as well as Gear VR, and developing for Oculus brings with it different expectations in terms of visuals, controls, gameplay depth and session length. With PSVR and HTC Vive you can add two more very different platforms into the mix, so if you’re thinking of doing a VR project it works best to view VR as a whole ecosystem rather than a single platform – don’t forget about the hardware and end-user differences.