Coding The Assembly: Let’s Get Technical
Following on from the roaring success of our blogs focusing on the Production and Design aspects of developing The Assembly - our upcoming immersive, interactive story in VR – today we’ll be shining a light on some issues that our fearless Code team have faced while building the game.
When it comes to in-game user interfaces, things have been fairly standardised since the 1980s. Any text and images the player needs to be aware of (such as health, acceleration or a mini-map) are placed around the edges of the screen. That way, the player can glance at them when needed without them getting in the way of the moment-to-moment action happening in the middle of their view.
However, this universal solution for flatscreen games pretty much broke immediately when we tried it in virtual reality. In VR – just like in real life – a player’s focal point is at the centre of their view. However, in VR a certain amount of detail is lost the further an object is from each eye’s centre point, so asking players to glance at the edges wouldn’t really work. Moreover, you can’t use a fixed 2D HUD either – again, just like in real life, everything that you see needs to exist at a location in virtual physical space.
Let’s do the timewarp
One of the big issues in delivering a workable UI in VR – one that both exists in 3D space and also follows where the player is looking – relates to motion-to-photon latency. “But wait, what does this technical jargon actually mean?” said your humble scribe when first hearing the term. Well, motion-to-photon latency is the time it takes for a user’s movement to be fully reflected on the display. Low motion-to-photon latency (under 20 ms) is necessary to convince your mind that you’re in another place, i.e. to achieve a sense of presence in the virtual world. Conversely, a high motion-to-photon latency can induce simulator sickness and thus makes for a very unpleasant VR experience.
So, how do we reduce latency? We employ a technique known as timewarp, which happens to be just as futuristic as it sounds. It’s actually a very simple idea: we predict where the player’s head movement is travelling and render an image in advance to match that prediction. Then, once we know where the player’s head actually is, we warp the image so that it corresponds more accurately. What’s amazing is that all this happens in less than 2 milliseconds – which is of vital importance if we’re to deliver rock-solid performance and prevent simulator sickness.
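To make the idea concrete, here’s a minimal sketch of the predict-then-correct step, simplified to a single yaw angle rather than a full head-pose quaternion. All names and numbers are illustrative, not taken from any real SDK:

```python
# Toy model of timewarp: render against a predicted head pose, then compute
# the small correction to apply just before the image reaches the display.

def predict_yaw(current_yaw_deg, angular_velocity_deg_s, latency_s):
    """Predict where the head will be pointing when the frame is displayed."""
    return current_yaw_deg + angular_velocity_deg_s * latency_s

def timewarp_correction(predicted_yaw_deg, actual_yaw_deg):
    """Just before scan-out, shift the rendered image by the prediction error."""
    return actual_yaw_deg - predicted_yaw_deg

# Render assuming the head keeps turning at 90 deg/s over a 16 ms frame...
predicted = predict_yaw(30.0, 90.0, 0.016)          # ≈ 31.44 degrees
# ...then the tracker reports the true pose, and we warp by the difference.
correction = timewarp_correction(predicted, 31.5)   # a fraction of a degree
```

The point is that the final correction is tiny, which is why applying it is so cheap compared with re-rendering the whole frame.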
For an example of what we mean, check out the diagram below:
You may also want to check out this video by eVRydayVR, in which they discuss the technology as it was first introduced back in 2014.
Things should be attached to your head
So far, so good… but take a step back for a second. We need The Assembly’s UI to follow the player’s head, don’t we? Otherwise it wouldn’t always be visible, and that wouldn’t make for a particularly useful UI. Unfortunately, with timewarp, anything fixed to the player’s head position would be left slightly behind whenever the player moved their head to look around, causing simulator sickness. We found that timewarp made our UI jitter all over the place, so another solution was needed.
How would we solve this stomach-churningly difficult problem? With a mind-bogglingly simple solution – don’t use timewarp for the UI! After all, the correct render position for something that always follows your gaze would be its position relative to your head, not to the game world.
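The difference between world-locked and head-locked placement can be sketched in a few lines. This is a simplified 2D top-down model (position plus yaw only, with forward along +z), and the function name is our own invention for illustration:

```python
import math

# A world-locked object stores a fixed world position. A head-locked UI
# element instead stores an offset relative to the head, and its world
# position is re-derived from the head pose every frame.

def head_locked_position(head_pos, head_yaw_deg, local_offset):
    """Place a UI element at a fixed offset relative to the head (2D top-down)."""
    yaw = math.radians(head_yaw_deg)
    ox, oz = local_offset
    # Rotate the local offset by the head's yaw, then translate by its position.
    wx = head_pos[0] + ox * math.cos(yaw) - oz * math.sin(yaw)
    wz = head_pos[1] + ox * math.sin(yaw) + oz * math.cos(yaw)
    return (wx, wz)

# UI element 2 m in front of the head: it stays 2 m in front no matter
# which way the player turns.
head_locked_position((0.0, 0.0), 0.0, (0.0, 2.0))    # ≈ (0.0, 2.0)
head_locked_position((0.0, 0.0), 90.0, (0.0, 2.0))   # ≈ (-2.0, 0.0)
```

Because the element’s pose is defined relative to the head, there’s nothing for timewarp to correct: its on-screen position is already right.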
Just when you thought it was safe to go back in the game world
Turning timewarp off for the UI while keeping it turned on for everything else wasn’t as simple as it sounds… which isn’t all that simple in the first place. We’re building The Assembly for Oculus Rift, PlayStation VR and HTC Vive (i.e. SteamVR), but there wasn’t a common solution to this issue across all three platforms. Each platform had its own way of rendering in-game objects, so we needed multiple ways of solving the same issue.
On PlayStation VR, at least, we had a head start, as that platform’s reprojection system is essentially the same thing as timewarp, but goes one better: PSVR allows developers to pass up to two images, or layers, to the VR headset simultaneously. The first image is reprojected (à la timewarp) and the second is composited on top of it. We were able to use this two-layer rendering system to separate the world from the UI. This gave us a UI that not only didn’t jitter around the screen or cause simulator sickness (yay!), but also functioned well, displaying critical information in the AR-like way that we wanted.
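In spirit, the submission looks something like the sketch below. To be clear, the class and method names here are entirely made up for illustration – the real SDK API is different – but the shape of the idea is the same: two layers per frame, with reprojection enabled only on the first.

```python
# Hypothetical two-layer frame submission, loosely modelled on the scheme
# described above. Not real SDK code.

class Frame:
    MAX_LAYERS = 2  # the compositor accepts at most two layers per frame

    def __init__(self):
        self.layers = []

    def add_layer(self, image, reprojected):
        if len(self.layers) >= self.MAX_LAYERS:
            raise ValueError("the compositor accepts at most two layers")
        self.layers.append({"image": image, "reprojected": reprojected})

frame = Frame()
frame.add_layer("world_render_target", reprojected=True)   # timewarped world
frame.add_layer("ui_render_target", reprojected=False)     # UI composited on top
```

The world layer gets reprojected to hide head-tracking latency, while the UI layer skips reprojection entirely and simply rides on top, head-locked and jitter-free.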
Furthermore, this solution provided a baseline that we could test against for our version of the fix on Oculus Rift.
Taking a second look
Oculus provide an impressive amount of support and different software solutions for VR developers. Unluckily for us, they didn't provide a way to layer in a 3D image without timewarp, which is what we needed for The Assembly’s UI.
The best match in Oculus’ toolbox was a quad (rectangle) render locked to the player’s head position. This let us place the UI on a flat plane – hurrah! Unfortunately, that’s not what we wanted – we had our hearts set on a 3D UI hovering in front of the player’s head position.
Steam VR’s solution closely followed Oculus’, so we assumed any solution we developed for the Oculus Rift would work on that platform too.
It came in like a wrecking ball
Our initial solution was to carry over Oculus’ quad renderer to PlayStation VR so that we had a unified solution. This was limiting, as it confined our UI to a single-depth, rectangular shape in space, when we wanted more. We’d envisioned a UI with depth, form and beautiful transitions – not just a 2D rectangle floating in space!
The best we could do was to render multiple quads at different depths and at different orientations, to give a pseudo-3D look but with real depth. Essentially, we were creating 3D shapes in virtual space out of flat 2D sides. A flat-pack UI solution, if you will.
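As a sketch of the flat-pack idea, here’s how a simple 3D box could be decomposed into six head-locked quads, one per face. The function and its layout are our own illustration, not engine code:

```python
# "Flat-pack" UI: approximate a 3D box with six flat quads, each given its
# own centre position and facing direction (normal). Units are illustrative.

def box_to_quads(centre, size):
    """Return (face_centre, face_normal) pairs for the six faces of a box."""
    cx, cy, cz = centre
    hx, hy, hz = size[0] / 2, size[1] / 2, size[2] / 2
    return [
        ((cx + hx, cy, cz), (1, 0, 0)),   # right face
        ((cx - hx, cy, cz), (-1, 0, 0)),  # left face
        ((cx, cy + hy, cz), (0, 1, 0)),   # top face
        ((cx, cy - hy, cz), (0, -1, 0)),  # bottom face
        ((cx, cy, cz + hz), (0, 0, 1)),   # front face
        ((cx, cy, cz - hz), (0, 0, -1)),  # back face
    ]
```

Six quads for a single box makes the downside obvious: every extra 3D shape in the UI multiplies the number of quads – and, with it, the rendering cost.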
On the plus side, this allowed us to be more inventive with our UI - for example, we could finally implement rotating logos on Oculus Rift as part of our AR-like UI. On the downside, though, we were limited by the number of quads we could actually use, limiting the 3D shapes we could fit together.
The biggest flaw, however, was the fact that this method hit our render times (and thus the game’s overall performance) like a wrecking ball, due to the number of render passes required for our flat-pack UI. We had, essentially, created the most processor-intensive polygon renderer in VR.
Who dares, wins
We’d reached an impasse. We didn’t want to back away from our beautiful, AR-like UI, but we couldn’t move forwards with the solution that we’d implemented as it made the game run like sludge.
We decided to petition Oculus to add this feature to their developer toolbox, explaining our specific case and hoping that other developers were thinking along the same lines.
A couple of months later – and to the benefit of all VR developers everywhere - what we needed was in the SDK. So, we re-engineered the multi-quad renderer to only use a single render pass, just like on PlayStation VR, and displayed it via the new API that Oculus had delivered.
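A toy cost model shows why batching into one pass mattered so much. The function below is our own simplification – real compositor costs are more nuanced – but it captures the difference between submitting each quad separately and rendering all UI quads into a single extra layer:

```python
# Toy cost model: layers handed to the compositor each frame, assuming one
# layer for the world plus either one submission per UI quad (old approach)
# or a single batched UI layer (new approach). Illustrative only.

def ui_submissions_per_frame(num_quads, batched):
    """Count compositor submissions: 1 world layer plus the UI."""
    return 1 + (1 if batched else num_quads)
```

With a dozen UI quads, the unbatched path hands the compositor thirteen layers every frame; the batched path hands it two, no matter how elaborate the UI becomes.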
This means we're free to implement anything inside that whole render pass, including going back to full 3D rendering for UI for our players to enjoy. Moreover, this ultimate solution should work for any VR headset moving forwards.
I suppose it just goes to show that if you don’t ask, you don’t get.
We’re here for you, virtually