I am often approached by someone who has purchased an Oculus Rift or HTC Vive and who usually says, “I am doing VR, so I have bought an HMD!” I smile. I try not to smile, but I do.

In essence, a VR system is made up of four components: the engine/software that drives the graphics, the tracking that knows where you are in the scene, the hardware that allows you to see the 3D scene and, most importantly, your 3D content or data.

Therefore, the person who has bought a Vive has only two of those components, which, if I use my favourite car analogy, is like having only the bodywork and the chassis/wheels of a car. Yes, it looks like a car. Yes, you can park it on your driveway and wash it every Sunday. However, owning a car is about the experience of driving and the car's ability to take you places – much like a good VR system.

Taking each component in turn, let me explain the role it plays and the questions you should be asking yourself when selecting it.

3D VR Software

The software/engine drives the graphics, takes input from the input devices and tracking, and provides the platform or environment in which you develop your 3D scene.
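As a minimal sketch of how these responsibilities fit together (all names here are hypothetical stand-ins, not the API of any real engine), an engine's core loop reads the tracking, then renders the scene from the tracked viewpoint:

```python
# Minimal sketch of a VR engine's core loop (hypothetical names, not a real engine's API).
from dataclasses import dataclass


@dataclass
class Pose:
    position: tuple      # (x, y, z) in metres
    orientation: tuple   # quaternion (w, x, y, z)


def read_tracking() -> Pose:
    """Stand-in for the tracking system: returns the user's current head pose."""
    return Pose((0.0, 1.7, 0.0), (1.0, 0.0, 0.0, 0.0))


def render_frame(scene, head_pose):
    """Stand-in for the renderer: draws the scene from the tracked viewpoint."""
    return f"rendered {len(scene)} objects from {head_pose.position}"


def engine_tick(scene):
    # 1. Take input from the tracking system.
    head = read_tracking()
    # 2. Drive the graphics: render the 3D scene from the tracked viewpoint.
    return render_frame(scene, head)


print(engine_tick(["car_body", "chassis", "wheels"]))
```

The point of the sketch is simply that the engine sits between the tracking and the display: every frame it consumes poses and produces imagery.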

When choosing the software, you should ask yourself:

Does it support large, complex 3D models, and can I easily change the density of the mesh?

Does it support animated data, so I can show how my 3D model moves?

Do I need to code or does it have point and click menus?

How quickly can I get to a viewable 3D scene once I have read in my data?

Does it support a variety of VR hardware and viewing options, so I am covered for future hardware developments?

Does it support collaboration, so I can go online with my colleagues and work together on a 3D model?

Am I looking for just a real-time engine that I will code myself or Full Function Software more similar to my CAD system?

Vicon Apex

Tracking System

The tracking system determines your position in the 3D world, usually using tracking sensor cameras that record your movement. With the Vive or the Oculus, these sensors are supplied as part of the headset bundle, but that is not the case with all headsets. If you are using a 3D screen, ActiveWall or “CAVE”, like our ActiveCube, then you need a method of tracking the user’s position as part of the system.

When choosing the tracking, you should ask yourself:

Will I just need to track the position of my head and hands?

Will I need to track my whole body and all my limbs?

Will I need to track multiple people within the same tracked space?

Will there be multiple devices to track, such as a controller, a tool, or a weapon?
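To make the questions above concrete, here is a small sketch (hypothetical names, not any tracking vendor's API) of how a tracking system's output might be organised when several people, each with several tracked devices, share one tracked space:

```python
# Sketch of tracking output for multiple users and devices (hypothetical structure).
from dataclasses import dataclass, field


@dataclass
class TrackedBody:
    name: str         # e.g. "head", "left_hand", "controller", "tool"
    position: tuple   # (x, y, z) in metres within the tracked space
    orientation: tuple  # quaternion (w, x, y, z)


@dataclass
class TrackedUser:
    user_id: int
    bodies: dict = field(default_factory=dict)  # body name -> TrackedBody


def update(user: TrackedUser, body: TrackedBody):
    """A new sample from the tracking cameras replaces the previous pose."""
    user.bodies[body.name] = body


# One person wearing a tracked headset, with a tracked hand and a controller.
alice = TrackedUser(user_id=1)
update(alice, TrackedBody("head", (0.0, 1.7, 0.0), (1.0, 0.0, 0.0, 0.0)))
update(alice, TrackedBody("left_hand", (-0.3, 1.2, 0.4), (1.0, 0.0, 0.0, 0.0)))
update(alice, TrackedBody("controller", (0.3, 1.2, 0.4), (1.0, 0.0, 0.0, 0.0)))
print(sorted(alice.bodies))  # ['controller', 'head', 'left_hand']
```

Whether you need one `TrackedBody` (head only), a full set of limbs, or several `TrackedUser`s in the same space is exactly what the questions above are probing, and it changes which tracking system is appropriate.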

Viewing System

The viewing system is the hardware that allows you to see the 3D scene in stereo. In the case of the Vive or the Oculus, this is the Head Mounted Display, or HMD. However, HMDs are not the only way to see VR. ActiveWalls, 3D screens and “CAVEs”, like the Virtalis ActiveCube, all deploy stereo projection; combined with 3D glasses worn by the user, a 3D image appears in front of you.

When choosing the viewing system, you should ask yourself:

Will it just be me viewing the VR scene (in which case an HMD is an option) or will it be a group of viewers (in which case an ActiveWall or ActiveCube is the right option)?

Will it need to be a multi-user experience with networked HMDs, or connected to ActiveWalls?

Will it be collaborative, with colleagues connecting in from other locations?

Will I need to be fully immersed, so I experience all the sensations, such as height or noise?

Will it need to be multi-screen?


3D Data

The 3D data or 3D scene is what you see and what you interact with, so what is in the scene and what you can pick up and interact with is important. If you use the “create and publish” model adopted by many real-time engines, then you have to remember to code into your application everything you are going to need to do in the 3D scene, in much the same way as a game developer codes into the game all the functionality they want you to have. The “interactive” approach used by VR software such as Visionary Render means that you are always in a menu-driven, interactive 3D scene, so your options are not limited – much the same way as your CAD software works.

Some other points to consider when you build your 3D scene are:

Will your scene be controlled by multiple input devices or external inputs, like a live data feed?

Will it use multiple datasets?

Will it use multiple data types?

Will it need to be distributed via a network, or to users where you want them to have limited functionality?

Will it need to be collaborative?

Will it use mechanisms?

Will it use a custom GUI?

Like I said, I try not to smile at the eagerness of the VR newbie. I agree, it is a lot to consider, which is why my colleagues and I spend our time talking to customers to understand their use case. Only then can we recommend the solution that best serves them, rather than them just buying a piece of hardware and seeing if it works – otherwise, like an incomplete car, it could prove to be just an expensive piece of driveway art!