Staring into our VR Crystal Ball – The Future of VR
I recently gave a presentation about the history of VR (it started in 1935, by the way), which also included my view of where we are going in the future. So, as we stand at the start of 2018, I thought it would be a good time to gaze into my VR Crystal Ball and consider the future.
What many in the audience said they gained most from my presentation was not that surprising date of 1935, but what we can actually do today, right now, with commercial off-the-shelf software and hardware – so let’s start there.
Firstly, Collaboration – Right now you can connect multiple VR users, on differing VR systems (Virtalis HMD-based or ActiveWall), via a simple internet connection, and they can work together in the same VR scene. This means you can see each other, hand items to each other and talk to each other; collaborating just as if you were all standing in the same room, despite being potentially thousands of miles apart.
Remarkably, you don’t need a high-speed connection, so it is possible to have users out in the field on slow connections joining in as well. Did you also know that with Virtalis software we can go one step further and add a viewer that allows you to sit at a screen and view the interaction of others in the VR scene as though you were looking through a window or in God mode? To be part of the VR action with your colleagues, you don’t need a VR system, just a suitable workstation.
Secondly, Combine – Reading multiple data sources into the same VR scene is a crucial capability, and it is entirely possible now. You can see CAD data from various suppliers alongside your own, or point cloud data from a high-resolution scan showing the actual shape versus what is drawn in your CAD system. Many VR solutions are limited to viewing a single data type or source. However, with a more general-purpose tool, like Virtalis’ Visionary Render, it is possible to read in multiple sources – which enhances the VR scene.
Thirdly, Record – When moving objects in a VR scene, that movement can be recorded, so you can analyse the engineering impact back in the CAD system. The great power of VR is interacting with a 1:1 scale version of your design. Interacting at 1:1 scale enables you to see parts in a different way. It becomes immediately apparent, for example, that the desk is too close to the wall, or the switch panel is too far away to reach. The natural next step is to grab it and move it to the right place, but in most VR systems this is just a visual change. However, in Virtalis’ Visionary Render, this change can be recorded and written out to a change log file that can then be handed to the CAD engineer, who can see whether the proposed change can be engineered into the design.
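To make the idea concrete, here is a minimal sketch of what such a change log might look like. This is an illustrative format only – the object names, field names and units are assumptions, not Visionary Render’s actual log format:

```python
import json

def log_move(log, name, old_pos, new_pos):
    """Append a hypothetical change-log entry for an object moved in VR.

    Positions are [x, y, z] in millimetres; the delta tells the CAD
    engineer how far the object was moved along each axis.
    """
    log.append({
        "object": name,
        "old_position_mm": old_pos,
        "new_position_mm": new_pos,
        "delta_mm": [n - o for n, o in zip(new_pos, old_pos)],
    })
    return log

# Example: the switch panel was dragged 300 mm closer along the x axis.
changes = []
log_move(changes, "SwitchPanel_01", [1200.0, 0.0, 950.0], [900.0, 0.0, 950.0])
print(json.dumps(changes, indent=2))
```

A file like this gives the CAD engineer an unambiguous record of what was moved and by how much, rather than a verbal description of the VR session.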
Lastly, Interact – The ability to pick up, move or swap out any of the objects in the scene at any time is immensely valuable. I have written before about the major difference between game engine-like VR systems, which operate a “Create and Publish” workflow, and CAD-like systems, which operate an “Everything Active” workflow. When all your interactivity and all your actions have to be scripted ahead of time in a Create stage, you have to think of everything in advance, just like a game developer. If you are in the published scene and you try to pick up something that isn’t selectable, you can’t, just like in a game, so you are forced to return to the Create stage and add that functionality. Alternatively, with an “Everything Active” system, such as Virtalis’ Visionary Render, you can just pick it up there and then, change its properties or position, animate it and so on, all instantly, with no need to exit the VR scene.
My predictions for the Future of VR
So, the future… well, anyone who has worked in VR for several years will know that predictions, especially precise ones, are error-prone, so let me give you my 2018 generalisations:
Firstly, hardware will get smaller, lighter, have fewer cables and become easier to install and use. We are already seeing a big move to “wireless” with gloves, headsets, haptic controllers etc. More of that will happen as we continue to lose the cables in 2018.
Secondly, “More Touch, More Feel” will become more prevalent, as better haptic devices come to the market. These really enhance the VR experience by giving feedback that not only tricks the mind into believing that what the eyes are seeing is really there, but also allows engineers to interact with and touch objects in the same way they do in the real world.
Thirdly, the drive towards more data, and therefore more understanding, will pick up pace. We will see a move to include not only 3D geometric data, like CAD, but also 3D non-geometric data, like heat, stress and airflow movement, as well as enhanced metadata. We will soon routinely touch, say, a door and bring up who the manufacturer was, whether it is fire-rated, how many identical doors we have in the building, what it cost, when it is next scheduled for maintenance and so on. This level of detail allows the user to make decisions with all the pertinent facts in their possession.
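As a rough illustration of the door example above, metadata like this is essentially a record attached to a scene object that can be formatted for display when the user touches it. All names and values below are invented for illustration – they are not from any real product or building:

```python
# Hypothetical metadata record attached to a scene object.
# Every value here is an illustrative assumption.
door_metadata = {
    "manufacturer": "Acme Doors Ltd",
    "fire_rated": True,
    "identical_units_in_building": 42,
    "unit_cost_gbp": 350.00,
    "next_maintenance": "2018-06-01",
}

def describe(obj_name, meta):
    """Format the summary a user might see on touching the object in VR."""
    rating = "fire-rated" if meta["fire_rated"] else "not fire-rated"
    return (f"{obj_name}: made by {meta['manufacturer']}, {rating}, "
            f"{meta['identical_units_in_building']} identical units in building, "
            f"next maintenance {meta['next_maintenance']}")

print(describe("Door_Level2_07", door_metadata))
```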
Lastly, things will look better and/or more real. Advances in processing power and graphics power, like those we have seen from our friends at NVIDIA, mean that moving around in a VR scene with shadows, reflections and refractions is now possible on regularly priced hardware, so VR will look more and more real. This increasingly life-like experience will further deepen the level of immersion.
All these advances will make VR more usable and more valuable than ever. We’ve come a long way from 1935, but we’ve still got a long way to go… Exciting, isn’t it?