What do we mean by Virtual Reality?
Many people don’t realise just how old Virtual Reality (VR) is as a technology, or, more properly, a suite of inter-related technologies. The idea of being able to inhabit a virtual world and interact with objects in it goes back decades and has its antecedents in the realm of science fiction.
The History of VR – a tale of Trials and Tribulations
In the late 1980s, these dreams started to seem as if they might come true. The British Government and some blue-chip commercial partners set up a research organisation called the National Advanced Robotic Research Centre (ARRC) at the University of Salford, and the seed that was to become Virtalis began to germinate.
If you think about it, a robot that can’t see is a pretty useless robot, condemned to blindly repeat the same pre-programmed moves over and over again – very limited and very limiting. The idea of dropping robots into their own virtual world and, using real-time updates, continuously altering that virtual world to mimic reality whilst sharing their viewpoint, opens up the possibility of advanced robotic working, both autonomous and co-operative.
Laser-generated models of the real world, built so the early advanced robots could understand and operate in their surroundings, progressed hand in hand with the development of CAD software, in which virtual worlds were built up from scratch. Back in those early days, the mainframes capable of generating such worlds were almost the size of upright pianos, and rendering often lagged behind movements in the scene, leaving users feeling disorientated and nauseous.
But still, the possibilities were recognised along with the frustrations. There wasn’t one industry or human endeavour that it was felt might not benefit from VR, from product design to entertainment, from astronauts practising complicated manoeuvres on the Mir Space Station to robots attempting to decommission contaminated plant in a nuclear power station, and from glossy retail environments to business communication spanning the globe. The possibilities then seemed endless. They still are.
As a result of this early flowering, in which Virtalis’s forerunner company VR Solutions (now spun out from the University) led the way, the 1990s was the period when the multinational blue chips invested heavily – this despite the fact that the cost of the compute alone could top £250k.
A False Dawn
The world waited. But the early promise was a little too eager, too soon. Not only were the barriers to entry prohibitively expensive, forcing out all but those with the deepest pockets, but the hardware was temperamental to say the least – “flaky” might be a better term. And the software? Where was the software? Well, there was some, such as SuperScape, World Toolkit or Division MockUp, but it wasn’t easy to use. The result was a few premier flagship VR suites that were extremely costly to set up, operate and maintain and, owing to the paucity of software, had severely limited content or application. Some gathered dust and were quietly closed. The then technology editor of the FT told us assertively that “VR is dead”.
But when Virtual Presence, the successor to VR Solutions, hit the financial buffers, we – by then renamed Virtalis – kept the faith. Year after year, Moore’s Law saw to it that the power of computers and, even more important to us, their graphics cards, grew and grew. Suddenly, we could do those things we’d dreamt of doing, but that clients’ constrained budgets and technological frustrations had prevented us from achieving.
The yearning had always been there and now we appear to have the disparate tools to make it happen. Market push by enthusiastic technology companies to brave visionaries in engineering and academic markets has been replaced by market pull from video gamers and prosumers. The gamers are now being fed by some of the technology companies, which have become more sales and marketing focussed, and the giant social media and communications companies, like Facebook, Microsoft, HTC, Samsung and others. They share one trait: they are all looking for the next “big thing”.
Why VR will Soon Affect Every Part of our Lives
Low cost head-mounted display devices, such as the Oculus Rift, HTC Vive, Microsoft HoloLens and Samsung Gear, are gathering a dangerous amount of hype, but beneath it VR, which has been viewed as “niche” for the last two decades, finally seems set to become mainstream.
VR might be scaling the massive consumer mountain today, but this hasn’t come about by accident. All through what we at Virtalis call the “Wilderness Years”, businesses in engineering and some areas of academia were reaping the benefits. These early adopters, clustered in market sectors such as automotive, aerospace and shipbuilding, long ago realised that VR’s extra dimension is a game-changer, shortening product development times and providing a multi-discipline portal for communication and understanding. They rarely commented on their use of VR, either for fear of frightening their investors or because the technology was so valuable to them that they were scared their competitors would grasp the VR mantle.
Now, lower hardware entry cost points, improved application software and a broadening of commercially available solutions, supported by a growing evidence base of successful implementations and use-cases, have all led to VR being implemented by a new wave of businesses keen to exploit the benefits.
There is a growing trend towards accessible VR, which breaks away from the corporately controlled, centralised VR room with its single large screen towards smaller, more immersive environments in compact meeting rooms, or even individual desktop-based systems. Smaller VR systems such as these remove the need for advance booking and assistance from IT departments, making accessible VR a commercial reality.
However, there is no single, well-trodden path to VR success.
Successful VR is Immersive
Even today, many people think they know what VR is, but often their knowledge is limited. Watching a 3D film is a world away from being able to immerse yourself within a virtual environment and connect more closely than ever with your subject matter. Exploring a virtual world, deploying virtual touch via haptics, registering a rich soundscape and collaborating in real-time – these were the dreams of the early VR pioneers, and not only are we now achieving them, they are becoming easier to achieve every day. More and more people are grasping that VR adds value. In product design, for example, being able to interact with a virtual product concept can act as the catalyst for faster, better decision making without the need to build multiple, iterative prototypes.
Today, VR is becoming acknowledged as the new CAD, as it is widely used to enhance engineers’ and designers’ understanding of a product. It allows them to take something apart and test design features without touching the actual product – indeed, before the actual product even exists. The British Ministry of Defence saw this potential when it commissioned the successful Tornado Maintenance Trainer fifteen years ago. They realised that it was all about their maintenance engineers being able to deal with the numerous “what if” scenarios that life has an uncomfortable way of throwing at them.
Collaboration: It’s what we humans do
One of the ways that VR has developed in recent years is to move away from the single-user working environment towards a virtual world shared by multiple users. This has only become possible in real time thanks to the exponential increase in computer processing power, which has coincided with dramatic decreases in price. Collaborative, accessible VR is the new frontier because it takes communication and decision making to a new level. Colleagues in the same room, or on the other side of the world, sharing the same virtual environment and interacting with the same product design is not science fiction – it’s early 21st century VR and it’s used today.
What is VR Made of?
VR systems can take a wide variety of forms. In a typical “Wall” or “CAVE”/”Cube” solution they include:
- Data – the information you want to look at
- Visualisation – the hardware you use to see the images
- Compute – the hardware that processes the images
- Software – to run / play / manage / manipulate / create the data
First, the data is gathered, if necessary, from various sources; for example, CAD data (e.g. Creo, NX/JT, Solidworks, GIS) or other unstructured data (e.g. laser scan point clouds). It is then converted into data sets with a common format using a range of adapters and converters (e.g. Virtalis Exchange). These converted data sets are combined in a visualisation package (e.g. Visionary Render or GeoVisionary), which outputs the scene to a graphics card that renders the left and right frames for the projector(s) to show. In parallel, the graphics card generates a synchronisation pulse that an emitter uses to trigger the active ‘shutter’ glasses. The glasses then shutter in synchronisation with the projector and your brain is convinced that it is seeing a 3D stereoscopic image.
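The pipeline above can be sketched in outline. This is an illustrative Python sketch only; the function names are hypothetical stand-ins for the real tools (the Virtalis Exchange converters, the visualisation package and the projector chain), not an actual API.

```python
# Hypothetical stand-ins for the stages described above: import the
# source data, convert it to a common format, then render one frame
# per eye for the projector to alternate between.

def import_dataset(path):
    """Stand-in for importing CAD or point-cloud data."""
    return {"source": path, "format": "native"}

def convert(dataset):
    """Stand-in for an adapter/converter (the common-format step)."""
    return dict(dataset, format="common")

def render_stereo(scene):
    """Stand-in for the visualisation package: left and right frames."""
    return {"left": f"L:{scene['source']}", "right": f"R:{scene['source']}"}

scene = convert(import_dataset("assembly.jt"))
frames = render_stereo(scene)
print(frames["left"], frames["right"])  # the two frames the shutter glasses separate
```

The point of the sketch is simply that each stage hands a common representation to the next, which is what lets data from very different sources end up in one scene.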
3D or stereoscopic projectors come in many ‘flavours’ with differing imaging technologies: LCD (Liquid Crystal Display), DLP (Digital Light Processing), LED (Light Emitting Diode), LCOS (Liquid Crystal On Silicon) and D-ILA (Direct-Drive Image Light Amplification), a proprietary LCOS technology. However, one technology is used in most of the active 3D systems installed around the world: DLP.
The main differences between units are resolution and brightness, with the newest 4K projectors held to offer the best combination, if budget allows. Ultra-portable units are low cost but severely limited in brightness and resolution: they are single-chip DLP units with a maximum resolution of 1280 x 720 and brightness of around 3,000 lumens.
Due to cost considerations, most DLP projectors use a single DLP chip with a light source projecting through a spinning colour wheel to achieve different colours during projection. While this approach works well in many applications, the colour wheel typically does not provide the colour depth and accuracy found in projectors that use three separate chips, each projecting a primary colour of red, green or blue. Use of a colour wheel can also result in artefacts known as the “rainbow effect”: viewers will often see “rainbows” when bright objects are displayed on a dark background, or when their eyes pan across an image. 3-chip DLP projectors boast a brightness running from around 6k lumens to over 30k lumens per projector, with resolutions from 1400 x 1050px up to 4K (4096 x 2160px).
Virtalis recommends 3-chip DLP as the superior projector technology. These units offer extremely high contrast ratios, better colour separation and darker black levels. The technology produces highly stable images and the whitest of whites without the artificial white boost typically used in single-chip DLP projectors that employ a colour wheel, all contributing to the highest contrast levels and superior image quality.
The latest projectors now offer the option of Mercury-based lamps, which deliver similar image quality while drawing less power. This allows the projector to run cooler and more quietly than traditional Xenon-based units.
1:1 Scale is Intuitive
High resolution is one key factor in the suspension of disbelief, and is the result of the combination of the number of pixels projected and the size of the screen. To be believable, the VR system should give your brain the least possible work to do, so 1:1 scale or greater adds value to the viewing experience – meaning that, for projected VR, screens can get very large (4m wide is common and 5m wide and greater is possible).
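The interplay between screen width and pixel count is easy to quantify. A back-of-envelope sketch in Python, using the screen and resolution figures mentioned in this article purely as illustration:

```python
def pixel_pitch_mm(screen_width_m, horizontal_pixels):
    """Physical width of one projected pixel, in millimetres."""
    return screen_width_m * 1000 / horizontal_pixels

# A 4m wide screen at two of the resolutions mentioned above:
print(round(pixel_pitch_mm(4.0, 1400), 2))  # 2.86 mm per pixel
print(round(pixel_pitch_mm(4.0, 4096), 2))  # 0.98 mm per pixel at 4K
```

At 4K the pixels on a 4m screen are under a millimetre across, which is why the newer projectors can drive much larger screens before the pixels themselves become visible.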
The system then becomes more ‘immersive’ as you fill more of the viewer’s peripheral vision. The more you are immersed in the environment the easier it is to believe that environment, and belief that what you are seeing is representative of the real world increases the effectiveness of the system.
In most installations, particularly in engineering, we are asked to render ever more complex models: complete wind turbines, ships and even power plant. These models contain a lot of small detail – nuts, bolts, rivets, facets, cables and so on – so the quality and detail apparent in the images is critical to the user. Screens have to be small enough to fit in the room but, more importantly, the image needs to be close enough to scale to be believable, and large enough for details such as nuts and bolts to be selectable.
But a larger screen means larger pixels. The bigger the pixels, the more the viewer is distracted by them, noticing that they are watching a projection system and that the product being projected is not really ‘there’. A poor, unbelievable representation of the scene or product destroys the immersive benefits of VR. In the end, quality tells in a VR environment: the experience should consume the viewer, not the technology that produces it. Adding multiple projectors and blending the projected images together is a proven way of resolving the dilemma between screen size and pixel size. Done correctly, viewers are presented with a continuous 3D projected image with no noticeable blend regions.
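The core of edge blending can be shown with a minimal sketch. Real calibration software (ActiveWarp, for instance) also handles gamma correction and geometric warping; this illustrates only the cross-fade idea, with screen position normalised 0..1 and the overlap region chosen as an arbitrary example:

```python
def blend_weights(x, overlap_start, overlap_end):
    """Intensity weights (left, right) for two projectors at position x,
    ramped linearly so the combined contribution always sums to 1."""
    if x <= overlap_start:      # left projector's exclusive region
        return (1.0, 0.0)
    if x >= overlap_end:        # right projector's exclusive region
        return (0.0, 1.0)
    t = (x - overlap_start) / (overlap_end - overlap_start)
    return (1.0 - t, t)        # linear cross-fade inside the overlap

left, right = blend_weights(0.5, 0.4, 0.6)
print(left, right)  # in the middle of the overlap each projector contributes half
```

Because the two ramps always sum to one, the overlap region appears no brighter than the rest of the screen and no seam is visible.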
Passive or Active?
Passive systems typically consist of two perfectly matched projectors, one for each eye. That means twice the lamps and twice the maintenance of a single-projector system. In addition, both projectors have to be perfectly aligned to achieve the optimal image, and drift inevitably occurs over time. Passive systems also demand a more expensive screen with a specially coated surface, which can be inconvenient and detrimental to non-polarised projection (2D, for example). More modern passive systems can use just a single projector, because they use a ZScreen.
Passive polarised systems are not ideal for immersive tracked solutions. This is because the stereo image is dependent upon the polarising filters in the glasses and the projectors being perfectly aligned. This works well in cinema where you sit still and ‘watch’ a scene but in an immersive environment you ‘move around’ that scene.
Any misalignment of the polarising filters with the user’s glasses will cause ‘ghosting’ of the images and a breakdown of the stereo effect, ruining the immersive experience. Circular polarisation mitigates this to some extent but does not remove the problem entirely.
Active systems consist of a single projector that switches between left and right eye images very rapidly. The resulting image is viewed through electronic glasses that ‘shutter’ in synchronisation with the projector. The result is a system that can use any screen surface – or wall, for that matter. It is always perfectly aligned and is therefore very easy to set up and maintain. The lack of polarising filters in the glasses and projector means that you do not lose the stereo image when you tilt your head in the tracked environment. In the past, a criticism of active systems was the high cost of the eyewear. This has fallen dramatically over the last decade, with the advent of active-stereo consumer TVs helping to bring economies of scale. 3D active stereo glasses are now produced in sufficiently high numbers that they cost around a tenth of what they did a decade ago.
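One practical consequence of time-multiplexing the two eyes through a single projector is that each eye effectively sees only half the projector's refresh rate. The arithmetic is trivial, but it is worth keeping in mind when specifying a projector:

```python
def per_eye_rate_hz(projector_refresh_hz):
    """In an active (time-multiplexed) system each eye sees every other frame."""
    return projector_refresh_hz / 2

print(per_eye_rate_hz(120))  # 60.0 Hz per eye
print(per_eye_rate_hz(96))   # 48.0 Hz per eye, low enough to risk visible flicker
```

This is why active-stereo projectors are normally driven at 100-120 Hz or more: anything much lower leaves each eye with an uncomfortably low effective refresh rate.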
Head-Mounted Displays (HMDs)
HMDs are what many picture when they think about VR. HMDs overcome the large set-up costs of a projected image system and the space requirements too. It isn’t an “all or nothing” thing either, as lots of our clients deploy HMDs alongside projected VR, using them to collaborate in a common virtual environment. Here the “active” person wears the HMD and interacts with the virtual world, while colleagues, who might not even be on the same continent, are also immersed in the same environment. Some of these colleagues could also have their movements fully tracked, with their HMD-wearing colleague represented by an avatar in the virtual world.
HMDs come in a variety of sizes, resolutions, fields of view and prices, and can be wired or wireless. Picking the right HMD depends on what you are hoping to achieve. The Field of View (FoV) you need depends on how immersive your application or environment has to be. In general, a 35-60 degree FoV is considered good enough for most situations, but a FoV of 100 degrees or more can be required when an application demands a great deal of peripheral information, as with driving simulators, for example.
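For comparison, the horizontal FoV a viewer experiences in front of a projected screen follows from simple geometry: FoV = 2·atan(width / (2·distance)). A quick illustrative calculation, with the screen width and viewing distance chosen as assumptions rather than recommendations:

```python
import math

def horizontal_fov_deg(screen_width_m, viewing_distance_m):
    """Horizontal field of view subtended by a flat screen, in degrees."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))

# A 4m wide screen viewed from 2.5m away:
print(round(horizontal_fov_deg(4.0, 2.5)))  # ~77 degrees
```

Stepping closer to the screen widens the FoV, which is one reason tracked, walk-up projection systems feel markedly more immersive than a fixed seat in front of the same screen.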
The new low cost HMDs, such as the Oculus Rift and the HTC Vive, bring this technology within the budgets of most people for the first time, but they require suitable stereo-capable graphics cards, so, for most people, it isn’t quite “plug and play” just yet. However, professional VR users are better placed to hit the ground running with these new products. The buzz these low cost pioneers have created means more manufacturers will enter the marketplace, and we can confidently expect HMD technology to shift up a gear in the years ahead. The same is true of augmented reality devices which, like HMDs, have been around for a while but might now become mainstream. Combining as they do the virtual and the actual, they are ideal for a huge number of applications, from operating in hazardous environments to flying high speed fighter jets.
There is a baffling number of screen manufacturers and material types, both solid and flexible, and each screen is custom-made to the size required. The most cost-effective are made from a flexible material. These are securely held in a non-ferrous frame and mounted within an aperture in the projection room wall: the void in front of the screen is for the audience/user, the void behind for the projector and hardware.
Solid screens are manufactured from a sheet of either cast acrylic or glass. They are far more expensive than soft screens for a number of reasons: the material itself costs more; the screens are heavier and have to be shipped flat (obviously they don’t roll up or fold), so shipping costs are much higher (sometimes as much as the screen itself); and installation needs more time and more people.
Typically the solid screen is suspended in the frame to prevent it from bowing under its own weight whilst gravity helps to maintain a perfectly flat projection surface. It is also generally clamped along its lower edge to maintain its flat shape.
We configure our projected systems as a range of VR systems, from portable and transportable to multi-wall “Cube” environments to walls, both flat and curved, to the increasingly popular, wall and floor combinations.
Because of the complexity of an installed system, an easy-to-use method of controlling the system and switching between display modes is required. We use our StereoWorks control system which normally includes a touch panel display system, programmed with a custom user interface enabling the switching between common functions with a single button press.
It also supports advanced features such as secure remote web access to the panel, so the system can be controlled by any PC with a web browser. Apple iPads can also be used to give full wireless control.
Here the ‘VNC’ functionality is provided over Wi-Fi, allowing ‘remote desktop’ control of any networked workstation.
We believe tracked movement to be essential to the immersive nature of VR, whether in an HMD or when using a projected system. Without tracking, the intuitive exploration of the virtual world is lessened and the level of immersion severely diminished. There are three commonly used tracking technologies from which to choose: magnetic, optical and hybrid inertial/ultrasonic. Wireless tracking is clearly the best, because you can interact with the virtual world unencumbered and unfettered.
The immersive stereoscopic effect for a single user can be considerably enhanced with the addition of a tracking system and an interactive “wand”. Tracking sensors communicate the position of the user’s hand (holding the wand) and head (via a sensor attached to the shutter glasses, providing an effect more akin to looking out of a window than at a picture on a wall). In this “immersion” mode the user is surrounded by the data and genuinely loses the sense of looking at a projected display at all.
Optical tracking systems use cameras that ‘see’ reflective markers. These systems can work very well and are commonly seen in motion capture for filming, but issues and limitations do occur due to occlusion (when the cameras can’t see the markers). Optical systems are ideal for tracking large areas where line of sight is not a problem.
Magnetic tracking systems use a device that generates a magnetic field by passing an electric current simultaneously through three coiled wires set perpendicular to one another. The system’s sensors then measure the relationship between the magnetic fields generated, indicating the orientation, position and direction of each sensor. The responsiveness of an efficient electromagnetic tracking system is excellent and the latency is quite low. The drawback is that large masses of metal or other electrical systems in the infrastructure of a building can create magnetic fields that interfere with the system, introducing disturbances and inaccuracies.
Inertial motion tracking is used to track both position and orientation, while ultrasonic range measurements are used for drift correction purposes.
Inertial technology can provide fast updates with low latency and advanced motion prediction algorithms, ensuring that the 6-DOF data generated is smooth, precise and free from jitter. The ultrasonic transmitters’ 40 kHz signals are picked up by tiny microphones in the tracked devices. Several ultrasonic emitters are strategically mounted around the room, forming what is referred to as a ‘constellation’, so that redundant measurements are available to provide optimal corrections for the inertial sensors throughout the tracked volume. This technology is ideal both for accurate head tracking and for navigation and manipulation of the virtual world via a wand.
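The drift-correction idea behind this hybrid approach can be sketched with a simple complementary filter. This is only an illustration of the principle, in one dimension with made-up numbers; real trackers use far more sophisticated estimators (Kalman-style filtering) over the full 6-DOF state:

```python
def fuse(inertial_estimate, ultrasonic_fix, alpha=0.98):
    """Mostly trust the fast (but drifting) inertial estimate, while each
    absolute ultrasonic range fix pulls it back towards the truth."""
    return alpha * inertial_estimate + (1 - alpha) * ultrasonic_fix

position = 0.0
for _ in range(100):
    position += 0.01                 # inertial integration with a drift bias
    position = fuse(position, 0.0)   # the ultrasonic constellation says: still at 0
# Pure integration would have drifted to 1.0; the fused estimate stays bounded.
print(position < 0.5)
```

The inertial sensor supplies the low-latency motion between ultrasonic fixes, while the fixes stop the integration error from growing without bound, which is exactly the division of labour described above.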
Virtalis’ predecessor company developed the first “data glove” over 20 years ago, and the current range of gloves are not a million miles different in how they work. A virtual hand is presented in the immersive environment and, for the manipulation of individual fingers, tracking sensors are added at each finger joint. One fundamental issue, however, is that one size does not fit all: different users have different hand sizes, so multiple gloves are often required. We also find the latest gloves susceptible to faults if heavily used, owing to the complex wiring they contain. Persevere, though, and there is no question that gloves offer precise, life-like interaction with virtual objects.
Fingertracking is a relatively new technology, but it overcomes the problems associated with VR gloves. High-end optical tracking systems can now be integrated into VR hardware. Relying on infrared optical tracking, this solution has been enthusiastically adopted by the automotive and aerospace industries, and by research institutes and universities. We have recently added Fingertracking to Visionary Render. The Fingertracking device allows you to track the orientation of the hand and the position of the fingers wirelessly. It works for one hand or both, and is available in three and five finger versions, enabling collaboration.
Haptics, or virtual touch, adds another dimension to VR. We’re involved with Touch & Discover Systems’ Probos system, which allows museum visitors to virtually “touch” and examine precious artefacts recreated in Virtalis’ Visionary Render software, and with The Haptic Cow and Haptic Horse veterinary training systems, developed in collaboration with Professor Sarah Baillie at Bristol University School of Veterinary Sciences.
We are convinced that the use of haptics will move into the mainstream and this technology really has to be experienced to be believed. Your brain is telling you there is nothing there, but your fingers are telling you that they are exploring and feeling the virtual image your eyes are seeing. It’ll surely be not too many years before all virtual models are haptically enabled as a matter of course.
Virtalis Developed Software
All the VR hardware in the world is useless without VR software that creates realistic, interactive 3D virtual environments. This is where people commissioning VR often come unstuck: they have a fabulous VR set-up and little or no content to show on it. It’s like buying a great television set but having no programmes to watch!
Our newest software addition, Visionary Render, allows users to access and experience a real-time, interactive and immersive VR environment created from huge 3D datasets, usually from CAD, but it can also handle other data sources, including point clouds / laser scans. Users can work alone, in small groups, or collaborate with distant colleagues in a common virtual environment to perform detailed design reviews, rehearse in-depth training tasks, validate maintenance procedures or verify assembly and manufacturing processes. Visionary Render delivers advanced rendering of huge models in real-time with ease of importing from a range of data sources, maintaining naming, hierarchies and the all-important metadata.
We have created a range of translators as part of our Virtalis Exchange to get CAD and other data into Visionary Render. Typically, an entire CAD model can be ready to view immersively in under 30 minutes.
GeoVisionary is specialist software for high-resolution visualisation of spatial data. The initial design goal was to ensure that data sets for large regions, national to sub-continental, could be loaded simultaneously and at full resolution, while allowing real-time interaction with the data. One of the major advantages GeoVisionary offers over other visualisation software (3D and 4D GIS) is its ability to integrate very large volumes of data from multiple sources, allowing a greater understanding of diverse spatial datasets.
ActiveWarp software is a suite of three applications for calibrating the output of multi-projector systems into a seamless, distortion-free display. It counteracts effects like keystoning and overlap by warping and blending the image on a compatible NVIDIA Quadro GPU.
Our Visionary Cluster architecture delivers extremely high performance rendering encompassing blending, geometric distortion correction and image compositing by taking advantage of the latest in graphics subsystems technology.
The ActiveView system can simultaneously take up to eight inputs from different sources, whether 2D or 3D stereo, and render them as Picture-in-Picture (PiP) windows overlaid on the visuals of the principal application, which may itself be monoscopic or stereoscopic. For example: a 3D-stereo-capable laptop showing a 3D stereo scene, a second laptop showing a Word document and an iPad showing a YouTube movie.
Each PiP window may be rendered on any of the ActiveWall’s visual channels, or split across more than one channel. Each PiP window can be scaled in X and Y, so its aspect ratio is variable. It is also possible to zoom in to the content of a window to perceive greater detail, and to pan around accordingly.
For many embarking on their first VR journey, the natural starting point when developing a VR software environment is to look at gaming engines. After all, if you can produce photo-realistic games, then developing a small VR application to view an engineering design, say, must surely be straightforward… However, like many journeys in VR, how you get there is as important as the destination. Whilst gaming engines can often get you partway along the road to success, they are not focussed on handling large CAD models, point cloud data, multi-user collaboration or the development of immersive VR environments. They simply weren’t designed with this kind of data in mind, and many a user has reached a VR cul-de-sac by this route. We created Visionary Render from the ground up to meet commercial VR users’ needs, and to spare them the frustrations encountered by those who had tried games engines to generate virtual worlds, only to run up against their limitations.
The VR Compute
A Virtalis cluster solution consists of one master node and a number of slave nodes. These rack-mounted workstations come configured with high clock speeds, large hard drives, plenty of memory and high-end graphics cards, delivering the performance necessary to drive the high resolution system and whatever data you choose. They also come configured for stereo output (although mono is supported too).
To keep transportable systems, such as the ActiveMove, as simple as possible, a 2.1 speaker system would deliver sound. On installed systems, more complex options can be delivered, such as 5.1 surround sound and specialised sound – a computer-controlled, 24 channel ultra-realistic spatialised ambisonic system. This incorporates integrated low-frequency effects comprising subwoofers and floorshakers, providing ‘tactile’ feedback for wholly realistic VR environments. There is no question that high quality sound adds a great deal to a truly immersive VR experience.
In projected VR suites, fluorescent lighting should not be used, because the flicker it generates interferes with the visual quality of the system. We recommend alternative or additional incandescent lighting that can be controlled (switched on and off and dimmed) from the StereoWorks control system.
To benefit from the excellent contrast of the proposed projectors, we recommend fitting blackout blinds to any windows, and painting the walls and covering the floors in a dark colour (dark blue is commonly adopted). This minimises reflections, reduces washout on the screen and enhances contrast further still, resulting in optimum image quality.
Some multi-channel projected VR systems draw considerable power and generate considerable heat. If the windows are blacked out as recommended, it is likely that they will be closed, so the room will inevitably rise in temperature and become uncomfortable – air conditioning is essential.
In rear-projected systems, careful consideration should be given to ‘balancing’ air movement on either side of the screen. If this is not done, the screen can oscillate or wobble whenever air-conditioning systems are activated or doors are opened. This is particularly noticeable on soft screens, but even hard screens have been known to warp and bend to such an extent that they never straighten.
Appropriate balancing of all the “other aspects” of an installation, such as lighting and air conditioning, can have a significant beneficial effect on the effectiveness of the VR system and on how the virtual world is perceived by its users.
In the End, all this Technology Shouldn’t Even Be Noticed…
As the line between the reality we know now and VR gradually becomes more and more blurred, VR is completely changing the way industry manages business processes. What has come before in this article is a peek into the enabling world of VR and how some of the parts work. We’ve explained some of the myriad considerations that come into play when a VR system, large or small, is being considered. There is huge potential to get it badly wrong and have an unsatisfying, not very usable, VR set up that gathers dust as a result. We’ve come across many of these over the years – money wasted – and we are often called in to put them right. We believe that, done properly, VR technology shouldn’t even be noticed. How you interact, what your interface is, none of that should matter. All that matters is that it feels natural and believable. That, as you’ve read, is quite hard to achieve.
In the next decade, VR might not be viewed as “specialised tech” at all, rather as a necessary advanced communication and understanding tool, accessible to all to work solo or collaboratively, enabling businesses and organisations to improve decision-making and to respond quicker to dynamic changes. Our prediction for 2020 is this: because VR tears down barriers, it will be viewed as holistic, natural and enabling.