eXtended Reality (XR) is a ‘catch-all’ term for technologies that enhance or replace our view of the world, typically by overlaying computer-generated text and graphics onto real-world environments, immersing us in fully virtual ones, or combining the two.
XR encompasses augmented reality (AR), virtual reality (VR) and mixed reality (MR). While all three ‘realities’ share overlapping features and requirements, each has different purposes and underlying technologies.
Augmented reality (AR)
AR enhances our view of the real world by overlaying what we see with computer-generated information. Today, this technology is prevalent in smartphone AR applications that require the user to hold their phone in front of them. By taking the image from the camera and processing it in real time, the app can display contextual information or deliver gaming and social experiences that appear rooted in the real world.
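At its simplest, this kind of smartphone AR boils down to a per-frame loop: capture a camera frame, estimate where the device is, then composite information anchored to that estimate. The sketch below is purely illustrative; the function names are placeholders, not a real AR SDK, and the "tracking" is faked for brevity:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical camera pose: position in metres plus heading in degrees."""
    x: float
    y: float
    z: float
    yaw: float

def estimate_pose(frame: list) -> Pose:
    # Stand-in for real-time visual tracking (feature matching, sensor fusion);
    # here we simply derive a fake pose from the frame size.
    return Pose(x=len(frame) * 0.1, y=0.0, z=0.0, yaw=0.0)

def render_overlay(frame: list, pose: Pose, label: str) -> str:
    # Stand-in for drawing contextual information anchored to the pose.
    return f"{label} @ ({pose.x:.1f}, {pose.y:.1f}, {pose.z:.1f})"

def ar_frame_loop(frames: list, label: str) -> list:
    """For each camera frame: track the device, then composite the overlay."""
    return [render_overlay(f, estimate_pose(f), label) for f in frames]

frames = [[0] * 10, [0] * 20]          # stand-ins for camera images
print(ar_frame_loop(frames, "Cafe"))   # one overlay string per frame
```

The real engineering difficulty is that the tracking and rendering steps must complete within a frame budget of a few milliseconds for the overlay to appear stable.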
While smartphone AR has improved significantly in the past decade, its applications remain limited. Increasingly, the focus is on delivering a more holistic AR experience through wearable smartglasses. These devices must combine an ultra-low-power processor with multiple sensors, including those for depth perception and tracking, all within a form factor light and comfortable enough to wear for long periods.
Although this device category was first demonstrated by Google Glass in 2012, the technology required to deliver a lasting solution is only now maturing, and further progress is still needed. In enterprise and industrial applications, however, Microsoft HoloLens 2 is already demonstrating how useful AR can be in the workplace.
Virtual reality (VR)
VR completely replaces a user’s view, immersing them in a computer-generated virtual environment. This type of XR technology has existed for a while and has improved gradually. It is used primarily for entertainment experiences such as gaming, concerts, films and sports, though it is also accelerating into the social domain.
VR is also used as a training tool in education and healthcare, for example in rehabilitation. To make these experiences possible, and seamless, for the end user, VR technology often focuses on high-quality video and rendering and ultra-low latency.
Finally, VR devices are now enhancing video conferencing through platforms like RecRoom, which enable virtual meet-ups across different virtual worlds. RecRoom, which now supports the Oculus Quest, was featured in episode three of Arm’s New Reality series in 2020, which discussed immersive VR experiences.
Mixed reality (MR)
MR sits somewhere between AR and VR, as it merges the real and virtual worlds. There are several key scenarios for this type of XR technology. The first is through a smartphone or AR wearable device, with virtual objects and characters superimposed into real-world environments, or potentially vice versa.
The Pokémon Go mobile game, which took the world by storm in 2016, overlays virtual Pokémon on real-world environments via a smartphone camera (this was also demonstrated on HoloLens 2 at Microsoft Ignite 2021). It is often touted as a revolutionary AR game, but it is actually a great example of MR – blending real-world environments with computer-generated objects.
Mixed reality is also beginning to be used to superimpose real-world players into video games, bringing real-world personalities to game-streaming platforms such as Twitch and YouTube.
What do eXtended reality (AR, VR and MR) devices look like?
Whether they are designed for VR, MR or AR, XR devices share common requirements, yet they are likely to diverge significantly as use cases and form factors for wearable devices continue to evolve. A key differentiator between VR and AR head-mounted devices is the need either to deliver a view of the real world, as with the Microsoft HoloLens 2, or to block it out, as with the Oculus Quest VR gaming headset. For both, the future is likely to be untethered, with no reliance on a ‘host’ device, be that a smartphone, laptop or server.
Smartglasses are likely to be a fundamental driver of AR adoption in the coming years. Arm’s own consumer research shows that 58 percent of consumers are positive about the prospect of wearing everyday AR smartglasses. While this form factor is the toughest to achieve in engineering terms, largely due to the power and performance constraints of a small, lightweight design, it will undoubtedly play a significant part in AR’s future adoption.
Exploring extended reality (XR) use cases
The future of the XR market will be defined by its many current and potential use cases. These include navigation and location, entertainment (such as high-end gaming), virtual events and video content, training and guidance, language translation, sensing and tracking (e.g. health monitoring), information and notifications (e.g. for news and social feeds) and telepresence (e.g. avatar calling).
These use cases differ across the consumer, enterprise, education and healthcare markets. For example, consumer training and guidance is likely to involve DIY guidance, while in healthcare it could mean surgical training. We are already seeing some of these use cases in action on today’s AR and VR head-mounted wearables, particularly training and guidance in the enterprise, education and medical sectors.
Key features of any extended reality (XR) device
A core part of the XR vision is the ability to use the visual input methods of object, gesture and gaze tracking to navigate the world or view context-sensitive information. Perception and mapping are also required through depth and location features. Where AR and VR diverge is in the different experiences the two technologies provide. For VR, immersive entertainment experiences will require capabilities such as an HD rendering pipeline, volumetric capture, 6DoF motion tracking and facial expression capture.
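6DoF (six degrees of freedom) tracking means reporting both where the headset is (translation along x, y, z) and which way it is facing (rotation as roll, pitch, yaw). As an illustrative sketch (not any headset's actual API), a 6DoF pose can be modelled as a position plus a rotation, applied here to place a point from the headset's frame into the world:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Illustrative 6DoF pose: three translation axes plus three rotation axes."""
    x: float      # position, metres
    y: float
    z: float
    roll: float   # rotation, radians
    pitch: float
    yaw: float

    def transform(self, px, py, pz):
        """Rotate a point by yaw (about the vertical axis), then translate.
        Roll and pitch are omitted to keep the sketch short."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        return (c * px - s * pz + self.x,
                py + self.y,
                s * px + c * pz + self.z)

# A headset 1m along x that has turned 90 degrees: a point 1m ahead of the
# wearer ends up offset in both x and z in world coordinates.
pose = Pose6DoF(1.0, 0.0, 0.0, 0.0, 0.0, math.pi / 2)
print(pose.transform(1.0, 0.0, 0.0))
```

Real trackers typically represent the rotation as a quaternion and update it hundreds of times per second from camera and IMU data; the structure of the pose, however, is the same.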
Meanwhile, AR, particularly through the adoption of smartglasses, needs to enable always-on, intuitive and secure navigation while users are on the move. This will require key advancements in features such as depth, occlusion (when one object in a 3D space blocks another from view), semantics, location, orientation, position, pose, and gesture and eye tracking.
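Occlusion handling of this kind is, at heart, a per-pixel depth comparison: a virtual object is only drawn where it is closer to the viewer than the real-world surface the depth sensor reports at that pixel. A minimal sketch, illustrative rather than a real renderer:

```python
def composite_with_occlusion(real_depth, virtual_depth, virtual_pixels, background):
    """For each pixel, show the virtual content only where it is nearer
    than the measured real-world surface (smaller depth = nearer)."""
    out = []
    for rd, vd, vp, bg in zip(real_depth, virtual_depth, virtual_pixels, background):
        out.append(vp if vd < rd else bg)
    return out

# A real wall at 2m hides the parts of a virtual character standing behind
# it (3m), but not the parts in front of it (1m, 1.5m).
real_depth    = [2.0, 2.0, 2.0]
virtual_depth = [1.0, 3.0, 1.5]
print(composite_with_occlusion(real_depth, virtual_depth, "VVV", "..."))
# → ['V', '.', 'V']
```

The hard part in practice is obtaining an accurate, low-latency real-world depth map in the first place, which is why depth sensing appears alongside occlusion in the feature list above.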
Performance and efficiency demands on extended reality (XR) devices
The level of compute performance and distribution in XR devices will vary based on the type of AR and VR wearable device and the complexity of use cases it is designed to enable. High-performance compute is already found in contemporary standalone VR headsets used primarily for high-end gaming experiences, such as the Oculus Quest.
In the future, similar types of compute will find their way into smaller, lighter devices such as smartglasses. At the same time, some devices sit between these two ends of the performance and efficiency spectrum, such as all-in-one AR devices (e.g. the Microsoft HoloLens 2) and tethered consumer VR devices, such as the HP Reverb G2. System on chip (SoC) solutions will therefore need to scale to fit these different use cases, workloads and form factors.
Future growth of extended reality (XR)
All the different XR device types are predicted to grow between 2020 and 2025. However, the biggest growth is predicted for AR smartglasses (215 percent CAGR). Solid growth is also anticipated for all-in-one AR devices for enterprise use cases (92 percent CAGR) and VR standalone devices (51 percent CAGR).
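Because CAGR compounds year on year, even a five-year window implies very large multiples: the total growth factor is (1 + CAGR) raised to the number of years. Applying that formula to the figures above (assuming the rates apply to annual unit shipments over the full 2020 to 2025 window):

```python
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth factor implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# 215 percent CAGR over five years (2020-2025)
print(round(growth_multiple(2.15, 5)))   # → 310, i.e. roughly a 310x increase

# 92 percent and 51 percent CAGR over the same window
print(round(growth_multiple(0.92, 5)))   # → 26
print(round(growth_multiple(0.51, 5)))   # → 8
```

This is why a triple-digit CAGR from a small base can still leave AR smartglasses overtaking much larger categories within the forecast period.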
Currently, VR standalone devices have the most units shipped, but they are expected to be rapidly overtaken by AR devices in the coming years. This matches our own consumer research, which shows a strong appetite for the next wave of consumer AR smartglasses. Indeed, we are likely to see a future where VR serves specialist entertainment and training uses, while AR is gradually adopted into everyday life through the highly popular AR smartglasses of the future.