The metaverse, that evolving combination of the physical and digital worlds, has the potential to be a multi-trillion-dollar opportunity that will transform society. In the second of our two-part series, futurist Matt Griffin and Arm Director of Innovation Remy Pottier talk about the technological foundations on which the metaverse is evolving and what will need to be created to drive this opportunity forward. This involves 5G, hardware and software innovation, and improved human-machine interfaces, but also the perspectives of a younger generation of technology users. Moderator Geof Wheelwright teases out some amazing insights from our guests.
Geof: Welcome back to part two of this Viewpoints podcast, all about the metaverse. In part one, we attempted to define what it is and lifted the lid on some of the metaverse use cases. In this part, we’ll explore the enormous potential and opportunities the metaverse presents. I’d like to start by asking how the metaverse is going to get adopted.
Every wave of technology has had early adopters, and you could argue that adults were the early adopters of PCs, with a younger cohort driving the early adoption of smartphones. So how do you see different generations embracing the metaverse? Remy, you can kick us off, followed by Matt.
Remy: Yeah, I really love this one, because I got this question very early on: is the metaverse a demographic thing? And the answer that came up was: certainly it is. Because if you try to map the different metaverse-related activities from whichever company you want to name here, whether it’s in gaming, social or work, you quickly find out that the personas for these leading players are very different.
And the more I looked at their definitions of the metaverse and the experiences they’re targeting, the more diversity I found, until I finally found a way to map this metaverse landscape onto one slide, along with its evolution over the next 10 years. One slide, evolving over 10 years, and the only way to get there was to look at the demographics. Everyone will have an experience in the metaverse, and no one will be left behind, but clearly not in the same way. Take seniors, for instance. There are many use cases for them, many of which are social, health or wellness related. You can do brain stimulation: from the metaverse and the digital experiences you’ve been storing, you stimulate the brains of these people. Or you can take them on virtual travel, because they cannot travel anymore, for instance. Or you can build experiences around assisted living. But it’s very unlikely that they would want to, or even could, wear a virtual reality headset for a few hours per day.
So you need much simpler and more attainable technology, and a more natural interface, I would say, to enable them to live a metaverse experience. What about audio and ambient computing? It could be just a digital assistant, a voice assistant, that is then able to bring them into this world. Pick this platform, the voice assistant, and extrapolate what it will be in 10 years. That’s a pretty interesting exercise, because you can think: why not? This digital assistant could be a humanoid robot. This humanoid robot would just walk around, and it could ultimately be the best hardware platform to deliver this digital experience and true ambient assisted living to seniors, assisting them in their day-to-day tasks.
So yeah, a very specific platform that could become the key one. Now pick the younger generation. These are the people who are already on Roblox, for instance. Roblox thinks that 50% of under-16s in the US have already been on Roblox, playing on Roblox.
And they do that today through smartphones. OK. But now, if you think about this gaming experience and where this type of company wants to get to, you cannot help but think about Ready Player One. And Ready Player One is not just one device: it’s a virtual reality headset, plus the omnidirectional treadmill, or maybe smart boots, something that will enable the kind of infinite walking capability you have in the metaverse. In the metaverse there are no walls unless you want to create them. And you need the smart haptic suits to get the full-body touch experience. This will probably need a step change in technology to bring down the price of the full platform, so that this generation will use it.
But certainly this younger generation will drive the demand for this kind of new immersive gaming experience. The interesting question here is: will this rely on an all-in-one device, or on a group of devices that render the experience and work together in an ambient way? That’s a pretty interesting question for us.
Geof: Yeah, I think so. Matt, your thoughts on early adopters?
Matt: To unpack some of the things Remy already talked about: when we look at rendering, obviously with 5G we see rendering done directly from the cloud. But then, when we look at codecs, the pieces of software responsible for packaging and unpackaging the media streams, we’ve seen developments, especially from Samsung as well as Facebook, that mean we can reduce the amount of bandwidth that virtual reality headsets have to stream by up to 80%.
So Facebook, for example, has something called foveated rendering, where it only renders full detail along your eye line, but it does it so effectively that the entire image you’re surrounded by appears ultra-high definition. Samsung, meanwhile, is using artificial intelligence to create what we call adaptive AI codecs.
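To make that bandwidth claim concrete, here is a toy sketch of the foveated-rendering idea, not Facebook’s actual implementation: detail falls off with distance from the gaze point, and only the fovea is transmitted at full resolution. The function names and level thresholds are illustrative assumptions.

```python
import numpy as np

def foveated_mask(height, width, gaze_y, gaze_x, fovea_radius):
    """Assign each pixel a detail level: 0 = full detail near the gaze
    point, 1 = half resolution, 2 = quarter resolution in the periphery."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(ys - gaze_y, xs - gaze_x)
    levels = np.zeros((height, width), dtype=np.uint8)
    levels[dist > fovea_radius] = 1        # mid-periphery
    levels[dist > 2 * fovea_radius] = 2    # far periphery
    return levels

def bandwidth_fraction(levels):
    """Fraction of full-resolution pixel data still transmitted, assuming
    each level halves the linear resolution (quartering the pixel count)."""
    weights = np.array([1.0, 0.25, 0.0625])
    return float(weights[levels].mean())
```

With a 100-pixel fovea centred in a 1,000 × 1,000 frame, this crude model already drops roughly 90% of the pixel data, the same ballpark as the 80% savings Matt mentions.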
So yeah, we won’t necessarily need the high-performance networks that we expect to need to deliver some of these really immersive experiences, what I call MXR experiences. We’ve got augmented reality, we’ve got mixed reality, and then we’ve got virtual reality. But MXR experiences are where we combine sensory data with the virtual reality streams.
For example, if you are wearing a haptic suit and I throw a fireball at you in virtual reality, you feel that heat on your chest at the moment I actually throw it at you. So when we look at things like MXR, it gets a little bit more interesting. And Remy was talking about haptics.
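As a rough illustration of the MXR idea Matt describes, mapping a virtual event onto a haptic actuator, here is a minimal sketch. The event kinds, the effect table and the `HapticSuit` class are all hypothetical; real haptic-suit SDKs define their own event models and APIs.

```python
from dataclasses import dataclass

@dataclass
class VREvent:
    kind: str          # e.g. "fireball_hit"
    body_region: str   # e.g. "chest"
    intensity: float   # 0.0 (barely felt) to 1.0 (full strength)

class HapticSuit:
    """Stand-in for a suit driver: records the pulses it is asked to play."""
    def __init__(self):
        self.log = []

    def pulse(self, region, strength, duration_ms):
        self.log.append((region, strength, duration_ms))

# Map VR event kinds to (base strength, duration in milliseconds).
EFFECTS = {
    "fireball_hit": (0.9, 400),
    "rain": (0.2, 100),
}

def dispatch(event: VREvent, suit: HapticSuit):
    """Translate a VR event into a haptic pulse on the matching body region."""
    base, duration = EFFECTS.get(event.kind, (0.0, 0))
    if base > 0:
        suit.pulse(event.body_region, base * event.intensity, duration)
```

The interesting engineering problem is latency: the pulse has to land within tens of milliseconds of the visual event for the brain to fuse the two.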
Now, in addition to that, Remy also talked about behavioral interfaces and that kind of thing. We’re seeing artificial intelligences today that you can talk to within these virtual reality constructs and say, “Build me a room in the metaverse that looks like this.” Just by using your voice, and by being able to interpret natural language, artificial intelligences will start building these immersive rooms and spaces.
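A heavily simplified sketch of that voice-to-world idea: in practice this is done with large language models driving a game engine’s scene graph, but even a keyword lookup shows the shape of the pipeline. The object vocabulary and the `Scene` class here are invented purely for illustration.

```python
import re

# Hypothetical vocabulary of objects the world builder knows how to place.
KNOWN_OBJECTS = {"room", "forest", "river", "mountain", "table", "chair"}

class Scene:
    """A trivial stand-in for a game engine's scene graph."""
    def __init__(self):
        self.objects = []

    def add(self, name):
        self.objects.append(name)

def build_from_command(command: str) -> Scene:
    """Scan a spoken command for known object names and add them to a scene."""
    scene = Scene()
    for word in re.findall(r"[a-z]+", command.lower()):
        singular = word.rstrip("s")  # crude plural handling
        if singular in KNOWN_OBJECTS:
            scene.add(singular)
    return scene
```

A production system would resolve spatial relations (“next to the river”), sizes and styles; this only extracts the nouns, but the interface contract, natural language in, scene objects out, is the same.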
Because again, a lot of today’s virtual reality experiences are based on things like the Unreal Engine, the gaming engines, which is where we see that happening. But in terms of adopters, I think the metaverse could actually be rather unique, in that there are three primary sets of adopters.
We’ve got the younger generations. My kids are already going to a virtual reality school, for example, and their learning is off the charts: their retention is about 65% better in a virtual reality school than in a real school. So we’ve got the younger generations, who are early adopters.
We have enterprises that are using virtual reality and the metaverse either to create products, to do training, or for collaboration. Look at Accenture: during the pandemic, Accenture created something called the 11th Floor, which would let people who were working from home come together in a virtual space just to have those water-cooler moments. So that’s an interesting example as well.
But for the first time, maybe, we could actually see older generations, the baby boomers and above, embracing virtual reality, because of the use of virtual reality in care homes, not to mention assistive care, chronic pain therapy, healthcare and so on.
Again, we’ve been able to demonstrate that elderly people who use virtual reality actually feel more included, less lonely and more mentally stimulated. So that means that, for the first time almost ever, we could have three sets of early adopters.
The younger generations, who are always there. Enterprise, which is always there as well. But the third one: the elderly. And then there’s the rest of us, of course, us younger whippersnappers.
Geof: Speak for yourself, Matt.
Matt: I have lots of anti-aging drugs. I’m a futurist, that’s it, so I know all the best labs.
Geof: So we’ve been talking a bit about some of the technology building blocks that are going to need to be in place, in terms of the haptic suits and what the hardware looks like. But I also wanted to think a bit about what we’re going to need in the near term to make the metaverse something other than people walking around with bulky headsets. Maybe, Remy, you could kick this off.
Remy: From a technology standpoint, I think we first need to look at what the technology pillars for the metaverse are. Essentially, there will be hardware, software and service layers.
Now, the best way we’ve found to look at the technology building blocks here, near term and of course longer term, is this. First, there’s the hardware and infrastructure block, which includes the access devices, the cloud and edge infrastructure, the network infrastructure, and of course all the low-level software that goes with it, like the operating systems, plus the artificial intelligence that Matt was referring to.
AI is going to be a foundational technology for bringing intelligence into all of these devices. That’s really the infrastructure and hardware piece of the metaverse, and basically the rest of the metaverse stack relies on this hardware existing to make the experience possible. So that’s the first big block.
The other big block is the creation platforms for content, digital twins and 3D world creation: the set of software and tools that enable you to easily create applications and experiences, and even distribute them, or discover them if you search for them later on. As Matt was mentioning, if you want to create an experience, everyone should be able to create an experience.
So it’s about being able to talk to your machine and say, “OK, I want to create this landscape, and there will be a river and a forest and animals running through it.” Why would you have to design all of that yourself, when an AI can do it for you on this creation platform? That will enable more and more developers to actually create the next generation of content here.
So it’s a very important piece. And for us it’s very interesting because it creates stickiness: there’s a lot of stickiness with the hardware, making sure we get the right hardware to enable this kind of creation platform to work properly. Then the third block is what I’m putting into an enablement technology bucket.
That’s everything that touches transaction management: micropayments, blockchain in the case of payments, decentralized architectures, digital wallets, these kinds of things. And of course everything that touches security, privacy and trust, like identity and access management.
So all these kinds of linked technologies are very important. And interestingly, even if they sit three or four levels above where Arm technology is today, there is a very strong link to us, because if you want to make it secure and trusted, you potentially need a root of trust going down to the hardware. That’s why, one more time, there’s stickiness between these high-level services, or enablement technologies, and the hardware.
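As a toy illustration of that hardware root-of-trust idea (not Arm’s TrustZone or PSA APIs): a secret that never leaves the device, standing in for a key fused into silicon, signs a claim such as a wallet transaction, and a verifier holding the same key can check that the claim really came from that device.

```python
import hashlib
import hmac

class DeviceRootOfTrust:
    """Toy stand-in for a hardware root of trust: holds a device secret
    and signs claims with it. In real silicon the key never leaves the chip."""
    def __init__(self, fused_key: bytes):
        self._key = fused_key

    def attest(self, claim: bytes) -> bytes:
        """Produce an authentication tag binding the claim to this device."""
        return hmac.new(self._key, claim, hashlib.sha256).digest()

def verify(fused_key: bytes, claim: bytes, tag: bytes) -> bool:
    """Check that a claim was attested by the device holding fused_key."""
    expected = hmac.new(fused_key, claim, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

In real hardware the key would be asymmetric, so the verifier holds only the public half; a shared-secret HMAC just keeps the sketch short. The point Remy is making is that the wallet or identity service several layers up is only as trustworthy as this bottom layer.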
And finally, the last block, the most obvious one, is the metaverse worlds themselves: the content, the places where you create the content and experiences for your users. But if you think near term, the good news is that this metaverse is more an evolution than a revolution. A lot of what is needed to build the metaverse exists today, so you can already build certain versions, certain visions, of the metaverse.
Of course, if you start to look a little bit further out and you really want to deliver the full vision, and you start to do your technology decomposition, that’s where you find there are a number of key technology domains you need to look at. And in these key technology domains there are probably technologies that have yet to be invented.
If you think five or 10 years from now, we won’t be able to train every single system and the AI of each system. These systems will have to learn by themselves, with minimal help from the developer or the user. So that’s self-learning AI. Another one is brain-computer interfaces.
You want high-bandwidth access to the brain, to interface faster with this kind of world. So with existing technology we can build a metaverse today, but the longer-term vision will need more technology, and potentially a step change in some of the hardware that is needed to build it.
Matt: So I agree with everything Remy said, but I’m also going to be slightly boring, because I’m going to throw this one in here as well. From a near-term perspective, what we actually need, and really don’t have, is the appropriate laws, legislation and regulation for the metaverses that we’re building today.
Because, for example, say I’m going to build my metaverse and I choose a particular platform to build it on, and bear in mind that it can incorporate an entire city, as some countries are trying to do. Now, what happens when the company hosting that platform goes bust?
So this is where we also have to take a look at the boring things: the regulation, the litigation, who’s responsible when things actually go wrong. We’ve already seen a lot of virtual reality trademarks being registered by third parties that have nothing to do with the original trademark owners.
In addition to that, when we start looking at building worlds, we’ve also got to remember that, as humans, we think linearly. I think: I go onto a virtual reality platform, I build my virtual reality world, and that’s it. But what happens when somebody else then goes onto that same platform, goes into my world, and starts building their own virtual reality world inside my world? We end up with this kind of Disney multiverse madness. So when we start thinking about the metaverse, we shouldn’t think about it in terms of linear thinking.
I build one virtual reality world on one platform, but you can keep building an infinite number of worlds within an infinite number of worlds. Which takes us back to things like regulation and litigation: it just gets crazy when we start talking about compliance, it’s very difficult. And when we talk about things like auditing, I get a lot of organizations, particularly their CFOs and the office of finance, saying: how do we actually audit what’s going on in the metaverse? We’re doing all these different things and we have no idea how to keep track of them, how to monitor them, how to report on them. Adidas, for example, has been selling NFTs and the like. How do you actually report on that?
But more fundamentally, from a near-term perspective, any new thing, whether it’s a technology, product, service or whatever, has to have benefits over the old. So this always brings us back to: what problems is the metaverse trying to solve today?
Once we know what those are, and once it’s able to demonstrate it can solve those problems more effectively, then we start using it in the near term. These are things like training, for example: rather than sending you into the heart of a nuclear power station to get irradiated to teach you about nuclear rods, or whatever it happens to be, and how to decommission them properly, I can just do all of that in virtual reality. Backing all that up, it really comes down to: what’s the problem you’re trying to solve first? Can you solve it in a better way than anyone else has managed to solve it today, using the metaverse? And then, once you’ve solved it, what about litigation, regulation, compliance, reporting, safety and security, as Remy mentioned, and so on and so forth?
Geof: You bring up so many good points, and it really starts me thinking about both the near term and the long term of metaverse-based applications. You’re talking about this whole scenario where somebody owns the metaverse and they go bust, or, in the case of Ready Player One, they die. What happens? I’m wondering, both of you, and maybe Remy, you can kick us off: do you think the longer term is going to be more Ready Player One or The Matrix?
Remy: I love that one. Let’s be very practical here: how could we end up in the Matrix scenario? What are the steps, and where’s the fork in the road?
To actually get into the Matrix scenario, I think we first need to have already agreed to live in a virtual reality, at least part of, maybe most of, our time. So it means we are already living in some kind of Ready Player One world, and we have already agreed to do that.
Part of, or a lot of, our life is there. But that’s not enough. If you want to be in the Matrix, you also need to have trained autonomous systems around you to take care of everything you would normally do if you were not in the virtual world: all your day-to-day tasks, down to something as simple as eating.
So it means we will have created a machine that feeds you: we will be living in the virtual world but still be fed by this machine. This needs to exist, along with all the similar kinds of systems that will take care of your day-to-day tasks. Which means at some point we will have created them.
Now, once we have created them and all these autonomous systems exist, then maybe that’s where an AI that controls these autonomous systems suddenly takes control over this world, over these autonomous systems, and starts to blur the line between what’s real and what is not real, so that you no longer really understand which world you’re in.
So it’s more like Total Recall in that case. And once you get to that stage, where you have autonomous intelligent systems around you and end up with some kind of robot which autonomously builds other robots, that’s the fork in the road. Go past that fork, and we can certainly end up in the Matrix scenario.
But the good news is that it’s something we can monitor in some way, and maybe, hopefully, control.
Matt: So I’m actually going to go practical as well, but I’m going to show you how the Matrix already exists. In parts, anyway. When we look at the metaverse in the shorter term, it really is that kind of Ready Player One space.
We go in, we do stuff, we’re wearing haptic suits, we’re in full joyride mode, and we’re off in different virtual worlds doing whatever it happens to be, whether we’re solving puzzles like in Ready Player One, taking out giant monsters, and so on and so forth.
So that just looks like a crazy world. And actually, Ready Player One probably looks like less of a crazy world than the one we’re living in today. I’m just putting that on the table. Now, when we look at the Matrix, we already have all the different technological components that we need to live in the Matrix.
Unpacking this in full futurist mode: over in China, they created something called a triboelectric nanogenerator. TENGs, as they’re known, can actually go into the human bloodstream, and they generate electricity from the flow of blood in your body, a little bit like a regular turbine does. We can then wirelessly transmit that energy directly to any kind of machine.
So while in the Matrix everyone was plugged into the big giant towers like a human battery, we actually don’t need to be plugged into the giant towers, because shouldn’t the aliens just have had wireless electricity transmission? They’re not that advanced; we’ve got that.
So anyway, we’ve already proved, scientifically, that science fiction becomes science fact: we can actually turn the human body into a battery. Now, when we look at the Matrix, on the one hand we could use a creative artificial intelligence to automatically create the world for us, but then push that into our brains.
And we’ve actually seen an example of that. When we talk about brain-machine interfaces, most people are used to the idea of something like a skull cap that reads your brain signals and converts them into text or images. I’ve got some great videos of that. So we’re all used to this concept of using technology to read what you’re thinking.
Three years ago, we managed to prove that you can actually use artificial intelligence and brain-machine interfaces to push information into people’s heads. So when we talk about the Matrix, we talk about Trinity. You know, when Trinity goes up to the Huey and says, “I need to learn how to fly a Huey, now,” and all of a sudden the knowledge is uploaded to her brain, and she gets in and flies herself and Neo out over the skyscrapers and the skyline. We’ve already done that. And the reason we’ve been able to do that is because the brain is plastic.
Now, for anyone who thinks knowledge uploading is impossible, I’m going to tell you that two plus two equals four. I’ve just uploaded knowledge to your brain, but I’ve done it in a biological way. What scientists have figured out in the labs is how to upload, or transmit, knowledge to your brain using technology, not using language or body language or text or video, or whatever it happens to be that we use today.
So when we look at the Matrix, we’re already way beyond that. And then we also have holograms as well. Remy went slightly off-script and brought up Total Recall: I can download your memories. We’ve got a neuroprosthetic chip that is used for Alzheimer’s, and what it does is read your biological brain signals and convert them into ones and zeros.
Now, for Alzheimer’s patients, this improves their memory retention by 30%. But if I can convert your brain signals into ones and zeros and store them on a computer chip in your head as a memory, isn’t that memory downloading? And couldn’t I then take those ones and zeros and push them into the cloud?
Done it. Have a look at that one; it’s healthcare tech. And when we look at the Matrix: yeah, I’ve got holograms from Blade Runner 2049. We’ve got BYU, who have recreated the Star Wars Leia hologram using femto lasers. No augmented reality, no glasses, nothing like that.
So I will take your science fiction day in, day out, and I will trump you with science fact day in, day out.
Geof: Thank you both. Wow, that has been very thought-provoking. But now I want to bring us all back to our current podcast reality: lacking the power to bend time and space, we’re sadly out of time.
Thanks so much to you both. I’m feeling a little bit more in tune with the future as a result of the insights we’ve heard today. Speaking of the more immediate future, we look forward to bringing you further glimpses of it soon in the next episode of Arm Viewpoints. Thanks for listening today.