Advancements in artificial intelligence (AI), machine learning, and the management of patient data are continually helping the medical industry deliver better medical approaches and improve patient experiences. But what will the healthcare industry look like in the next 10, 15, or 20 years?
In this episode, Geof Wheelwright is joined by industry futurist Matthew Griffin who will use his crystal ball to outline where he sees healthcare changing and what future developments and trends to expect. Joined by Arm Fellow, Rob Aitken, the conversation covers a number of factors, such as the future landscape, the opportunities for hospitals, the potential challenges along the way, and the key ethical and security concerns that may arise.
In the recently published 2022 Arm Ecosystem report, Dipti Vachani describes how enabling technologies in hardware and software are sparking partner innovation in healthcare, where, for example, researchers want to be able to more quickly and accurately spot patterns in data that can lead to improved diagnoses.
Geof Wheelwright: Welcome back to the Arm Viewpoints podcast. We have a bit of a special episode today that looks out into the future. Our guests today, both at least part-time residents of the future, are Rob Aitken, Fellow and Director of Technology at Arm, and Matthew Griffin, CEO of the 311 Institute, in an episode we call the Futurist and the Fellow. Rob is responsible for technology direction at Arm Research; he works on exciting things like distributed systems, low-power design, technology roadmapping, and next-generation memories. Matthew, meanwhile, is founder and CEO of the World Futures Forum and the 311 Institute, a global futures and deep futures consultancy working between the dates of 2020 and 2070. An award-winning futurist and author of the Codex of the Future series, Matthew's work involves identifying, tracking and explaining the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society. Welcome to you both. Given everything that we've all been through in the last 18 months, it should probably come as no surprise that I want to start the conversation by talking about the healthcare industry in the next 10 to 15 years. And Matt, perhaps you can kick us off?
Matthew Griffin: Yeah, sure. So if you have a look at the global pandemic, the one thing that it has really done is put all of the trends that were already here and present on steroids. It's also pushed a lot of emerging technologies into the spotlight. So for example, we saw 3D printing come to the fore to help people get ventilators back online in Italy. We've seen genomics, artificial intelligence and supercomputers being used to help create the new vaccines, which were created easily 90 to 95% faster than previous vaccines we've seen. We've seen digital twins coming online, and we've seen regulators pushing autonomous vehicles out into the real world much faster in order to deliver essential goods and supplies. We've seen drones being used to sanitise streets and to identify and track people walking around with COVID. And we've seen robots going around different hospital locations, helping serve patients, etc, etc. So a lot of the emerging technologies that were already ticking along and making relative inroads into the healthcare space have themselves been put on steroids. And when we start looking out to the next 10 to 15 years: when you have a look at vaccine development, typically it would take anywhere between five to 10 years to develop a vaccine for other epidemics like Ebola, H5N1, SARS, MERS, the kinds of epidemics that we've seen in the past. With Coronavirus, we managed to create a viable vaccine in three months. But what really slowed down some of the approvals was not just the regulation, but the fact that we had to do human clinical trials. So when we have a look at the vaccine space, we've actually had quite a lot of new innovations coming through. On the one hand, we've had artificial intelligence, supercomputers and new genomics technologies being used to develop the vaccines in the first place.
When it came to clinical trials, we had what we call humans-on-chips, or labs-on-chips. This is where, for the sake of about $1, you're able to create a sort of replacement biological system that mimics human biology, but you can do it in something that looks a little bit like a microscope slide. In addition to that, when we have a look at general vaccine technologies, we've got mRNA technologies, we've got contagious vaccines coming through, we have vaccines in pill form and vaccines in aerosol form. So for example, quite a lot of the newer, second-generation COVID vaccines are also available in aerosol form, so we get away from the deployment and distribution issues that we've seen, for example, with the first series of vaccines being rolled out. And then when we start looking at post-pandemic issues: one of the biggest problems that a lot of regulators and a lot of pharmaceutical organisations had was trying to get their vaccines through clinical trials, trying to prove that they were safe in real humans. Now, before the pandemic we saw the rise of what we call digital twins. A digital twin is where you take a physical thing and you digitise it, so you kind of create a digital replica. Now with humans, that's very, very difficult to do. But we've now seen quite a number of digital twins for humans being developed that allow different pharmaceutical, biotech and healthcare organisations to test different things on those digital copies of real people. So rather than testing, say, a vaccine on a real person and then finding out that it's harmful, you can test that vaccine in digital form in a digital human. And if it doesn't go so well, then you just scrub it all, and you start again.
So there’s a huge amount of innovation that’s come through, not just in spite of the pandemic, but actually because of the pandemic.
Geof Wheelwright: So I'm going to turn to another resident non-digital human, Rob, to give us your perspective?
Rob Aitken: Thanks, Geof. Yeah, I agree. Matthew has painted a great picture of just what we as humanity can do when faced with a massive challenge and general agreement that the challenge needs to be solved. We were able to develop vaccines, as Matthew said, extremely quickly, get them tested and incorporate a whole bunch of new technologies to do that. Despite that, we still see some interesting cultural resistance, say, to vaccines. So humans remain humans, and even in the presence of a life-saving technology, some people are still reluctant to try it for whatever reason, so I think those kinds of challenges will remain. But the technical parts, I think, are really quite interesting. Picking up on the digital twin example: in order to make a digital twin of anything work, you effectively have to choose which parts of the system you're going to model and which parts of the system you're going to declare to be unimportant. And what we'll see over time is refinement of that. So in the vaccine case, if your digital twin says this vaccine is a bad thing, it probably is, and away you go. But if it says it's a good thing, it might not be; you might actually have to do additional checking. So there's a lot of opportunity ahead in this space, but there's also a lot of potential pitfalls that we have to be wary of as we move forward.
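[Editor's note: the asymmetry Rob describes, where a digital twin's "harmful" verdict can be trusted but its "safe" verdict is only provisional, can be sketched as a triage step in code. This is a hypothetical illustration: the candidate names, the toxicity score, and the threshold are all invented, and a real digital twin would be a complex biological simulation rather than a one-line rule.]

```python
def twin_predicts_harmful(candidate: dict) -> bool:
    # Stand-in for a digital-twin simulation: flag anything whose
    # (made-up) toxicity score crosses a threshold.
    return candidate["toxicity_score"] > 0.7

def triage(candidates: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split candidates into 'rejected by twin' and 'still needs human trials'."""
    rejected, needs_trials = [], []
    for c in candidates:
        if twin_predicts_harmful(c):
            rejected.append(c)      # trust the negative verdict, stop here
        else:
            needs_trials.append(c)  # a positive verdict is only provisional
    return rejected, needs_trials

candidates = [
    {"name": "vx-a", "toxicity_score": 0.9},
    {"name": "vx-b", "toxicity_score": 0.2},
    {"name": "vx-c", "toxicity_score": 0.4},
]
rejected, needs_trials = triage(candidates)
```

The point of the asymmetry: the twin cheaply filters out likely failures, but everything it passes still goes on to real clinical trials.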
Geof Wheelwright: So when you start to look at the healthcare areas that would particularly benefit from these technologies, where do you see the one or two biggest immediate benefits? Maybe around heart disease or cancer, or perhaps some other area? Maybe you want to start this one off, Rob?
Rob Aitken: I think there's a couple of places. I think genetic engineering of various sorts, as we saw in the vaccine development, is a really important and promising area. Gene splicing, gene editing, and all of these things are really technologies that are on the verge of taking off, and the pandemic has accelerated people's awareness of them and will continue to accelerate what we do with them. I think robotic surgery, again, is another area that's poised for growth in the next few years. And there's also a less sexy version of it: there's just plain automation in healthcare. I broke my elbow a couple of months ago, and I was sitting there in the emergency room looking at the equipment that they had, and some of it was running Windows XP. And it's like, okay, maybe we can advance technology just by getting everybody onto the same page and getting all of the equipment running the latest and greatest, more secure systems, and so forth. So there's a lot of really cool stuff that can happen, but there's also a lot of really mundane things where there's great opportunity going forward.
Geof Wheelwright: And I’ll pass that to you, Matthew, for your thoughts on that?
Matthew Griffin: So I think one of the most exciting things in healthcare is the concept that increasingly, emerging technologies are able to decentralise primary and secondary healthcare services. Now this again was something that was supercharged by the pandemic, but it was a trend that was already present before the pandemic actually hit. A lot of us are by now familiar with the concept of telehealth; when you have a look at telehealth usage during the pandemic, it increased over 40-fold from pre-COVID baseline levels. A lot of us really think of telehealth as a video call with a doctor, with a professional healthcare worker on the other end of a screen who asks us a series of questions that we walk through; in that particular case, it's not too dissimilar to being in a GP surgery. Now, when we have a look at the benefits of telehealth: the average GP consultation, from leaving your house to getting back to your house, typically takes about 120 minutes.
So firstly, telehealth knocks that down to about five minutes, because that's typically the amount of time that you have for each appointment. That's a huge benefit to an individual's personal productivity and to the environment, in terms of reduced travel and all that sort of stuff. However, when you consider that you can use artificial intelligence, the cameras in your devices, the sensors in your devices, machine vision, and all these other kinds of things, increasingly what we have in our hands is not a smartphone, it's a tricorder device. Here's what I mean by that. If an artificial intelligence was listening to this podcast, it could detect the different inflections in my voice. When you get ill, your voice changes; it gets a little bit deeper, a little bit sort of Barry White-like if you've got a cold. Artificial intelligence is increasingly able to sense those patterns and those changes and say, well, actually, I think you have a cold. AI, when combined with your smartphone's microphone, can increasingly detect PTSD, dementia, anxiety, depression, and all kinds of things, which is why we saw tele-psychiatry increase in such a big way during the pandemic, aside from the other obvious reasons. Then look at the use of artificial intelligence and machine vision: increasingly, if you're doing a telehealth consultation via a webcam, that video feed can be analysed by an artificial intelligence to look for signs of genetic abnormalities. You can find pancreatic cancer signals, because when you get pancreatic cancer your eyes go slightly yellowy and your skin changes in a particular way. You can read people's heart rates and their blood pressure, all just from a webcam feed, before we even start talking about using machine vision to analyse different types of skin cancers, and all these sorts of other bits and bobs.
So when we think about telehealth today, it's very much "I have a video call with my doctor". As we move over the next five, let alone 10 years, increasingly we will be able to use all these new technologies to really create something that we call the quantified self, where these different technologies will be able to capture, aggregate and then analyse a whole variety of different biomarkers and biometrics from the individual to analyse your health and wellness. When you have dementia, the language that you use, the vocabulary that you use, the tones that you use, the frequency of words that you use, all change. A lot of these newer technologies are 86% plus accurate, which is surprising a lot of healthcare professionals. So that's the first thing I'd say: artificial intelligence puts healthcare on steroids, and we have a lot of new ways to decentralise primary and secondary healthcare. Similarly, when we have a look at things like 5G with robotic surgeries, we can decentralise surgeons in a whole variety of new ways. On the one hand, I can use 5G to have a surgeon based in New York and a patient in a theatre in California, and the surgeon in New York can operate on the patient in California across a 5G network. And when we talk about the impact of COVID on how health professionals actually performed their duties: over in London, we actually had people at Barts (St Bartholomew's Hospital) who were using HoloLens mixed reality technologies, and they'd be based at home because they were in lockdown. They'd been told to quarantine, but they could actually still conduct surgeries from home, remotely. So you start to see all of the different technologies that we are combining together to fundamentally revolutionise healthcare.
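[Editor's note: the language-based screening Matthew describes, where vocabulary and word frequency shift with conditions like dementia, can be sketched in miniature. Real systems use far richer acoustic and linguistic features and trained models; here the single feature (vocabulary diversity), the threshold, and the sample transcripts are all invented for illustration.]

```python
def type_token_ratio(transcript: str) -> float:
    """Vocabulary diversity: unique words / total words (one crude biomarker)."""
    words = transcript.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def flag_for_review(transcript: str, threshold: float = 0.5) -> bool:
    """Flag transcripts whose vocabulary diversity falls below a threshold."""
    return type_token_ratio(transcript) < threshold

varied = "the quick brown fox jumps over a lazy dog near the river bank today"
repetitive = "the thing the thing is the thing with the thing and the thing"
```

In this toy version, the repetitive transcript would be flagged for a clinician's review while the varied one would not; the flag is a prompt for a human, not a diagnosis.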
It's kind of no wonder that by 2028, a lot of the healthcare and biotech organisations that I've talked to have said that we will reach something called escape velocity. Escape velocity is the point in time where new technologies and new medical advances will add more than a year's worth of life to everyone for every actual year that passes. So 2028 is quite a pivotal moment. And when we start talking about being able to extend people's lives, this is where all these emerging technologies come in, especially artificial intelligence and the usual kind of suspects like 3D printing. Take 3D-printed human hearts: we now have a 3D-printed human heart out of Israel, which is small, but in the future they get bigger and bigger, and in about a decade's time, if you have a heart attack, you simply go into a hospital, they will take your stem cells, they will 3D print your new human heart, and then implant that into you. There's no fear of rejection, and you walk out one or two days later. You don't have to wait for someone else to die. So we are fundamentally changing the dynamic of what's possible with healthcare and life extension using a lot of these new tools and technologies.
Geof Wheelwright: Rob, in listening to all the fascinating possibilities that Matt just outlined, I'm wondering about your thoughts on having technology analyse humans instead of doctors. Do you feel that technology could be in a position to actually replace human ability? Or is it something that complements humans? How realistic is that vision?
Rob Aitken: It's an excellent question. I think complement is where I would go. As an example, back in the 80s, when expert systems were hip, I worked on one that did liver disease diagnostics. It effectively operated within a space of symptoms and would pop up the most likely disease that you had based on those symptoms. It was unable to extrapolate; it was a very primitive system, but it led me to my observation on artificial intelligence, which is that whatever it is, this isn't it. And that's where we're at, always. So we find that when some new technology comes along, say an artificial intelligence that's able to identify that you have dementia, what do you actually do with that? What does a compassionate doctor, or a compassionate family, do with that information? I know that when my dad suffered from dementia, he suffered from it for years before he would admit to himself that he may in fact have had it; before that he was just in a state of denial. His vocabulary was so good that if he forgot a particular word, he would come up with another word that meant roughly the same thing and go with it. So the complexities of being human, I think, still exceed the capabilities of artificial intelligence technology. But that doesn't mean that the technology isn't capable of doing some extremely interesting things, and effectively broadening the space of the medical professionals who can now say, "Alright, I've got this information from this array of sensors that Matt was talking about, from this programme that's analysed all of that sensor data, and this now gives me a broader space to operate in, to think about, and to apply a kind of human creativity over and above what the systems are able to do." So to me, all of this is about augmenting human capability and just allowing medical practitioners to do things that they wouldn't have been able to do otherwise.
Geof Wheelwright: And you've got me thinking as well about security. A lot of this is going to involve patient information, and you've got systems that are going to be interacting with it. A lot of work has been done to develop regulatory frameworks, for example, around that patient information. But you're going to have a lot more information coming in through AIs that are analysing things, and perhaps coming up with inferences, conclusions, that kind of thing. So what do you do to safeguard all of that data? And I'll make this a kind of free-for-all question, so either one of you can jump in on that.
Matthew Griffin: When we have a look at privacy: traditionally, all of this data that we're capturing would be collected at the edge and then sent back to a data centre to be analysed. But this is where the concept of federated artificial intelligence becomes interesting, because federated artificial intelligence lets us learn from all that data collectively while it stays at the edge, effectively anonymously, so that we can still use it. From a data security perspective, this is probably worth throwing in as well: when you have a look at the dark web, which is where the vast majority of criminals operate, in these data warehouses, these data marketplaces, the most valuable kind of data in the world today is health data. Criminals want health data because they can take it and then go and scam a load of insurers with it. In fact, if you have a look at the US, medical fraud of that kind represents about $365 billion a year; it's a very big part of serious organised crime's revenues now. So there's a lot left to do. And then from a security perspective, just throwing this nugget in there as well: if you now have a connected pacemaker and I can hack it because its security is lax, I can encrypt your pacemaker and then ransom your life. So as opposed to ransoming your city or your business, I now go and attack a healthcare organisation and say, I've now encrypted all of the firmware and data sets on people's implanted medical devices, and if you don't pay us in Bitcoin or Monero, I'm going to do something you don't want me to do. It's a whole wormhole in itself. And when we have a look at the way that this data can be exploited for nefarious gain, there's a lot of upside for criminals.
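[Editor's note: the federated idea Matthew mentions can be sketched minimally: each hospital or device computes an update on its own data locally and shares only model parameters, which a server averages, so raw patient records never move. The "model" below is just a weighted mean estimate, and the hospital readings are invented; real federated learning averages neural-network weights the same way.]

```python
def local_update(records: list[float]) -> tuple[float, int]:
    """Train locally: return the local mean and the local sample count."""
    return sum(records) / len(records), len(records)

def federated_average(updates: list[tuple[float, int]]) -> float:
    """Server step: weighted average of local models, never seeing raw data."""
    total = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total

hospital_a = [120.0, 130.0, 110.0]   # e.g. blood-pressure readings, kept on site
hospital_b = [140.0, 150.0]
global_model = federated_average([local_update(hospital_a),
                                  local_update(hospital_b)])
# global_model matches the mean over all five readings: 130.0
```

Only the pairs `(mean, count)` cross the network; the individual readings stay where they were collected.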
Rob Aitken: Absolutely. And there's the interesting example of your surgeon as well: your surgeon in New York operating on a patient in LA, someone intercepts the transmission, and now they get to operate on the patient in LA. And depending on who that patient is, interesting consequences could flow from that as well.
Matthew Griffin: When you actually have a look at the state of, for example, US healthcare security: the FDA, I think it was, pushed out some reports a little while ago saying there are very few hospitals whose cybersecurity processes, procedures and technologies are anywhere near what they should be.
Rob Aitken: No, absolutely. I mean, in my XP example from a while ago, I was looking at those machines and thinking, I could hack this.
Geof Wheelwright: So I wonder, when you have information about how somebody might be feeling based on the tone of their voice, where does that information go? And how do you protect it? If I'm talking in a way that suggests maybe I've got, I don't know, a psychiatric break coming, that information might be useful to me and a healthcare provider, but maybe you don't want your employer having it.
Rob Aitken: I think there's an interesting piece on that too. We were talking a minute ago about how AI can analyse all these various things; well, one of the things AI can analyse is anonymized data, and de-anonymize it. So even from perfectly legitimate sources, there are ways to extract things that go well beyond the legitimate purposes the data was collected for.
Geof Wheelwright: Yeah, and I think too about devices that people are voluntarily using. You've got this data that you're collecting for yourself, but you don't necessarily have control of where that data is stored, or perhaps what's being done with it, even in an aggregated fashion. You may tick a checkbox when you're installing the software, and all of a sudden that data is available. And it's one thing for some software to tell you personally, and only you, well, you're too fat, and you need to exercise more, and you need to eat less of these kinds of foods, but it's another for that information to be shared more broadly. What are your thoughts, Rob?
Rob Aitken: I think there's a couple of issues there that are really interesting. One is just the actual data itself. My watch collects data about me, and I amuse myself by seeing, if I exercise for half an hour, how much credit I get. Some days I get 30 minutes of credit, some days I get five minutes of credit, some days I get no credit at all. And it just happens to be some combination of the watch's ability to sense what I'm doing and what I'm actually engaged in. That leads to an interesting point, which I think comes from something Matthew mentioned: we've got this idea that we're going to regulate healthcare extensively, and then we're going to have an area of stuff that's healthcare-adjacent that isn't really regulated at all. So there's an incentive for companies to push as close as they can to that boundary, to get something that's as close to healthcare as they possibly can without actually jumping into the place where suddenly they're subject to a whole bunch of regulations. And it's not just the ability of human beings to conjure up products very close to that boundary, but the ability to actually automate it, to say, "alright, this is an encoding of this particular regulation; I'm now going to build a machine learning algorithm that basically attacks it", finds out where the surface of that decision is, and then just always lives on the other side of that decision point. So my product is no longer subject to any kind of healthcare regulation, but it's awfully close.
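[Editor's note: the boundary-probing Rob describes can be sketched by treating a codified regulation as a black-box predicate and searching for the point where its decision flips. The "regulation" below is entirely invented, a toy rule saying a device is regulated if its claimed diagnostic accuracy exceeds 80%; a real encoded regulation would be far more complex, but the search idea is the same.]

```python
def is_regulated(accuracy_claim: float) -> bool:
    # Stand-in for a codified regulation, queried as a black box.
    return accuracy_claim > 0.80

def find_boundary(lo: float, hi: float, probes: int = 40) -> float:
    """Binary-search the decision surface of the black-box rule."""
    for _ in range(probes):
        mid = (lo + hi) / 2
        if is_regulated(mid):
            hi = mid
        else:
            lo = mid
    return lo   # highest claim found that is still unregulated

safe_claim = find_boundary(0.0, 1.0)
# safe_claim sits just below the 0.80 threshold: as close to "healthcare"
# as possible without crossing into the regulated regime.
```

Forty probes pin the boundary down to roughly one part in a trillion, which is exactly why a fixed, codified rule offers so little protection against automated probing.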
Matthew Griffin: Well, one of the good examples I see is, say, Under Armour. As we start having a look at smart clothing, bear in mind that as technology evolves, it gets smaller, cheaper and more ubiquitous. When we have a look at the electronics space, we've now got flexible electronics, flexible computing, flexible communications that you can sew into the fabric of any kind of sports apparel. So organisations like Adidas, Under Armour and Nike are gathering quantified-self information about your performance: you are running, this is your heart rate, this is your perspiration rate, these are the chemicals that we found in your sweat, which means you've been eating this, these are your cortisol levels, so this is your stress level, etc. They can get a lot of information about your health and wellness from that data. So are Nike, Under Armour and Adidas healthcare organisations? They can still give you a very good guide to how healthy, how well or how fit you are. But which side of the regulations do they fit on? Are they a healthcare organisation, bearing in mind that probably the only way for them to cross that line is for them to say, as your healthcare provider or doctor of choice, we recommend that you go and run to reduce your diabetes risk?
Rob Aitken: Absolutely. And I think that's a good example of something that's healthcare-adjacent, because they're deliberately avoiding becoming a healthcare provider, but they are effectively a de facto healthcare provider, because they've got enough information to make these decisions. They just want to avoid the regulatory regime because it's complicated and expensive.
Geof Wheelwright: So given all of those pros and cons, how do we ensure trusted and secure experiences around all this healthcare information? You alluded to a couple of things, Matt, but maybe you can talk in a bit more detail, and Rob, you can offer some thoughts too?
Matthew Griffin: So the first thing that we need is a group of regulators who thoroughly understand what is possible with today's technologies. I work with a lot of governments and a lot of regulators, typically G7 and G20, and when you go and have a conversation with them and say, by the way, just from your voice, I can tell with a relatively high level of confidence the likelihood that you either have or are going to get dementia, or all these other things that we've talked about, inevitably they go, "no way, that's impossible". And then I point them at a bunch of university students in Northern Europe or the US who've actually done this kind of stuff, and they go, "oh, we didn't know that was possible". So on the one hand, I would say that technology is this rocket ship. When you have a look at what's possible today, the use of virtual reality, for example, to help us improve and accelerate drug development, all kinds of things: technology is this rocket ship that is just accelerating and accelerating along this exponential curve. Regulators, on the other hand, are trying to follow that rocket ship, except they're generally on a broken bike. A regulator, government policymaker or stakeholder needs a thorough understanding of what technology is capable of today and what it's going to be capable of tomorrow, bearing in mind that a lot of regulations typically take one to three years to develop anyway. A lot of the regulators I talk to say, you know, we're lawmakers, we're not technologists, and so there's a gap there. And then you need to have proper conversations about data sovereignty, how all this data is actually shared, and ultimately, who owns it.
And ultimately, when we talk about healthcare data, I can almost guarantee that nobody listening to this podcast has control over their healthcare data. If they did want control over their healthcare data, there's no way they can get it, and no one they can go to, to get it. And if they want to see their healthcare data, it's exactly the same scenario. This is where I do tend to think things like blockchain-based sovereign ID systems for electronic health records (EHRs) become increasingly interesting. Because if I have my patient record on a blockchain system, a little bit like privacy tags for apps, I can see exactly what you've collected, who you have shared it with, and what they've done with it. And if I don't like that, I can rescind access to it.
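[Editor's note: the consent model Matthew describes can be sketched as an append-only, hash-chained log of grant and revoke events, so a patient can replay the chain to see who currently holds access and rescind it. A real system would be distributed, signed, and far more elaborate; this single-process chain, with invented party names, is purely illustrative.]

```python
import hashlib
import json

class ConsentLedger:
    def __init__(self):
        self.chain = []   # list of (entry, hash) pairs; each hash covers the previous one

    def _append(self, entry: dict) -> None:
        prev_hash = self.chain[-1][1] if self.chain else "genesis"
        payload = json.dumps(entry, sort_keys=True) + prev_hash
        self.chain.append((entry, hashlib.sha256(payload.encode()).hexdigest()))

    def grant(self, party: str) -> None:
        self._append({"action": "grant", "party": party})

    def revoke(self, party: str) -> None:
        self._append({"action": "revoke", "party": party})

    def current_access(self) -> set[str]:
        """Replay the chain to see who currently holds access."""
        allowed = set()
        for entry, _ in self.chain:
            if entry["action"] == "grant":
                allowed.add(entry["party"])
            else:
                allowed.discard(entry["party"])
        return allowed

ledger = ConsentLedger()
ledger.grant("gp_surgery")
ledger.grant("insurer")
ledger.revoke("insurer")   # the patient rescinds access
```

Because each entry's hash incorporates the previous one, the history of grants and revocations can't be quietly rewritten, which is the property the blockchain framing is really after.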
Rob Aitken: I think that's feasible for a small number of things, but I don't think it's feasible in general, partly because of the complexities of blockchain, and partly because you then wind up in the aggregation problem of "okay, I've legitimately aggregated a bunch of anonymized data from a variety of sources; that data is now off the blockchain where it started, it's in my own new space, and now I just reverse engineer the original data from that". And you bring up an excellent point on the regulation, which is that, almost by definition, it's going to be years or decades behind the actual technology. As anyone who's ever dealt with a three-year-old knows, whatever rule you conjure up, that rule can have exceptions generated at a much faster clip than you can deal with them. So I think we will actually have to rethink our whole idea of regulation: there can't be a regulatory regime that covers all this stuff, because it's evolving too quickly. You'll actually wind up having to figure out, as a society, some kind of guiding principles that aren't codified into law but that we just operate on. It's the equivalent of the rule that I finally made for my children, which was: don't do anything you know is stupid. And then when they'd done something, I would say, "Did you know that was stupid?" "No." "Really?" "Yeah." So we need meta-rules, of a kind.
Geof Wheelwright: Yeah, absolutely. And I think, and maybe this will make it the last question, what you've described, Matt and Rob, is more of an ethical framework, because the regulations are never going to keep pace with the technology. So where we're going to need to end up is with trust, and trust has got to be the bedrock of all of this. There has to be an alignment to a set of ethics. So how do you see the establishment of that, and how do we deal with ethical concerns?
Matthew Griffin: Well, I suppose for me, you're absolutely right: the future is coming at us so fast that as soon as you write one set of regulations or policy documents, they're pretty much out of date, or being superseded by new things. So this is almost where, just throwing this one in there, you can almost hark back to Asimov's rules, you know, thou shalt not kill, thou shalt not do harm, those kinds of things. And we've started seeing some of those very rough guidelines being used for artificial intelligence in the UK and in Europe, where the general principles are: if you build an artificial intelligence, it must not do harm, or cause harm directly or indirectly to people or persons, etc. So you can use that kind of language. You're absolutely right that these high-level guiding principles or guidelines are an interesting way to go. But again, in the healthcare space, there isn't really much there yet.
Rob Aitken: I think the ethics question is interesting, because it's one that humanity has obviously considered forever; for thousands of years we've wondered about it, and people have evolved elaborate ethical systems. To a large extent they're compatible, but they have interesting incompatibilities as well. I think it points to what has to be the solution to this, which is that whatever ethical framework we derive, we have to create a system where people agree to use it, because it can't be enforced by the existing mechanisms, which are just too slow and too complicated to actually enforce it. We could theorise that maybe we could have an autonomous ethical AI somewhere that solved all this for us, but there are plenty of bad sci-fi movies about such things as well. And there are interesting trolley-problem ethical dilemmas that can show up in any system, where something that seems in isolation to be either good or bad, when looked at in a larger context, is the reverse. So these are very difficult, challenging problems, and we'll see them going forward. But I think, at least as a starting point, we need to educate ourselves and our children and everyone around us in just what general ethical good behaviour is, why it's important, and why you can't rely on some external force to solve all your problems. If we could get society to that point, I think we would be a lot further ahead than we are right now.
Geof Wheelwright: Great discussion. Thanks to you both, Matt and Rob, and thanks to everyone who joined us to listen to our podcast today. To paraphrase an old saying, I'm feeling that I now know how technology will help us all be more healthy, wealthy and wise in the future. We look forward to bringing you more conversations in the next episode of Arm Viewpoints, and thanks again for listening today.