The ethics of androids and autonomous systems has fascinated me since childhood. Asimov’s Three Laws of Robotics (later joined by a Zeroth Law) unfolded in a series of classic cautionary tales describing just how badly those simple rules could and would go wrong. I have been searching for effective ethical rules for robots ever since.
Before we talk about the ethics of robotics and AI, we must understand the goals and limitations of any public discussion of ethics. Ethics are like emotions: everyone has some, yet they aren’t always positive—or, for that matter, equal, informed, or appropriate. We often try to fix this by oversimplifying, searching for solutions that fit all robots and all people at the same time.
One such example is the media’s favourite, the trolley problem. But the trolley problem is a philosophical tool for exploring hypothetical situations, not a prescription for real-world action. It is deliberately light on context, and to solve real-world ethical challenges, context is vital.
Who, What, When, How, Why
That context comes from the questions all good journalists are trained to answer. The first is ‘who’ – who is affected, who is responsible, and who was in the room making the decisions. Then we need to compare that ‘who’ with the rest of the world. Are these ethics inclusive? Are they just? Are they comprehensive? And are they empowering? For example, should developed countries, whose profits from depleting the Earth’s natural resources now afford them the privilege of carbon neutrality, get to control the actions of the developing world? Should male judges get to make decisions about women’s bodies? You can see how important the ‘who’ becomes.
We must then follow ‘who’ with ‘what, when, how and why’. In my experience, many published whitepapers, reports or policies restrict themselves to the ‘what’—and, if we’re lucky, perhaps the ‘why’. That may be informative, even educational. But they stop short of suggesting any actionable outcome, and can even come across as simple virtue signaling or corporate theater.
Growing preoccupation with AI ethics
Since 2010, investment in robotics and AI has skyrocketed. According to CB Insights, approximately $80Bn has been invested in AI and robotics startups across more than 5,000 deals in the last five years, with a high of $26.6Bn invested in 2019. Along with the exponential increase in funding, the quarterly earnings calls of the Fortune 500 show an increasing preoccupation with the ethics of AI, autonomous systems and robotics.
Attempting to steer CEOs through the adoption of these technologies, PricewaterhouseCoopers (PwC) conducted a recent study of 59 AI ethics or principles documents from across the world. Certain topics are raised in almost all of them, and some topics are not raised at all. And even though accountability is cited in more than 75 percent of all documents, they discuss only the accountability of the robots or AI—not accountability for the ethical principles themselves.
To put it another way, it doesn’t matter what sort of ethics principles, oaths, or guidelines you have if they don’t include calls to action and accountability metrics. Arm’s 2019 AI Trust Manifesto is perhaps better than many similar documents I’ve seen because it includes calls to action. But, by what metrics will we know the results of any actions?
Taking action is what matters right now
As a roboticist, I am particularly concerned about AI-based ethics frameworks because, in the last couple of years, AI regulations and policies have subsumed robotics as a subset of AI. And yet robotics brings its own very specific physical issues and challenges. On the one hand, robotics is simply embodied AI, and inherits all of the issues of AI. But on the other hand, robotics turns some of those ethical issues into extremely dangerous safety issues, and so I consider robotics an early warning system for significant threats: a robot canary in an AI coalmine.
If you’re still unclear on the difference between the ethics of AI and the ethics of AI robotics, I recommend reading the EPSRC Principles of Robotics. In 2010, the UK’s Engineering and Physical Sciences Research Council (EPSRC) started holding workshops with experts across many disciplines in order to minimize issues and maximize the social benefit of these new technologies. The EPSRC’s five simple principles for robotics and AI can be related to existing social and legal frameworks, and are intended for the roboticists, not the robots—or, for purely software-based AI, for the builder, not the ‘brain’.
These principles are the closest we get to actual advice on taking action. And taking action is what matters most now. Actions will almost certainly differ from place to place, with different cultures, consumer laws and commercial regulations. But we can still get started right away—for example, the fifth of the EPSRC’s principles, that robots should always be identifiable, could be introduced immediately. Around the world, the vast majority of vehicles must have some form of registration number, license or identification plate, and this alphanumeric ID is usually required to be publicly visible.
License plates for robots?
The reasoning is obvious: vehicles are capable of doing harm to those around them. Fleets of robots are still quite a novelty, but they are already found in some hospitals and supermarkets, and, increasingly, as ‘cobots’ in manufacturing environments. Each deployment often consists of several robots from different manufacturers, yet these robots have no distinguishing features or markings. How can we expect to report an accident or an issue if we can’t identify the robot?
And then, how can we be sure that a given robot, or its software, has not been hacked or hijacked? Before we can even ask who is responsible, we have to be able to identify which robot we are dealing with. Requiring robot registration and a visual ID gives us a good starting point.
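To make the idea concrete, here is a minimal sketch of what a license-plate-style robot registry could look like in code. Everything in it is hypothetical—the ID format, the record fields, and the class names are invented for illustration, not drawn from any real registration scheme—but it shows how a publicly visible alphanumeric ID could map back to the parties a bystander would need to reach after an incident.

```python
# Hypothetical sketch of a robot registry: a visible alphanumeric ID
# (the robot's "license plate") maps to the responsible parties.
# All names, fields, and ID formats here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class RobotRecord:
    robot_id: str       # publicly visible ID, e.g. "SVR-0042"
    manufacturer: str   # who built the robot
    operator: str       # who deployed it and is legally responsible
    firmware_hash: str  # expected software fingerprint, to help spot tampering


class RobotRegistry:
    """Maps visible robot IDs to registration records."""

    def __init__(self) -> None:
        self._records: dict[str, RobotRecord] = {}

    def register(self, record: RobotRecord) -> None:
        # Each plate must be unique, just like a vehicle registration.
        if record.robot_id in self._records:
            raise ValueError(f"duplicate robot ID: {record.robot_id}")
        self._records[record.robot_id] = record

    def lookup(self, robot_id: str) -> Optional[RobotRecord]:
        # An incident report starts here: plate -> responsible parties.
        return self._records.get(robot_id)


registry = RobotRegistry()
registry.register(
    RobotRecord("SVR-0042", "ExampleBot Inc.", "City Hospital", "sha256:abc123")
)
record = registry.lookup("SVR-0042")
```

The design choice mirrors vehicle registration: the lookup answers ‘who is responsible?’ immediately, while the stored firmware fingerprint gives investigators a baseline for asking whether the robot’s software has been tampered with.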
That’s just one action out of many we can start taking now. In my experience of studying waves of innovation entering our society, each new industry moves forward when companies proactively address issues that enhance the reliability, interoperability, accountability and quality of their products. The vaunted entrepreneur who ‘asks for forgiveness, not permission’ and ‘moves fast and breaks things’ does no one any good, and almost certainly doesn’t build anything that lasts.
Silicon Valley Robotics is rewarding robotics companies that push the robotics industry forward with our inaugural Industry Innovation and Commercialization Awards. The deadline to submit an entry is September 22 and we will announce the first winners on October 22 2020. Companies who exemplify good practices and who build good robots should be recognized and rewarded.
I am also campaigning for the development of a global ‘ethical ombudsperson’ network for new technologies like robotics. The network would hear the complaints of ordinary people, collect evidence of the use and misuse of technologies, and could then both inform people about best practices and hold people accountable for bad practices based on local regulations.
One of the biggest challenges facing an ethical approach to a new technology is uncertain jurisdiction, alongside a lack of evidence of potential issues. Hence the proposal to create a global ombudsperson network, which can collect and share information about ethical issues.
New technologies are moving rapidly, and they are very powerful. That means our approach to ethics has to move equally rapidly and be equally effective. We need to walk the walk, not just talk the talk.
Join Andra at Arm DevSummit 2020
Since Asimov’s Three Laws of Robotics, there have been hundreds of ethical guidelines developed by well-meaning groups and global initiatives. I’ve been part of some of those initiatives and want to bring you the signal in the noise. Join my free session at Arm DevSummit on Tuesday, October 6 and hear me discuss my Five Laws of Robotics, based on many inputs but in particular the EPSRC Principles of Robotics, from UK experts Alan Winfield, Joanna Bryson and more.
I’ll also be chairing a session with the Arm Gen 2Z ambassadors on Thursday, October 8. Join the 4IR: Designing an Ethical Future for the Next Generation session and hear how these optimistic architects of tomorrow hope to influence next-generation technologies.