I joined Arm because of its amazing people and world-class technology. But while I’m constantly excited by the possibilities of what we can achieve, as Arm’s General Counsel I must also consider the potential harm our designs might cause if they don’t perform in the way we expect, or were put to a use we did not intend.
That dilemma comes to the forefront when I think about artificial intelligence (AI), and it's why, two years ago, I formed an AI Ethics working group at Arm. It's also why we have now produced a guiding Arm AI Trust Manifesto to shape our thinking and practices around AI design for the foreseeable future. And it's why we chose to launch it at Web Summit 2019: to build industry-wide support for the Manifesto's principles as a first step in defining and standardizing practical ways to operationalize ethics standards.
Our role in ethical AI
Before getting into the details of the Arm AI Trust Manifesto, let me first talk about why Arm cares about ethics and the role we can play in bringing about a world built on trusted AI devices.
“We’re calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design system principles.”
Arm technology is already enabling AI processing in billions of advanced products, including the latest mobile devices. But while our engineering is fundamental to the AI revolution, we do not exert direct control over all critical elements of AI systems. So, as trust in AI must be global, we need to join with others to achieve a strong and sustainable framework.
We recognize that time is limited so we’re now calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design system principles. These principles must be debated, agreed and adhered to as a foundational building block on which all AI systems can be built.
Technology will always move faster than regulation. Industry must therefore work closely with regulators, universities and society to define the right baseline standards: comprehensive enough to meet our agreed ethical objectives, yet not so onerous that good AI entering the market is held up unnecessarily by fear. This will require a universal ethical framework that avoids the regulatory fragmentation that could impede the global adoption of trustworthy AI.
To understand the engineering basis for ethical decision making, we first need to describe the root of an ethical AI device.
The building blocks for ethical AI devices
People—philosophers, politicians, religious leaders, members of society—have argued over ethics for thousands of years. We’ve sought to engineer ethics into human society through debate and, once agreement is reached, we have codified the ‘rules’ in written or verbalized ways. This is both similar and entirely dissimilar to what we are attempting to do with machines.
Our vision of an ethical AI machine, by comparison, is a device programmed to always make decisions perceived as fair by most right-minded people, with those decisions drawn objectively from data that is free from detectable bias.
However, the comparison between human and machine ethics breaks down in two further ways. First, machine rules must be universal, while human ethical agreements tend to be local or regional. Second, society will naturally tolerate some errors in human decision making, even against defined rules, but it will not accept errors from a machine built to be 'ethical.' So questions of absolute accountability (in legal terms, liability) must always be answered.
This higher performance bar for machines was borne out when we worked with analyst firm Forrester to survey 50 global autonomous-driving experts in 2018. They told us that carmakers expect to have to prove future self-driving vehicles are at least 10x better than humans in performance and AI decision making before the public will accept them as mainstream.
So the challenge ahead is clear: we must get to near-zero casualties from machine-made decisions. That means building the most robust technology framework ever conceived, covering every aspect of AI design and delivery, including how engineers are taught to think as well as to code and build. Starting the debate on exactly what that framework looks like is the precise objective of the Arm AI Trust Manifesto.
The Arm AI Trust Manifesto
The above is an abridged overview of the Arm AI Trust Manifesto's guiding principles. Read the full Manifesto here.
What happens next?
We will now seek to bring technology partners together to build a coalition of parties who can influence AI ethics thinking, and the engineering needed to support the creation of more ethically robust devices. We already partner, or have relationships, with the broad cross-section of influencers we need to reach, both inside industry and in public bodies.
Great work is already being done to advance AI ethics thinking, but we believe the task now is to bring influencers together in practical ways. For example, I personally would like to see us build prototype devices we believe are ethical, explain why we think they pass the test, and then try to break them. In effect, this would be a new form of ethics hacking to test the security, design ethos, data sets and limits of interrogation of an AI device. This won't happen immediately, but this sort of leap can't wait too long.
This is similar to our work on security in the Digital Security by Design project, where we're currently designing a new test board to run a prototype architecture built to be inherently more robust against cyber attacks. It's a partnership with the UK Government, several UK-based universities, and major industry partners including Microsoft and Google.
This level of cooperation is exactly what we need now to start laying a solid foundation for AI machines that are born ethical.
Discover more about Arm AI solutions for intelligent computing and how Arm is bringing AI and machine learning to the network edge, endpoint devices, and applications everywhere.