A photograph of a ship, heavily iced over in a marina, circulates widely on social media captioned “USS Al Gore Global Warming Research Vessel.” A video appears to show a national politician drunk at a press conference. Former President Barack Obama appears on camera talking in part about his successor, Donald Trump, in uncharacteristic, profane terms.
All of these pieces of content appear genuine at first glance: genuine enough to go viral. Yet the United States has no vessel named after the former vice president, and the video of the “drunk” politician was edited to make her seem inebriated.
And in the case of Obama, actor Jordan Peele appears in split screen halfway through the video, speaking the same words in his own voice. The video is a deepfake, made with Peele’s participation, and it leaves viewers with this message: “Moving forward, we need to be more vigilant with what we trust from the Internet. It’s a time when we need to rely on trusted news sources.”
Inauthentic content harms public affairs, corporate business decisions, elections—even people’s health
In a world of increasingly accessible and powerful digital technologies, the rise of disinformation and misinformation in various forms is alarming. Content can be deemed inauthentic if it is deliberately misleading, fabricated, or manipulated, or if it comes from apparently genuine sources that have in fact been impersonated. In the wrong hands, inauthentic content harms public affairs, corporate business decisions, elections—even people’s health.
The number of content moderators on social media and fact-checking organizations has exploded in recent years, yet our collective ability to keep up with disinformation is stretched to its limits.
But while technology has enabled the rise of misinformation and disinformation in the digital age, it can and must also help minimize their spread and harmful effects—responsible technology, built on ethics, safety and security, can enable sustainable impact in a connected world.
Arm has been at the forefront of defining many industry standards through the years, and we understand that content authentication starts at the silicon level.
Arm co-founds new cross-industry coalition to fight misinformation and disinformation
That’s why we’re excited about Arm co-founding the newly formed Coalition for Content Provenance and Authenticity (C2PA). Arm joins Adobe, BBC, Intel, Microsoft and Truepic in forming this cross-industry coalition. Together, we’ll address issues of misinformation and disinformation and work to establish guidelines and technical solutions toward that goal. This includes developing an end-to-end open standard for tracing the origin and evolution of digital content and ensuring an accurate record of any changes made to original content.
C2PA will work closely with organizations such as Project Origin and the Content Authenticity Initiative (CAI), which focus on implementation of content provenance standards and technologies. The Adobe-led CAI focuses on media capture, editing tools and the creative community to ensure content coming from any source can have provenance at its core.
Project Origin focuses on establishing and maintaining the provenance of content from trusted points of origin, including authentication of content through its travel and transformation through the news publishing ecosystem. Origin will champion the adoption of interoperable workflows between publishers.
Recognizing this common goal, the organizations formed the C2PA in 2021 to unify technical specification efforts under a single entity. The C2PA strives for a technology standard to be widely adopted across the web, client devices, and more broadly, anywhere people create or consume content, from professional cameras to newsfeeds on smartphones.
Our role in ensuring standards apply wherever Arm technology is used
Arm will provide its perspective from the bottom of the hardware stack—the point of capture, if you will—to ensure global standards are inclusive and adequate. We will look to ensure that these standards apply wherever Arm hardware is deployed, and add more specific technical expertise around security needs at the silicon level.
We aim to minimize the flow of misinformation by enabling anyone publishing or accessing media via the internet to demonstrate that it a) comes from where it claims to have come from and b) is in the state the publisher intended.
It’s important to be clear that we are not making judgements on the relative reliability of the content, journalist or publisher: others are working in this space. The standards that will emerge will ensure cryptographic integrity of the claims and assertions regarding provenance and authenticity and provide the means to embed those claims/metadata into the content items themselves.
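The idea of cryptographically verifiable provenance claims can be sketched in a few lines of code. The C2PA specification defines its own manifest format and uses certificate-based asymmetric signatures; the sketch below is a simplified illustration using only Python’s standard library and a symmetric HMAC key, with hypothetical names (`make_claim`, `verify_claim`, `example-news.org`), purely to show the shape of the mechanism: bind a hash of the content to a publisher identity, sign that claim, and let any recipient detect tampering with either the content or the claim.

```python
# Illustrative sketch only: the real C2PA standard defines its own manifest
# structure and uses asymmetric, certificate-based signatures. Here HMAC
# with a shared key stands in for the signature, to keep the example
# self-contained with the standard library.
import hashlib
import hmac
import json


def make_claim(content: bytes, publisher: str, key: bytes) -> dict:
    """Build a provenance claim binding the content hash to a publisher."""
    claim = {
        "publisher": publisher,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign a canonical serialization of the claim so field order can't vary.
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim


def verify_claim(content: bytes, claim: dict, key: bytes) -> bool:
    """Check that the claim is intact and matches the content we received."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, claim.get("signature", ""))
        and claim.get("content_sha256")
        == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    key = b"demo-signing-key"
    photo = b"...raw image bytes..."
    claim = make_claim(photo, "example-news.org", key)
    print(verify_claim(photo, claim, key))          # intact content: True
    print(verify_claim(photo + b"!", claim, key))   # edited content: False
```

Note that a mismatch in either direction fails verification: altering the content bytes breaks the hash check, while altering any field of the claim (say, the publisher name) invalidates the signature.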
The consequences of inaction
We’ve all chuckled at some early examples of fake content, whether it’s the Obama segment or this clever deepfake of actor-impressionist Bill Hader’s face morphing into Arnold Schwarzenegger as he impersonates the actor on a late-night talk show.
But we’ve also read stories whose content doesn’t match their headlines, or that appear to come from a reputable source but actually don’t.
With each chuckle, however, comes some anxiety. We see how easy it is to spread fake content and mislead millions of people; we understand that there are serious societal consequences to this abuse.
We’re excited about our participation and encouraged by the broad industry activity in this area, from C2PA to CAI and Origin and others.