Artificial intelligence (AI) and its subset, machine learning (ML), are expanding into more applications and changing the way we interact with devices and machines forever.
These devices and machines are getting smarter as machine learning algorithms process and analyze large quantities of data to learn and make decisions autonomously with human-like intelligence.
Arm is bringing AI to trillions of edge devices by adding ML capabilities to its processor technology and software, making those devices smarter, more energy efficient, and more affordable.
Arm technologies enable the world’s most popular AI platform — the smartphone — supporting machine learning features like predictive text, speech recognition, and computational photography.
We’re also enabling newer AI platforms and AI applications, like voice assistants and consumer robots that are revolutionizing how people interact with technology and the way technology interacts with the world.
As we move from the era of web services and apps to an era of AI-based systems, AI is becoming pervasive in all computing applications. Autonomous driving is changing how we use cars. Smartphones are becoming more intuitive and proactive. Radical new business models are set to change our lives in ways we can’t imagine.
Much of this is built on workhorse Arm CPUs and MCUs, which today handle the majority of AI and ML workloads at the edge.
Arm’s Project Trillium is driving the AI revolution, redefining device capabilities through a thriving design and development ecosystem that has extended its reach from smartphone CPU design and app development to ML.
Project Trillium provides flexible, optimized support for ML workloads on edge devices across all programmable Arm IP – from CPUs to GPUs and the Arm ML processor – as well as partner IP. The open-source Arm NN software allows seamless integration with existing neural network frameworks, such as TensorFlow, Caffe, and Android NN.
With support from a vibrant and diverse AI ecosystem, Project Trillium is driving innovation and choice.
Distributed Intelligence from Cloud to Edge
Distributed intelligence will eventually be in every part of the system, from the datacenter to the devices in our hands.
Arm’s common architecture supports diverse AI applications and their specific requirements everywhere they need to run: from highly capable cores and interconnects for mega compute power in the data center, to extremely power-efficient microcontrollers for AI algorithms in highly constrained, battery-powered edge devices, such as wearables and sensors.
The Arm Pelion IoT platform, combined with Arm’s new class of advanced and ultra-efficient machine learning processors, is transforming the IoT into a global network of securely managed devices woven into the fabric of our digital world.
Pelion supports machine learning in IoT by enabling the capture of metadata generated by IoT devices and their environment. ML engines process metadata from several areas – connectivity, operating systems, and applications – to detect cybersecurity threats and optimize the operation of the devices and the networks in which they run.
AI at the Edge
Research Byte from GigaOm
With insights from Arm, Google, and leading universities that focus on data science, GigaOm explores the requirements for artificial intelligence to deliver on its promise of improving our lives through the devices around us.
Embedded Machine Learning Design for Dummies
Looking to add machine learning to your device?
Explore platform configuration, hardware, software, and ecosystem significance. Grasp the basics of ML, explore opportunities and challenges, and learn how to get started.
Smart Security for A Trillion Intelligent Devices
AI systems often operate in hazardous environments. Applications such as autonomous vehicles, healthcare, and robotics require the highest levels of reliability and fail-safe operation.
Yet security continues to present an unprecedented global challenge. Arm has responded to this challenge with the Platform Security Architecture (PSA), an architecture-agnostic framework for securing the next one trillion AI-infused connected devices, from endpoint to cloud.
Social Robot ElliQ Alleviates Loneliness Among Elderly
ElliQ is an engaging robotic companion that learns its owner’s behavior patterns to proactively suggest activities; play music, videos, and ebooks; and connect owners to family and friends through social media. Intuition Robotics uses the Qualcomm Snapdragon 820 SoC, built on Arm Cortex technology, with machine learning functionality.
With so many applications for artificial intelligence emerging, it can be difficult to know where to start. Talk to an Arm expert about the right machine learning solution for your AI project.
Cortex-A Processor Series
The Cortex-A processor series powers more advanced user experiences and richer interfaces, and provides the high-performance computing needed for complex healthcare applications like genomic sequencing.
Cortex-M Processor Series
The Cortex-M processor series comprises Arm’s smallest, lowest-power processors, enabling wearable sensors and implantable chips. They provide the computing power to run real-time machine learning algorithms.
Mali Graphics Processors
Including both graphics and GPU Compute technology, Mali GPUs offer a diverse selection of scalable solutions for low-power to high-performance smartphones, tablets, and DTVs.
Machine Learning Processor
Based on a new, class-leading architecture, the Arm ML processor provides best-in-class performance and energy efficiency. Its optimized design enables new features, enhances user experience and delivers innovative applications for a wide array of market segments including mobile, IoT, embedded, automotive, and infrastructure.
Arm NN bridges the gap between existing neural network frameworks and the underlying Arm IP. It translates networks from frameworks such as TensorFlow and Caffe so they run efficiently – without modification – across Arm Cortex-A CPUs, Arm Mali GPUs, and the Arm Machine Learning processor.
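The core idea here – one network description, executed unchanged by whichever backend is available – can be sketched in a few lines. This is a conceptual illustration only, not the Arm NN API; all class and function names below are invented for the sketch.

```python
# Conceptual sketch (NOT the Arm NN API): one network description is
# handed to interchangeable backends, the way Arm NN dispatches the
# same graph to a CPU, GPU, or ML-processor backend.

# A toy "network description": a list of (weights, biases) dense layers.
NETWORK = [
    ([[2.0, -1.0], [1.0, 3.0]], [1.0, 0.0]),   # layer 1: 2 inputs -> 2 outputs
    ([[0.5], [-0.25]], [0.0]),                 # layer 2: 2 inputs -> 1 output
]

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # y[j] = sum_i x[i] * W[i][j] + b[j]
    return [sum(x[i] * weights[i][j] for i in range(len(x))) + biases[j]
            for j in range(len(biases))]

class ReferenceBackend:
    """Plain float execution, standing in for a CPU reference path."""
    def run(self, network, x):
        for weights, biases in network:
            x = relu(dense(x, weights, biases))
        return x

class LoggingBackend(ReferenceBackend):
    """Same graph, different target: this one also records layer shapes,
    the way an accelerator backend would plan its own kernels."""
    def run(self, network, x):
        self.plan = [(len(w), len(b)) for w, b in network]
        return super().run(network, x)

# The caller picks a backend; the network description never changes.
print(ReferenceBackend().run(NETWORK, [1.0, 2.0]))   # → [1.25]
print(LoggingBackend().run(NETWORK, [1.0, 2.0]))     # → [1.25]
```

The point of the separation is that adding a new compute target means writing a new backend, not retraining or re-exporting the model.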
The Arm Compute Library is a collection of low-level functions optimized for Arm CPU and GPU architectures, targeting image processing, computer vision, and machine learning. It is available free of charge under a permissive MIT open-source license.
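To give a feel for what such a low-level function does internally – this is an illustrative sketch in Python, not the library’s actual API – consider an 8-bit quantized dot product with a 32-bit accumulator and saturation, a staple of ML inference on Arm CPUs:

```python
# Illustrative sketch only (not the Arm Compute Library API): the kind
# of low-level quantized kernel such a library provides, in plain Python.

def saturate_i8(v):
    """Clamp an integer to the signed 8-bit range, as saturating
    arithmetic instructions do in hardware."""
    return max(-128, min(127, v))

def quantized_dot(x_q, w_q, scale, zero_point=0):
    """Dot product of two int8 vectors: accumulate in 32 bits, then
    rescale and saturate back to int8 (per-tensor quantization)."""
    acc = sum(a * b for a, b in zip(x_q, w_q))   # wide int32 accumulator
    return saturate_i8(round(acc * scale) + zero_point)

# Activations and weights already quantized to int8.
x = [10, -3, 7, 2]
w = [5, 20, -4, 1]
print(quantized_dot(x, w, scale=0.5))   # → -18
```

An optimized library implements the same arithmetic with NEON/SVE vector instructions, which is where the performance on Arm CPUs comes from; the numerical contract is what the sketch shows.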
- Always-on Face Unlock
- The New Voice of the Embedded Intelligent Assistant
- Optimizing Machine Learning Workloads on Power-efficient Devices
Northstar Research Reports
- Optimizing ML Performance for any Application
- Machine Learning on Arm Cortex-M Microcontrollers
- Machine Learning on Arm Cortex-A
- Why Google’s TF Lite Micro Makes ML on Arm Even Easier
- Living on the Edge: Why On-Device ML is Here to Stay
- Arm NN: Build and Run ML Apps Seamlessly on Mobile and Embedded Devices