Industry-Leading Performance and Efficiency for Inference at the Edge

Built on a new, class-leading architecture, the Arm ML processor's optimized design enables new features, enhances user experience, and delivers innovative applications across a wide array of market segments, including mobile, IoT, embedded, automotive, and infrastructure. It provides a massive uplift in efficiency compared to CPUs, GPUs, and DSPs through efficient convolution, sparsity exploitation, and data compression.

Features and Benefits
Outstanding Performance

Delivers more than 4 TOPS of performance, scaling to hundreds of TOPS in multicore deployments.

Highly Efficient

Internally distributed SRAM memory stores data close to the compute elements to save power and reduce DRAM access.

Optimized Design

An innovative architecture drives high MAC utilization, improving convolutional efficiency.

Futureproof

Supports future innovation in network architecture and algorithms through programmable engines.

Artificial Intelligence

The new class of ultra-efficient machine learning processors is purpose-built to redefine device capabilities and transform our lives.

Learn More
Talk with an Expert

Discover how the ML processor can create a better user experience for your products.

Contact Us
Related Products and Services
Arm NN

Arm NN bridges the gap between existing neural network frameworks and the underlying IP. It translates networks from existing frameworks, such as TensorFlow and Caffe, so they run efficiently, without modification, across Arm Cortex-A CPUs, Arm Mali GPUs, and the Arm Machine Learning processor.

Compute Library

This software library is a collection of optimized low-level functions for Arm Cortex-A CPUs and Arm Mali GPUs, targeting popular image processing, computer vision, and machine learning workloads. It offers a significant performance uplift over OSS alternatives and is available free of charge under a permissive MIT open-source license.