Industry-Leading Performance and Efficiency for Inference at the Edge
Based on a new, class-leading architecture, the Arm ML processor’s optimized design enables new features, enhances user experience and delivers innovative applications for a wide array of market segments including mobile, IoT, embedded, automotive, and infrastructure. It provides a significant uplift in efficiency compared to CPUs, GPUs and DSPs through optimized convolution, support for sparsity, and weight compression.
How are Companies Building Machine Learning Devices for a Data-Rich World?
As the number of connected devices explodes, machine learning (ML) on the cloud could soon become expensive and slow. Developers are moving ML inference to the edge for improved power efficiency and flexibility. This Arm white paper explains how the Arm ML processor supports the optimal user experience.
Discover how the ML processor can create a better user experience for your products.
Arm NN bridges the gap between existing NN frameworks and the underlying IP. It enables efficient translation of existing neural network frameworks, such as TensorFlow and Caffe, allowing them to run – without modification – across Arm Cortex-A CPUs, Arm Mali GPUs, and the Arm Machine Learning processor.
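As an illustration of that workflow, the sketch below parses a trained model into Arm NN's graph representation, optimizes it for a preferred list of backends, and loads it into the runtime. It is a minimal example, not a complete application: the model filename, the use of the TensorFlow Lite parser, and the exact backend IDs (such as "EthosNAcc" for the ML processor) are assumptions and may differ depending on your Arm NN version and build.

```cpp
// Minimal sketch of the Arm NN C++ flow: parse -> optimize -> load.
// Assumptions: armnnTfLiteParser is built, "model.tflite" exists, and the
// listed backend IDs are available in this Arm NN build.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <utility>
#include <vector>

int main()
{
    using namespace armnn;

    // Translate the trained network into Arm NN's internal graph format.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Create the runtime and optimize the graph for the preferred backends:
    // ML processor (NPU) first, then Mali GPU, then Cortex-A CPU as fallback.
    IRuntime::CreationOptions options;
    IRuntimePtr runtime = IRuntime::Create(options);
    std::vector<BackendId> backends = {"EthosNAcc", "GpuAcc", "CpuAcc", "CpuRef"};
    IOptimizedNetworkPtr optNet = Optimize(*network, backends, runtime->GetDeviceSpec());

    // Load the optimized network; inputs and outputs would then be bound and
    // executed via runtime->EnqueueWorkload(...).
    NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```

Because the backend list is just a preference order, the same application binary can fall back from the ML processor to the GPU or CPU when an accelerator is not present on the device.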
The Arm Compute Library is a collection of optimized low-level functions for Arm Cortex-A CPUs and Arm Mali GPUs targeting popular image processing, computer vision, and machine learning workloads. It offers significant performance uplift over OSS alternatives and is available free of charge under a permissive MIT open source license.
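To show what calling one of those low-level functions looks like, here is a minimal sketch that configures and runs a single 3x3 convolution on the library's NEON (Cortex-A) backend. The tensor shapes, data type, and padding are illustrative choices, not values taken from the text above.

```cpp
// Minimal sketch: one 3x3 convolution with the Compute Library's NEON backend.
// Assumptions: tensor shapes (224x224x3 input, 16 filters) and F32 data type
// are illustrative; real code would also fill src/weights/biases with data.
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

int main()
{
    using namespace arm_compute;

    Tensor src, weights, biases, dst;

    // Describe the tensors: a 224x224 RGB input, 16 3x3 filters, and the output.
    src.allocator()->init(TensorInfo(TensorShape(224U, 224U, 3U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 3U, 16U), 1, DataType::F32));
    biases.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(224U, 224U, 16U), 1, DataType::F32));

    // Configure the convolution with stride 1 and 1-pixel padding ("same" output size).
    NEConvolutionLayer conv;
    conv.configure(&src, &weights, &biases, &dst, PadStrideInfo(1, 1, 1, 1));

    // Allocate backing memory, then (after the tensors are filled) execute the layer.
    src.allocator()->allocate();
    weights.allocator()->allocate();
    biases.allocator()->allocate();
    dst.allocator()->allocate();
    conv.run();
    return 0;
}
```

The same configure/run pattern applies across the library's other functions, and equivalent CL-prefixed functions target Mali GPUs through OpenCL.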
Arm Machine Learning Processor Resources
- The Arm ML Processor: Powering Exciting User Experiences on Edge Devices
- Project Trillium webinar: Optimizing ML Performance for any Application
- Machine Learning solution brief