Industry-Leading Performance and Efficiency for Inference at the Edge
Based on a new, class-leading architecture, the Arm ML processor's optimized design enables new features, enhances user experience, and delivers innovative applications across a wide array of market segments, including mobile, IoT, embedded, automotive, and infrastructure. It provides a significant uplift in efficiency compared to CPUs, GPUs, and DSPs through efficient convolution, support for sparsity, and data compression.
Discover how the ML processor can create a better user experience for your products.
Arm NN bridges the gap between existing neural network frameworks and the underlying IP. It translates models from existing frameworks, such as TensorFlow and Caffe, allowing them to run efficiently – without modification – across Arm Cortex-A CPUs, Arm Mali GPUs, and the Arm Machine Learning processor.
The Arm Compute Library is a collection of optimized low-level functions for Arm Cortex-A CPUs and Arm Mali GPUs, targeting popular image processing, computer vision, and machine learning workloads. It offers a significant performance uplift over open-source alternatives and is available free of charge under a permissive MIT open-source license.
Arm Machine Learning Processor Resources
- ML processor datasheet
- Arm ML Processor: Powering Machine Learning at the Edge
- Project Trillium webinar: Optimizing ML Performance for any Application
- Machine Learning solution brief