A Software Library for Machine Learning

The Arm Compute Library is a collection of low-level machine learning functions optimized for Arm Cortex-A CPUs, Neoverse CPUs, and Mali GPUs. The library is open source software available under a permissive MIT license.

The Arm Compute Library provides superior performance to other open source alternatives, along with immediate support for new Arm technologies such as SVE2.

Key Features:

  • Over 100 machine learning functions for CPU and GPU
  • Multiple convolution algorithms (GEMM, Winograd, FFT and Direct)
  • Support for multiple data types: FP32, FP16, int8, uint8, BFloat16
  • Micro-architecture optimization for key ML primitives
  • Highly configurable build options enabling lightweight binaries
  • Advanced optimization techniques such as kernel fusion, fast-math enablement, and texture utilization
  • Device- and workload-specific tuning using the OpenCL tuner and optimized GEMM heuristics
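The GEMM algorithm in the convolution list above lowers a convolution to a plain matrix multiply via an im2col transform. The sketch below is a minimal, self-contained illustration of that idea in plain C++; it is not the Compute Library API, and the function name and the single-channel, stride-1, no-padding simplifications are assumptions made here for brevity.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: lower a 2-D convolution to a matrix multiply.
// im2col unrolls each K x K receptive field into one column; the
// convolution then becomes a single (1 x K*K) * (K*K x positions) GEMM.
// Single channel, stride 1, no padding.
std::vector<float> conv2d_as_gemm(const std::vector<float>& img, int H, int W,
                                  const std::vector<float>& kernel, int K) {
    const int oh = H - K + 1, ow = W - K + 1;

    // im2col: K*K rows, one column per output position.
    std::vector<float> cols(static_cast<std::size_t>(K) * K * oh * ow);
    for (int y = 0; y < oh; ++y)
        for (int x = 0; x < ow; ++x)
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    cols[(ky * K + kx) * (oh * ow) + y * ow + x] =
                        img[(y + ky) * W + (x + kx)];

    // GEMM: kernel row vector times the column matrix.
    std::vector<float> out(static_cast<std::size_t>(oh) * ow, 0.0f);
    for (int r = 0; r < K * K; ++r)
        for (int p = 0; p < oh * ow; ++p)
            out[p] += kernel[r] * cols[r * (oh * ow) + p];
    return out;
}
```

Production libraries batch such GEMMs across channels and apply micro-architecture-specific kernels; the payoff of this lowering is that one highly tuned matrix-multiply routine accelerates every convolution shape.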

Features and Benefits

Performance and Efficiency

Arm Compute Library provides a comprehensive set of functions and superior ML performance. Deployed in over a billion devices, Arm Compute Library is trusted by developers worldwide, enabling them to focus on differentiation and reduce time to market.

Operating System Agnostic

The library is truly OS agnostic and is portable to Android, Linux, and bare-metal systems. Arm Compute Library is used today in smartphones, DTVs, smart cameras, automotive applications, and more.

Optimized for Arm-Based Processors

Arm Compute Library contains a comprehensive collection of software functions specifically optimized for Arm Cortex-A CPUs and Arm Mali GPUs.

Talk with an Expert

With any complex software system, it is critical to understand the interworking of different modules and the capabilities of the underlying hardware. If you have any questions about software on Arm-based processors, talk to an Arm expert.

Explore More Options and Features

Arm NN

Arm NN bridges the gap between existing NN frameworks and the underlying IP. It enables translation of neural networks from frameworks such as TensorFlow, TensorFlow Lite, and PyTorch, allowing them to run efficiently across Cortex-A CPUs, Mali GPUs, and Ethos-N NPUs.

Cortex-A CPU

The Cortex-A processor series is designed for complex compute tasks, such as hosting a rich operating system platform and supporting multiple software applications.

Mali GPU

Spanning both graphics and GPU compute technology, Mali GPUs offer a diverse selection of scalable solutions, from low-power to high-performance smartphones, tablets, and DTVs.

Arm Compute Library Resources

Everything you need to know to make the right decision for your project. Includes technical documentation, industry insights, and where to go for expert advice.


Helpful Documentation: