Designed for unmatched versatility and scalability, Project Trillium is enabling a new era of ultra-efficient machine learning (ML) inference. Providing a massive efficiency uplift over CPUs, GPUs, DSPs and accelerators, Project Trillium completes the Arm Heterogeneous ML compute platform with the Arm ML processor and open-source Arm NN software.
Specifically designed for inference at the edge, the ML processor delivers industry-leading performance of up to 4 TOPS (tera operations per second), with a stunning efficiency of 4 TOPS/W for mobile devices and smart IP cameras.
- Scalable, ground-up design drives industry-leading performance and efficiency
- Massive uplift over CPUs, GPUs, DSPs and accelerators
- Unmatched performance in thermal- and cost-constrained environments
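To put the quoted figures in perspective, a quick back-of-the-envelope check shows what they imply for power draw. This is an illustrative sketch only: it assumes the 4 TOPS peak and 4 TOPS/W efficiency figures are sustained simultaneously, which real workloads will not achieve.

```python
# Hedged illustration: what the quoted peak figures imply for power draw.
# Both numbers come from the text above; treating them as simultaneous
# sustained values is an assumption for this sketch.
peak_throughput_tops = 4.0       # tera operations per second (TOPS)
efficiency_tops_per_watt = 4.0   # quoted efficiency (TOPS/W)

# Power = throughput / efficiency
power_watts = peak_throughput_tops / efficiency_tops_per_watt
print(power_watts)  # → 1.0
```

Roughly one watt at peak, under these figures, is what makes the processor plausible for thermal- and cost-constrained designs such as mobile devices and IP cameras.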
Arm NN bridges existing neural network frameworks and Arm hardware, allowing trained networks to run efficiently – without modification – across Arm Cortex CPUs and Arm Mali GPUs. The software includes support for the Arm Machine Learning processor; for Cortex-A CPUs and Mali GPUs via the Compute Library; and for Cortex-M CPUs via CMSIS-NN.
- Supports leading NN frameworks, including TensorFlow, Caffe, Android NNAPI and MXNet
- Graph and kernel optimizations for each IP type
- Available free of charge, under a permissive MIT open source license
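The per-IP dispatch described above can be sketched conceptually. This is a toy model, not the Arm NN API: the backend names and preference order are assumptions chosen to mirror the text (ML processor first, then Mali GPU via the Compute Library, then Cortex-A CPU via the Compute Library as a fallback).

```python
# Toy sketch of backend preference in the spirit of Arm NN's per-IP
# dispatch. Backend names and ordering are illustrative assumptions,
# not real Arm NN identifiers.
PREFERENCE = ["MLProcessor", "MaliGPU", "CortexA"]

def choose_backend(available):
    """Return the most-preferred backend present on this device."""
    for backend in PREFERENCE:
        if backend in available:
            return backend
    raise RuntimeError("no supported backend available")

# A device with a Mali GPU but no dedicated ML processor:
print(choose_backend({"MaliGPU", "CortexA"}))  # → MaliGPU
```

In the real software, each backend additionally carries its own graph- and kernel-level optimizations, so selection affects not just where a network runs but how it is compiled.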