Designed for unmatched versatility and scalability, Project Trillium is enabling a new era of ultra-efficient machine learning (ML) inference. Providing a massive efficiency uplift over CPUs, GPUs, DSPs and accelerators, Project Trillium completes the Arm Heterogeneous ML compute platform with the Arm ML processor, the second-generation Arm object detection (OD) processor and open-source Arm NN software.
Specifically designed for inference at the edge, the ML processor delivers industry-leading performance of up to 4.6 TOPS, with an efficiency of 3 TOPS/W for mobile devices and smart IP cameras.
- Scalable, ground-up design drives industry-leading performance and efficiency
- Massive uplift over CPUs, GPUs, DSPs and accelerators
- Unmatched performance in thermal- and cost-constrained environments
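As a rough sketch of what the two headline figures above imply together, the quoted peak throughput and efficiency can be combined to estimate the power envelope at peak load (an illustration derived only from the numbers in this announcement, not from an Arm datasheet):

```python
# Illustrative arithmetic using only the two headline figures quoted above.
peak_tops = 4.6              # peak throughput, TOPS
efficiency_tops_per_w = 3.0  # quoted efficiency, TOPS/W

# Power implied at peak throughput and quoted efficiency.
implied_power_w = peak_tops / efficiency_tops_per_w
print(f"Implied power at peak: {implied_power_w:.2f} W")  # ~1.53 W
```

A budget on the order of 1.5 W is consistent with the thermal- and cost-constrained mobile environments the processor targets.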
The OD processor is the most efficient way to detect people and objects on mobile and embedded platforms. It continuously scans every frame to provide a list of detected objects, along with their location within the scene.
- Detects objects in real time at Full HD resolution and 60fps (no dropped frames)
- Object sizes from 50x60 pixels to full screen
- Virtually unlimited objects per frame
- Can be combined with CPUs, GPUs or the Arm Machine Learning processor for additional local processing, significantly reducing overall compute requirement
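To put the "Full HD at 60fps, no dropped frames" claim in concrete terms, the per-frame time budget and sustained pixel throughput follow directly from those two numbers (a worked illustration, assuming Full HD means 1920x1080):

```python
# Full HD at 60fps, as specified above (1920x1080 assumed for Full HD).
width, height, fps = 1920, 1080, 60

frame_budget_ms = 1000 / fps            # time available to process each frame
pixels_per_sec = width * height * fps   # sustained pixel throughput required

print(f"Per-frame budget: {frame_budget_ms:.2f} ms")        # ~16.67 ms
print(f"Pixel rate: {pixels_per_sec / 1e6:.1f} Mpixels/s")  # ~124.4 Mpixels/s
```

Every frame must be fully scanned within that ~16.7 ms window for no frames to be dropped.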
Arm NN bridges existing neural network frameworks and Arm hardware, allowing trained networks to run efficiently – without modification – across Arm Cortex CPUs and Arm Mali GPUs. The software includes support for the Arm Machine Learning processor; for Cortex-A CPUs and Mali GPUs via the Compute Library; and for Cortex-M CPUs via CMSIS-NN.
- Supports leading NN frameworks, including TensorFlow, Caffe, Android NNAPI and MXNet
- Graph and kernel optimizations for each IP type
- Available free of charge under a permissive MIT open-source license