Technology That’s Shaping the Future of Compute

Arm is the world's leading technology provider of silicon IP for the intelligent systems-on-chip at the heart of billions of devices. Our portfolio of products enables partners to innovate and get to market faster on a secure architecture built for performance and power efficiency. Find the right processor IP for your application.

Ethos NPUs

Ethos-N78

Scalable and efficient second-generation ML inference processor
  • 2x faster inference with 40% lower bandwidth, 25% increased efficiency
  • Multiple markets from 1 to 10 TOP/s and up to 90 unique configurations
  • Develop once, deploy anywhere with online and offline compilation

Ethos-U65

Powering innovation in a new world of AI devices at the edge and endpoint
  • Delivers 1.0 TOP/s ML performance in about 0.6 mm²
  • Partner configurable from 256 to 512 8-bit MACs
  • Unified toolchain supports Cortex-M and Cortex-A based systems

Ethos-U55

Configurable and efficient embedded ML inference
  • Delivers up to 0.5 TOP/s, a 480x ML performance uplift and 90% energy reduction
  • Partner configurable from 32 to 256 8-bit MACs in around 0.1 mm²
  • Rapid development with a single toolchain for Cortex-M and Ethos-U

Ethos-N77

Highly efficient and performant ML inference processor
  • Ideal for premium mobile, AR/VR, smart cameras and adjacent markets
  • High efficiency up to 5 TOPs/W
  • Up to 225% convolution performance uplift using Winograd

Ethos-N57

Balance of ML inference performance and efficiency
  • Enabling premium AI in mid-range mobile, STB/DTV and other markets
  • Delivers up to 2 TOP/s with 1024 8-bit MACs
  • Up to 225% convolution performance uplift using Winograd

Ethos-N37

Low-footprint ML inference processor
  • For most endpoint designs, entry-level mobile, and mid- to low-end STB/DTV
  • Up to 1 TOP/s in less than 1 mm²
  • Up to 225% convolution performance uplift using Winograd

Arm NN SDK

Bridges the gap between existing neural network frameworks and the underlying IP
  • Free of charge
  • Supports Arm Cortex CPUs, Arm Mali GPUs and the Arm Machine Learning processor
  • Arm NN for NNAPI accelerates neural networks on Android devices
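
As a purely illustrative sketch (not taken from this page), the C++ fragment below shows the typical Arm NN flow of parsing a TensorFlow Lite model, optimising it for the available backends, and running an inference. The model path, tensor names, backend choice, and exact API details are assumptions and vary between Arm NN releases.

    // Illustrative Arm NN inference sketch; assumes a recent Arm NN release
    // built with the TfLite parser. "model.tflite", the tensor names, and
    // the backend list are placeholder assumptions.
    #include <armnn/ArmNN.hpp>
    #include <armnnTfLiteParser/ITfLiteParser.hpp>
    #include <utility>
    #include <vector>

    int main()
    {
        // Parse a TensorFlow Lite model into an Arm NN network graph.
        auto parser = armnnTfLiteParser::ITfLiteParser::Create();
        armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

        // Binding points for the (placeholder) input and output tensor names.
        auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");
        auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output");

        // Create a runtime and optimise the graph for the chosen backends,
        // here the NEON-accelerated CPU backend with a reference fallback.
        armnn::IRuntime::CreationOptions options;
        armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
        std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc, armnn::Compute::CpuRef };
        armnn::IOptimizedNetworkPtr optNet =
            armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

        // Load the optimised network into the runtime.
        armnn::NetworkId netId;
        runtime->LoadNetwork(netId, std::move(optNet));

        // Zero-filled input and an output buffer, sized from the binding info.
        std::vector<float> inputData(inputBinding.second.GetNumElements(), 0.0f);
        std::vector<float> outputData(outputBinding.second.GetNumElements());

        armnn::TensorInfo inputInfo = inputBinding.second;
        inputInfo.SetConstant(true); // recent Arm NN releases expect constant input TensorInfo

        armnn::InputTensors inputs   = { { inputBinding.first,
                                           armnn::ConstTensor(inputInfo, inputData.data()) } };
        armnn::OutputTensors outputs = { { outputBinding.first,
                                           armnn::Tensor(outputBinding.second, outputData.data()) } };

        // Run one inference; results are written into outputData.
        runtime->EnqueueWorkload(netId, inputs, outputs);
        return 0;
    }

The Optimize step is where the "bridge" happens: the same parsed network can be retargeted by changing the backend list, with the reference backend as a fallback for unsupported operators.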