Products for Network Infrastructure

Arm is the world's leading technology provider of silicon IP for the intelligent system-on-chips at the heart of billions of devices. Our portfolio of products enables partners to innovate and get to market faster on a secure architecture built for performance and power efficiency. Find the right processor IP for your application.


Cortex-A

Cortex-A710

First-generation Armv9 “big” CPU that offers a balance of performance and efficiency.
  • Addition of Armv9 architecture features for enhanced performance and security.
  • Optimal for mobile compute use cases such as smartphones and smart TVs.
  • 30% increase in energy efficiency compared to Cortex-A78.

Cortex-A520

First Armv9.2 high-efficiency “LITTLE” CPU.
  • Most performant, high-efficiency CPU with improved power efficiency (up to 22% vs Cortex-A510) for DoU/real-world use cases.
  • New QARMA3 PAC algorithm lowers the performance cost, strengthening PAC deployment in the consumer technology market.
  • AArch64-only CPU for building big.LITTLE clusters across the consumer technology market.

Cortex-A510

First-generation Armv9 high-efficiency “LITTLE” CPU.
  • Large performance increases for a highly efficient CPU.
  • Innovative microarchitecture upgrades.
  • Over 3x uplift in ML performance compared to Cortex-A55.

Cortex-A78

The fourth-generation high-performance CPU based on DynamIQ technology. The most efficient premium Cortex-A CPU.
  • Built for next generation consumer devices.
  • Enabling immersive experiences on new form factors and foldables.
  • Improving ML device responsiveness and capabilities such as face and speech recognition.

Cortex-A78C

Providing market-specific solutions with advanced security features and large big-core configurations.
  • Performance for laptop class productivity and gaming on-the-go.
  • Advanced data and device security with Pointer Authentication.
  • Improved scalability with big-core-only configurations of up to eight cores and up to 8MB of L3 cache.

Cortex-A77

Third-generation high-performance CPU based on DynamIQ technology.
  • Leadership performance and efficiency for 5G mobile solutions.
  • Improved responsiveness for on device machine learning.
  • Built for next-gen smartphones and laptops.

Cortex-A76

Second-generation high-performance CPU based on DynamIQ technology.
  • Designed for devices undertaking complex compute tasks.
  • Greater single threaded performance and improved energy efficiency.
  • Enables faster responsiveness and at-the-edge support for machine learning applications.

Cortex-A75

First-generation high-performance CPU based on DynamIQ technology.
  • Flexible architecture provides a broad ecosystem of support.
  • Executes up to three instructions in parallel per clock cycle.
  • Broad market use covers smartphones, servers, automotive applications and more.

Cortex-A73

Highly power-efficient CPU that maintains high performance.
  • Increased power efficiency of up to 30 percent over predecessors.
  • Smallest Armv8-A processor.
  • Designed for mobile and consumer applications.

Cortex-A72

High-performance CPU that has multiple uses including mobile and embedded technologies.
  • Advanced branch predictor reduces wasted energy consumption.
  • Gain significant advantages in reduced memory requirements.
  • Suitable for implementation in an Arm big.LITTLE configuration.

Cortex-A65AE

Arm’s first multithreaded Cortex-A CPU with Split-Lock for functional safety.
  • Best-in-class throughput efficiency for memory intensive workloads.
  • Highest levels of safety with Dual Core Lock-Step for demanding safety-critical tasks.
  • Supports Split-Lock for improved cost efficiency in mixed-criticality applications.

Cortex-A55

Highest efficiency mid-range processor that can be paired with a high-performance CPU in a DynamIQ configuration.
  • Flexible design meets requirements to support broad market application.
  • Ideal for smaller devices with constrained environments.
  • Designed for compatibility with DynamIQ configurations.

Cortex-A53

The most widely-used mid-range processor with balanced performance and efficiency.
  • Available in Arm Flexible Access.
  • The choice for high single thread and FPU/Neon performance.
  • Supports a wide range of applications across automotive and networking and more.
  • Most widely deployed 64-bit Armv8-A processor.

Cortex-A7

Smallest and most efficient 32-bit Armv7-A processor.
  • Enhanced hardware virtualization provided by Armv7-A extensions.
  • Improved memory performance of up to 20 percent over its predecessors.
  • Supports 32-bit rich operating systems, including Linux.

Cortex-R

Cortex-R82

Highest performance real-time processor.
  • Offers efficient, high-performance compute for complex storage applications.
  • Supports Arm Neon technology for ML acceleration.
  • Implements MMU for rich OS support.

Cortex-R8

High performance processor suited for storage controllers and modems.
  • Offers low latency.
  • Configurable ports support flexible design options.
  • Delivers the responsive power needed for high-performance mass storage applications.

Cortex-R4

Smallest real-time performance processor.
  • Offers excellent energy efficiency and cost effectiveness.
  • Prioritizes reliability and error management with built-in error handling.
  • Ideal for embedded applications including automobiles and cameras.

Ethos NPUs

Ethos-U85

Enabling edge AI use cases with generative AI capabilities.
  • Delivers up to 4 TOP/s of scalable ML performance.
  • 20% improvement in energy efficiency compared to previous Ethos-U NPUs.
  • Native support for transformer networks.

Ethos-U65

Powering innovation in a new world of AI devices at the edge and endpoint.
  • Delivers 1.0 TOP/s ML performance in about 0.6 mm².
  • Partner configurable from 256 to 512 8-bit MACs.
  • Unified toolchain supports Cortex-M and Cortex-A based systems.

Ethos-U55

Configurable and efficient embedded ML inference.
  • Delivers up to 0.5 TOP/s, a 480x uplift in ML performance, and a 90% reduction in energy.
  • Partner configurable from 32 to 256 8-bit MACs in around 0.1 mm².
  • Rapid development with a single toolchain for Cortex-M and Ethos-U.

Arm NN SDK

Bridges the gap between existing neural network frameworks and the underlying IP (see the usage sketch below).
  • Free of charge.
  • Supports Arm Cortex CPUs, Arm Mali GPUs and the Arm Machine Learning processor.
  • Arm NN for NNAPI accelerates neural networks on Android devices.
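As a rough illustration of the kind of framework integration described above, the sketch below runs a TensorFlow Lite model through the Arm NN TFLite delegate from Python. The delegate library path, backend list, and model file name are assumptions made for the example, not values taken from this page.

    # Minimal sketch: running a .tflite model via the Arm NN TensorFlow Lite
    # delegate. Library path, backends, and model file are assumptions.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load the Arm NN delegate; "backends" lists the accelerated paths to try
    # (CpuAcc/GpuAcc), with CpuRef as the reference fallback.
    armnn_delegate = tflite.load_delegate(
        library="libarmnnDelegate.so",            # assumed install location
        options={"backends": "CpuAcc,GpuAcc,CpuRef"})

    interpreter = tflite.Interpreter(
        model_path="model.tflite",                # placeholder model
        experimental_delegates=[armnn_delegate])
    interpreter.allocate_tensors()

    # Feed a dummy input and run one inference.
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"],
                           np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    result = interpreter.get_tensor(out["index"])

The same delegate can also be loaded from a C++ application; the Python route shown here is simply the shortest path to trying Arm NN against an existing TensorFlow Lite workflow.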

Neoverse

Neoverse N3

Performance-per-watt optimized for hyperscale, 5G, enterprise networking, and infrastructure edge workloads.
  • 20% greater performance-per-watt efficiency compared to Neoverse N2.
  • 2MB L2 cache option offers nearly 3x performance gains on ML workloads.

Neoverse Compute Subsystems

Neoverse CSS N3

The Arm Neoverse N3 platform, validated and optimized by Arm, helps reduce time-to-market, cost, and risk.
  • Highly configurable to target 5G, enterprise networking, and infrastructure edge use cases.
  • Supports from 8 to 32 Neoverse N3 cores per die.

Neoverse CSS N2

The market leading performance-per-watt of the Arm Neoverse N2 platform, delivered as a fully verified, customizable compute subsystem.
  • Up to 64 Neoverse N2 cores in a 5nm advanced process.
  • Up to 1MB L2 private cache per core and up to 64MB shared system-level cache.
  • Up to 8x DDR5 40b or LPDDR5 channels.
  • Up to 4x x16 PCIe/CXL Gen5 lanes.

Understanding AI Inference on Arm CPUs

Demand for running AI workloads on CPU is growing. This comprehensive guide provides a deep dive into CPU inference and the use cases for which this may be the practical choice. Explore the industries that are already benefiting from AI on CPU and learn about real-world examples.

Download Guide
Arm Licensing Models

Unlock the Power of Arm Technology

Arm’s cutting-edge solutions are easily accessible through our subscription-based licensing options. In just a few clicks, find out if your company has an active subscription to the technology that’s shaping the future of computing.

Once you confirm your subscription, explore a treasure trove of IP, powerful tools, and innovative models—all designed to help elevate your projects and get you started on building the future of computing on Arm.

Plus, our experts are here to guide you every step of the way. With seamless access and expert guidance, learn how to harness the full potential of Arm technology. From IP integration to advanced modeling, we've got you covered.

Let's Find Out