Arm and NVIDIA: Building the Future of AI


Arm is the leading power-efficient compute platform. Nowhere is this more visible than in our technology collaboration with NVIDIA. During this ten-year collaboration, Arm processor technology has become a key element of NVIDIA’s world-leading products for accelerated computing, including the NVIDIA Grace CPU, NVIDIA GH200 Grace Hopper, and NVIDIA GB200 Grace Blackwell platforms.

Arm and NVIDIA at GTC: Accelerating Computing on Arm

Thanks for visiting us. We hope you had a great GTC!

We demonstrated how to accelerate cloud workloads and enhance performance with the NVIDIA Grace CPU for data centers. There were also exciting announcements from NVIDIA CEO Jensen Huang's keynote on NVIDIA's roadmap, featuring several Arm-based technologies:

NVIDIA Blackwell Ultra on Arm Neoverse

Neoverse-based NVIDIA Grace™ CPUs power a rack-scale design that acts as a single massive GPU built for test-time scaling.

NVIDIA DGX Spark (formerly Project DIGITS) on Arm CPUs 

The new Arm-powered NVIDIA DGX Spark personal AI supercomputer will be available soon from leading vendors.

NVIDIA Vera Rubin with Arm-Based Vera CPU Architecture

With 88 custom Arm cores, Vera Rubin is designed to drive performance and efficiency gains in AI data centers.

Access the Podcast and On-Demand Recordings

Arm and NVIDIA Are Shaping the Future of AI and Data Centers

Arm Viewpoints Podcast

The data center landscape is undergoing a profound transformation as the industry delivers unprecedented performance and efficiency for AI and high-performance computing workloads. In this episode, Ian Finder from NVIDIA and Robbie Williamson from Arm share valuable insights into their collaboration, highlighting how this partnership is reshaping enterprise computing.

Unlocking Performance: Developer Resources for Arm-Based NVIDIA CPUs

On Demand Available Now

Tools, resources, and software to optimize your workflows and achieve efficient accelerator utilization. Learn how Arm-based CPUs power AI development on the NVIDIA Grace Hopper, Grace Blackwell, and Grace CPU Superchips.

Maximizing Workload Potential on Arm Neoverse Cores with NVIDIA CPUs 

On Demand Available Now

We covered a range of use cases, including data analytics, databases, web and application servers, media processing, and AI/ML workloads such as Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs), chatbots, and voice transcription, all on NVIDIA Grace Hopper, Grace Blackwell, and Grace CPU Superchip architectures.

NVIDIA Superchip with dual GPUs on a circuit board, set against a dark background.

Arm-Based NVIDIA Grace CPU: High Performance, Power-Efficient Compute for the AI Data Center

Powered by 72 Arm® Neoverse V2 cores, the NVIDIA Grace CPU delivers 2x the performance per watt and the highest memory bandwidth of today’s leading servers.

Try the NVIDIA Grace CPU on NVIDIA LaunchPad and interact with demos of its memory bandwidth and software environment.
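
As a rough illustration of the kind of memory-bandwidth measurement the LaunchPad demos explore, the sketch below times a STREAM-style copy with NumPy. It is not the LaunchPad demo itself; the array size and iteration count are arbitrary assumptions, and the result depends heavily on core count, threading, and memory configuration.

    # stream_copy.py: rough STREAM-style copy to estimate sustained memory bandwidth.
    # Illustrative sketch only, not the NVIDIA LaunchPad demo; sizes are arbitrary.
    import time
    import numpy as np

    N = 200_000_000                      # ~1.6 GB per float64 array; reduce if memory is limited
    src = np.random.rand(N)
    dst = np.empty_like(src)

    best = float("inf")
    for _ in range(5):                   # keep the best of a few runs
        t0 = time.perf_counter()
        np.copyto(dst, src)              # copy kernel: read src, write dst
        best = min(best, time.perf_counter() - t0)

    bytes_moved = 2 * src.nbytes         # one read plus one write per element
    print(f"Copy bandwidth: ~{bytes_moved / best / 1e9:.0f} GB/s")

A single process will not saturate a server-class memory system; multi-threaded benchmarks such as STREAM give a fuller picture.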

Explore NVIDIA Grace

Migration Resources

Seamless Cloud Migration with Arm: More Performance, Less Energy

Migrate to Arm for better performance, energy efficiency, and multicloud support, with the tools and expertise to scale, cut costs, and future-proof your cloud deployments.

Learning Path to Migrate Applications to Arm Servers

An introductory Learning Path for migrating applications to Arm servers.
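
As a first sanity check when following the Learning Path, a short script like the sketch below can confirm that the target host really is an Arm (aarch64) server and that key binaries are native arm64 builds. It is not part of the Learning Path materials, and the binary it inspects (python3) is only an example.

    # arch_check.py: quick sanity check before and after migrating a workload to an Arm server.
    # Illustrative sketch only; the binary inspected below is just an example.
    import platform
    import shutil
    import subprocess

    machine = platform.machine()         # "aarch64" (or "arm64") on Arm servers, "x86_64" otherwise
    print(f"Host architecture: {machine}")

    if machine not in ("aarch64", "arm64"):
        print("Not an Arm host: build or pull arm64 images and artifacts before migrating.")

    # Confirm a key native dependency resolves to an arm64 binary.
    binary = shutil.which("python3")
    if binary:
        info = subprocess.run(["file", binary], capture_output=True, text=True).stdout
        print(info.strip())              # expect "... ELF 64-bit LSB executable, ARM aarch64 ..." on Arm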

Software Packages Compatible with Arm Servers

Discover software packages that work on Arm, along with their supported versions, download links, and quick-start resources.

NVIDIA Jetson Nano development board with cooling fan and various ports.

Jetson Orin Nano Super Developer Kit: The Platform for Generative AI Development 

The NVIDIA Jetson Orin Nano Super Developer Kit offers up to 1.7x higher performance through a JetPack software update. Built on a cost-effective Arm-based platform, it is focused on generative AI, including LLMs, Vision Language Models (VLMs), and Vision Transformers (ViTs).

Run models up to 8B parameters, like Llama 3.1 8B, seamlessly on this edge computing platform.

  • Integration with AI Frameworks: Integrates with popular open ML frameworks (HuggingFace, TensorRT-LLM, llama.cpp, vLLM), enabling developers to move AI workloads between cloud, edge, and PC environments.
  • Gen AI Use Cases: Create applications, including robotics, chatbots, and vision-based AI, with support for platforms like HuggingFace’s LeRobot for AI robots or Ollama for AI chatbots with RAG (a minimal request sketch follows this list).
  • Optimized Edge Computing: Arm’s CPU design enables the Jetson platform to manage AI inferencing workloads efficiently between the CPU and GPU. The Arm CPU, NVIDIA GPU plus software deliver a powerful, scalable solution for edge AI applications.
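
For example, once Ollama is running on the kit with a small model pulled, a local chatbot query can be as simple as the sketch below. The host, port, and model name are assumptions for illustration rather than official Jetson sample code.

    # jetson_chat.py: minimal local chat request to an Ollama server on the Jetson Orin Nano.
    # Assumes Ollama is installed and a model such as "llama3.1:8b" has already been pulled;
    # endpoint and model name are assumptions for illustration.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default local endpoint
    payload = {
        "model": "llama3.1:8b",          # an 8B model that fits on the Orin Nano Super
        "prompt": "In one sentence, what is edge AI?",
        "stream": False,
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])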
Explore Developer Kit

NVIDIA DGX Spark: The New Arm-Powered NVIDIA AI Supercomputer at Your Desk

NVIDIA DGX Spark (formerly Project DIGITS) brings high-performance AI computing to developers at the edge. Now every type of AI workload, whether language, visual, or multimodal, can run locally.

Achieve power-efficient, high-performance AI with the NVIDIA GB10 Superchip, which combines 20 Arm cores in an NVIDIA Grace CPU with an NVIDIA Blackwell GPU. Using NVIDIA AI software, you can run large models locally and scale AI applications that were previously only possible on cloud-based supercomputers. Available May 2025.

Explore NVIDIA DGX Spark