Arm and NVIDIA: Building the Future of AI

Arm is the leading power-efficient compute platform. Nowhere is this more visible than in our decade-long technology collaboration with NVIDIA, through which Arm processor technology has become a key element of NVIDIA’s world-leading products for accelerated computing, including the NVIDIA Grace™ CPU, NVIDIA GH200 Grace Hopper™, and NVIDIA GB200 Grace Blackwell™ platforms.
Arm and NVIDIA at GTC: Accelerating Computing on Arm
Thanks for visiting us. We hope you had a great GTC!
We demonstrated how to accelerate cloud workloads and enhance performance with the NVIDIA Grace™ CPU for data centers. There were also exciting announcements in NVIDIA CEO Jensen Huang's keynote on NVIDIA's roadmap, featuring several Arm-based technologies.
Access the Podcast and On-Demand Recordings
Arm and NVIDIA Are Shaping the Future of AI and Data Centers
Arm Viewpoints Podcast
The data center landscape is undergoing a profound transformation as the industry delivers unprecedented performance and efficiency for AI and high-performance computing workloads. In this episode, Ian Finder from NVIDIA and Robbie Williamson from Arm share valuable insights into their collaboration, highlighting how this partnership is reshaping enterprise computing.
Unlocking Performance: Developer Resources for Arm-Based NVIDIA CPUs
On Demand Available Now
Tools, resources, and software to optimize your workflows and achieve efficient accelerator utilization. Learn how Arm-based CPUs power AI development on the NVIDIA Grace Hopper, Grace Blackwell, and Grace CPU Superchips.
Maximizing Workload Potential on Arm Neoverse Cores with NVIDIA CPUs
On Demand Available Now
We covered a range of use cases, including data analytics, databases, web and application servers, media processing, and AI/ML workloads such as Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs), chatbots, and voice transcription on NVIDIA Grace Hopper, Grace Blackwell, and Grace CPU Superchip architectures.

Arm-Based NVIDIA Grace CPU: High Performance, Power-Efficient Compute for the AI Data Center
Powered by 72 Arm® Neoverse™ V2 cores, the NVIDIA Grace CPU delivers 2x the performance per watt and the highest memory bandwidth compared to today’s leading servers.
- Based on the next-generation Armv9 architecture, the NVIDIA Grace family of processors delivers up to a 10x performance leap in AI tasks such as RAG, data processing, and graph neural networks.
- Workload migration is a key stepping stone to NVIDIA Grace Hopper Superchips. Learn how 11 different teams evaluated HPC workloads across scientific domains and programming languages.
Try the NVIDIA Grace CPU on LaunchPad and interact with demos of its memory bandwidth and software environment.
Migration Resources
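
Before moving a workload, it can help to confirm that your build or test environment is actually running on an Arm (aarch64) host and to see which CPU features it exposes. The snippet below is a minimal, generic sketch using only Python's standard library; it is not an Arm or NVIDIA tool, and the feature-reporting path assumes a Linux host such as a Grace-based server.

```python
import os
import platform

def describe_host():
    """Print basic details useful when porting a workload to an Arm server."""
    machine = platform.machine()  # 'aarch64' on Arm-based servers such as NVIDIA Grace
    print(f"Architecture : {machine}")
    print(f"Kernel       : {platform.system()} {platform.release()}")
    print(f"CPU count    : {os.cpu_count()}")  # a single Grace CPU exposes 72 Neoverse V2 cores

    if machine != "aarch64":
        print("Not an Arm host: build or pull arm64 artifacts before migrating.")
        return

    # On Linux, /proc/cpuinfo lists SIMD/vector features (e.g. NEON 'asimd', SVE).
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("Features"):
                    print(f"CPU features : {line.split(':', 1)[1].strip()}")
                    break
    except OSError:
        print("CPU feature list unavailable on this platform.")

if __name__ == "__main__":
    describe_host()
```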

Jetson Orin Nano Super Developer Kit: The Platform for Generative AI Development
The NVIDIA Jetson Orin Nano Super Developer Kit offers up to 1.7x higher performance through a JetPack software update on a cost-effective Arm platform, with a focus on generative AI, LLMs, Vision Language Models (VLMs), and Vision Transformers (ViTs).
Run models of up to 8B parameters, such as Llama 3.1 8B, seamlessly on this edge computing platform.
- Integration with AI Frameworks: Works with popular open ML frameworks (HuggingFace, TensorRT-LLM, llama.cpp, vLLM), enabling developers to move AI workloads between cloud, edge, and PC environments.
- Gen AI Use Cases: Create applications, including robotics, chatbots, and vision-based AI, with support for platforms like HuggingFace’s LeRobot for AI robots or Ollama for AI chatbots with RAG (see the sketch after this list).
- Optimized Edge Computing: Arm’s CPU design enables the Jetson platform to manage AI inferencing workloads efficiently between the CPU and GPU. Together, the Arm CPU, NVIDIA GPU, and software stack deliver a powerful, scalable solution for edge AI applications.
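
As a simple illustration of the local chatbot use case above, the sketch below queries a locally served Llama 3.1 8B model through the Ollama Python client. It assumes the ollama package is installed, the Ollama server is running on the device (for example a Jetson Orin Nano Super Developer Kit), and the model has already been pulled with `ollama pull llama3.1:8b`; treat it as a minimal sketch rather than an official Jetson workflow.

```python
# Minimal local-chat sketch using the Ollama Python client.
# Assumes `ollama serve` is running on the device and the model has been
# pulled beforehand with `ollama pull llama3.1:8b`.
import ollama

def ask(prompt: str) -> str:
    """Send a single prompt to a locally served Llama 3.1 8B model."""
    response = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(ask("In one sentence, what is an edge AI device?"))
```

Swapping in a different model tag is just a matter of changing the model argument, provided the model fits within the device's memory.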

NVIDIA DGX Spark: The New Arm-Powered NVIDIA AI Supercomputer at Your Desk
NVIDIA DGX Spark (formerly Project DIGITS) brings high-performance AI computing to developers at the edge. Now, every type of AI workload, whether language, visual, or multimodal, can be run locally.
Achieve power-efficient, high-performance AI with the NVIDIA GB10 Superchip, which combines 20 Arm cores in an NVIDIA Grace CPU with an NVIDIA Blackwell GPU. Using NVIDIA AI software, you can run large models locally and scale AI applications that were previously only possible on cloud-based supercomputers. Available May 2025.
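
If you want to confirm that both the Arm CPU and the NVIDIA GPU in such a system are visible from Python, a quick check like the one below can help. It assumes an aarch64 build of PyTorch with CUDA support is installed; it is purely illustrative and not an NVIDIA-provided utility.

```python
# Quick check that an Arm CPU and NVIDIA GPU are both visible to your
# Python environment, e.g. on a Grace-based system with an aarch64 CUDA
# build of PyTorch installed. Purely illustrative.
import platform
import torch

print(f"CPU architecture : {platform.machine()}")        # expect 'aarch64'
print(f"CUDA available   : {torch.cuda.is_available()}")

if torch.cuda.is_available():
    print(f"GPU              : {torch.cuda.get_device_name(0)}")
    # Run a tiny matrix multiply on the GPU to confirm the end-to-end setup.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()
    print(f"GPU matmul OK, result shape: {tuple(y.shape)}")
```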
Arm and NVIDIA: Developer Resources to Advance AI
Learning Paths:
- Migrate Applications to Arm
- NVIDIA Jetson Orin Nano Object Detection
- Understand KleidiAI and Matrix Multiplication for Arm-based SoC NVIDIA Grace
- NVIDIA GPU that supports CUDA to Train ML Agents
- Find Arm Hardware: NVIDIA Jetson Developer Kits for AI projects
Blogs and Briefs:
- NVIDIA Grace CPU Integrates with the Arm Software Ecosystem
- Arm NVIDIA Project DIGITS High Performance AI for Developers
- NVIDIA Grace CPU for High Performance Computing
- Expanding Arm on Arm with the NVIDIA Grace CPU
- NVIDIA Jetson Orin Nano Developer Kits
- NVIDIA Grace Hopper for Apache Spark
- Arm Neoverse NVIDIA Grace CPU Superchips Setting the Pace for the Future of AI

Arm Developer Program
The Arm Developer Program brings together developers from across the world, providing advanced tools and resources, a network of like-minded members, and live sessions from leading experts. Join today and get the support you need to build your software applications.