Cloud-Class AI, Now on Your Desk
AI Summary
From hyperscale cloud to the desk beneath your monitor, a new class of AI workstations is emerging, powered by a 20-core Arm CPU configuration and the NVIDIA GB10 Grace Blackwell Superchip. Delivering petaflop-scale compute, unified memory, and support for models up to 200 billion parameters, these systems are redefining what’s possible on an AI workstation. With leadership from top OEMs and native software support from ecosystem partners, Arm-powered AI workstations bring performance per dollar, architectural efficiency, and developer readiness to the forefront of large-scale compute.
From Cloud to Desk: Unmatched Performance with Arm-Powered AI Workstations
Purpose-Built for Desktop-Class Performance
Arm architecture delivers workstation-grade performance for today’s most demanding workloads.
From Cloud to Desktop Without Compromise
The same architecture powering the cloud now drives AI workstation innovation.
Performance Per Dollar that Scales
Arm-based desktops break the cost-performance ceiling.
Trusted by Leading OEMs
The world’s leading OEMs are launching Arm-based AI workstations powered by the NVIDIA GB10, offering up to 200-billion-parameter support, petaflop-scale performance, and industry-ready designs.
Acer Veriton GN100 AI Mini Workstation
ASUS Ascent GX10
Dell Pro Max with GB10
Arm Technology Powers High-Performance AI Workstations
Arm Cortex-X925
The Cortex-X925 core drives compute-intensive workloads such as AI training, simulation, and 3D rendering, managing data flow and orchestration to prevent GPU starvation and sustain high throughput across large models.
Arm Cortex-A725
The Cortex-A725 core handles preprocessing, tokenization, and inference, working alongside the Cortex-X925 within a unified memory pool to reduce bottlenecks and enable larger models to run locally.
Armv9 Architecture
Armv9 delivers scalable performance from cloud to edge. Its advanced vector processing, memory tagging, and security extensions make it ideal for multi-threaded workloads on modern desktops.
Built for Modern Workloads
AI Researchers
Arm-powered AI workstations enable researchers to train and fine-tune large models locally, with support for up to 200 billion parameters. By bringing cloud-class performance to an AI workstation, they allow faster experimentation cycles, reduced infrastructure costs, and greater control over sensitive datasets.
Healthcare
Arm-powered AI workstations enable local model fine-tuning for diagnostics and medical imaging, ensuring patient data stays secure while accelerating time-to-insight. By eliminating reliance on the cloud, healthcare organizations can achieve both compliance and faster iteration in life sciences and clinical research.
Architecture and 3D
From real-time rendering to large-scale simulations, Arm-based AI workstations provide uncompromising performance for architects, designers, and engineers. Unified memory and high-core efficiency accelerate workflows in 3D modeling, CAD, and visualization without cloud queue bottlenecks.
Enterprise AI
Enterprises can prototype, train, and deploy AI models securely on-device, reducing cloud costs while maintaining control of sensitive data. With NVIDIA GB10 performance and native software compatibility, Arm-powered workstations bring agility and scalability directly into enterprise environments.
Latest News and Resources
Key Takeaways
- Arm-based AI workstations deliver petaflop-scale AI performance with support for models up to 200 billion parameters, offering high efficiency and performance per dollar.
- Top OEMs including Dell, HP, Lenovo, and ASUS have launched Arm AI workstations, powered by NVIDIA GB10 and optimized for large-scale compute.
- Arm Cortex-X925 and Arm Cortex-A725 CPUs with unified memory drive demanding workloads, from AI training to 3D rendering and simulation.
- The NVIDIA AI stack runs natively on Arm, enabling seamless development with PyTorch, TensorFlow, Docker, and Hugging Face.
- Ideal for AI research, healthcare, and enterprise use, Arm-based AI workstations enable fast iteration, data privacy, and local development of AI models.
FAQs
What are the key features of the Arm-based NVIDIA GB10 Grace Blackwell Superchip?
- 20-core Arm CPU configuration, combining Arm Cortex-X925 and Arm Cortex-A725 cores, supports diverse AI tasks, including orchestration, tokenization, and reinforcement learning.
- Unified memory architecture eliminates separate CPU and GPU memory, reducing overhead and enabling AI models with up to 200 billion parameters at the edge.
- Native toolchain portability allows developers to use NVIDIA's AI software stack and CNCF tools like Docker and Kubernetes across cloud and edge environments.
Why would developers or enterprises want to run AI models locally on an AI workstation?
Running AI models locally helps improve responsiveness, data privacy, and operational independence. For developers and enterprises, local inference reduces latency, enhances control over sensitive data, and ensures reliable performance in environments with limited or no connectivity.
What enables AI workstations to run large-scale AI models?
AI workstations such as GBX are built around a fundamentally new system design that unites CPU, GPU, and memory into a single, coherent architecture. The GBX platform—powered by the Arm-based NVIDIA Grace Blackwell Superchip—combines 20 Arm cores with a next-generation Blackwell GPU, delivering over 1,000 trillion operations per second (TOPS) and 128 GB of unified LPDDR5x memory.
This unified memory allows massive models—up to 200 billion parameters on one system—to fit entirely into local memory, eliminating the data transfer bottlenecks that limit traditional x86 workstations. GBX also runs NVIDIA’s full AI software stack (CUDA, TensorRT, PyTorch, TensorFlow, and DGX OS) on Arm, the same stack used in DGX data centers and cloud infrastructure.
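A rough back-of-envelope sketch shows why the 128 GB unified pool can hold a 200-billion-parameter model. The 4-bit weight precision assumed below is a common low-bit deployment format, not a figure stated in this page:

```python
# Back-of-envelope memory estimate for a large model in unified memory.
# Assumption (not from the source): weights quantized to 4 bits per parameter.
params = 200e9            # 200 billion parameters
bytes_per_param = 0.5     # 4 bits = half a byte per weight
weights_gb = params * bytes_per_param / 1e9

unified_memory_gb = 128   # LPDDR5x pool shared coherently by CPU and GPU
print(f"Weights: {weights_gb:.0f} GB of {unified_memory_gb} GB unified memory")
```

At 4 bits the weights occupy about 100 GB, leaving headroom for activations and the KV cache; at 8 bits (roughly 200 GB) the same model would no longer fit, which is why low-bit quantization matters for on-device work at this scale.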
In essence, GBX shrinks what used to require a server rack into a 6-inch, 200 W USB-C-powered cube, turning local AI development into a real alternative to renting GPUs in the cloud.
What’s the real-world value of running AI on workstations for researchers, scientists, or enterprises?
AI workstations like GBX make it practical to run and fine-tune very large models—something once limited to the cloud. They deliver enough on-device performance for real experimentation and iteration, while keeping data local and costs predictable. It’s not about matching cloud scale, but bringing meaningful AI capability directly to the desk.
Can I run PyTorch or TensorFlow natively on Arm-powered AI workstations?
Yes. The NVIDIA AI software stack, including PyTorch, TensorFlow, Docker, and other widely used frameworks, runs natively on Arm without emulation. Developers can build, train, and deploy models on Arm-powered AI workstations with the same toolchains they use today, ensuring seamless workflows and faster iteration.
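One quick way to confirm code is running natively rather than under emulation is to check the machine architecture the interpreter reports. This minimal sketch uses only the Python standard library (framework imports are deliberately omitted so it runs anywhere):

```python
import platform

# On an Arm-powered workstation this typically reports "aarch64" (Linux)
# or "arm64" (macOS); on an x86 machine it reports "x86_64". Frameworks
# such as PyTorch publish native wheels for these Arm targets, so the
# same import and training code runs without an emulation layer.
arch = platform.machine()
print(f"Running natively on: {arch}")
```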