The AI compute platform for smartphones and PCs
AI Summary
Mobile and PC devices are becoming always-on AI systems, expected to deliver richer experiences without sacrificing battery life, privacy, or launch schedules.
Arm turns CPUs, GPUs, and system IP into a coherent compute platform that maximizes performance per watt, hardens security from silicon to software, and simplifies integration so partners can get to market faster. With the Arm CSS platforms, device innovators, semiconductor partners, and developers get a pre-integrated system architecture to build and scale AI-first devices across every tier.
Why choose Arm for mobile and PC
AI-first
Deploy AI-first experiences across devices with Arm’s unified architecture and software ecosystem.
Industry-leading performance
Maximize on-device AI and graphics performance with Arm’s Scalable Matrix Extension 2 (SME2) and neural technologies.
Security built in
Protect models, data, and OS with Armv9 security features and secure system IP.
Faster time-to-market
Ship AI-first phones and PCs sooner with Arm’s integrated system architecture.
Arm Lumex CSS: An AI-first compute subsystem platform for the next mobile era
Built on the latest Armv9.3-A architecture, Arm Lumex CSS delivers the performance, efficiency, and developer-ready integration needed for next-generation smartphones.
With industry-leading IPC, a new flagship GPU, and day-one software support through Arm Kleidi and SME2, Arm Lumex CSS empowers SoC designers and OEMs to deliver AI innovation faster, smarter, and across all device tiers.
Build on Arm
Featured partners building AI-first devices on Arm
Leading chipmakers, OEMs, and software ecosystem partners build their mobile and PC roadmaps on Arm. They rely on Arm’s platform approach to achieve industry-leading performance per watt, meet increasing security expectations, and compress design cycles on advanced nodes, turning ambitious AI and gaming roadmaps into deployable products across tiers and regions.
Talk with an expert
Talk to an Arm expert about your next mobile and PC platform and take your success to the next level.
Latest news and resources
AI supercharging the future of mobile graphics
See how mobile AI workloads create advanced computing capabilities and performance, improving graphics, intelligent interactions, and immersive gaming experiences.
Key takeaways
- The Arm Lumex CSS platform delivers leading performance per watt for on-device AI, gaming, and everyday workloads.
- Armv9 security and secure system IP protect AI models, user data, and overall device integrity.
- Pre-integrated compute subsystems with software enablement reduce integration risk and speed time-to-market for silicon partners and OEMs.
- Developers get day-one access to AI acceleration, gaming, and graphics tools, allowing them to build once and deploy broadly across devices.
FAQs
What Arm products are included in the Mobile and PC category?
This category covers Arm’s compute platforms and IP for smartphones and PCs, including the Lumex CSS platform for mobile, C1 CPU clusters, Mali GPUs, system interconnects, DSUs, and supporting software libraries such as Kleidi. Together, they form a foundation for AI-first, power-efficient, and secure devices across tiers.
How does Arm improve performance per watt for mobile and PC AI workloads?
Arm co-designs CPU, GPU, and system IP with physical implementations to optimize data movement, cache hierarchies, and workload placement. The result is higher IPC; faster ray tracing, upscaling, and neural graphics; and greater AI throughput at a given power budget, enabling sustained AI and gaming workloads on battery-powered devices and lightweight PCs.
How is security addressed across Arm-based Mobile and PC platforms?
Security is built into the Armv9 architecture and complemented by secure system IP, giving OEMs and SoC partners a trusted foundation for on-device AI workloads and end-user experiences.
How do Arm platforms reduce integration complexity and time-to-market?
Arm CSS provides a pre-architected, validated platform of CPU, GPU, and system IP, along with foundry-optimized physical implementations and software enablement. This reduces the amount of custom plumbing, shortens bring-up and validation cycles, and helps partners align silicon delivery with OEM launch windows.