We launched Arm Neoverse exactly one year ago, laying out a vision for secure, high-performance, flexible cloud-to-edge computing built for a world of a trillion connected devices. We said that without the changes we were proposing, existing infrastructure would start to creak, groan and die under the coming data deluge.
Why? The days of sending almost every scrap of data to vast remote server farms powered by a single, legacy, general-purpose compute architecture were over. Instead, compute had to evolve to provide distributed processing at optimal points, along with heterogeneous compute from cloud to edge to endpoint. That way, the right resources could be layered in at the right points along the spectrum.
Arm experts backed up our vision at Arm TechCon 2018 and in the months afterwards. We spoke about a diverse set of workload-optimized solutions, from base stations in 5G networks to network, storage, and security servers. We made it clear that by rethinking system architectures and optimizing around workloads, the infrastructure could dramatically improve efficiency and capacity while reducing total costs.
Any doubts about our vision vanished in February 2019 when we shared technical details and roadmap insights into the Arm Neoverse Platform family (Arm Neoverse N1 for cloud-to-edge compute and Arm Neoverse E1 for the data plane). In October 2018, we said customers would see a 30 percent improvement in performance over our previous generation. In February 2019 we admitted we’d got that wrong: Arm’s engineers smashed the target, achieving a 60 percent performance increase, and for some workloads the gains were even greater.
But it’s one thing to present performance and efficiency claims or show slideware of product roadmaps; it’s an entirely different thing when ecosystem partners join you on the journey. Here are some highlights:
- Amazon Web Services (AWS) unveiled Graviton. The Arm-based Graviton processors, built by Amazon and based on 64-bit Arm Neoverse CPUs, power all new Amazon EC2 A1 instances and cut costs for end users by up to 45 percent.
- HPE announced the world’s largest Arm-powered supercomputer, Astra, deployed at Sandia National Laboratories. Astra is powered by the HPE Apollo 70 High-Performance Computing platform running on the Marvell ThunderX2 Arm-based processor.
- Huawei released its Neoverse-based Kunpeng 920 server chip to power its TaiShan server family. Since then, Huawei has said it will continue making major investments in Arm-based server chip development and the related ecosystem over the next five years.
- Fujitsu announced that the design of the Arm-based Post-K supercomputer had been completed. The system uses the A64FX, the world’s first CPU to adopt the Scalable Vector Extension (SVE), an extension of the Armv8-A instruction set architecture designed for supercomputers.
- Arm extended its partnerships with Ampere and Marvell to accelerate their future Arm-based infrastructure processor roadmaps.
- Xilinx announced shipments to multiple tier-one customers of its Versal adaptive compute acceleration platform (ACAP), designed for heterogeneous compute and leveraging 64-bit Neoverse cores.
- Mellanox launched BlueField-2 solutions that use Neoverse CPUs for workload-optimized network, storage and security platforms.
- NXP partnered with Altran and Arraycom to make available a flexible, software-defined 5G RAN solution based on NXP’s Neoverse-based LX2160A SoC.
Creating successful infrastructure solutions is about more than hardware. Developers need a robust software ecosystem, especially as the world moves to cloud-native software development. Here are some highlights:
- In February 2019, Arm became a Gold Member of the Cloud Native Computing Foundation (CNCF), specifically to accelerate work within the development community and establish Arm as a first-class architecture across the cloud-native ecosystem.
- In April 2019, Arm and Docker announced a strategic partnership ensuring developers building containerized applications can target Arm hardware as easily as any other architecture. We worked together to bring cloud-native benefits to workloads running on Arm from cloud to the edge.
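For developers, targeting Arm alongside other architectures typically comes down to multi-architecture image builds. As a minimal sketch (assuming Docker with the buildx plugin and QEMU binfmt emulation available; the image name is a placeholder), a single command can produce images for both Arm and x86-64 from the same Dockerfile:

```shell
# One-time setup: create and select a builder that can target multiple platforms.
docker buildx create --name multiarch --use

# Build a multi-architecture image covering 64-bit Arm and x86-64,
# and push the resulting manifest list to a registry.
# "example.com/myapp" is a hypothetical image name for illustration.
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  --tag example.com/myapp:latest \
  --push .
```

A registry that receives the manifest list then serves the matching image automatically, so an Amazon EC2 A1 instance pulls the arm64 variant with the same `docker pull` command an x86-64 host would use.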
- In May 2019, Red Hat announced RHEL 8 with full support for Arm – the first time Arm was supported in mainstream Red Hat Enterprise Linux.
- In June 2019, NVIDIA said it would make its full stack of AI and HPC software available on Arm. That includes all NVIDIA CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools such as PGI compilers with OpenACC support and profilers.
- Then, on the keynote stage at VMworld in August 2019, VMware demonstrated four hypervisor instances on a single appliance by running ESXi on Arm on multiple SmartNICs. This demo was proof of how Arm and the world’s largest enterprise software company together can change where and how virtualized compute can be deployed.
- Finally, at NGINX Conf 2019, Arm and NGINX showed how companies can achieve significant cost savings (up to 40 percent) with Arm Neoverse‑based solutions for a wide range of applications, running on Amazon EC2 A1 instances in the AWS Cloud.
Into the future
The past year has been amazing, but there’s always more we can do. At Arm, we don’t just invest in IP; we invest across the whole infrastructure platform. We dedicate vast resources to developing software and compiler optimizations, SDKs and reference designs, and we build partnerships that stretch deep into key areas like design automation and development tools. The infrastructure group alone has more than 100 open-source and commercial software partnerships.
Now and going forward, technology partners at all levels have more choice, greater scalability and better cost-efficiency. Companies will be able to analyze, filter and react to data at multiple points, and to take advantage of new compute capabilities inside endpoint devices and at the network edge.
Just imagine what the next 12 months will bring!