Arm DevSummit 2021: Ten Takeaways from Cloud to Edge

From data centers to 5G, HPC to confidential compute at the edge, there was more happening in the cloud-to-edge space than ever at Arm DevSummit 2021.

Posted on 4th November 2021 By Arm Blueprint staff
Reading Time: 6 mins

If you’ve yet to check out the sessions on-demand, read on to discover what was covered and how to watch them for free now.

1. Tencent lays the groundwork for Arm in its Cloud

Li Chengdong, Hardware and Software Co-Optimization Architect in Tencent’s Cloud and Smart Industries Group, showcased how Tencent has ported its TDSQL database and KonaJDK over to Arm. He also provided some interesting benchmarks: in some applications, the Arm Neoverse N1 processor provides a 28 percent performance boost over traditional processors and a 100 percent increase in performance per watt.
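Performance per watt is simply throughput divided by power draw. A minimal sketch of the comparison, using illustrative numbers chosen to match the quoted percentages (not Tencent’s actual measurements):

```python
def perf_per_watt(throughput_ops: float, power_watts: float) -> float:
    """Performance-per-watt ratio: throughput delivered per watt consumed."""
    return throughput_ops / power_watts

# Illustrative figures only -- not Tencent's published benchmark data.
baseline = perf_per_watt(100_000, 200)  # legacy server: 100k ops/s at 200 W
neoverse = perf_per_watt(128_000, 128)  # 28% more throughput at lower power

improvement = neoverse / baseline - 1.0
print(f"perf/watt improvement: {improvement:.0%}")  # -> 100%
```

The point the example makes is that a modest throughput gain combined with a lower power envelope compounds into a much larger efficiency gain.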

Unlike some other cloud providers, Tencent has not developed its own chips or servers; instead, it is evaluating technology from the Arm ecosystem. (Note: Tencent announced trial availability of its first Arm-based instances in China in October.)

2. Start building DPU expertise

Data Processing Units (DPUs) will increasingly become an integral part of data centers, edge systems, and 5G networks, said Gilad Shainer, SVP of Networking at NVIDIA, who explained how devices like NVIDIA’s BlueField-2 and BlueField-3 will be used to boost HPC performance.

“DPUs are a full datacenter on a chip…DPUs will become the default NIC,” he said. “(With DPUs) we are not just offloading workloads, we are accelerating them.”

Pensando Systems Fellow Mario Baldi, meanwhile, outlined how Arm-based programmable DPUs and Pensando’s Distributed Services Platform can reduce latency at the edge while giving carriers the ability to add or change features. DPUs can also reduce East-West traffic, which can account for 90 percent of data center traffic. (Side note: during the same week, Aruba announced it is adopting Pensando’s technology.)

3. The 5G revolution will be customized

Radisys’ Rishi Maulick said one of the main appeals of private 5G networks will be customization: industrial customers, who might be connecting many assets for the first time, can cut costs by tailoring systems to their needs. Cities, he added, will also bundle emergency services and consumer-facing services through network slicing.

Open networks are also taking flight. In three years, Jio has gone from having no customers to being India’s largest carrier by revenue by leveraging software and general-purpose data center hardware, said Phillip Ritter, a Magma evangelist and consultant for Facebook. Rakuten, meanwhile, built a network covering Japan in a year.

(Also check out the 5G myth-buster panel with NVIDIA, Mavenir, GSMA, and Arm for insight on MIMO and GPUs, as well as the Arm 5G Solutions Lab.)

4. Why HPC in the Cloud? Time, money and performance.

The first Beowulf supercomputer cluster in 1994 had 16 cores. Now, workloads spanning a million processors are common, said Brendan Bouffler, head of HPC Developer Relations at AWS. Few organizations can afford a cluster that size, but anyone can spin up an AWS account. AWS itself, he added, saves money and time by being able to spin up HPC capacity at will. In the past, the comprehensive testing suite required before a product launch might have taken 41 days on a traditional cluster. With the flexibility of the cloud, it takes 44 hours and costs “a trivial amount of money.”

Similarly, computational fluid dynamics simulations with OpenFOAM on Graviton2 in the cloud can cost 37 percent less than they would on traditional processors. Weather forecasts? 40 percent cheaper and more accurate.
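As a quick back-of-envelope check on the figures quoted above (the dollar amount below is a hypothetical baseline, not from the talk):

```python
# Figures quoted in the session: 41 days on a traditional cluster vs 44 hours in the cloud.
traditional_hours = 41 * 24
cloud_hours = 44
speedup = traditional_hours / cloud_hours
print(f"speedup: {speedup:.1f}x")  # roughly 22x

# The quoted 37% cost reduction, applied to a hypothetical $10,000 CFD run.
traditional_cost = 10_000  # illustrative baseline, not a figure from the talk
cloud_cost = traditional_cost * (1 - 0.37)
print(f"cloud cost: ${cloud_cost:,.0f}")  # $6,300
```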

5. What is the key to Oracle’s Flexible Shapes? A fluid CPU to DRAM relationship

No two workloads are the same. In fact, the same workload’s performance requirements vary minute by minute, said Oracle’s Matt Leonard. The company’s Ampere A1 Flexible Shapes let customers dynamically change the core-to-memory ratio, so they can shift from a compute-heavy task like encryption to a DRAM-intensive application like an in-memory database without moving to a different shape or getting dinged on performance. He added that one flexible shape is equivalent to 39 different shapes at rival services.

Leonard also noted that Oracle’s Always Free tier includes 4 cores and 24GB of memory, enough to run a Kubernetes cluster and get some real work done.
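The core idea behind a flexible shape is picking cores and memory independently rather than from a fixed menu. A minimal sketch of that idea; the per-core memory cap below is an illustrative assumption, not Oracle’s actual shape rules:

```python
# Sketch of the flexible-shape idea: choose cores and memory independently
# within an allowance, rather than selecting from a fixed list of shapes.
# The gb_per_core_limit constraint is an illustrative assumption, not Oracle's.

def valid_configs(max_cores: int, max_memory_gb: int, gb_per_core_limit: int = 16):
    """Enumerate (cores, memory_gb) pairs a flexible shape could take."""
    configs = []
    for cores in range(1, max_cores + 1):
        for memory_gb in range(1, max_memory_gb + 1):
            if memory_gb <= cores * gb_per_core_limit:
                configs.append((cores, memory_gb))
    return configs

# Using the free-tier allowance mentioned above: up to 4 cores and 24 GB.
configs = valid_configs(max_cores=4, max_memory_gb=24)

# The same shape covers both a compute-heavy and a memory-heavy configuration:
print((4, 6) in configs, (1, 16) in configs)  # True True
```

A fixed-shape catalog would need a separate entry for each of these combinations, which is the gist of the “one flexible shape equals 39 rival shapes” claim.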

6. Confidential Compute at the Edge: Here’s how to do it.

Microsoft’s Eustace Asanghanwa delivered one of the most concise, precise, and helpful explanations of the movement toward confidential compute we’ve seen. Scalys’ Evert Pap, meanwhile, demonstrated how to build an enclave at the edge for securing data in use with the company’s TrustBox Edge 201, which was also recently certified under SystemReady.

7. Building more robust silicon.

The CPUs, DPUs, and the other chips running cloud infrastructure are becoming proverbial datacenters-on-chips. Thus, it stands to reason that these complex SoCs will need embedded sensors to monitor performance and anticipate problems just like full-fledged data centers.

Brian Millar, principal engineer for R&D at Synopsys, explained how embedded in-chip sensors can monitor voltage and temperature for performance per watt, pinpoint troublesome hot spots, and give chip designers a way to improve overall design robustness for the long term.

Cadence and Arm’s Satheesh Balasubramanian, meanwhile, discussed how designers can use Cadence’s Liberate and Tempus platforms to counter the age-old problem of transistor aging. With VDD decreasing and more devices being used in mission-critical applications like driving, degradation is a growing concern.

Additionally, Nick Heaton of Cadence and Arm’s David Koenen provided an overview of Arm CMN-700 verification.

8. Power consumption: It’s in the details.

Is power consumption inside hyperscale data centers on the verge of escalating, or will innovation continue to keep electricity, emissions, and costs level? It’s one of the most important, and most hotly debated, questions in computing. But everyone agrees on one thing: the titanic size of these centers, which account for $100 billion of the $120 billion spent per year on data centers, means that small changes can have an outsized impact.

Consider an individual chip. A 5 percent reduction in power could mean 2 watts of savings, or half a cent per server per hour, said Synopsys Strategic Programs Director Stephen Crosher. Multiply that across 100,000 servers and you’re talking $2.4 million saved per year in a single center.

Arm’s Hannah Peeler and Josh Randall, meanwhile, outlined how Arm is working on carbon calculators to help cloud providers, carriers, and their customers make more informed choices about their infrastructure.

9. The software for Neoverse continues to expand.

Check out the latest from Canonical on microclouds, Redis Labs on cloud-native computer vision at the edge, Red Hat, VMware, GitLab, CircleCI, NVIDIA, and more.

10. And finally, what’s going on with cloud native at the edge?

Glad you asked. SUSE’s Mark Abrams has the answers.

Arm DevSummit 2021: Now On-Demand, Free!

Engineers, developers, and tech enthusiasts: Arm DevSummit 2021 serves up insights into the latest technology trends, gives you an opportunity to up-level your skills in technical sessions and hands-on workshops, and offers the chance to network with like-minded software developers and hardware designers.
