Network Infrastructure

Meeting the demands of data-hungry subscribers

Telecoms infrastructure has evolved rapidly over the last few years and will continue to do so as operators meet the demands of data-hungry, “always on” subscribers. Operators and their OEM equipment suppliers are expected to support multiple new types of services that place high-bandwidth, low-latency demands on both wired and wireless networks. Many of these new deployments will require OEMs to develop platforms that scale with the number of subscribers to be connected. These new platforms will need a mix of heterogeneous CPU cores, accelerators and, in some cases, DSPs.


More and more functionality will be integrated onto single SoC devices, which will typically process multiple traffic types, including payload or backhaul traffic, control traffic and, in some cases, scheduling of users. In addition, in an R&D budget-challenged world, an industry-standard instruction set architecture (ISA) with a well-supported software and tools ecosystem enables product managers to deliver their products to market faster and to conserve R&D dollars for developing value-added, differentiated, application-specific features. Emerging initiatives such as Software Defined Networking, Network Functions Virtualization and Content Delivery at the Edge are likely to accelerate a shift towards open source software building blocks as operators use software to roll out new services more efficiently.

ARM continues to extend its roadmap with processor cores that meet a range of processing needs, spanning the Cortex-A, Cortex-R and Cortex-M series. ARM is also extending its IP portfolio to include coherent interconnect capable of supporting multiple clusters of cores and accelerators. Beyond providing high-performance cores such as the Cortex-A57, ARM supports finer-grained performance scaling on the same CPU subsystem: using asymmetric (big.LITTLE) processing, the leading-edge performance of the Cortex-A57 can be supplemented by smaller, lower-performance yet extremely power- and area-efficient cores such as the Cortex-A53. Equally important in meeting these processing demands is how the individual cores are interconnected in clusters, how dedicated acceleration and signal processing capability can be accommodated, and how data is transferred on and off chip through communication interfaces and to and from memory.

Coherent interconnects that support this stringent mix of requirements will become an increasingly important aspect of next-generation SoC designs. Clusters of each of these types of cores (Cortex-A57, Cortex-A53 and Cortex-A15) can be accommodated using the CCN-5xx interconnect product family. The CCN-504, for example, maintains cache coherency between up to 16 cores and provides a low-latency path between the cores, caches, external memory and networking I/O. ARM has developed a CCN roadmap that extends this capability to a higher number of cores. Coherency is also maintained between the clusters and accelerator IP attached through AMBA interfaces.

