Stage Three — Edge and Multicloud: On-prem/Public/Edge Clouds and Hybrid Approaches

Some carriers today have started down the path to edge deployments — an essential component of providing low-latency 5G services such as ultra-reliable low-latency communications (URLLC). Most edge deployments are expected to be container-based, but they still need to accommodate a significant number of vendor VNFs. Strategies vary, but some CSPs are starting to run VNFs on containers. CSPs will also need NFV solutions that can scale to hundreds or thousands of edge sites. At the same time, public clouds have become a viable platform for running VNFs or CNFs. As public clouds extend their presence to the edge to reduce latencies, CSPs can leverage these locations to host NFs, especially 5G user-plane functions (UPFs), as part of URLLC offerings. For optimal NF placement, the NFV orchestration solution needs to accommodate multiple NF types across multiple clouds.

Further, the edge will be a crucial enabler of enterprise 5G services. CSPs are looking to provide enterprises with low-latency application platforms that leverage their 5G networks. Therefore, the NF placement functionality needs to understand the context of the enterprise service as well as the requirements of deploying on an edge platform, and make the appropriate edge placements of critical functions.

Stage Four — Achieving Autonomy: AI/ML-driven Optimization and Management

In this stage, which we'll approach soon with 5G rollout and densification, NFs will proliferate across multiple clouds — public, on-prem, core, and edge. NFV orchestration systems will face added complexity in optimizing across numerous NFV infrastructure stacks to meet SLA constraints — latency, throughput, reliability — while minimizing cost. Manual management and some level of QoS stitching might work initially, but it will soon become clear that manual operation of such a system is unwieldy. Even simple rule-driven heuristics or basic optimization algorithms are unlikely to achieve the sophistication required. Therefore, AvidThink expects extensive use of AI/ML across the orchestration landscape as 5G rollouts become pervasive. These systems will need to dynamically place and configure NFs in response to network loads, network slice constraints, and available infrastructure.
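To see why simple approaches break down, consider a brute-force baseline for SLA-constrained, cost-minimizing placement — the kind of exhaustive search that an AI/ML-driven system would have to replace at scale. All NF names, latency budgets, and costs below are hypothetical.

```python
from itertools import product

# NF -> latency budget (ms); site -> (latency ms, cost per NF). Illustrative values.
nfs = {"upf": 6.0, "smf": 40.0}
sites = {
    "edge": (4.0, 10.0),
    "core": (20.0, 3.0),
    "public": (35.0, 2.0),
}

best = None
# Enumerate every assignment of NFs to sites: O(|sites| ** |nfs|).
for assignment in product(sites, repeat=len(nfs)):
    placement = dict(zip(nfs, assignment))
    # Reject any assignment that violates an NF's latency SLA.
    if any(sites[site][0] > nfs[nf] for nf, site in placement.items()):
        continue
    cost = sum(sites[site][1] for site in placement.values())
    if best is None or cost < best[1]:
        best = (placement, cost)

print(best)  # ({'upf': 'edge', 'smf': 'public'}, 12.0)
```

With two NFs and three sites the search space is trivial, but hundreds of NFs across thousands of edge sites make exhaustive search (and hand-tuned heuristics) intractable — which is the gap the report argues AI/ML-driven optimization will fill.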

Driving Towards 5G

CSPs are counting on 5G benefits, ranging from lower latencies, improved reliability, and better quality of service to substantially increased bandwidth capacity, to drive new use cases and applications with their enterprise and consumer subscribers. More significantly, these capabilities, coupled with the ability to offer different network slices to businesses, will provide a route to monetization for their 5G investment.

Another use case that is highly attractive to mobile network operators (MNOs) is 5G fixed-wireless access (FWA) as a wireline alternative. This allows MNOs to compete against fixed network operators. Even for CSPs with wireline operations, 5G FWA can be attractive because it represents a potential way to increase their subscriber reach without expensive build-out of in-ground infrastructure, especially in hard-to-reach locations.

Likewise, deployment of a new 5G core (5GC) with cloud-native architecture is expected to provide improved scalability and allow for optimized network operations — lowering operational expenses while improving capital efficiency.

5G, the Edge and Network Slicing

A 5GC coupled with the deployment of 5G new radios (NRs) enables network slicing. It is network slicing that allows 5G networks to simultaneously provide enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC) services on the same physical infrastructure. Essentially, it allows for the hosting of multiple concurrent isolated networks with unique SLAs, potentially even on a per-customer basis.
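The three service types differ chiefly in their SLA profiles. A minimal sketch of per-slice SLA bounds and an admission check might look like the following; the throughput and latency figures are representative assumptions, not standardized values.

```python
# Illustrative per-slice SLA profiles for the three 5G service types.
slices = {
    "eMBB":  {"min_throughput_mbps": 100.0, "max_latency_ms": 20.0},
    "mMTC":  {"min_throughput_mbps": 0.1,   "max_latency_ms": 1000.0},
    "URLLC": {"min_throughput_mbps": 10.0,  "max_latency_ms": 1.0},
}

def admits(slice_name: str, throughput_mbps: float, latency_ms: float) -> bool:
    """Check whether an infrastructure offer satisfies a slice's SLA."""
    sla = slices[slice_name]
    return (throughput_mbps >= sla["min_throughput_mbps"]
            and latency_ms <= sla["max_latency_ms"])

print(admits("URLLC", 50.0, 0.5))  # True
print(admits("URLLC", 50.0, 5.0))  # False: latency bound violated
```

Per-customer slices extend the same idea: each tenant gets its own SLA entry, enforced in isolation on shared physical infrastructure.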

A key to enabling these different QoS levels, including latency bounds, is the use of distributed locations for the UPFs/NFs that handle the packet data. Distributed compute locations allow packet handling to occur close to the associated user equipment (UE), processing data packets and rapidly dispatching them to their next hop. This distribution of compute improves capacity handling by limiting the backhauling of data while decreasing latency through reduced hop count. It can also improve the resiliency of the overall infrastructure by avoiding dependence on a central telco cloud.
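A back-of-the-envelope comparison shows how reduced hop count and shorter propagation paths translate into latency savings. The per-hop and propagation figures below are illustrative assumptions.

```python
# Compare a centralized UPF path (backhauled to the core) with an edge UPF
# path (local breakout). Figures are illustrative, not measured values.
def path_latency_ms(hops: int, per_hop_ms: float, propagation_ms: float) -> float:
    return hops * per_hop_ms + propagation_ms

central = path_latency_ms(hops=8, per_hop_ms=0.5, propagation_ms=12.0)
edge = path_latency_ms(hops=2, per_hop_ms=0.5, propagation_ms=1.0)
print(central, edge)  # 16.0 2.0
```

Under these assumptions, local breakout at the edge brings the one-way path from 16 ms down to 2 ms — the difference between missing and meeting a URLLC latency bound.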

Achieving distributed packet handling requires the orchestration of UPFs and relevant NFs across multiple locations. Regardless of whether these NFs are deployed per slice or multi-tenanted across many slices, we shall see why this is a particularly complicated problem, and one that first-generation NFV solutions do not solve.
