The challenge of data management in 5G networks
Edge processing – unlocked by CUPS – is set to surge, as enterprises adopt new low-latency services. This changes network boundaries, as data processing may move inside the enterprise domain. How can analytics keep pace? In this blog, we consider key issues.
Edge processing demand is set to skyrocket
We explored the challenges of data management in 5G networks in an episode of our webinar series (“Towards autonomous networks”). We noted that data management – already a complex topic – is going to become even more so, due to the introduction of slicing with 5G standalone networks and a diverse set of services with different characteristics. These include URLLC, massive IoT and, in the future, new services optimised for vehicle connectivity, for example. In this blog, we’ll expand on this point, primarily from the perspective of the edge.
That’s because the edge has been brought sharply into focus recently, with a flurry of announcements of both direct edge cloud investments by operators and partnerships between operators and cloud-scale providers. This activity is set to continue, and it has significant repercussions for future data management strategies.
Network boundaries are shifting…
Figure 1 provides a reminder of why this is so important. Traditionally, data has been largely confined to the core and access domains, with the boundary set by the last-mile technology deployed to reach subscribers. All of this has been under the control of mobile (or fixed) network operators.
Figure 1: Data processing domains in 5G SA
With the introduction of control and user plane separation (CUPS), data processing can take place wherever it is most required – and, for applications that leverage new URLLC capabilities, this means that processing resources need to be located closer to the edge and closer to the source of demand. With latency being a key requirement, distance matters.
Indeed, as Figure 1 also shows, processing can take place on the premises of specific business customers. Here, we need to think about Industry 4.0 cases rather than classical business communications – although the two may well be related.
What this means is that the boundary of the operator network has shifted, with key functional resources (e.g. the UPF, as illustrated above) potentially being located in customer facilities. In addition, while processing entities, such as the UPF, might be provided by the operator, it’s also conceivable that external providers may deliver this infrastructure and simply enable the operator to connect to them.
A UPF might be dedicated to a single tenant or shared between many. As a result, data management will become ever more complicated, with issues such as ownership, protection and location to be resolved between all stakeholders.
…Creating new challenges for service monitoring
Service monitoring is going to be similarly complex – and the scope will also extend to cater for different data management models. We’ll explore this in a moment, but let’s not forget how quickly this is happening. The most recent edition of the Ericsson mobility report wryly noted that “mobile network traffic growth remains steady” – if, that is, 46% year-on-year growth can really be called steady!
So, this flood of data – soon to be fuelled by traffic from new distributed resources – is surging at a colossal rate. In this context, there are three key issues to consider. First, volume – we’ve already seen how that is set to rise dramatically.
Variety, velocity and veracity
Second, variety. As we noted earlier, new services will be unlocked. In turn, these will bring entirely new kinds of data that will need to be processed, filtered and sorted to ensure optimum service delivery and maintain customer experience. Note here that ‘customer’ is a loose term – the customer could be a factory, for example.
Third, velocity. The dynamic nature of 5G SA – with not just multiple slices but also the ability to activate new slices or boost scale to meet short-term demand, among other capabilities – means that new applications and sources of data can be created (or removed), and that this will happen at real-time speed to serve ultra-low-latency applications. This has implications for all resources required to process this data. Of course, we’re building an architecture to support this, taking each domain into consideration (Figure 2, below).
Figure 2: Distributing monitoring and service assurance to meet edge processing requirements
These ‘Vs’ of data will probably be familiar to many. There are also two others – veracity and value. The value of data is chiefly for data users and processors to consider (although if some data is more valuable than other data, that too falls within the purview of service monitoring and assurance – but that’s another story). Veracity, however, is very much within the domain of assurance and monitoring. With so much data of different kinds, all streaming at an unprecedented rate, the ability to rapidly validate it and determine its quality will be essential, adding yet more complexity to the picture.
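To illustrate what veracity checking might look like in practice, here is a minimal sketch that screens streaming telemetry records for obvious quality problems before they reach downstream analytics. The field names (`timestamp`, `latency_ms`, `slice_id`) and the range limits are assumptions chosen for illustration, not a standardised schema.

```python
def validate_record(record: dict) -> list[str]:
    """Return a list of veracity problems found in one telemetry record."""
    problems = []
    if "timestamp" not in record:
        problems.append("missing timestamp")
    latency = record.get("latency_ms")
    if latency is None:
        problems.append("missing latency_ms")
    elif not (0 <= latency < 10_000):
        problems.append(f"latency_ms out of range: {latency}")
    if record.get("slice_id") is None:
        problems.append("missing slice_id")
    return problems

stream = [
    {"timestamp": 1700000000, "latency_ms": 4.2, "slice_id": "urllc-01"},
    {"timestamp": 1700000001, "latency_ms": -3, "slice_id": None},
]
for rec in stream:
    issues = validate_record(rec)
    print("ok" if not issues else issues)
```

The point of the sketch is that such checks must run inline, at streaming rate, in every domain where data is produced – which is why veracity adds real cost to the monitoring architecture rather than being an afterthought.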
Dynamic monitoring and assurance for dynamic, real-time networks
This new monitoring architecture must also be as dynamic as the network it serves. If a new slice is allocated, then the appropriate monitoring resources must also be instantiated at the same time.
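The lifecycle coupling described above can be sketched as follows: monitoring probes are created in the same step as the slice and torn down with it, never provisioned after the fact. The class and method names here are hypothetical; a real system would drive an orchestrator rather than an in-memory dictionary.

```python
class MonitoringManager:
    """Sketch of slice-coupled monitoring: probes live and die with their slice."""

    def __init__(self):
        self.probes: dict[str, str] = {}

    def on_slice_created(self, slice_id: str, profile: str) -> None:
        # Instantiate monitoring as part of slice activation, not afterwards
        self.probes[slice_id] = f"probe-{profile}-{slice_id}"

    def on_slice_removed(self, slice_id: str) -> None:
        # Reclaim monitoring resources the moment the slice goes away
        self.probes.pop(slice_id, None)

mm = MonitoringManager()
mm.on_slice_created("urllc-01", "low-latency")
print(mm.probes)  # probe exists as soon as the slice does
mm.on_slice_removed("urllc-01")
print(mm.probes)  # probe removed with the slice
```

Treating monitoring as an event-driven part of the slice lifecycle, rather than a separately provisioned system, is what keeps assurance in step with a network that reconfigures itself in real time.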
So, there are many challenges ahead. What’s certain is that this new network architecture is not just slideware anymore. It’s being deployed and it’s only a matter of time before edge processing is pushed out from the current operator domain boundaries into the enterprise.
After all, each week brings news of new private network deployments – yes, some of these are independent from operators, but a significant number will be built on operator investments. The edge has always been important, but these new demands and service requirements are set to create a host of new opportunities. Operators must be ready to expand their monitoring and assurance footprint to meet them.