Edge Computing Infrastructure: The 5 Key Factors

When paramedics arrive at the scene of an accident or emergency, resources and equipment can be very limited. Paramedics work to stabilize the patient as best they can to quickly transport them to the hospital, while ambulances act as an expensive taxi service, contributing little in the way of equipment for in-depth medical diagnosis or treatment. With the centralization of hospitals and increasing traffic congestion, the average time to get to the hospital is increasing, costing valuable time and lives. But what if that ambulance was connected to the edge?

What if a small device, connected to a mobile phone, could perform remote diagnostic tests such as ultrasounds, saving precious time and allowing the hospital to make the necessary preparations and have rapid treatment ready upon arrival? What if a specialist could provide over-the-shoulder remote support, guiding clinicians as they direct the probe and interpreting the images right then and there, in real time? Such a use case demands compute power and network characteristics (latency and bandwidth) that cannot be achieved with a central or regional cloud.

This is exactly the conversation we had a couple of years ago with a medical device company looking to improve the capabilities of ambulances. This kind of use case is not only exciting, it saves lives. And it’s just one use case example in the 5G-enabled enterprise market that is expected to be worth up to $700 billion by 2030.

Welcome to the innovative emerging world of edge computing.

Cloud and edge: the technology that enables 5G to deliver on its promises

Using the capabilities of the cloud, edge computing brings computing power and storage closer to where data is generated and consumed. Whether that means an on-premises enterprise deployment or a mobile network will depend on the requirements of the application, but like real estate, it’s all about location, location, location. It is also a complete change from the trend towards centralization that we have seen in recent years to reduce costs and maintain control.

To be clear, edge computing is not a completely new concept. Distributed cloud and other similar technologies are already used by players, including major streaming media providers around the world. But with the arrival of 5G comes a whole new level of networking features and, in turn, a whole new world of opportunity. And without an edge bringing power and processing capabilities closer together, the full promise of 5G for customers and consumers simply cannot be realized. As we often say: “Without edge computing, 5G is just faster 4G.”

So how can CSPs deliver end-to-end capabilities, bridge the network and edge, and position themselves to make the most of these early opportunities? To get you started, here are five key interdependent areas to consider when it comes to defining and implementing your edge computing solutions.

Infrastructure

When it comes to the edge infrastructure layer, we’re talking about where the compute, storage, and application hosting are, about bringing the cloud to the edge. Unlike other network infrastructures, the edge infrastructure is not about building in a set of servers or static machines on which you can install your application. It’s about introducing a way to manage things, similar to how you would manage cloud capabilities today, but at the edge, in a distributed environment.

Since reliability will be crucial for edge applications, the infrastructure must be flexible, efficient, and automated. Depending on application requirements, the infrastructure could sit on-premises or in CSP networks, hosting your telecom workloads and OTT 3PP applications with limited local management. To support diverse applications, it is vital that the infrastructure also supports multi-cloud and hybrid-cloud environments.
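
To make the idea of managing distributed edge capacity more concrete, here is a minimal sketch in Python of a declarative, automated approach: desired workloads are declared centrally and each site reconciles itself against that declaration. The site names, workloads, and structure are invented for illustration and are not tied to any specific product or API.

```python
# Minimal sketch (invented names, not a product API): declarative management of
# distributed edge sites, reconciling each site against a centrally declared state.
from dataclasses import dataclass, field


@dataclass
class EdgeSite:
    name: str
    cloud: str                      # e.g. "on-prem", "csp-regional", "public"
    running: set = field(default_factory=set)


# Desired state declared centrally, applied automatically per site.
desired = {
    "factory-floor-01": {"ultrasound-inference", "local-packet-gateway"},
    "metro-pop-07": {"video-optimizer"},
}


def reconcile(site: EdgeSite) -> None:
    """Bring a single site in line with its declared workloads."""
    want = desired.get(site.name, set())
    for app in want - site.running:
        print(f"[{site.name}] deploying {app}")
        site.running.add(app)
    for app in site.running - want:
        print(f"[{site.name}] removing {app}")
        site.running.discard(app)


for s in [EdgeSite("factory-floor-01", "on-prem"), EdgeSite("metro-pop-07", "csp-regional")]:
    reconcile(s)
```

In practice this loop would run continuously and at scale, which is exactly why automation, rather than manual per-site administration, is the defining feature of edge infrastructure.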

Orchestration

Edge orchestration boils down to resource distribution and configuration. You can’t just have hundreds of places in a country running edge workloads and have all applications deployed to all sites at all times; it would require a lot of resources and would be too expensive. Of course, being smaller than a centralized location, the edge is a resource-constrained environment. This makes it vital to map the topology, considering the capabilities of all the different sites across the network, identifying the best location to place an application, and continually monitoring it for optimal use.

We call this “Intelligent Workload Placement” – we use algorithms to weigh how capabilities can best be provided where they are needed most, and to find the sweet spot where the cost of deploying an application on a multi-cloud infrastructure is offset by the benefits it provides. Dynamic resource allocation, and ensuring that data and information flow to the right places, are crucial for the effective operation of applications at the edge, especially in a multi-cloud environment.
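
As a rough illustration of the idea, the toy Python sketch below scores candidate sites against an application's latency budget and resource needs, trading residual latency against cost. The site data, weights, and thresholds are invented for illustration; they are not drawn from any real placement engine.

```python
# Toy "intelligent workload placement" scorer: pick a feasible edge site and
# trade residual latency against cost. All figures are invented for illustration.
sites = [
    {"name": "central-dc",   "latency_ms": 45, "free_vcpu": 400, "cost_per_vcpu": 1.0},
    {"name": "metro-edge",   "latency_ms": 12, "free_vcpu": 60,  "cost_per_vcpu": 2.5},
    {"name": "on-prem-edge", "latency_ms": 3,  "free_vcpu": 8,   "cost_per_vcpu": 4.0},
]


def place(latency_budget_ms: float, vcpu_needed: int, cost_weight: float = 0.7):
    """Return the best site that meets the latency budget and has spare capacity."""
    feasible = [s for s in sites
                if s["latency_ms"] <= latency_budget_ms and s["free_vcpu"] >= vcpu_needed]
    if not feasible:
        return None  # nothing fits: scale out, or renegotiate the requirements
    # Lower score is better: blend remaining latency with the cost of the vCPUs.
    return min(feasible,
               key=lambda s: (1 - cost_weight) * s["latency_ms"]
                             + cost_weight * s["cost_per_vcpu"] * vcpu_needed)


print(place(20, 4)["name"])  # -> metro-edge: within budget and cheaper than on-prem
print(place(5, 4)["name"])   # -> on-prem-edge: the only site meeting a 5 ms budget
```

In a real deployment the inputs would come from live topology and telemetry data, and placement would be re-evaluated continuously rather than computed once.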

Learn more about business service orchestration and how it can transform technology.

User plane

Control and user plane separation (CUPS) was a technology introduced in 4G that has since become more advanced with the advent of 5G, forming the basis for the main functions of the packet core. While the control plane deals with access, mobility, and session management functions and can be centralized, the user plane function is essentially the gateway between the network and the application – the connection point where the network meets the Internet.

Therefore, the user plane is a key function that must be distributed at the edge. If you are pushing an application out to a certain location, you need to make sure that the gateway is nearby, and tell the network to direct that application's traffic there. To do this successfully, operators need a highly agile user plane function that can scale to meet the demands of an application and can be deployed on site as a plug-and-play solution.
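
The sketch below illustrates that pairing with an invented topology: once the orchestration step has placed an application at an edge site, session setup prefers a user plane function anchored at that site and falls back to a central one otherwise. It is a simplified illustration, not a 3GPP procedure or a product interface.

```python
# Illustrative sketch (invented topology): select a user plane function (UPF)
# close to the edge site where the application has been placed.
upfs = {
    "upf-central":  {"site": "central-dc"},
    "upf-metro-07": {"site": "metro-pop-07"},
}
app_placements = {"ar-ultrasound": "metro-pop-07"}  # output of the orchestration step


def select_upf(app: str) -> str:
    """Prefer a UPF anchored at the edge site hosting the app; otherwise stay central."""
    site = app_placements.get(app)
    for name, upf in upfs.items():
        if upf["site"] == site:
            return name
    return "upf-central"


print(select_upf("ar-ultrasound"))  # -> upf-metro-07
print(select_upf("web-browsing"))   # -> upf-central (no edge placement)
```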

Learn more about the role of the edge user plane and how Ericsson’s Local Packet Gateway fits in.

Traffic routing

Traffic routing is an important area, as this is where the network itself comes into play. While infrastructure and orchestration focus on the hosting and application environment, traffic routing brings in the information and awareness that lies within the network and that CSPs already have: where the user is and what the user is trying to consume. For example, if a user requests content from a streaming video application, that information sits within the CSP network; the user's IP session is routed from the user's location to the user plane function near where the streaming service's servers are located. At the edge, however, where traffic is not always contained within the carrier's network, different options are available for routing the user's IP session to the edge.

We can bring all the traffic to the edge and then decide where it goes from there, or bring only some of the traffic to the edge and manage the rest more centrally. Three main mechanisms are emerging as the relevant technologies for edge traffic routing: distributed anchoring, session breakout, and multiple sessions. So how do CSPs decide which technology is best for them?

Ultimately, it will depend on the application and intended use. Being a very simple mechanism that can be delivered on top of existing 4G and 5G networks, distributed anchoring would be a good option for many looking to implement on their existing networks. Session breakout is a more complicated option, requiring complex features to be developed across multiple products, and it is very specific to 5G: the mechanism does not exist at all in 4G. Multiple sessions, which is quite promising but not yet an established technology due to its dependency on the device ecosystem, will likely be 5G-specific as well. By the time it matures, however, 5G is likely to be firmly established as a mainstream technology.
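
The toy helper below captures those trade-offs as simple rules, purely for illustration rather than as a formal recommendation engine: 4G networks fall back to distributed anchoring, while 5G networks can use multiple sessions where the device ecosystem supports it, or session breakout where local breakout per application is required.

```python
# Toy decision helper reflecting the trade-offs above; the rules are a
# simplification for illustration, not a formal recommendation engine.
def pick_routing_mechanism(generation: str,
                           device_supports_multiple_sessions: bool,
                           needs_local_breakout: bool) -> str:
    if generation == "4G":
        # Session breakout and multiple sessions are effectively 5G mechanisms.
        return "distributed anchoring"
    if device_supports_multiple_sessions:
        return "multiple sessions"
    if needs_local_breakout:
        return "session breakout"
    return "distributed anchoring"


print(pick_routing_mechanism("4G", False, False))  # -> distributed anchoring
print(pick_routing_mechanism("5G", False, True))   # -> session breakout
print(pick_routing_mechanism("5G", True, True))    # -> multiple sessions
```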

At the end of the day, we want to achieve that traffic separation. For those who have not yet invested heavily in session breakout technology, distributed anchoring should be straightforward to implement now, with the option of evolving to the multiple-sessions mechanism in the future and avoiding session breakout altogether. But you should thoroughly understand the costs and benefits of each technology before making a decision, or speak with an expert partner who can help.

Edge exposure

There are two angles when it comes to edge exposure: exposure for the edge and exposure at the edge. Exposure for the edge includes the exposure of assets such as UE IP addresses, edge discovery information, and network identity translation information – vital information for systems that need to find where edge sites are and how to connect to them.

Exposure at the edge is about exposing capabilities at the edge to the applications that reside there. These could include location information, quality-of-service information, or user equipment information. Exposing these capabilities locally means that in latency-sensitive scenarios you don't have to go back to a central location to access them. It is also important to note that all this information must be exposed in a format that the network can identify and that can be translated into a useful, consumable format for applications.
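
As a simple illustration of that translation step, the Python sketch below turns a network-internal record into an application-friendly payload. The field names, values, and structure are hypothetical and do not represent the actual API of any exposure server.

```python
# Hypothetical example: translating network-internal UE state into a
# consumable payload for an edge application.
import json


def expose_ue_context(internal_record: dict) -> str:
    """Translate network-internal UE state into an application-consumable payload."""
    payload = {
        "ueIp": internal_record["ue_ip"],
        "servingEdgeSite": internal_record["site"],
        "estimatedLatencyMs": internal_record["latency_ms"],
        "qosProfile": internal_record["qos"],
    }
    return json.dumps(payload)


print(expose_ue_context({
    "ue_ip": "10.20.0.7",
    "site": "metro-pop-07",
    "latency_ms": 9,
    "qos": "low-latency-video",
}))
```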

Ericsson Edge Exposure Server – part of the existing Ericsson Cloud Core Exposure Server offering.

With 25% of all emerging 5G use cases expected to be edge-based over the next year, the business potential for CSPs that move early to gain an advantage in this emerging area is huge, particularly in enterprise use cases for sectors such as gaming, industry, healthcare, and more. The questions will simply be: what will your strategy look like for multi-cloud edge deployment? What role will you play in this new ecosystem? And who will you choose to help you on that journey?

Read more

Learn more about edge computing strategies for a successful implementation and related offerings from Ericsson.

See what Ericsson CTO Erik Ekudden and Bharti Airtel CTO Randeep Sekhon had to say about cloud innovation and edge opportunities in India and around the world in this CTO Focus blog post.

Find out how Ericsson drives openness for ecosystem innovation.
