What is edge computing?

Edge computing is a buzzword that has been cropping up more and more recently and is often thrown casually into discussions. But what exactly is it? And what can we expect from this technology? It’s time to take a closer look at the topic.

Rapidly increasing amount of data

We live in a time in which all of us are constantly producing data, and the volume of that data is not just growing, it is exploding. At the same time, the importance of live data is rising, which leads to the following problem:

Today, data is usually processed in the cloud: it is sent from a device to a conventional data center, processed there and then sent back. Because these data centers are too far from where the data is generated and used to guarantee low latency, this round trip has become a major bottleneck for the adoption of new technologies.

Self-driving cars are a vivid example: the countless sensors around the vehicle produce an enormous flood of data. Sending this data to a distant data center for processing and waiting for the result simply takes too long for safe driving.

The solution to this problem is “edge computing”, i.e. processing this data at the edge of the network.

How does edge computing work?

Put simply, a network of smaller data centers (known as “cloudlets”) processes part of the generated data close to its source, and only the portion that is really needed is forwarded to the cloud. This makes real-time data analysis possible, cuts response times and saves bandwidth, resulting in lower latency and a noticeably better experience for the user.

That leaves the question of what exactly the “edge” is, i.e. where this organization and part of the processing actually take place. In short: everywhere. Tasks run on the devices themselves, on sensors and in decentralized mini data centers.
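To make this division of labor a little more concrete, here is a minimal sketch in Python. It is not tied to any real platform: the sensor values, the local threshold and the send_to_cloud stub are hypothetical placeholders, and an actual deployment would use a real messaging or streaming service instead of a print statement.

```python
from statistics import mean

THRESHOLD = 75.0  # hypothetical local decision limit for "urgent" readings

def process_at_edge(readings):
    """Runs on the edge node (cloudlet): analyze the raw sensor data locally."""
    urgent = [r for r in readings if r > THRESHOLD]  # handled on site, no round trip
    summary = {"count": len(readings), "avg": mean(readings), "max": max(readings)}
    return urgent, summary

def send_to_cloud(summary):
    """Placeholder: only the condensed result travels on to the central cloud."""
    print("forwarding to cloud:", summary)

# Example: a burst of raw values stays local; only the small summary leaves the edge.
raw = [71.2, 69.8, 80.4, 70.1, 72.5]
urgent, summary = process_at_edge(raw)
if urgent:
    print("handled locally with minimal latency:", urgent)
send_to_cloud(summary)
```

The point of the sketch is simply that the bulky raw data never has to cross the network: the edge reacts to time-critical values immediately and passes only a compact summary upstream.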

No substitute for the cloud

To avoid any misunderstanding: edge computing is not intended as a successor to the cloud. Rather, it complements it, providing important support for certain application areas such as augmented reality and IoT, or perhaps even making them possible in the first place.

You could say that edge computing does not replace the cloud but brings it closer to the user by shortening the path between the two. Another useful distinction: the cloud handles “big data”, the edge handles “instant data”.

Cloud and edge therefore complement each other in a division of labor.

Edge computing and augmented reality

As mentioned above, edge computing is intended to help emerging technologies overcome current hurdles. That help is urgently needed in augmented reality as well.

AR is no longer just hype but is on its way to becoming a megatrend. More and more companies are beginning to recognize the true benefits for themselves and are jumping on the bandwagon. Unfortunately, the IT infrastructure is not yet able to keep up with this development.

In particular, processing the large volumes of image data quickly enough still causes problems in many places. Excessive latency and limited bandwidth make it difficult to use many augmented reality applications at scale. Here, decentralizing data processing is a key factor in the hope of an early breakthrough for AR: vanishingly low latency makes concepts such as AR in everyday glasses genuinely realistic.