I searched the internet for the most common questions about edge computing. With this post you can take your first steps in the exciting field of edge computing.

As you read through my posts on emerging tech, my hope is that they will spark your interest in the topic and inspire you to learn more on your own. I will provide you with an overview of the latest developments in the field and give you examples of how these technologies are being used in the real world. My goal is to give you a sense of the potential impact that these technologies will have on our society and economy, and to encourage you to explore the topic further. To help you in your journey, I will suggest some useful resources, such as articles, websites and other materials, that can help you explore the subject in more depth.

What does “edge” computing mean?

“Edge” refers to the edge of the network.

To understand the meaning of edge computing, it is helpful to start by comparing it with cloud computing. "Edge" refers to computation that takes place at the border, or "last mile", of the network. It is normally contrasted with the "cloud", where data is transported over the internet to the cloud itself in order to be processed.

Edge computing means real-time local data analysis at the edge of the network, closer to the end user.

Definition of Edge Computing

According to the Open Glossary of Edge Computing, edge computing is the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services.

By shortening the distance between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today’s Internet, ushering in new classes of applications.

In practical terms, this means distributing new resources and software stacks along the path between today’s centralized data centers and the increasingly large number of devices in the field, concentrated, in particular, but not exclusively, in close proximity to the last mile network, on both the infrastructure and device sides.

The growing interest in edge computing

The search metrics for the query "edge computing" are pretty clear: interest in the topic is growing.

“Edge Computing”, Global Key Search Trend, Source: Google Trends

Which concepts is edge computing based on?

As the number of connected devices increases exponentially and the features of these devices evolve over time, so does the demand for real-time analysis and optimization. In this context, cloud computing may not be the right choice due to the technical challenges of latency, cyber-security and stable internet connectivity, as well as the commercial challenges related to bandwidth costs.

Edge computing is built around the needs of latency-critical and high-bandwidth applications.

This short and simple video from Red Hat is a good explanation for someone getting started.

What are the differences between edge computing and cloud computing?

In edge computing, the data remains on-site and is processed within the network. Edge computing is focused on processing large amounts of data generated by both legacy and IoT devices.

In cloud computing, the transmission of data through the internet generates costs, which are typically invoiced based on the bandwidth used.

By circumventing the need to access the cloud to make decisions, edge computing provides real-time local data analysis to devices. Later in this post we will see some of the industries that can benefit from edge computing.

Edge computing is a better fit for real-time critical applications compared to cloud computing.

Comparison: edge vs. cloud computing

How does edge computing reduce latency for end users?

Latency plays a key role in Internet-connected applications, so let's examine that role together. Latency and reliability are directly affected by distance: the longer the distance, the longer data takes to travel and the greater the possibility of disruptions in the communication.

In practice, a longer distance from sender to receiver means the data being sent takes longer to travel, while a shorter distance results in lower latency because the data arrives sooner.

Latency can also be affected by software and hardware components in the network path, along with network congestion at traffic exchange points.
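To make the distance argument concrete, here is a back-of-envelope sketch. The fiber propagation speed and the two distances are illustrative assumptions, not measurements, and real RTTs are higher because of the hardware, software and congestion factors mentioned above:

```python
# Back-of-envelope propagation delay: light in optical fiber travels at
# roughly 200,000 km/s, i.e. about 200 km per millisecond (assumed value).
FIBER_SPEED_KM_PER_MS = 200.0

def one_way_delay_ms(distance_km: float) -> float:
    """Theoretical minimum one-way propagation delay over fiber."""
    return distance_km / FIBER_SPEED_KM_PER_MS

def round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time (RTT), ignoring every hop and queue."""
    return 2 * one_way_delay_ms(distance_km)

# A hypothetical cloud region 2,000 km away vs. an edge site 20 km away:
print(round_trip_ms(2000))  # 20.0 ms best case
print(round_trip_ms(20))    # 0.2 ms best case
```

Even in this best case, the distant cloud region costs two orders of magnitude more round-trip time than the nearby edge site.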

What is latency?

In the context of network data transmission, latency is the time it takes for a unit of data (typically a frame or packet) to travel from the source device to its intended destination. Latency is generally measured in milliseconds. It is a key metric for optimizing the user experience in modern applications. It is distinguished from jitter, which refers to the variation in latency over time. It is sometimes expressed as Round Trip Time (RTT).
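The distinction between latency and jitter can be illustrated with a few hypothetical RTT samples. The numbers are invented, and using the standard deviation as the jitter measure is one common convention among several:

```python
import statistics

# Hypothetical per-packet RTT samples in milliseconds.
rtt_samples_ms = [24.1, 25.3, 23.8, 30.2, 24.5]

# Latency: how long packets take on average.
avg_latency_ms = statistics.mean(rtt_samples_ms)

# Jitter: how much that latency varies over time (here, the sample
# standard deviation; other definitions, e.g. mean inter-packet delay
# variation, are also used in practice).
jitter_ms = statistics.stdev(rtt_samples_ms)

print(f"avg latency: {avg_latency_ms:.1f} ms, jitter: {jitter_ms:.1f} ms")
```

Two links can share the same average latency while one has much higher jitter, which is why real-time applications care about both metrics.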

Latency-sensitive applications

An application in which reducing latency improves performance, but which may still run if latency is higher than desired. Unlike a latency-critical application, exceeding latency targets does not generally result in application failure, but may degrade the user experience.

Examples: image processing, bulk data transfers, video streaming.

Latency-critical applications

An application that fails or performs disastrously if latency exceeds certain thresholds. Latency-critical applications are typically responsible for real-time operations. Unlike latency-sensitive applications, failure to meet latency requirements often results in application failure.

Examples: autonomous vehicles, control of machine-to-machine processes.

What describes the relationship between edge computing and cloud computing?

Edge and cloud computing are at once alternatives and complements, and they can be used together. A distributed architecture (edge plus cloud) can help optimize the cost and latency of computation.

For example, latency-critical computation (e.g. real-time optimization) can be performed at the edge, while heavy-load computation (e.g. big data analysis) can be performed in the cloud.
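As a toy illustration of such a distributed architecture, here is a hypothetical placement function. The RTT budgets, task fields and function name are all invented for this sketch and do not correspond to any real scheduler:

```python
from dataclasses import dataclass

# Hypothetical latency budgets; a real system would use measured values.
EDGE_RTT_MS = 5.0    # nearby edge site
CLOUD_RTT_MS = 80.0  # distant cloud region

@dataclass
class Task:
    name: str
    max_latency_ms: float  # deadline for a response
    heavy_compute: bool    # needs large-scale cloud resources?

def place(task: Task) -> str:
    """Route latency-critical work to the edge, heavy batch work to the cloud."""
    if task.max_latency_ms < CLOUD_RTT_MS:
        return "edge"   # a cloud round trip alone would blow the budget
    if task.heavy_compute:
        return "cloud"  # deadline is loose enough; use big cloud resources
    return "edge"       # fast and local by default

print(place(Task("brake-control", max_latency_ms=10, heavy_compute=False)))
print(place(Task("nightly-analytics", max_latency_ms=60000, heavy_compute=True)))
```

The first task lands at the edge because its 10 ms deadline cannot absorb an 80 ms cloud round trip; the second tolerates the trip and benefits from cloud-scale resources.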

What is the relationship between 5G and edge computing?

Edge computing consists of running software at the edge of the network, closer to the end user. As explained earlier, this enables faster data processing, lower latency and lower energy consumption.

5G enables the development of new IoT (Internet of Things) applications that require high bandwidth and low latency.

Example of 5G and edge computing combined: AWS Wavelength

AWS Wavelength is an example of how to avoid the latency that results from application traffic traversing multiple hops across the Internet to reach its destination, enabling customers to take full advantage of modern 5G networks.

Wavelength Zones are AWS infrastructure deployments that embed compute and storage services within communications service providers' data centers at the edge of the 5G network, so application traffic can reach application servers running in Wavelength Zones without leaving the telecommunications network.

Edge computing reduces latency by bringing compute capabilities closer to the end user.

Edge computing resources

Would you like to continue reading about edge computing? Here are a few papers available for free online.

Leave a comment below if you found this post helpful and feel free to contact me to learn more about edge computing.

Thank you for taking the time to visit my website and read this post. I hope that you found the information provided to be valuable and useful. If you enjoyed this article, I encourage you to explore some of the other content on my website. If you have any questions or comments, please don’t hesitate to reach out to me. I am always happy to engage with my readers and hear your thoughts. Thank you again for visiting, and I hope to see you back here soon!
