As the Internet of Things becomes ubiquitous and smart devices grow more widespread, a data explosion is on the horizon. It is estimated that by 2025, there will be over 55 billion smart devices in use. These IoT-enabled devices will generate ever-increasing volumes of data that need to be analyzed in real time. Sending all of this data to the cloud for analysis introduces latency, delaying actionable insights; in scenarios such as self-driving vehicles, manufacturing units, or healthcare centers, that delay can prove fatal. Edge computing has emerged as a solution that brings processing power closer to the data while optimizing the use of network resources.
Although still in its nascent stages, edge computing, or simply "the edge", has already created a buzz and is a hot topic of discussion. According to Gartner, by 2025, 75% of data will be processed outside the cloud, a significant increase from 18% in 2018. This statistic assumes particular significance in view of Cisco's estimate that more than 500 zettabytes of data, largely IoT-generated, will be processed at the edge.
Edge computing refers to processing data at the periphery (hence the name) of the network. Data from devices is collated, processed, and analyzed close to its point of origin, and only the required processed data, in encrypted form, is sent on to the cloud. This removes the need to transmit the entire data set to the cloud, preventing unnecessary congestion of the cloud and the overall wide-area network. The result is lower latency, more efficient bandwidth use, and lower operational costs.
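To make the idea concrete, here is a minimal sketch of the pattern described above: a hypothetical edge gateway aggregates raw sensor readings locally and forwards only a compact summary toward the cloud. The function names and the temperature data are illustrative assumptions, not a real product API; in practice the payload would also be encrypted (for example, sent over TLS) before leaving the edge.

```python
import json
import statistics

def summarize_readings(readings):
    """Reduce a batch of raw sensor samples to a small summary payload.

    The raw samples never leave the edge device; only these few
    aggregate values do.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
    }

def to_cloud_payload(summary):
    """Serialize the summary for transmission to the cloud.

    A real gateway would encrypt this payload before sending it
    over the wide-area network.
    """
    return json.dumps(summary).encode("utf-8")

# 1,000 raw temperature samples, generated locally at the edge...
raw = [20 + (i % 50) / 10 for i in range(1000)]

# ...are reduced to a payload of a few dozen bytes before any
# data crosses the wide-area network.
payload = to_cloud_payload(summarize_readings(raw))
print(len(json.dumps(raw)), "bytes of raw data vs", len(payload), "bytes sent")
```

Even in this toy example, the summary is orders of magnitude smaller than the raw batch, which is exactly why keeping the first round of processing at the edge conserves bandwidth and cuts latency.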
Edge computing will play an important role in numerous applications and use cases across industries. McKinsey has identified energy, healthcare, retail, travel, transportation, logistics, and utilities as the industries with the most use cases that stand to benefit from edge computing. At present, edge computing is used in data centers, the oil and gas industry, smart grids, video delivery, healthcare, autonomous vehicles, traffic management, public security, and more.
Although edge computing promises multiple benefits, including faster data processing, low latency, optimal bandwidth usage, decongestion of the wide-area network, reliability, scalability, and cost savings, there is a flip side as well. Security remains a gray area: proponents say edge computing enhances network security, while opponents argue that it increases the attack surface. We will discuss this debate and the challenges around edge computing in the next post.
This article was written by Neetu Katyal, Content and Marketing Consultant.
She can be reached on LinkedIn.