The edge is near. Both in terms of proximity and timing, the edge looms close.
Far from a passive, near-perfunctory periphery of the network, the edge is a bustling locale for data analysis, management, and even storage. The migration of what technologist David McCrory called the center of data's gravity to the edge is transforming industries and opening up new market opportunities. In an October 2018 report, McKinsey & Company identified 107 distinct edge use cases, estimating the potential value of edge computing at USD 175B–USD 215B by 2025, and that's just the value for hardware companies.
Most are waking up to the reality that “we need to expand our thinking beyond centralization and the cloud, and toward location and distributed processing for low-latency and real-time processing,” as Gartner analyst Thomas J. Bittman put it. Still, to those who don’t specialize in tech, the learning curve can be understandably steep.
Some rather understandable misunderstandings can, shall we say, cloud the edge. Let’s take a look at the three most common myths—and how they stack up against reality.
MYTH 1: The edge will eat the cloud.
Distributed computing has been so ascendant that venture capitalists began to shift their priorities accordingly, with some issuing drastic forecasts. One notable prediction came in a 2017 talk titled "Return to the Edge and the End of Cloud Computing," given by enterprise investor Peter Levine. He declared that, because of the machine-learning- and IoT-driven shift of computing from the cloud to the edge, he could see the cloud dissipating in "the not-too-distant future." That same year, Gartner VP and analyst Thomas J. Bittman issued a similar warning. "The edge will eat the cloud" was the titular prognosis of the article in which he described the shift toward "location and distributed processing for low-latency and real-time processing."
REALITY: Edge and cloud will boost each other.
There are solid reasons why a recent IDC study predicts that by 2025, 30% of the world's data will need real-time processing. An easy example: take autonomous vehicles (self-driving cars) and connected vehicles (ones that exchange a great deal of data with other vehicles but do not make decisions for the driver). Both are intuitive edge use cases. If a connected or self-driving car's sensors learn that children are playing in the road and another vehicle is likely to blow through a nearby red light, this information needs to be processed quickly. There are no milliseconds of latency to spare for sending those insights back to the cloud for processing. The data must be acted upon in that split second.
Levine is right to point out that the processing of this life-critical data (often via machine learning) will need to happen at the endpoints. But the title of his talk is a bit of a misnomer. In the very same presentation, he admits that "important information will still get stored in a centralized cloud" and depicts the cloud as becoming a learning center of sorts: enabling machine learning at scale requires a great deal of data, aggregated from insights generated at the edge. Gartner's Bittman, too, conceded that "cloud will have its role."
So no, the edge will not overtake the cloud. Instead, it will prompt the cloud to extend its fabric to the edge.
The hyperscale data center model continues to work well for applications that benefit from centralization: large-scale archiving, content distribution, application storage, and fast prototyping, among others.
It is also true that a specific kind of cloud deconsolidation is taking place concomitantly. According to Data at the Edge, a 2019 report published by Seagate with Vapor IO, companies like Vapor IO, Edgeconnex, and DartPoints are turning to micro-modular data centers, also called edge data centers. They are small, regional, self-contained, cost-lowering, automated "micro-regional data centers at the edge of the network—in novel locations, such as in parking lots, on municipal rights of way, and at the base of cell towers." Designed to withstand environmental and security challenges at the periphery, these edge clusters have "sufficient computing power to aggregate and process data separately from centralized data centers," according to Dell EMC, another micro-modular data center innovator. Cloud and edge computing infrastructure provider Packet calls these offerings "go-anywhere" clouds.
Paradoxically, the edge can be seen as a natural outgrowth of the cloud. While the cloud has “democratized the Internet,” enabling video streaming and gaming, according to Telefonica’s VP Patrick Lopez, “we think that edge is the next generation of that.”
“Edge computing is basically bringing the best of the cloud and the best of telecommunications together,” Lopez said. “The best of the cloud because it’s taking all these cloud services and bringing them closer to the user, and the best of telecommunications because it brings immediacy, always-on, always-connected, which is what telcos are known for.”
MYTH 2: There’s only one edge.
After all, that's how we refer to it, in the singular, no?
REALITY: There are many edges.
Yes and no. But it's not quite a case of tomato, tomahto.
Meaning: there is a growing number of networks, and therefore a growing number of outer network boundaries containing endpoints that run applications of interest to users. Some have even tried, just for fun, to quantify the maximum possible number of edges.
These edges can run in a barn in a field, in a connected car, or in any number of other locations.
Purpose-built edges are definitely a thing in the near term. With time, the edges will become cloudified: customization will continue, but likely only as a software layer. As Telefonica's Lopez has noted, the ubiquity of access and the simplicity of developing applications that characterized the cloud might have to become a must at any edge. If someone develops an app that works in one edge, it ought to be deployable in any network.
MYTH 3: Shrink the cloud, put it in a box, and voila—you have the edge!
We've already established that some storage and processing will need to take place at the edge. It would also be desirable to replicate certain attributes of the cloud environment across a variety of edges: equal network access, and compatibility of an app developed in one edge network with different edge networks. Doesn't that make each edge a little cloud?
REALITY: The edge is not a tiny cloud.
Remember that it was data and its needs that gave rise to the edge(s)—not the opposite.
This means the edge is determined by use cases that produce and process data close to end users.
And these use cases vary widely: utility regulation in smart cities, virtual reality scenarios, the monitoring of aging bridges, robots making clothes in factories, virtual assistants, and so on. The data these scenarios produce, which needs processing at the edge, is just as diverse. That's why the edge infrastructure depends on the application.
As noted here already, the edge will have no room and no time for certain types of data. Archival data, along with the data lakes (big clusters of data that train ML algorithms) that power machine learning in the hyperscale data center, will, per Levine, be of no use at the edge.
Finally, the edge is not a mini cloud because it is a remote, lights-out, automated operation marked by physical proximity to the user. Unlike the cloud, the edge is defined by its location and how near it is to data. In contrast to the centralized, homogeneous, general-purpose data center hub, each edge focuses on solving a specific problem.
For now at least.
- Mr. B.S. Teh, Senior Vice President of Global Sales and Sales Operations, Seagate Technology
Disclaimer: The views and opinions expressed in this article are solely those of the original author. These views and opinions do not necessarily represent those of Deccan Chronicle and/or other staff and contributors to this site.