Demystifying Edge Computing -- Device Edge vs. Cloud Edge


Cloud computing is going through a fundamental shift in which the traditional model of accessing highly centralized resources is being replaced by a distributed, decentralized architecture. This new paradigm, called edge computing, brings the core building blocks of the cloud – compute, storage and networking – closer to consumers.


Moving compute closer to the origin of the data reduces the latency of the round trip to the cloud. Evolving use cases such as Augmented Reality (AR) and the Internet of Things (IoT) benefit from edge computing, and end users of these applications enjoy the immersive experiences delivered by the edge.

Though IoT is the key driver of edge computing, many other use cases are accelerating the pace of adoption. Artificial intelligence and machine learning workloads rely on the cloud for the heavy lifting: typically, an ML model is trained in the public cloud and deployed at the edge for near real-time predictions. Edge computing thus becomes an essential component of data-driven applications.
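
As a rough illustration of this train-in-the-cloud, predict-at-the-edge pattern, the Python sketch below assumes a hypothetical anomaly-detection model that was trained in the public cloud and shipped to the edge device as a file named model.joblib; the prediction then runs entirely on local hardware.

from joblib import load

# Load the pre-trained model shipped to the edge device (hypothetical file name).
model = load("model.joblib")

def predict_locally(reading):
    # Inference happens on the device itself, so there is no round trip
    # to the cloud and latency is bounded by local compute alone.
    features = [[reading["temperature"], reading["vibration"]]]
    return model.predict(features)[0]

# Classify a fresh reading from a local sensor.
print(predict_locally({"temperature": 78.4, "vibration": 0.12}))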

Edge computing is in its early days. As with many emerging technologies, it can be confusing to customers. Vendors from multiple segments are positioning edge computing in different forms. The current market landscape consists of players from the public cloud, networking companies, pure-play ISVs and industrial automation companies. Each of these segments tackles edge computing differently, which adds to the confusion. In this article, I attempt to demystify and classify edge computing based on use cases and scenarios.

The goal of edge computing is to minimize latency by bringing public cloud capabilities to the edge. This can be achieved in two forms – a custom software stack emulating cloud services on existing hardware, or the public cloud seamlessly extended to multiple point-of-presence (PoP) locations.

Device Edge

In the first model, customers install and run edge computing software in their existing environments. The hardware can be dedicated or shared with other services. In many scenarios, the edge stack runs on low-powered devices with ARM processors. For example, connected trucks can carry an embedded system-on-chip (SoC) computer running the edge software. All the sensors talk to the local edge device, which manages the connectivity with the cloud. Devices running the edge stack handle machine-to-machine (M2M) communication, providing an intra-sensor network, while also ingesting and storing the data locally. When connectivity is available, the edge device synchronizes the current state of the sensors with the cloud.
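
A minimal sketch of this store-and-forward pattern, assuming a local SQLite buffer on the gateway, might look like the following; publish and is_connected are hypothetical callbacks standing in for the device's cloud client and its connectivity check.

import json
import sqlite3
import time

# Local buffer for sensor readings; it survives restarts of the gateway process.
db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, payload TEXT)")

def ingest(reading):
    # Every reading is stored locally first, regardless of connectivity.
    db.execute("INSERT INTO readings VALUES (?, ?)",
               (time.time(), json.dumps(reading)))
    db.commit()

def sync_to_cloud(publish, is_connected):
    # When connectivity returns, drain the local buffer to the cloud.
    if not is_connected():
        return
    for rowid, payload in db.execute(
            "SELECT rowid, payload FROM readings ORDER BY ts").fetchall():
        publish(payload)  # e.g. a message published to the cloud endpoint
        db.execute("DELETE FROM readings WHERE rowid = ?", (rowid,))
    db.commit()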


In the above scenario, a specialized device acts as a local IoT gateway that mimics public cloud capabilities. This architecture is called the device edge, in which customers own the hardware running the edge software stack.

AWS Greengrass and Microsoft Azure IoT Edge are examples of device edge software. Both services bring device registry, device twins, device communication, local storage and sync capabilities to the edge.
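
To illustrate the device twin concept in general terms – this is not the actual Greengrass or IoT Edge API – here is a simplified Python sketch in which the cloud expresses desired state and the device reconciles and reports back; apply_setting is a hypothetical hook into the device.

# A simplified device twin: the cloud holds the "desired" state and the
# device mirrors its actual state back in the "reported" section.
twin = {
    "desired":  {"telemetry_interval_sec": 30, "firmware": "1.4.2"},
    "reported": {"telemetry_interval_sec": 60, "firmware": "1.4.1"},
}

def reconcile(twin, apply_setting):
    # Apply each desired property locally, then update the reported
    # section so the cloud sees that the device has converged.
    for key, value in twin["desired"].items():
        if twin["reported"].get(key) != value:
            apply_setting(key, value)  # hypothetical hook into the device
            twin["reported"][key] = value
    return twin

# Print each change instead of touching real hardware.
reconcile(twin, lambda key, value: print(f"applying {key} = {value}"))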

Cloud Edge

The second model of edge computing can be referred to as the cloud edge, which is an extension of the public cloud. Content Delivery Networks (CDNs) are classic examples of this topology, in which static content is cached and delivered through geographically distributed edge locations. While CDNs deal with storage to deliver content, the cloud edge layer extends the scenario to include compute and network services.
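
As a toy illustration of combining caching with compute at an edge location – not any provider's actual edge runtime – the following Python sketch serves requests from a local cache and falls back to a simulated origin on a miss.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the origin; a real edge location would fetch over the network.
ORIGIN_CONTENT = {"/index.html": b"<h1>Hello from the origin</h1>"}
cache = {}

class EdgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve from the local cache when possible; on a miss, fall back
        # to the origin and keep a copy at the edge location.
        body = cache.get(self.path)
        if body is None:
            body = ORIGIN_CONTENT.get(self.path)
            if body is None:
                self.send_error(404)
                return
            cache[self.path] = body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EdgeHandler).serve_forever()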


Unlike the device edge, the cloud edge will be owned and maintained by the public cloud provider. For all practical purposes, it is an extension of the public cloud available in a highly distributed form. The cloud edge will become a micro-zone, a logical extension to the existing hierarchy of regions and zones. Micro-zones will extend the public cloud to thousands of new locations, enabling developers to keep applications as close as possible to consumers.

In delivering the cloud edge to developers and consumers, public cloud providers will partner with telecom players. Telcos already have a massive footprint of cell phone towers that can double up as mini data centers running compute, storage, and networking stacks. Public cloud providers can host micro-zones in these cell towers, which can dramatically extend their reach.

Emerging players such as Vapor IO are attempting to build the infrastructure for the cloud edge. Project Volutus from Vapor IO aims to build a network of distributed edge data centers by placing thousands of mini data centers at the base of cell towers, directly connected to high-speed wireless networks.

Crown Castle, America’s largest provider of shared wireless infrastructure, is not only a partner of Vapor IO but also an investor. Crown Castle's asset mix of approximately 40,000 tower locations and a large metro fiber footprint will eventually run the cloud edge powered by Project Volutus.

The fundamental difference between the device edge and the cloud edge lies in the deployment and pricing models. The device edge lives closer to the origin of the data, tackling near real-time processing requirements. Since it runs on the customer’s hardware, it increases CAPEX and the total cost of ownership (TCO). The cloud edge is a low-touch deployment model in which the cloud provider is responsible for the infrastructure. It enjoys the same benefits as the public cloud, such as OPEX-based pricing and centralized management.

The next generation of applications will rely on the edge to deliver enhanced user experiences. The two deployment models – device edge and cloud edge – are aligned with different use cases, and many applications may take advantage of both.
