Edge Computing: The Future of Cloud Computing
Over the last decade, Cloud Computing has risen as the new paradigm of computing by combining two main aspects: the centralization of computing and storage, and the management of the network in the Cloud. Leveraging the Cloud empowers businesses to deliver elastic computing power and storage for their services, in order to support a large number of end-users.
Nowadays, businesses and individuals run an increasing number of applications in the cloud. Consequently, end-user devices require ever more computing power to support the computationally intensive activities tied to their behavior and needs. This brings us to the problem of network latency and its effect on major business processes.
In current cloud architectures, all user requests travel from end-user devices to a main datacenter over various network connections and protocols. Because cloud hosting functions as a network of redundant components, a failure in one component can be masked by the other active components, keeping services available. Nonetheless, this centralized model faces certain issues.
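The redundancy idea above can be sketched in a few lines: try each replica in turn and fall back to the next one when a component fails. This is a minimal illustration, not any provider's actual mechanism; `call_with_failover` and the replica callables are hypothetical names.

```python
def call_with_failover(replicas, request):
    """Return the first successful response; replicas are callables."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err  # this component is down; try the next one
    raise RuntimeError("all replicas failed") from last_error


def broken(request):
    # Stands in for a failed datacenter component.
    raise ConnectionError("component down")


def healthy(request):
    # Stands in for an active component that can serve the request.
    return f"response to {request!r}"


print(call_with_failover([broken, healthy], "GET /index"))
# → response to 'GET /index'
```

The caller never sees the first component's failure; the request is simply served by the next active replica.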
Cloud service latency is the delay between a client request and the cloud service provider’s response. Latency greatly affects how usable and enjoyable devices and applications are. These problems can be magnified for cloud service communication, which is especially prone to latency due to factors such as the number of router hops and the number of ground-to-satellite communication hops on the way to the target server, leading to greater variability in task and service delivery.
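A back-of-the-envelope sketch shows why hop count and distance matter. The numbers are assumptions (signals travel through fiber at roughly 200,000 km/s, and each router hop is charged about half a millisecond), and `estimate_rtt_ms` is a hypothetical helper, not a real measurement tool.

```python
def estimate_rtt_ms(distance_km, router_hops, per_hop_ms=0.5):
    """Rough round-trip time: propagation delay plus per-hop processing.

    Assumes light in fiber travels at ~200,000 km/s (about 2/3 of c)
    and each router hop adds roughly half a millisecond.
    """
    propagation_ms = distance_km / 200_000 * 1_000  # one-way, in ms
    return 2 * (propagation_ms + router_hops * per_hop_ms)


# A nearby edge node versus a distant datacenter:
print(round(estimate_rtt_ms(50, 4), 3))      # → 4.5 (ms)
print(round(estimate_rtt_ms(8_000, 20), 3))  # → 100.0 (ms)
```

Even with these crude assumptions, moving the server from thousands of kilometers away to the network edge cuts the round trip by more than an order of magnitude.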
With the current rise of IoT and the computationally heavy services and applications accompanying it, such as smart homes and Artificial Intelligence, wireless environments must become more diverse and sophisticated to handle these rapid changes and support growing businesses and user groups.
Edge Computing
The roots of edge computing date back to the late 1990s, when Akamai introduced Content Delivery Networks (CDNs) to accelerate web performance. A CDN uses nodes at the edge, close to users, to pre-fetch and cache web content. The idea later evolved into “cloudlets”: small-scale datacenters placed at the network edge, a concept championed by Mahadev Satyanarayanan at Carnegie Mellon University. End-to-end latency is shaped by physical proximity; a cloudlet’s closeness to a mobile device makes it easier to achieve low end-to-end latency and high bandwidth. A nearby cloudlet can also provide a fallback service that temporarily masks a cloud failure, according to Satyanarayanan.
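The caching idea behind CDNs and cloudlets can be sketched minimally: content is served from a nearby node when cached, and only cache misses travel to the distant origin. The names here (`EdgeCache`, `fetch_from_origin`) are hypothetical, and the eviction policy (least recently used) is just one common choice.

```python
from collections import OrderedDict


class EdgeCache:
    """A tiny LRU cache standing in for an edge node."""

    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin  # slow, distant call
        self.store = OrderedDict()

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)  # mark as recently used
            return self.store[url]       # fast: served from the edge
        content = self.fetch_from_origin(url)  # slow: cache miss
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return content


# Count how often the distant origin is actually contacted.
origin_calls = []

def fetch_from_origin(url):
    origin_calls.append(url)
    return f"<html>content of {url}</html>"

edge = EdgeCache(capacity=2, fetch_from_origin=fetch_from_origin)
edge.get("/home")   # miss: goes to origin
edge.get("/home")   # hit: served locally
edge.get("/news")   # miss
edge.get("/home")   # hit
print(len(origin_calls))  # → 2
```

Of the four requests, only two reach the origin; the rest are answered at the edge, which is exactly the latency win the paragraph above describes.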
For applications that require end-to-end delays to be tightly controlled, to less than a few tens of milliseconds, network delays can be detrimental to their operation. Many experts in the IT industry have joined efforts and are currently experimenting with edge computing to bring the idea to life.
Last year, Google was working on a wearable cognitive assistance device based on Google Glass, which relies on edge computing to pull real-time data from the cloud. Many other initiatives have used cloudlet-based architectures to process IoT data (e.g., the Cisco Kinetic platform) and run IoT applications (e.g., the Cisco IOx platform), as noted by Hind Bangui in an academic paper on advances in Edge-Cloud technologies.
Edge computing is a major step into the 5G era that will significantly reduce network delays and accelerate processing in the cloud. At a time when people look for the best value from a product or service, such technologies empower businesses to experiment with innovative new applications while providing a more robust user experience.
By taking on serious R&D initiatives to develop Edge Computing and other 5G network technologies, the IT industry offers a promising paradigm to provide cloud-computing capabilities in close proximity to mobile users, bringing us closer to the fully connected world of the 5G Era.