Abstract — Cloud computing has been widely used for several years for various purposes. From daily tasks, such as reading e-mails and watching videos, to factory automation and device control, it has changed where data is processed and how it is accessed. However, the increasing number of connected devices brings problems, such as low Quality of Service (QoS) due to limited infrastructure resources and high latency caused by bandwidth limitations. The current tendency for solving the problems of Cloud Computing is to perform the computations as close as possible to the device. This paradigm is called Edge Computing. There are several proposed architectures for Edge Computing, but there is no standard accepted by the community or the industry. Besides, there is no common agreement on what the Edge Computing architecture physically looks like. In this paper, we describe Edge Computing, explain what its architecture looks like, and discuss its requirements and enablers. We also define the major features that an Edge Server should support.

Keywords – Edge Computing; Extensible Architecture
INTRODUCTION
With the increasing inclination towards the Internet of Things (IoT), the number of devices connected to the Internet grows day by day. In 2012, the connected device count was around one million, which went up to 900 million in 2023 with the increased usage of embedded devices. Later, IoT became even more popular, with tens of billions of devices connected. In 2022, with the inclusion of wearable devices, this number went as high as 18.7 billion. In 2023, this number was 11.2 billion thanks to connected home appliances, and in 2024 it reached 254.4 billion with smart grids. The numbers increased in the following years due to the involvement of small personal and everyday devices, such as toothbrushes, traffic lights, and watches. Finally, even door levers are expected to become smart devices by 2030 [1].
Edge Computing [3] is an emerging technology that allows machines and people to access data ubiquitously. It enables on-demand sharing of the available computing and storage resources among its users, which can be either humans or machines, or even both. Today, it is even possible for a simple device to share its status with, or get information from, millions of users over the Internet.
Edge Computing is a recent paradigm that moves computing applications and services from centralized units to the logical extremes of the network, i.e., the locations closest to the data source, and provides data processing power there. It adds an additional tier between the Cloud and the end devices, as depicted in Figure 1. Increasing the number of Edge nodes within a location reduces the number of devices connected to a single Cloud and alleviates the problems of Cloud Computing. Examples of Edge Computing applications include Smart Cities, Machine-to-Machine communication, Security Systems, Augmented Reality, Wearable Health Care Systems, Connected Cars, and Intelligent Transportation. For example, a plane produces gigabytes of data per second [6], which cannot be handled by a single base infrastructure due to bandwidth limitations. Another example is a Formula One car, which produces approximately 1.2 GB/s of data [7] that requires in-time gathering, analysis, and action to stay competitive in the race [8]. Edge Computing is believed to solve these issues by aggregating and pre-processing the data at the Edge before transmitting it to the Cloud, or even by deciding on the next steps at the Edge; a minimal sketch of this idea is given after Figure 1.
Figure 1. A simplified version of communication using Edge Computing (self-drawn).
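As a minimal, hypothetical illustration of this idea, the following Python sketch shows how an Edge node could consume high-rate sensor readings, aggregate them over a short window, and forward only compact summaries to the Cloud, reducing the bandwidth needed on the uplink. The sensor source, window size, and upload function are assumptions made purely for the example and are not part of the cited systems.

# Hypothetical sketch: windowed aggregation on an Edge node before Cloud upload.
# The sensor stream, window size, and upload endpoint are illustrative assumptions.
import random
import statistics
import time

WINDOW_SIZE = 100  # number of raw samples summarised into one Cloud message

def read_sensor():
    """Stand-in for a high-rate local sensor (e.g., vehicle telemetry)."""
    return random.gauss(mu=50.0, sigma=5.0)

def upload_to_cloud(summary):
    """Stand-in for the uplink to the back-end infrastructure."""
    print("uploading summary:", summary)

def edge_loop(windows=3):
    for _ in range(windows):
        window = [read_sensor() for _ in range(WINDOW_SIZE)]
        # Pre-process locally: only a compact summary leaves the Edge tier.
        summary = {
            "timestamp": time.time(),
            "count": len(window),
            "mean": statistics.fmean(window),
            "max": max(window),
            "min": min(window),
        }
        upload_to_cloud(summary)

if __name__ == "__main__":
    edge_loop()

In this sketch, each Cloud message replaces one hundred raw samples, which is the kind of data reduction the Edge tier is expected to perform before anything reaches the back-end infrastructure.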
This paper presents ongoing work on Edge Computing and gives a clear description of the paradigm. It also explains the requirements and enablers needed to address the issues introduced by the heavy usage of Cloud and IoT.
ARCHITECTURE DESIGN
Edge Computing adds an additional tier between the Cloud and the IoT devices for computing and communication. The data produced by the devices is not sent directly to the Cloud or back-end infrastructure; instead, initial computing is performed on this tier. Considering the number of connected devices and the data they produce, this tier is used to aggregate, analyse, and process the data before sending it to the upper layer, the infrastructure. Figure 2 depicts the proposed core functionalities of an Edge Server.
Figure 2. View of the proposed extensible Edge Server architecture with its major functionalities, where the green blocks extend the functionalities of the blue core node (self-drawn).
The proposed Edge Server architecture is designed to be modular and should provide functionalities for real-time and non-real-time control, as well as real-time communication.
The core node runs on an operating system, tracks the available resources, and decides where to execute a task. In the proposed architecture, the addition of new hardware or software modules enables new functionalities and improves the usability of the server. For example, if machine learning algorithms are to be executed on the server, connecting a dedicated artificial intelligence (AI) module with a dedicated Graphics Processing Unit (GPU) should require no or only minimal configuration to become active.
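To make the plug-in idea more concrete, the sketch below models the core node as a small capability registry: a module announces what it adds (for example, AI inference backed by a GPU), and the core dispatches tasks to whichever module advertises the required capability, so attaching a new module needs no further configuration. The class and capability names are assumptions chosen for illustration, not the paper's implementation.

# Hypothetical sketch of an extensible core node with pluggable capability modules.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Module:
    name: str
    capabilities: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

@dataclass
class CoreNode:
    modules: Dict[str, Module] = field(default_factory=dict)

    def attach(self, module: Module) -> None:
        """Register a newly connected hardware/software module and its capabilities."""
        self.modules[module.name] = module

    def execute(self, capability: str, task: dict) -> dict:
        """Dispatch a task to the first module that advertises the capability."""
        for module in self.modules.values():
            handler = module.capabilities.get(capability)
            if handler is not None:
                return handler(task)
        raise RuntimeError(f"no module provides capability '{capability}'")

# Example: attaching a (hypothetical) GPU-backed AI module enables inference tasks
# without any further configuration of the core node.
core = CoreNode()
ai_module = Module(
    name="ai-gpu",
    capabilities={"ai-inference": lambda task: {"label": "anomaly", "input": task}},
)
core.attach(ai_module)
print(core.execute("ai-inference", {"sensor": "vibration", "value": 0.87}))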
As emphasized among the requirements in the next section, scalability is quite important for accomplishing such tasks. In the scope of scalability, one server is expected to be aware of its neighbouring servers along with their functionalities. Continuing the previous example, when an AI module is connected to one server, the other servers are informed of this new capability and can utilize that server more often for AI-related tasks. The decision, of course, depends on the conditions required by the task, such as its deadline.
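One possible way to realise this neighbour awareness is sketched below under assumed data structures: each server advertises its capabilities and an estimated completion latency to its neighbours, and the scheduler offloads a task only to a neighbour that offers the required capability and still meets the task's deadline. The field names and latency model are hypothetical simplifications.

# Hypothetical sketch: neighbour-aware task placement with a deadline check.
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Neighbour:
    name: str
    capabilities: Set[str]   # advertised by the neighbouring Edge Server
    est_latency_s: float     # estimated transfer plus processing time

@dataclass
class Task:
    capability: str          # e.g. "ai-inference"
    deadline_s: float        # time budget for the task

def choose_server(task: Task, local_caps: Set[str],
                  neighbours: List[Neighbour]) -> Optional[str]:
    """Prefer local execution; otherwise pick the fastest capable neighbour
    whose estimated latency still satisfies the task deadline."""
    if task.capability in local_caps:
        return "local"
    candidates = [n for n in neighbours
                  if task.capability in n.capabilities
                  and n.est_latency_s <= task.deadline_s]
    if not candidates:
        return None  # e.g., fall back to the Cloud or reject the task
    return min(candidates, key=lambda n: n.est_latency_s).name

# Example: the local server has no AI module, so the task is offloaded
# to the neighbour that advertises one and can meet the deadline.
neighbours = [Neighbour("edge-b", {"ai-inference"}, 0.04),
              Neighbour("edge-c", {"storage"}, 0.01)]
print(choose_server(Task("ai-inference", 0.1), {"storage"}, neighbours))  # -> edge-b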
REQUIREMENTS
Edge Computing is a paradigm that uses Cloud Computing technologies and gives more responsibilities to the Edge tier. These responsibilities are, namely, computation offloading, data caching/storage, data processing, service distribution, IoT management, security, and privacy protection [4].
Without limiting the Cloud Computing features, Edge Computing needs to fulfil the following requirements, some of which are also defined for Cloud Computing [14][15]:
Interoperability: Servers in Edge Computing can connect with various devices and other servers. In Cloud Computing, IoT allows a countless number of devices to communicate with humans or with each other.
Scalability: Similar to Cloud services, Edge Computing will also need to adapt to the number of its users and sensors. A first deployment may serve a small number of users and devices, while later a few Edge Servers should handle a much higher number.
Extensibility: Computing technology is developing rapidly. After 2-3 years of deployment, clock speeds, memory sizes, and program sizes increase, too. Easy deployment of new services and new devices with little effort is an essential goal of Edge Computing.
Abstraction: For seamless control and communication, the abstraction of each Edge node and of groups of nodes is required. Moreover, abstraction helps keep the topology of an Edge network flexible and reconfigurable.
Time sensitivity: Below the operational technology (OT) level, operations may be near-real-time or real-time. Edge Computing is expected to provide the timing guarantees that Cloud Computing cannot.
Security & Privacy: Using Cloud Computing services has a trade-off for enterprises like manufacturing and high-tech companies because there is a concern about the leakage of high knowledge and business activities outside their own organization.
Reliability: Edge Servers provide real-time or non-real-time control for the devices. Real-time tasks may be vital which involve human safety.
Intelligence: Multi-sensor generates tremendous amount of data and uploads into Cloud, directly. It causes network con-gestion and heavy load on the Cloud server.
[...]