Edge intelligence

    With the proliferation of the Internet of Things (IoT), ever more data are created by widespread and geographically distributed mobile and IoT devices. Offloading such huge volumes of data from the edge to the cloud is intractable because it can lead to excessive network congestion. A more practical approach is therefore to handle user demands directly at the network edge, which has led to the birth of the edge computing paradigm. The principle of edge computing is to push computation and communication resources from the cloud to the edge of the network to provide services and perform computations, avoiding unnecessary communication latency and enabling faster responses for end users.
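
    The latency argument behind this principle can be made concrete with a toy comparison. The sketch below is purely illustrative: the site names, round-trip times, and compute times are assumed placeholder values, not measurements from any real deployment.

```python
# Minimal sketch of the latency trade-off that motivates edge computing.
# All numbers and names below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    rtt_ms: float       # round-trip network latency to the site
    compute_ms: float   # time the site needs to process one request


def end_to_end_latency(site: Site) -> float:
    """Network round trip plus processing time for a single request."""
    return site.rtt_ms + site.compute_ms


edge = Site("edge server", rtt_ms=5.0, compute_ms=20.0)
cloud = Site("cloud datacenter", rtt_ms=80.0, compute_ms=8.0)

for site in (edge, cloud):
    print(f"{site.name}: {end_to_end_latency(site):.1f} ms")

# Even though the cloud computes faster, the edge server wins once the
# network round trip dominates, which is the core argument for pushing
# computation toward the data source.
```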

    At the same time, Artificial Intelligence (AI) is developing rapidly. Big data processing calls for more powerful methods, i.e., AI technologies, to extract the insights that lead to better decisions and strategic business moves. In the last decade, deep neural networks (DNNs) have become the most popular machine learning architectures. Deep learning, represented by DNNs and their offshoots, i.e., convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), has gradually become the most popular family of AI methods in recent years. Driven by breakthroughs in deep learning and advances in hardware architectures, AI continues to enjoy sustained success and development.

    Since AI is functionally necessary for quickly analyzing huge volumes of data and extracting insights, there is a strong demand to integrate edge computing and AI, which gives rise to edge intelligence (EI). EI is not a simple combination of edge computing and AI; it is a vast and highly sophisticated subject that covers many concepts and technologies, interwoven in complex ways.

    Instead of relying entirely on the cloud, EI makes the most of widespread edge resources to gain AI insights. Specifically, edge computing aims to coordinate a multitude of collaborative edge devices and servers to process the generated data in proximity, while AI strives to simulate intelligent human behavior in devices and machines by learning from data. Besides providing the general benefits of edge computing (e.g., low latency and reduced bandwidth consumption), pushing AI to the edge lets the two fields benefit each other in the following respects. On the one hand, AI provides edge computing with technologies and methods, and edge computing can unleash its potential and scalability with AI; on the other hand, edge computing provides AI with scenarios and platforms, and AI can expand its applicability with edge computing.
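
    One common way this interplay shows up in practice is hierarchical inference: a lightweight model answers requests at the edge and escalates only the hard cases to the cloud. The sketch below illustrates the pattern under stated assumptions; the stand-in models, the confidence threshold, and the label names are hypothetical and do not come from the article.

```python
# Illustrative sketch of a common edge-intelligence pattern (hierarchical
# inference): a small model runs at the edge, and a request is escalated
# to the cloud only when the edge model is not confident enough.
# The models, threshold, and labels are placeholders, not a real system.

import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for escalating to the cloud


def edge_model(sample: list[float]) -> tuple[str, float]:
    """Lightweight stand-in classifier returning (label, confidence)."""
    confidence = random.uniform(0.5, 1.0)
    return ("cat" if sum(sample) > 0 else "dog", confidence)


def cloud_model(sample: list[float]) -> str:
    """Larger stand-in model, assumed slower but more accurate."""
    return "cat" if sum(sample) > 0 else "dog"


def classify(sample: list[float]) -> str:
    label, confidence = edge_model(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label              # answered locally: low latency, no uplink traffic
    return cloud_model(sample)    # escalate the uncertain case to the cloud


print(classify([0.3, -0.1, 0.5]))
```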