Edge computing is one of the biggest talking points in technology right now. The concept is already a reality and, with the rise of the Internet of Things (IoT) and the 5G rollout, is set to become commonplace around the world.
But the big question that many are asking is – Will the edge eventually take over the cloud?
Edging Out the Cloud – the Case For and Against
To be honest, experts are divided. Some think that data storage at the edge will, at some point, become more commonplace than the cloud, but others aren’t so sure. The reason is that while edge computing processes data at the source, data will still need to flow back to the larger network for the technology to evolve.
Let’s take AI and machine learning as examples. For devices such as home security cameras and other smart devices, it’s essential that information is processed in real time with virtually no latency.
The device itself, or a nearby edge node, needs to analyse the images produced and take the relevant action without delay. Sending the data to the cloud or a traditional network first is not only slower (albeit by milliseconds) but also consumes valuable bandwidth and introduces a potential security risk.
A home device can carry out the necessary analysis in real time. The data is kept close to home, and larger images that are too big to be processed on the device itself can easily be handled by edge storage.
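As a rough illustration of the idea (not any particular camera’s firmware), the sketch below keeps every frame on the device, runs a trivial stand-in for the real analysis, and uploads only a small piece of metadata when something is detected. The frame source, the detection rule and the alert function are all assumptions made up for the example.

```python
import numpy as np

def capture_frame(height=240, width=320):
    # Hypothetical stand-in for a camera: a real device would read frames from
    # the sensor. Here we simulate 8-bit greyscale frames with random noise,
    # so this toy loop will trigger an "alert" on almost every frame.
    return np.random.randint(0, 256, size=(height, width), dtype=np.uint8)

def motion_detected(previous, current, threshold=30.0):
    # A trivially simple "model": mean absolute pixel difference between
    # consecutive frames. A real device would run a trained network here.
    diff = np.mean(np.abs(current.astype(int) - previous.astype(int)))
    return float(diff) > threshold

def send_alert_to_cloud(summary):
    # Only a small summary ever leaves the device; the raw frames stay local.
    print(f"uploading alert metadata: {summary}")

previous = capture_frame()
for _ in range(10):
    current = capture_frame()
    if motion_detected(previous, current):
        send_alert_to_cloud({"event": "motion"})
    previous = current
```

The point of the sketch is the data flow, not the detection logic: the bandwidth-hungry, privacy-sensitive frames never leave the home, while the cloud receives only the lightweight result.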
This last statement appears to support the case for edge computing taking the place of the cloud. But – and it’s a big but…
As the technology advances, with AI models and the like needing to learn continually, this can’t happen if huge amounts of data are never returned to a central source. It’s for this very reason that traditional storage is unlikely ever to become obsolete, no matter how advanced the edge and IoT devices become.
Training and Inference: Two Sides of the Same Coin
Communication and cooperation between the edge and the cloud are crucial for technology to evolve. Within the world of AI, this boils down to two tasks: training and inference.
Training: The AI model has to be trained to identify what it is processing. This is done by feeding the model an enormous number of correctly tagged images so that it learns what they are. Once this is done – and let’s not forget that this learning is continually advancing – the other element of the model can be brought into play.

Inference: This is where the technology acts on the information it has received. Examples include voice recognition, facial recognition and activation, depending on the data.
While inference happens within the device itself (or in home edge storage), training is far more complex and data-heavy, meaning the only place it can happen efficiently is in the cloud.
In other words, the process requires seamless cooperation between the two.
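To make the split concrete, here is a minimal sketch of the idea using scikit-learn: the data-hungry training step runs centrally (“in the cloud”), the trained model is serialised, and the device only loads it to run inference. The dataset, model choice and file name are placeholders chosen for the example, not a description of any specific platform.

```python
# "Cloud" side: heavy, data-hungry training happens centrally.
import joblib
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)            # stand-in for a large labelled dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                    # the expensive step: training
joblib.dump(model, "model.joblib")             # ship the trained model out to devices

# "Edge" side: the device only loads the trained model and runs fast inference.
edge_model = joblib.load("model.joblib")
sample = X_test[:1]                            # e.g. one freshly captured input
print("prediction:", edge_model.predict(sample)[0])
```

In practice the two halves would run on different machines, and new data gathered at the edge would periodically flow back to the centre so the model can be retrained – which is exactly the cooperative loop described above.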
Whilst the use of edge computing is set to increase exponentially, it’s this need for cooperation that makes it unlikely ever to usurp the cloud.
The three distinct elements of the AI and advanced technology we’re all becoming used to (the endpoint, or device, the edge and the cloud) form a symbiotic relationship, meaning each can only exist in conjunction with the others.
At least, this is what’s likely to occur in our lifetimes. After that – who knows? But, for today at least, we predict that the edge will enhance, rather than devour, the cloud.