CONTENT:
WHAT IS EDGE COMPUTING
EDGE COMPUTING IN THE SHOP FLOOR: ADVANTAGES
EDGE COMPUTING IN ACTION
DEPLOYING EDGE COMPUTING
WHAT IS EDGE COMPUTING
Edge computing is one of the hot topics of the moment, one that everybody is talking about, often without fully understanding it, as happens whenever a technology approaches buzzword status.
Let’s try to explain this and define what edge computing is. First, edge computing is not a technology but a paradigm: a way of doing calculations that improves the outcome in terms of cost, latency, and resilience to outages. You can’t buy an “edge computer”, but you can monitor and automate your shop floor by applying the principles of edge computing. Second, edge computing is not IIoT, but a useful approach to it.
The core principles are easy to understand, but a little more difficult to implement correctly:
– Bring the computation near the asset that needs it
– Bring the storage near the software that needs it most
EDGE COMPUTING IN THE SHOP FLOOR: ADVANTAGES
To understand these principles and how best to apply them, let’s focus for a moment on the needs of a shop floor:
– Assets must be monitored by extracting data from sensors, in either a greenfield or brownfield setting
– This data must be stored and analyzed
– Results of the analysis must be informative and must generate a meaningful action
– The receiver of the action can be an asset itself, thus closing a control loop
Applying a centralized model of computation to this scenario could be done by reading all the data with low-cost embedded hardware, often based on microcontrollers, and then sending each individual reading to some cloud provider, where the data is stored and analyzed. On the same cloud, one or more automatic tools can create reports, actionable insights, or even control actions that need to be sent back to some asset.
Looking at this model from the point of view of costs, it is immediately evident that a lot of traffic is generated. Traffic has a cost, which is often hidden because it is difficult to estimate how much of it will be needed. When I say difficult, I mean that even NASA calculated it wrong and put an entire cloud operation at risk: https://www.theregister.com/2020/03/19/nasa_cloud_data_migration_mess/
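To get a feel for how quickly raw-reading traffic adds up, here is a back-of-envelope estimate; the sensor count, sampling rate, and payload size below are illustrative assumptions, not measurements from any real plant:

```python
# Back-of-envelope upload traffic for a fully centralized setup,
# where every individual reading is sent to the cloud.
# All three parameters are assumed, illustrative values.

SENSORS = 200            # monitored points on the shop floor
READINGS_PER_SEC = 10    # sampling rate per sensor
PAYLOAD_BYTES = 120      # one JSON-encoded reading with metadata

bytes_per_month = SENSORS * READINGS_PER_SEC * PAYLOAD_BYTES * 3600 * 24 * 30
gib_per_month = bytes_per_month / 2**30

print(f"{gib_per_month:.0f} GiB/month uploaded")
```

Even this modest setup pushes hundreds of gibibytes per month to the cloud, and egress or ingestion fees scale with exactly this figure.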
The second element of the centralized model, which is even more obscure, is latency and data availability. What happens if, for some reason, the network between the shop floor and the cloud suffers an outage or just degraded performance? Actions from the analysis will take more time to reach the asset, with a decrease in productivity that, at the end of the day, is just an increase in cost.
EDGE COMPUTING IN ACTION
So, how do you turn a centralized model into an edge computing model? As usual, there is no one-size-fits-all solution, but some guidelines can be roughly specified:
– Segment the data coming from assets into clusters based on the location of the data consumers
– Segment the analysis pipeline into clusters based on the maximum latency allowed for the result to be available
These guidelines help identify what kind of calculation the data will go through and where that calculation is performed. This, in turn, helps to decide the best location for computation. For example, imagine a vibration sensor placed on an asset with the goal of detecting early failure from vibration patterns. The idea is to analyze the raw vibration data and generate an alert if the patterns are altered with respect to those of a known working asset. In the centralized model, gigabytes of vibration data would be transferred to the cloud to be fed to a neural network or some other analysis tool, and then an alert would be sent back from the cloud to the asset to, possibly, turn on a red light and inform the operator about the anomaly.
In the edge computing model, since vibration data is local to the asset and the neural network input does not depend on any non-local source, it is cheaper, faster, and more prudent to perform the calculations on a powerful enough device placed on the shop floor. Traffic to the cloud is reduced to almost zero, and latency depends only on local network performance. Moreover, the edge computing device can also store a long history of vibration data, which also reduces data storage costs [1].
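A minimal sketch of such on-edge vibration analysis might look like the following; the window size, threshold, and baseline spectrum are assumptions, and a real deployment would learn them from a known-good asset rather than hard-code them:

```python
# Sketch: detect altered vibration patterns locally, on the edge device,
# by comparing the current magnitude spectrum against a healthy baseline.
# WINDOW and THRESHOLD are assumed, illustrative values.
import numpy as np

WINDOW = 1024          # samples per analysis window
THRESHOLD = 0.25       # max allowed spectral deviation (assumed)

def spectrum(samples):
    """Normalized magnitude spectrum of one vibration window."""
    mag = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return mag / (np.linalg.norm(mag) + 1e-12)

def is_anomalous(samples, baseline):
    """Flag the window if its spectrum drifts too far from the baseline."""
    deviation = np.linalg.norm(spectrum(samples) - baseline)
    return deviation > THRESHOLD

# Baseline recorded on a healthy asset (here: a synthetic 50 Hz tone
# sampled at 1 kHz stands in for real accelerometer data).
t = np.arange(WINDOW) / 1000.0
baseline = spectrum(np.sin(2 * np.pi * 50 * t))

print(is_anomalous(np.sin(2 * np.pi * 50 * t), baseline))   # healthy pattern
print(is_anomalous(np.sin(2 * np.pi * 80 * t), baseline))   # altered pattern
```

Only the boolean alert (and perhaps the deviation value) ever needs to cross the network; the gigabytes of raw samples stay on the device.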
Not all scenarios are so clear cut. For example, imagine monitoring biogas production at a waste management site with the objective of optimizing production based on real-time energy prices and the current status of all the wells. In this setting, the data concerning temperature, humidity, biogas, and oxygen supply for each well is local to the waste management site. But actions to improve production must be taken based on external data (e.g. weather forecasts, energy costs, the efficiency of the gas pipeline, etc.). In this case, the analysis pipeline is not completely local and could be conveniently placed in the cloud. However, applying edge computing principles can still reduce costs and decrease latency, thereby improving reliability. Each well can be monitored locally by an edge device that stores all the readings and immediately acts on the well’s valves if the oxygen level rises to the warning level. The same device can also aggregate local data and transfer only a reduced set of information to the cloud (for example, the average temperature and oxygen level in the last hour), which still reduces traffic.
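The per-well edge logic just described can be sketched as follows; the warning level and the `close_valve()` action are hypothetical placeholders, not part of any real controller API:

```python
# Sketch of a per-well edge node: full-rate history stays local,
# the oxygen safety action is taken immediately on the device,
# and only an hourly aggregate is forwarded to the cloud.
# O2_WARNING and close_valve() are illustrative assumptions.

O2_WARNING = 5.0   # % oxygen warning level (assumed value)

class WellEdgeNode:
    def __init__(self):
        self.history = []          # full-rate local storage

    def on_reading(self, temperature, oxygen):
        """Handle one sensor reading; act locally, without the cloud."""
        self.history.append((temperature, oxygen))
        if oxygen >= O2_WARNING:
            self.close_valve()     # immediate local control action

    def hourly_aggregate(self):
        """Reduced payload sent to the cloud instead of raw readings."""
        temps = [t for t, _ in self.history]
        o2 = [o for _, o in self.history]
        return {
            "avg_temperature": sum(temps) / len(temps),
            "avg_oxygen": sum(o2) / len(o2),
            "samples": len(self.history),
        }

    def close_valve(self):
        print("valve closed: oxygen at warning level")
```

Because the safety check runs in `on_reading()`, a cloud outage can delay optimization but never the protective action on the valve.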
One hidden advantage of edge computing is that its performance can be progressively improved as more data is accumulated and the models get better at predicting outcomes and making suggestions. In the waste management example, one can start by sending all the data from the edge to the cloud for a first training of the prediction model (or more than one model in parallel); then iteratively reduce the traffic by aggregating data on the edge; and finally move the model to the edge and use the cloud to periodically retrain it, or to test different models. This is a useful feature, especially for small and medium-sized enterprises that don’t have a big budget upfront but are still free to iteratively improve their initial solution, knowing that the right architecture is already in place.
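The three migration stages above can be expressed as a single configuration switch, so the surrounding pipeline stays the same while the model moves from cloud to edge; stage names and the dummy model below are illustrative assumptions:

```python
# Sketch of the staged edge-to-cloud migration described above.
# The stage names and the placeholder model are assumptions.

def handle_batch(readings, stage, model=None):
    """Process one batch of readings according to the rollout stage."""
    if stage == "raw_to_cloud":
        # Stage 1: send everything up, so the cloud can train a model.
        return {"upload": readings, "prediction": None}
    avg = sum(readings) / len(readings)
    if stage == "aggregate_to_cloud":
        # Stage 2: send only aggregates, cutting traffic.
        return {"upload": [avg], "prediction": None}
    # Stage 3: the cloud-trained model now runs on the edge;
    # nothing needs to be uploaded for day-to-day operation.
    return {"upload": [], "prediction": model(avg)}

readings = [1.0, 2.0, 3.0]
print(handle_batch(readings, "raw_to_cloud"))
print(handle_batch(readings, "aggregate_to_cloud"))
print(handle_batch(readings, "model_on_edge", model=lambda x: x > 1.5))
```

Keeping the stage a configuration choice rather than an architectural one is what lets a small enterprise start cheap and improve iteratively.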
DEPLOYING EDGE COMPUTING
Edge computing has many advantages, but it’s not so easy to deploy for a couple of reasons:
– A jungle of devices with very different features and price ranges
– The difficulty of managing security on the edge
– The limited availability of on-premises solutions that provide cloud-like services on the edge
There are many edge devices out there to choose from. One possible guideline is the following:
– Sensor and actuator nodes with aggregation and temporary storage capabilities: these devices don’t usually need to be computationally powerful, but they must be capable of operating in real time and be compatible with the electrical requirements of brownfield and greenfield plants. The 4zerobox and the 4zerobox mobile are good examples of this category because they allow data acquisition and some initial aggregation (e.g. averages, Fourier transforms, etc.), and they are resilient enough to continue monitoring even in the absence of network connectivity
– Computation nodes: these are usually mid- to high-end industrial PCs, often equipped with AI co-processors (e.g. GPUs) for speeding up neural network execution, training, or retraining.
– Edge servers: these are full-fledged servers with enough resources to host an on-premises solution for storage, analysis, and presentation.
Particular care must also be taken in choosing edge hardware that embeds security features, such as secure elements or trusted execution environments. Edge hardware is often not designed specifically for security, but in the case of industrial edge computing, this is mandatory: edge devices will be able to send commands to the machines, as envisioned in many automation scenarios and in the increasing convergence between OT and IT.
Choosing the hardware can be difficult; choosing the software is even more challenging. The choice lies between getting hardware from many different vendors, configuring it, and facing the task of integrating many different software solutions for each need; or avoiding all that configuration and choosing an IIoT platform that can seamlessly run on the edge or in the cloud (or even in a hybrid combination of both). Zerynth, for example, allows this flexibility, since each service, from device management to IIoT dashboarding, can be hosted with no user effort either in the cloud or on premises. We can even suggest the best hardware to purchase, since we know the requirements of our platform perfectly and can fine-tune them for your needs.
Author: Giacomo Baldi
Last updated: February 2022
BIBLIOGRAPHY
[1] Hamilton, Eric (27 December 2018). “What is Edge Computing: The Network Edge Explained”. cloudwards.net. Retrieved 14 May 2019.
[2] “What We Do and How We Got Here”. Gartner. Retrieved 21 December 2021.
[3] “Predicts 2022: The Distributed Enterprise Drives Computing to the Edge”. Gartner. Published 20 October 2021. ID G00757917.
[4] Brand, Aron (20 September 2019). “3 Advantages of Edge Computing”. Medium.com.
[5] Yu, W.; et al. (2018). “A Survey on the Edge Computing for the Internet of Things”. IEEE Access, vol. 6, pp. 6900–6919.
Zerynth
Zerynth helps companies easily get their industrial processes digitized and bring innovative connected products to the world. The Zerynth IoT Platform is a full set of hardware-software tools designed by IoT experts to enable digital transformation in a fast, flexible, and secure way.
Founded in 2015, Zerynth has grown steadily. Today Zerynth has 35+ team members with deep IoT expertise and industry knowledge, and over 100 customers across many industries. Headquartered in Italy, Zerynth provides support globally thanks to an extensive network of partners in Europe and pan-global locations.