- Published October 2017
Imagine a future manufacturing facility built on the concepts of the (industrial) Internet of Things (IoT): thousands of actuators, thousands of sensors, dozens or even hundreds of cameras, and one center with enormous computational power for processing all the data and controlling all processes in the facility.
Although highly appealing from an IT-management perspective, this architecture demands enormous data bandwidth and raises challenging latency and real-time requirements. The image data from all the great new high-speed, high-resolution machine vision cameras would contribute the largest share of the network traffic. Furthermore, in security or traffic applications, sending image data to the cloud for processing can expose sensitive information such as faces or number plates to unwanted access and manipulation, while encrypting and decrypting entire images is computationally expensive.
These requirements are driving a countermovement to approaches that rely entirely on cloud computing: Edge Computing. The term refers to an architecture where, in most cases, embedded systems analyze data close to their source. Embedded vision systems can drastically reduce network traffic, e.g. when they perform image analysis tasks such as good part/bad part decision-making, number plate recognition, face recognition or high-level feature extraction right after image acquisition by the camera. Hence, only core data is transmitted via the network, and sensitive data can be encrypted. This requires less bandwidth, reduces latency and jitter in the control of actuators, and eases the demands on the computational power of peripheral computing and control units. As image processing, unlike many other data analysis tasks in industrial processes and control, benefits greatly from parallel computing, embedded vision systems can be specifically designed to solve imaging problems much faster, on lower-cost hardware, and with significantly lower power consumption than any generic cloud computing center.
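The bandwidth saving described above can be illustrated with a minimal sketch. All names and numbers here are hypothetical: an edge node acquires a frame, runs a toy good-part/bad-part check locally, and transmits only the compact result rather than the raw image.

```python
import json

# Hypothetical illustration: an edge node inspects a frame locally and
# transmits only the decision, instead of streaming the raw image.

FRAME_W, FRAME_H = 1920, 1080  # a typical HD machine vision frame

def acquire_frame():
    # Stand-in for a camera driver: a flat 8-bit grayscale buffer.
    return bytes(FRAME_W * FRAME_H)

def inspect(frame, threshold=10):
    # Toy good-part/bad-part rule: reject frames whose mean brightness
    # exceeds a threshold (a real system would run a full vision pipeline).
    mean_brightness = sum(frame) / len(frame)
    return {"part_ok": mean_brightness <= threshold,
            "mean_brightness": mean_brightness}

frame = acquire_frame()
result = inspect(frame)
payload = json.dumps(result).encode()  # what actually crosses the network

print(f"raw frame:   {len(frame):>9,} bytes")
print(f"edge result: {len(payload):>9,} bytes")
```

Even in this toy case the transmitted payload is a few dozen bytes against roughly two megabytes of raw pixels per frame, which is the core of the bandwidth argument.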
However, in a large manufacturing plant, Edge Computing can hardly render a central server farm redundant. This is where the newer term Fog Computing comes in. It describes a distributed computing architecture in which Edge Computing is applied to every relevant client in a network, and each client delivers high-level data to a central cloud computing center for further processing, statistical analysis and storage. This hybrid concept combines the best of Edge and Cloud Computing and is expected to become the dominant setup in complex Industry 4.0/Industrial IoT scenarios.
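The fog pattern above can also be sketched briefly. In this hypothetical example (station names and figures are invented for illustration), each edge client reports only a compact summary, and the central site performs fleet-wide statistics, such as flagging stations with unusual defect rates.

```python
from statistics import mean

# Hypothetical fog-style aggregation: each edge client sends only a small
# high-level report; the central site does plant-wide analysis and storage.

edge_reports = [
    {"station": "cam-01", "parts_checked": 1200, "defect_rate": 0.012},
    {"station": "cam-02", "parts_checked": 1180, "defect_rate": 0.009},
    {"station": "cam-03", "parts_checked": 1215, "defect_rate": 0.050},
]

total_parts = sum(r["parts_checked"] for r in edge_reports)
avg_defect_rate = mean(r["defect_rate"] for r in edge_reports)

# Central-side analysis: flag stations well above the plant average.
outliers = [r["station"] for r in edge_reports
            if r["defect_rate"] > 2 * avg_defect_rate]

print(f"parts checked plant-wide: {total_parts}")
print(f"stations needing attention: {outliers}")
```

The point of the sketch is the division of labour: the per-image work stays at the edge, while only aggregate-level decisions and long-term storage remain central.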
Would you like to apply Edge Computing to your imaging-based devices or machines? Active Silicon can provide you with powerful embedded vision solutions, available quickly and at surprisingly low cost thanks to our versatile hardware platforms. Come and visit our experts at Embedded VISION Europe on Oct. 12 and 13 in Stuttgart, Germany.