A review of Embedded VISION Europe 2019
October 31, 2019
EVE took place last week in Stuttgart, and the Active Silicon team enjoyed hearing all about the newest products and plans in the embedded vision sector.
The latest technologies presented at the event included MIPI cameras (Vision Components), stereo vision (Nerian) and ultra-low power video (CSEM), but there was a clear focus on AI, with over two thirds of the presentations covering this topic in one way or another. The Khronos Group consortium gave an interesting overview of how their standards, including OpenVX, NNEF, OpenCL and SYCL, support AI/Deep Learning technologies at different levels. Clearly there are huge opportunities for embedded/machine vision applications at the ‘edge’, but this requires smart cameras that can support Deep Learning/AI processing (or inference) in the camera itself. Potential solutions based on embedded/mobile CPUs (e.g. Snapdragon), GPUs (e.g. Jetson), SoC FPGAs (e.g. Zynq) and devices with neural net acceleration (e.g. Movidius) were presented at the conference. With integrated video support and ever-increasing processing capacity in each of these device technologies, delivering a smart camera system is already possible. Techniques like network pruning, optimization and sparsity all help to reduce the heavy load of supporting inference in the camera.
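To give a flavor of the pruning idea mentioned above, here is a minimal sketch of magnitude-based weight pruning (the `magnitude_prune` helper and its parameters are our own illustration, not from any presenter): the smallest-magnitude weights in a layer are zeroed out, producing a sparser network that is cheaper to run in-camera.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Illustrative magnitude pruning: zero out the smallest-magnitude
    entries so that roughly `sparsity` fraction of the weights become zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# usage: prune half of a toy weight vector
w = np.array([0.1, -0.5, 2.0, 0.05])
pruned = magnitude_prune(w, sparsity=0.5)  # the two smallest entries are zeroed
```

In practice pruning is usually followed by a fine-tuning pass to recover accuracy, and the sparse weights only pay off when the inference runtime can exploit them.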
Outside of the smart camera itself, however, implementing AI/Deep Learning still has many challenges that hinder mainstream adoption. The Industry 4.0 factory needs to be flexible and able to respond to change quickly, but the expertise and time required to implement AI/Deep Learning based solutions are currently an issue. Presenters described the problems of collecting and labeling data, long training times and the difficulties of validating and verifying the functionality of the final neural network implementation. There was the classic story of a system that had been trained to perfection in the afternoons, only to fail completely when demonstrated to the customer in the morning (due to the different lighting)! While some methods to mitigate these kinds of challenges were presented (e.g. mixup, auto-augmentation, model distillation), it seems we are still a long way from the fast, effective, well-understood solution that the industry needs. It’s not clear which AI/DL development flow/architecture and device technology will be the winner, but it will be interesting to see which ones take the lead as the technology matures.
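For readers curious about mixup, one of the mitigation techniques mentioned above, here is a minimal sketch (the `mixup` helper is our own illustration): pairs of training images and their one-hot labels are blended with a random weight, so the network trains on "in-between" examples and tends to become less brittle to variations such as lighting.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Illustrative mixup: blend two training examples and their
    one-hot labels with a weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# usage: blend two toy 4x4 "images" with one-hot labels
img_a, img_b = np.ones((4, 4)), np.zeros((4, 4))
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y = mixup(img_a, lab_a, img_b, lab_b)
```

The blended label keeps the loss honest about how much of each class is in the mixed image; in a real pipeline this would be applied per batch during training rather than to individual pairs.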
Our own demo centered on our USB3 Vision Processing Unit, an embedded system developed for a specific application in the field of computer-vision-assisted surgery. The unit simultaneously acquires, displays and processes images from four individual cameras. Also on display was our range of Harrier Camera Interface boards, including our brand-new USB/HDMI board. These boards are a perfect fit for the most compact autofocus zoom cameras available – the Tamron MP1110M-VC and Sony EV7250A – offering a neat yet high-powered solution for remote vision.