
Active Silicon AI Series part 6: Artificial Intelligence and embedded vision revolutionizing wearable device capabilities

March 7, 2018

We’re all becoming familiar with smaller, faster vision processing and marvel at the new products reaching the consumer market, such as phones that can be unlocked by facial recognition, advanced VR and AR in gaming, and drones that can spy on our neighbours. These products are now having an impact in industrial environments too, where running Artificial Intelligence software on wearable devices improves capability, speed and security.

AI may be brought to devices via apps in the first instance, but as embedded CPUs become smaller, cheaper and more powerful, we will see wider adoption of devices running AI software on-board; the technology being developed by XNOR.ai is one illustration of this.
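
To see why this matters for low-power hardware, here is a minimal sketch of the trick that binary neural networks of the kind XNOR.ai works on exploit: when weights and activations are constrained to +1/-1, a dot product collapses into an XNOR followed by a bit count, which is far cheaper for a small embedded CPU than floating-point multiply-accumulates. The function names below are purely illustrative and are not taken from any XNOR.ai API.

```python
# Illustrative sketch of a binarized dot product (not XNOR.ai's actual code).

def pack_bits(values):
    """Pack a list of +1/-1 values into an integer bitmask, one bit per value."""
    mask = 0
    for i, v in enumerate(values):
        if v > 0:
            mask |= 1 << i
    return mask

def binary_dot(a_bits, b_bits, n):
    """Dot product of two +1/-1 vectors of length n stored as bitmasks."""
    # XNOR marks the positions where the two vectors agree.
    agree = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(agree).count("1")      # popcount
    return 2 * matches - n               # agreements minus disagreements

# Example: (+1, -1, +1, +1) . (+1, +1, +1, -1) = 1 - 1 + 1 - 1 = 0
a = pack_bits([+1, -1, +1, +1])
b = pack_bits([+1, +1, +1, -1])
print(binary_dot(a, b, 4))               # -> 0
```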

Taking vision to another dimension
Take, as an example, Microsoft’s HoloLens: a smart headset that comprises a Windows 10 computer, sensors, spatial sound and a high-definition stereoscopic 3D optical display. Its second-generation wearable Holographic Processing Unit (HPU) will contain an AI chip, meaning that images can be gathered, stored, processed and interpreted more quickly on the headset itself, without the need for Wi-Fi and without the security risks of sending data to and from the cloud. The headset will bring Mixed Reality Capture (MRC) to 3D design and imagery, taking scale, proportion and perspective to a whole new level when visualizing plans and models. And, of course, it’s bound to include some unique gaming features!

Similar technology is also being employed to improve the experiences of the visually impaired. In October 2017, OrCam launched version 2 of its MyEye. This small, lightweight smart camera attaches to a pair of glasses and allows those with limited or no sight to identify objects and faces, and even to read print simply by pointing at it. The device uses established optical character recognition (OCR) technology to read text aloud, while AI-based face recognition lets the wearer tell the difference between men and women and identify particular people and items the device has learnt.
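
OrCam’s own algorithms are proprietary, but the two building blocks described above, OCR and face detection, can be sketched with off-the-shelf open-source tools. The snippet below is a generic illustration of that kind of pipeline, not the device’s actual code; a real wearable would pass the extracted text on to a text-to-speech engine.

```python
# Generic OCR and face-detection sketch using open-source tools
# (pytesseract and OpenCV), purely for illustration.
import cv2
import pytesseract

def read_printed_text(image_path):
    """Extract printed text from an image; a wearable would speak it via TTS."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray)

def find_faces(image_path):
    """Detect face regions with a pre-trained Haar cascade shipped with OpenCV."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```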

On the industrial side, companies such as Picavi and Vuzix are successfully offering smart glasses featuring AR and VR to aid warehouse picking and other services in the field. Picavi have tailored their product solely towards supply chain optimization, boasting savings of 30% in the time spent selecting items for packing. Vuzix target a broader range of industries, and their M400C glasses offer connectivity via Micro USB, Wi-Fi and Bluetooth.

Wearables are also highly developed in the defence sector. Body-mounted sensors and cameras monitoring soldiers’ heart rates, body temperatures, locations and surroundings are combined with AR and VR applications to allow remote assistance from more experienced soldiers or doctors. Additionally, data from wearable sensors is being manipulated using AI to create even more responsive and realistic training scenarios. Image processing technologies are being developed to better identify targets, including the use of facial recognition to distinguish human targets. Devices with embedded vision and AI capabilities mean that soldiers and security forces can operate in areas beyond the range of Wi-Fi or other connectivity. While the reality of an autonomous military is still a way off (and, of course, rather alarming), it’s perhaps gratifying to know that the millions of dollars being invested in research in this area will, in some way, shape or form, benefit us all in other sectors.

Fuelling the growth of on-device AI
Increased investment in products for the demanding consumer market is now leading to the commercialization of technologies, making them more widely accessible and easier to install. For industry to fully embrace AI and embedded vision devices, the biggest hurdle to overcome is compute power; carrying out burdensome processing quickly drains the batteries of small devices. Developments that reduce power consumption by enabling faster processing are making the adoption of on-device AI more realistic. For example, Qualcomm’s new SDK for its Zeroth machine intelligence platform makes it simpler for devices built on its chips to run deep learning programs without needing to send data to the cloud, thereby saving power. Likewise, Bosch was recently recognised as a CES 2018 Innovation Award honouree for its ultra-low-power MEMS sensors, which improve battery life for wearables and drones.
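
As a rough illustration of what “on-device AI” means in practice, the sketch below runs a neural network locally so that camera frames never leave the device. It uses TensorFlow Lite purely as a stand-in; Qualcomm’s Zeroth SDK exposes its own, different API, and the model file name here is hypothetical.

```python
# On-device inference in a nutshell: the model runs locally, so raw image
# data is never sent to the cloud. TensorFlow Lite is used only as an
# example runtime; "classifier.tflite" is a hypothetical model file.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# Feed one camera frame (placeholder data, already in the model's input shape).
frame = np.zeros(input_info["shape"], dtype=input_info["dtype"])
interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_info["index"])
print("Top class:", int(np.argmax(scores)))
```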

Furthermore, enhanced and more widely available time-of-flight cameras are supporting more accurate depth and distance measurements for use in processes such as object and facial recognition.
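
The principle behind a time-of-flight camera is simple arithmetic: the sensor measures how long emitted light takes to bounce back from the scene, and the distance is half that round-trip time multiplied by the speed of light. A minimal sketch, with an illustrative helper function:

```python
# Distance from a measured round-trip time, as a time-of-flight sensor computes it.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the target in metres: half the round trip times the speed of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a 10 ns round trip corresponds to roughly 1.5 m.
print(tof_distance(10e-9))
```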

On-device AI is enabling embedded vision to bring revolutionary applications to wearable technology, opening up radical new opportunities across many sectors. At Active Silicon, we’re investing in our embedded vision expertise to ensure we can offer the latest vision systems and interface boards to our customers. These developments bring benefits in security, speed, accuracy and capability, and are playing a huge role in changing the face of machine vision as we know it.