Learning Center | August 31, 2017

Active Silicon’s AI series – part 1: Applying Deep Learning to FPGAs

Recent news from Microsoft has again put Deep Learning on FPGAs in the headlines. Microsoft is using Intel FPGAs (formerly Altera) together with its own FPGA-based deep-learning platform, Project Brainwave, to accelerate deep neural networks (DNNs). Given the pace of development across the industry, Deep Learning is rapidly making its way into embedded systems in a wide variety of sectors, from autonomous vehicles to medical research. With scalability one of the determining success factors, the world is watching these developments keenly.

In terms of what is currently in more widespread use, NVIDIA offers its own Deep Learning SDK to power GPU-accelerated machine learning applications for embedded systems and both cloud-based and on-site data centers. Image recognition, driver assistance programs, life sciences and even speech recognition are among the applications benefiting from reduced processing times and increased accuracy. AMD has launched its Radeon Instinct MI25 server accelerators, which, along with its GPUs and software platforms, are designed to meet the challenges of high-performance neural network training.

Figures suggest that Google's TensorFlow software library is the most widely adopted Deep Learning framework, due largely to Google's active development and its open-source availability. It can run on one or more CPUs or GPUs through a single API, although it is not yet commercially available on FPGAs. Also in development is TensorFlow Lite, a toolkit for mobile devices, which follows hot on the heels of Facebook's Caffe2Go framework.
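To illustrate the "single API" point above: the same TensorFlow program runs unchanged whether a GPU is present or not, with the runtime placing operations on an available accelerator automatically. This is a minimal sketch using the modern eager-execution API (the article predates TensorFlow 2.x, whose 1.x graph-session style differed); the values here are purely illustrative.

```python
import tensorflow as tf

# The same code runs on CPU or GPU; TensorFlow places the matmul
# on an accelerator automatically when one is available.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[1.0],
                 [1.0]])

# Row sums of `a` via a matrix-vector product: [[3.0], [7.0]]
result = tf.matmul(a, b)
print(result.numpy())
```

Device placement can also be pinned explicitly with a `tf.device("/CPU:0")` context, but in most programs no device-specific code is needed at all, which is what drives the framework's portability across hardware.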

Elsewhere, Greece's Irida Labs is bridging the gap between cameras and the human eye by bringing visual perception to an extended range of devices. It is doing so by developing computer vision software that applies image processing and machine learning techniques on any CPU, GPU or DSP/ASP platform. At Embedded Vision Europe in Stuttgart in a few weeks' time, Irida's CEO and co-founder, Vassilis Tsagaris, will present a case study on using Deep Learning to advance food product identification. Active Silicon will be exhibiting at the show, and we're excited about this opportunity to hear and share the latest developments in this area.

Over the past few months our team of engineers has been closely watching progress on FPGAs to see how it can benefit our customers, enabling faster and more accurate image recognition in our next generation of embedded systems. We're looking forward to joining the discussions in Stuttgart, and to keeping an eye on advancements in general.
