Imaging and Machine Vision Europe (IMVE) has published its recent interview with our CEO, Colin Pearce. In it, Colin talks about the history of the company, FPGA technology, our history with Xilinx, challenges and opportunities in the frame grabber market and our forthcoming long-reach HD video transmission technology. Register or log in to read the full article here: https://www.imveurope.com/feature/foundation-built-fpgas
Click here to subscribe to our newsletter.
Calling all engineers – we’re hiring!
August 1st, 2018
Our 30th year in business has seen our order book grow in both the frame grabber and embedded systems divisions. We’re now seeking fresh talent to join our team and further grow the business, and are recruiting for Software, Hardware and FPGA Design Engineers.
What are we looking for?
Generally we’re looking for candidates with a relevant qualification and experience in software/electrical/electronic engineering or computer science, but initiative, passion and proven applicable skills are equally important. As we’re a relatively small, agile company, candidates will need to be customer-focussed and flexible, and have the desire to grow and develop.
What can we offer?
Being part of a small team, you will be able to influence future product strategy, as well as streamline our design processes. All current roles are based in our head office in Iver, Buckinghamshire (UK), easily accessible from the M25, M40 and M4 or by public transport via Uxbridge and Slough. In addition to a competitive salary, Active Silicon offers 25 days’ annual holiday, a well-established pension scheme, flexible working hours and a bonus scheme. We’re a sociable bunch and Friday pub lunches, summer barbeques and other gatherings are commonplace, and the Christmas party extends to partners and overnight accommodation. There’s also the opportunity to travel as our clients are based around the globe.
How to apply
Interested? See the full job specs and how to apply here:
(Please note, you must be eligible to work in the UK. Agencies should contact us by email not phone.)
Basler AG acquires Silicon Software GmbH
July 26th, 2018
Proving that M&A is still rife in the machine vision sector, last week saw the acquisition of 100% of Silicon Software’s shares by fellow German company, Basler. The camera manufacturer will now be able to offer an extended machine vision portfolio with Silicon Software’s hardware and software products. Financial details were not disclosed, but the leading figures of Silicon Software, Dr. Klaus-Henning Noffz and Dr. Ralf Lay, are set to stay with the organization.
Following on from other alliances, Active Silicon is now one of the very few independent vision hardware manufacturers. We continue to offer our frame grabbers, interface boards and embedded solutions through a network of international distributors, ensuring compatibility with an array of cameras and systems. We remain a neutral, agile and innovative partner for a fast-moving marketplace.
Introducing the latest addition to our CoaXPress frame grabber family
July 19th, 2018
As computer vision infiltrates an increasing variety of sectors, the cost of vision systems is also coming down. We are delighted to reveal our latest frame grabber: the FireBird Single CoaXPress Low Profile board.
Maintaining all the industry-leading features of our highly-regarded and well-established FireBird boards, this newest arrival has been optimized for cost, so it has wider appeal for a greater range of applications. It is ideal for use with the latest range of small, lower-priced single-link CoaXPress cameras. CoaXPress in this combination offers a more affordable solution with all the advantages of higher bandwidths, real-time triggering, long cable lengths and the robustness and high reliability of a dedicated vision standard.
The low-profile design of this FireBird Single CoaXPress frame grabber allows the board to be used in small 2U enclosures; a full-height bracket is also available for standard PC cases. It is a 4-lane Gen2 PCI Express board and is fitted with a Micro-BNC connector, the latest standard for CoaXPress, which also supports PoCXP. Comprehensive I/O is provided, including front panel I/O.
DMA engine technology “ActiveDMA” guarantees zero CPU intervention and high-speed, low-latency image data transfers. In addition, cable lengths of up to 40m at 6.25 Gbps and over 100m at 3.125 Gbps are supported. All our FireBird frame grabbers are GenICam compliant as standard, and the board is supported by our proven ActiveSDK software. The full specifications can be seen on our website here. More information about the CoaXPress standard is on our Resources page.
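As a rough illustration of what those link speeds mean in practice, here is a back-of-envelope Python sketch. It assumes CoaXPress’s 8b/10b line encoding leaves about 80% of the 6.25 Gbps line rate for payload; real-world throughput also depends on packet and protocol overhead, so treat the numbers as an upper bound, not a specification.

```python
# Back-of-envelope throughput estimate for a single CoaXPress link.
# CXP-6 runs at 6.25 Gbps on the wire; 8b/10b encoding leaves ~80% for data.
# Illustrative only -- real throughput also depends on packet overhead.

def max_frame_rate(link_gbps, width, height, bits_per_pixel,
                   encoding_efficiency=0.8):
    """Upper-bound frame rate for one CoaXPress link."""
    payload_bytes_per_s = link_gbps * 1e9 * encoding_efficiency / 8
    frame_bytes = width * height * bits_per_pixel / 8
    return payload_bytes_per_s / frame_bytes

# A 2048 x 2048, 8-bit camera on a single CXP-6 link:
fps = max_frame_rate(6.25, 2048, 2048, 8)
print(f"~{fps:.0f} fps upper bound")  # roughly 149 fps
```

The same function with `link_gbps=3.125` shows why the lower speed grade still comfortably serves many megapixel-class cameras over 100m cables.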
As we mark our 30th year, we continue to deliver new and innovative products for the machine vision and embedded vision industries. From space missions to large scale deployment of industrial vision systems, we have provided imaging components that help our customers provide world-class solutions. Contact us to see which of our products could enhance your systems and processes.
Active Silicon AI Series part 9: IBM’s Watson – what’s it all about, and why is it watching the tennis?
July 11th, 2018
IBM’s Watson is a cognitive computing capability which, using DeepQA software and the Apache UIMA framework, analyzes very high volumes of data and learns from previous conclusions and data processing to support decision-making. Put simply, it’s a question-answering guru powered by AI, able to process data at 80 TeraFLOPs, effectively digesting a million books per second. Watson first made the headlines when it was pitted against human opponents in the US TV quiz show Jeopardy, and beat the reigning champions. Following on from Deep Blue famously beating chess wizard Garry Kasparov in 1997, you might be forgiven for thinking that IBM are just in this for fun, but applications for Watson are far-reaching and promise to change the computational world as we know it.
Watson at Wimbledon
Computing and tennis have come a very long way since Atari’s Pong in 1972, and now AI is changing our experience of the world’s most famous grass court tennis tournament. IBM have partnered with the organizing committee to drive fan engagement and deliver faster, more relevant and more captivating viewing content. By recognising players’ movements and emotions, combined with crowd noise and match data, Watson can identify which moments in a match are the most exciting and is able to compile a highlights montage in just minutes as opposed to the hours that it takes a human editor. Increased, exclusive content on social media platforms and a chatbot called Fred are just some of the other ways in which Watson is growing audience participation and enjoyment. IBM cited a 25% increase in Wimbledon’s social media following in 2016-17 and expects to see this pattern continue.
Watson in medical imaging
Beyond sport, Watson is having a huge impact on the medical sector too. Watson Health was established in 2015 to focus solely on data generated within the medical sector; the platform is also open to developers to use in their own applications. In 2016, IBM announced its Watson Health Imaging division, combining the expertise of academic medical centres, health systems, ambulatory radiology providers and image technology companies. The collaborative “aims to bring cognitive imaging into daily practice to help doctors address breast, lung, and other cancers; diabetes; eye health; brain disease; and heart disease and related conditions, such as stroke.” Since its inception, the number of members has grown from 15 to 24, and we can now see Watson being trained to aid in understanding how a condition is likely to progress, what treatment should be considered, and when to intervene. Watson reads patients’ records, doctors’ reports and other peripheral material, and combines these with medical images to make its diagnoses and predictions.
Also in 2015, IBM bought the health solution provider, Merge, and used the company’s vast collection of medical images to train Watson’s visual recognition capabilities to identify anomalies and changes in patient scans.
And the winner is…
The entries for the IBM Watson AI XPRIZE – a $5 million AI and cognitive computing competition running over four years from 2016 to 2020 – make for interesting further reading. From the initial 147 entrants, the 62 teams progressing to round two were announced in December last year. Details of the projects, and of the top 10 selected at this point for a Milestone award, can be seen here.
Don’t forget the football
Collaborating with FOX Sports, Watson is bringing its highlight-creating AI tools to football too. It’s even been put to work analysing massive amounts of data to predict the winner of this year’s FIFA World Cup. But we wouldn’t want to spoil the surprise…
Image acquisition solutions for the NVIDIA Jetson and FireBird frame grabbers
July 4th, 2018
Active Silicon’s Camera Link and CoaXPress frame grabbers offer out-of-the-box compatibility with NVIDIA’s AI computing platform, Jetson. Together, they bring deep learning capabilities and accelerated GPU image processing to vision systems.
More about Jetson
Jetson has been designed specifically for high-performance parallel processing, enabling deep learning to be integrated into compact embedded systems. Jetson can be purchased as a stand-alone module, ready to be installed into an end-user’s system with their own software, or as a developer kit including all the required power supply, cables and software for quick and easy set-up. The latest module, the TX2, released in March 2017, offers 8 GB of memory and 58.4 GB/s of memory bandwidth while consuming only 7.5 W. Featuring a Pascal-based GPU, it supports up to six 2-lane cameras at 2.5 Gbps per lane. The TX2 pairs a quad-core ARM Cortex-A57 CPU with a dual-core Denver CPU in an HMP (heterogeneous multi-processing) configuration.
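To put those datasheet figures in perspective, a quick sanity check (using only the numbers quoted above, and ignoring protocol overhead) shows that even six fully loaded 2-lane cameras consume only a small slice of the TX2’s memory bandwidth, leaving plenty of headroom for GPU processing:

```python
# Rough sanity check using the TX2 figures quoted above:
# six 2-lane cameras at 2.5 Gbps/lane versus 58.4 GB/s memory bandwidth.
# Illustrative only -- ignores protocol overhead and other memory traffic.

cameras = 6
lanes_per_camera = 2
gbps_per_lane = 2.5
memory_bandwidth_gbs = 58.4  # GB/s

total_camera_gbs = cameras * lanes_per_camera * gbps_per_lane / 8  # bits -> bytes
fraction = total_camera_gbs / memory_bandwidth_gbs
print(f"{total_camera_gbs} GB/s of camera data, "
      f"{fraction:.1%} of memory bandwidth")
```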
Jetson is bringing deep learning to a wide range of computer vision applications, both industrial and commercial. Its proficiency in allowing super-fast processing and AI deployment is accelerating the pace of change in many sectors. Some examples include drone deployment, industrial robotics and Intelligent Video Analytics such as those used in surveillance and traffic management.
Active Silicon and Jetson
Active Silicon’s frame grabbers easily integrate with Jetson and relevant demos can be supplied on request. Once NVIDIA’s SDK is downloaded, our ActiveSDK can be installed and our frame grabbers can be operational within minutes. Requiring only a 4-lane PCI Express connector, accelerated processing can be achieved simply and quickly. Following installation, Active Silicon offers unsurpassed after-sales support for all our products and can assist in getting the best performance for your vision system.
We’ve compiled a datasheet with all the information you could need to decide whether Jetson is right for your vision solution. Click here for complete specs and options, or contact us to discuss the opportunities available.
Active Silicon AI series Part 8: 5 applications leveraging drone technology and AI
June 28th, 2018
Computer vision, image processing and drones go hand in hand as high-resolution cameras become smaller, lighter and smarter. We look at what influence AI is having on the products and solutions reaching the UAV market.
Inspection drones aren’t particularly new, but the way in which these miniature flying machines can survey, record, report and even fix issues is changing. Drones can now fly for tens of miles along power cables or pipelines, and sifting through the amassed data would be a full-time job for a team of humans. Thermal, LiDAR and 3D imaging camera technology has been successfully miniaturized and is being mounted on smaller drones, enabling extended battery life and greater reach. AI software now enables drones to “bypass” flawless pipes, towers and cables and focus on weaknesses and defects, so that only critical images are recorded for review. Avitas Systems claim to have nailed the “first inspection solution offering enhanced, robotic-based autonomous inspection, advanced predictive analytics, digital inspection data warehousing, and intelligent inspection planning recommendations”. This would open the door to not only all-seeing and all-knowing drones, but devices that predict the future as well!
Neurala’s Brains for Bots SDK aims to make cameras and inspection drones more intelligent and interactive: “The SDK transforms any type of drone device into an intelligent ‘situational partner’ that can perform operational assignments both on- and offline”. Originally developed for NASA’s planetary exploration, Neurala’s deep learning software can now be supported on-board, meaning that a drone can make its own decisions about obstacle avoidance and item recognition, and even carry out remedial action without needing to communicate with a PC or supervisor. Optimized for mobile deployment, the software can be trained using just an estimated 20% of the traditional number of images. Their search technology has been developed specifically to examine shifting environments for moving targets in real-time, enabling the discovery of a mobile needle in a traveling haystack under critical time restraints.
Drones with the ability to survey huge expanses of agricultural land and autonomously identify areas which require, for example, additional pesticide care, are saving millions of dollars in over-spraying, and, of course, avoiding unnecessary environmental damage from excess chemicals. The importance of drones in agriculture was highlighted by the 2017 NVIDIA Inception Award which recognized the work of Gamaya in combining big data processing via AI and hyperspectral imaging to manage planting gaps, weed detection, nutrient levels and soil erosion while predicting crop yield. This particular application was developed for use in Brazil but when your farm spans thousands of hectares of the world’s most remote terrain, these little tools could be a valuable investment.
Zipline is a notable example of drone technology for the greater good, delivering blood and vaccines to isolated areas of Rwanda. Combine this with the sense-and-avoid software coming onto the market, and this sector can benefit hugely from flying AI. And Zipline aren’t alone: VillageReach, Flirtey, Vayu Drones and (naturally) Google’s Project Wing are just a few of the other organizations investing research and capital into intelligent drones – Ehang are even developing a drone which can carry a person, with the aim of expediting organ donation. While stringent flight regulations in many first-world countries may pose a barrier to implementation, less developed nations are an accessible and practical testing ground.
AI in defense is a contentious area (see our AI post from 2017), but huge profits for successful developers continue to make it one of the driving forces in AI research. Systems Technology Inc (STI) is partnering with other US military contractors to develop a hardware and software solution for UAVs which can be launched from the deck of a warship and controlled by hand gestures. Deck Intelligent Aircraft Body Language Observer, or DIABLO, aims to mitigate the noise and constant distractions within this environment and use machine learning to teach drones to recognize the movements of deck handlers, leading to more efficient launches and landings. Reports suggest that research has overcome the common challenges posed by in-the-field computer vision applications including low light, sun glare, temporal resolution and scene clutter. However, the use of Google’s TensorFlow AI systems to analyze the video footage of military drones has caused consternation within the company, and in May this year accounts emerged of several employees resigning, citing concerns over Google’s collaboration with the Pentagon.
Recent developments in UAV technology are bringing the fantasies of aircraft engineers to life, to the benefit of a range of sectors. The devices are capable of capturing massive amounts of visual data, and now technology exists to analyze this and extract relevant information in previously inconceivable timeframes. Completely autonomous drones are the latest aspiration, and it really won’t be long before intelligent flying robots are an expectation rather than a dream in many areas of life.
EMVA Business Conference 2018 – what did you miss?
June 21st, 2018
If you joined us in Croatia recently, you’ll know the key points from this year’s EMVA Business Conference, but in case you missed it, or you’d like a recap, here’s our synopsis of the 3 days.
Friday afternoon witnessed one of the most enthralling discussions in recent years, delving into the recent intensification in Mergers and Acquisitions within our sector. Consolidation in the marketplace has been fast-paced, due to the vast number of smaller players in our niche industry. In general, the market is growing as vision systems become prevalent in more sectors and applications – we can see this in the growth of inline optical inspection, amongst other things. Panellists discussed how M&A activity has been driven by organizations’ desires to acquire technology and grow IP, mature into new market sectors and “own” skilled management who have a clear vision of the future for the industry.
We thought that Michal Czardybon from Adaptive Vision gave one of the most interesting presentations as he described the company’s innovative deep learning add-on. Adaptive Vision combines deep learning recognition software with traditional object recognition algorithms. Together, they offer the ability to learn from only 20-50 images instead of thousands, in less than 10 minutes’ training time. Michal’s real-world examples of machine vision supported by GPU processing covered case studies in quality inspection, food processing, medical imaging and more, demonstrating the massive potential that deep learning brings to the vision industry. You can read more about deep learning in our AI series of blogs.
Jeremy White, Editor of WIRED magazine, also entertained us with a dig at the industry sceptics. His advice was to keep an open mind when considering new and disruptive technology as you never know which innovations will take off overnight. Referring to former Microsoft CEO Steve Ballmer’s famous dismissal of the iPhone in 2007, his presentation warned us to embrace and learn from AI applications in order not to get left behind.
The opening drinks reception on Thursday evening was the first occasion to catch up with old friends and make new acquaintances, and further networking opportunities abounded. Friday night, however, offered a novel chance to enter the world of fiction, as the social event was held in Dubrovnik’s Old Town, within the famous Walls of Dubrovnik, the main filming location for the Game of Thrones city, King’s Landing. The picture here shows our CEO, Colin Pearce, and Director of North American Operations, Eileen Zell, enjoying the evening.
Overall, the event was well-attended and informative as always, and we’re looking forward to next year’s gathering in Copenhagen. Sign up to our newsletter to hear more from us about developments in the vision industry.
Meeting rising customer demand with a strengthened Production team
June 15th, 2018
As we enter our 30th year of providing vision system components for an ever-wider range of application areas, our order book is expanding rapidly. To help us meet the demand for a greater number of high-quality products shipping from our premises, we’re pleased to welcome two more members to our Production team.
Steve joins us as Stores Controller, ensuring our stock levels are maintained and all our customer orders are leaving on time and as requested. His role reinforces the critical supply chain function of the business and safeguards the continuity of product supply. Alongside him, Sonny is our newest Test Technician, guaranteeing the quality of our products. As well as testing our new boards and modules prior to despatch, his work also encompasses after-sales servicing and testing innovative products still in the R&D phase.
It’s all in the detail
Our team goes above and beyond with exceptional attention to detail to deliver and support our best‑in‑class products. View our range of frame grabbers, embedded systems, camera interface boards and software, and contact us to see how our products could enhance your vision system.
EMVA heads to Croatia
June 5th, 2018
The beautiful city of Dubrovnik will host this summer’s 16th European Machine Vision Association (EMVA) Business Conference on 7-9 June, which will see around a hundred top minds from the machine vision industry converge to discuss and share insights on the future of the industry.
The keynote address will be delivered by Philippe Legrain, Political Economist and Writer, and will look at the risks and opportunities of Europe’s economic future. Further presentations will focus on machine vision in industry and inspection, deep learning, autonomous vehicles and wider developments in the sector in general. Networking opportunities abound, including lunches and dinners, and culminating in a city tour on Saturday.
Young Professional Award
Alongside the conference, EMVA will present the Young Professional Award to honor the outstanding and innovative work of a student or young professional in the field of machine vision or computer vision. Over the last few months, nominations have been compiled and judged, and we look forward to seeing which fresh talent will be recognized this year.
More details about the event can be seen on EMVA’s dedicated website here. Active Silicon’s CEO, Colin Pearce, will be joining the conference and the discussions, along with Eileen Zell (Director, North American Operations), and Frans Vermeulen (Head of Sales and Marketing), and we’ll report back on the event highlights.
Shaping machine vision standards at IVSM
May 30th, 2018
This month’s IVSM in Frankfurt was the biggest ever, with attendees for the OPC UA Machine Vision meetings boosting numbers to around 140, and with more machine vision standards and meetings than ever, the schedule for the week was very full! Active Silicon’s CTO, Chris Beynon, reports back.
Camera Link surprise
Following the recent approval of v2.1, the Camera Link meeting was expected to concentrate on defining a significantly enhanced v3.0. Instead, the unexpected result was to drop v3.0 and quickly define a v2.2 by adding GenCP. The expectation is that Camera Link will then shift towards maintenance mode, to allow continued support for Camera Link frame grabbers and products for many years to come.
Progress for CoaXPress
CoaXPress, under Chris Beynon’s chairmanship, concentrated on two key topics. Firstly, the committee reviewed the outstanding topics to allow CXP v2.0 to go to ballot shortly. Secondly, the recent conclusions of the CCRC group (who had been tasked to suggest options to avoid competition between CoaXPress and Camera Link HS in the optical sector) were studied. The CoaXPress committee was very positive about the group’s proposal, which maintained the majority of CoaXPress’ protocols and investment by companies in Intellectual Property, while basing the lowest levels of the protocol stack on an enhanced version of the Camera Link HS format. More work will be undertaken to review this proposal in detail.
The GenICam meeting concentrated on completing the next release, which would have been v3.1 but is now expected to have a new name reflecting the year and month of release, possibly 2018.06. Several presentations also examined the current hot topic of embedded vision.
There was also plenty of discussion around the forthcoming GenDC standard (previously called GenSP) that will define the way that image data is transmitted and potentially stored. Many of the meetings – from GenICam through CoaXPress to USB – discussed this, with the main debate being about the balance of future extensibility versus current complexity. Some simplifications were agreed which seemed to be generally supported.
In the PlugFest, Active Silicon tested its brand-new low-cost single channel CoaXPress frame grabber, as well as participating in the second ever Camera Link PlugFest ready for certification of our products to the new v2.1 standard. See more about the benefits of our industry-leading frame grabber range here.
The next IVSM will be in Austin, Texas, starting 17th September, where the work will continue. Keep up to date with machine vision standards and industry updates by following us on social media and subscribing to our newsletter.
Industry 4.0 – what are the challenges holding it back?
May 23rd, 2018
When we looked at the role of machine vision in Industry 4.0 development, plenty of organisations were talking up the opportunities. But what has actually come to fruition, and where will the next big developments come from? Our AI series has also touched on areas of industry, as deep learning and neural networks enable intelligent processing and smart factories. Overall, we’ve investigated the possibilities, established the importance of vision systems and looked at some realities. However, Industry 4.0 is still only moving slowly, so what’s holding it back?
One of the major obstacles is integrating existing machinery, technology and processes. After all, not many manufacturers are in the position of being able to strip out their old equipment and replace it with new, shiny, highly-connected, smart gear instead! Redesigning what’s already in place will inevitably slow down developments and cause delays in getting new systems up and running and working without glitches. One organisation addressing these challenges is SmartFactory-KL, a German-based partner consortium made up of nearly 50 members all collaborating to research and develop Industry 4.0 software and hardware. They have a dedicated plant in Kaiserslautern for testing and analysis, preparing technology for implementation into existing factories; concepts include vertical integration via edge devices, a modular safety concept, cloud connections via 5G and improved infrastructures.
Another barrier to Industry 4.0 roll-out is concern over security. With information being sent from production line to the operations room, and beyond, what measures are in place to safeguard data? In his recent article for insight.tech, Maurizio Di Paolo Emilio looks at the option of fog computing to reduce pressure on the security aspects, as well as bandwidth and implementation costs, associated with migrating to Industry 4.0. Various communication protocols are also being developed to ensure ease of authentication and connectivity between vision components, including the OPC Unified Architecture, DeviceNet and the Fanuc Intelligent Edge Link and Drive (FIELD) system.
A further challenge is ensuring personnel have an adequate level of expertise to operate new technology. While robots might be taking over picking and placing, staff are now expected to know how to program and control these devices. But not every factory operative can, or wants to, become a robotics engineer overnight, so recruiting and training the workforce requires investment and time. Festo have published an excellent case study of their work with students and employees in Mexico to illustrate how technology companies can support the movement to Industry 4.0.
The role of industrial vision
Recent developments in embedded vision systems, bringing even smaller and more cost-effective cameras and sensors to market, will help Industry 4.0 reach maturity. Cameras little larger than your thumbnail, in combination with robotics and, often, advanced processing software, are speeding up production lines and connecting them to other areas of the factory floor and control centre. Basler have issued an interesting white paper illustrating the role of industrial cameras in the development of Industry 4.0.
We know that machine vision is a key element in transforming today’s factories into tomorrow’s production centres and our products for industrial vision are under constant development to help overcome the obstacles and pave the way for global deployment of Industry 4.0. Contact us to discuss how we can support your industrial processes with enhanced vision components.
Insights into GPU processing at the Machine Vision Conference
May 15th, 2018
Active Silicon’s Frans Vermeulen will deliver a presentation on high-speed image acquisition with real-time GPU processing at tomorrow’s Machine Vision Conference and Exhibition.
It’s not too late to register to attend and, with hundreds of industry specialists due to converge on Milton Keynes, it’s an excellent chance to meet associates and discuss the latest developments in machine vision. As well as being in the exhibition area with our live USB3 Vision Processing Unit demo, Frans will be speaking in the “Deep Learning and Embedded Vision” session just before lunch.
Frans’ presentation will look at solutions to the rising demand for increased speeds and higher resolutions in image processing, using frame grabbers in combination with GPUs. Modern GPUs offer extremely efficient processing and present inspiring opportunities in deep learning as their parallel structures allow large blocks of data to be processed simultaneously. Through the use of case studies, he will illustrate ways to optimise different vision systems using GPUs. See our GPU solutions resource page for more details about processing using GPUs.
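The data-parallel model Frans describes can be illustrated with a loose CPU-side analogy: instead of visiting pixels one by one, a single operation is expressed over a whole image block, which array libraries then execute element-wise (and which GPU libraries run across thousands of cores). This NumPy sketch is purely illustrative and is not drawn from the presentation itself:

```python
import numpy as np

# A loose CPU-side analogy of the data-parallel GPU model: one operation
# applied to every pixel at once, rather than a per-pixel loop.

image = np.random.default_rng(0).integers(0, 256, (1024, 1024),
                                          dtype=np.uint16)

# Naive per-pixel loop (what a scalar CPU implementation might look like):
# for y in range(h):
#     for x in range(w):
#         out[y, x] = min(image[y, x] * 2, 255)

# Data-parallel formulation: one expression over the whole array, which
# libraries such as CuPy or PyTorch can run largely unchanged on a GPU.
out = np.minimum(image * 2, 255).astype(np.uint8)
print(out.shape, out.dtype)
```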
We hope you’ll be able to join Frans, our CEO Colin Pearce and our team of experts at the event to hear more about how we can enhance your vision system with our world-class components. If you can’t be there but want to know more about our Camera Link Frame Grabbers, CoaXPress frame grabbers or embedded vision systems, please get in touch.
Record sales as we enter our 30th year
May 9th, 2018
Closing the books on the 2017-18 financial year has been rather pleasing as particularly strong growth in the second half of the year has resulted in our highest ever annual sales figures. This excellent result reflects the hard work, diligence and innovation of our engineers combined with continuously improving efficiencies in our operations and supply chain. Our range of Camera Link and CoaXPress frame grabbers will soon be enhanced with a new single and dual CXP-6 version, and work on our embedded vision systems is continuing apace. What better way to celebrate our 30th year in business?
Active Silicon AI Series part 7: Will we still need doctors? Computer vision and AI steer medical diagnostics
May 2nd, 2018
Medical imaging has come a long way since Wilhelm Röntgen developed the first x-ray images in 1895, and now encompasses an array of techniques including radiology, MRI, ultrasound and endoscopy to name just a few. Deep learning and other Artificial Intelligence tools are now being applied to image processing to make diagnostics faster, more accurate and more predictive. Companies such as Avalon AI, whose bold vision is to “accelerate the development of a cure for ageing”, Kheiron Medical Technologies, working in the field of breast screening, and Innersight, supporting surgeons in the construction of patients’ surgical plans, have all brought AI products to market in a bid to lead the field of proactive medical imaging. Recently, Google has also jumped on the medical imaging research bandwagon with its retinal imaging developments. Using cameras running AI programs, data is collected and an algorithm interprets the cardiovascular health of the subject from blood vessels in the eye – and not just their condition today but a prediction for the next five years.
Medical diagnostics brought to the palm of your hand
Computer vision and AI are also maturing in handheld devices. Numerous organisations are shrinking scanning and diagnostic equipment, and combining vision and AI to enable these instruments to monitor, calculate and predict. ThinkSono claim to have created the world’s first software to diagnose Deep Vein Thrombosis (DVT) which utilizes image processing software and neural networks to allow a health professional to make a diagnosis via a portable scanner and their smartphone. Despite delays, Butterfly are due to launch iQ this year – a handheld medical ultrasound scanner which uses deep learning and computer vision to change the face of MRI and ultrasound scanning. Signostics have combined MEMS sensors with machine learning and computer vision algorithms to bring miniaturization to bladder screening.
To relieve pressure on health professionals still further, scans and images can now be processed outside the lab, often by the patients themselves, such as with the BiliScreen app being developed in the Ubiquitous Computing Lab of the University of Washington. This revolutionary technology is designed for use in the early detection of pancreatic cancer and liver disorders. The app uses a smartphone’s built-in camera to photograph a patient’s eye, and computer vision processing extracts the white area, or sclera. A machine learning algorithm identifies the levels of bilirubin present in the sclera; bilirubin build-up presents as yellowing of the skin and sclera (jaundice) and is a key factor in identifying liver and pancreatic ailments. The app is able to identify even slightly raised levels that would go unnoticed by traditional monitoring.
Benefitting the developing world
Other organisations have chosen to focus their efforts on those patients generally less well served by technology. MobileODT is in the early stages of bringing cervical cancer screening to women in poorer parts of the world, where under-resourced and remote facilities have so far ruled out a reliable screening infrastructure. Microscope manufacturer Motic has teamed up with Intellectual Ventures and Bill Gates’ Global Good Fund to distribute its EasyScan GO, a microscope which employs AI algorithms to identify malaria parasites in blood samples. Without the need for a clinician to read the results, patients at risk in developing countries can be identified in about 20 minutes.
So will computers replace doctors?
While the volume and accuracy of medical imaging is undoubtedly being accelerated and improved by computer vision and AI, we can’t see the medical professionals hanging up their latex gloves any time soon. What is likely is that these professionals will be able to focus more of their efforts on prevention and treatment of diseases, and leave more of the routine identification processes to the computers. Which is great news for bringing screening and diagnostics to more people more quickly.
At Active Silicon, we’re already proactive with our medical product development, and have passed customer audits to ISO 13485. Contact us to see how our products could advance your medical imaging.
Active Silicon chosen to preserve history for future generations
April 26th, 2018
Piql is an innovative technology supplier offering secure, searchable, high-volume data storage on 35mm film. Having first made an impression in the film industry by revolutionizing the way in which movies can be printed, their latest unique application digitizes data from various mediums, applies OCR and indexing (for fast searching) then digitally prints the data onto highly-durable 35mm film using coding technology similar to QR codes.
The data may then be accessed using their specialist scanner to read the data back from the film. The advantage? The film lasts for up to 500 years, so no need to continually check and refresh data as is the case on other physical mediums.
Piql is currently working with a diverse global client base including the Brazilian Football Confederation, the National Museum of Norway and the National Archives of Mexico. Projects involve preserving legal documents, historical media assets, national heritage manuscripts, audio recordings and works of art. Read more about their projects here.
Piql chose an Active Silicon Firebird Quad CXP-6 frame grabber to integrate into their readers due to its high-speed image acquisition, cost-effective pricing and its off-the-shelf compatibility with other system components. As the process writes data at 20-24 frames per second onto 150mm of 35mm film, and each frame generates 80MB of data, 1.6GB of data is processed per second, continuously. This massive amount of on-the-fly data processing and high compute power could not be managed by a CPU alone, so Piql selected an NVIDIA GeForce 1080 Ti GPU due to its large memory and frame buffer, and superior system flexibility. All Active Silicon frame grabbers are compatible with both NVIDIA’s GPUDirect for Video and AMD’s DirectGMA, and are easily installed with Piql’s choice of a Vieworks TDI line scan camera. Furthermore, when Piql move their vision systems to Linux, as is planned for better compatibility with their other systems, our frame grabbers will support this OS with a simple driver installation.
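The arithmetic behind that quoted rate is easy to check (a minimal sketch using the figures above; the helper function is our own illustration, not Piql's code):

```python
# Back-of-envelope check of Piql's sustained data rate, using the
# figures quoted in the article (20-24 fps, 80MB per frame).

def throughput_gb_per_s(frames_per_second: int, mb_per_frame: int) -> float:
    """Sustained data rate in GB/s, taking 1 GB = 1000 MB."""
    return frames_per_second * mb_per_frame / 1000

# At the lower end of the quoted frame-rate range:
print(throughput_gb_per_s(20, 80))  # prints 1.6 (GB/s), matching the article
```

At the top of the range (24 fps) the same calculation gives roughly 1.9 GB/s, which is why the on-the-fly processing load is pushed to the GPU.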
Join us at UKIVA’s Machine Vision Conference
April 18th, 2018
The Machine Vision Conference and Exhibition returns to Milton Keynes, UK on 16 May 2018, and we’ll be there, amongst the gathered experts, engineers and industry leaders, to share and discuss all the latest developments in the machine vision industry. Active Silicon will be exhibiting and showcasing our USB3 Vision Processing Unit – demonstrating live simultaneous acquisition, processing and display from four HD resolution USB3 Vision cameras. The unit processes the image streams in real-time and provides several data output options, including 3G-SDI.
In addition, we will also be delivering a talk on “High-speed image acquisition with real-time GPU processing”, during which we will look at using frame grabbers in combination with GPUs to meet the rising demand for increased speeds and higher resolutions, plus the options for deep learning. The conference program covers all aspects of the machine vision industry and there is no doubt there’ll be sessions of interest for all attendees – view the agenda here.
We hope you’ll be able to join us in Milton Keynes and we look forward to meeting you and hearing all the latest industry updates. Want to know more about our embedded systems, frame grabbers and imaging solutions? Come along and speak to us or contact us via our website.
Considering the qualities of CoaXPress
April 10th, 2018
After introducing the industry’s first CoaXPress frame grabbers to the market in 2011, Active Silicon is proud to be supporting the development of the latest version of the CoaXPress standard, expected to be launched in the second half of this year. Our CTO, Chris Beynon, chairs the Technical Committee and has been heavily involved in progressing the technology from its infancy in 2008. A recent article in Vision Systems Design outlined the high-speed advantages of the interface, and as it grows in popularity, we wanted to take a closer look at the features.
The current version, v1.1.1, supports data rates of up to 6.25Gbps and allows for multiple cables to be used, increasing the bandwidth available between camera and frame grabber. The high speeds and low latencies, combined with the simplicity, scalability and robustness of the cabling, are particularly appealing to the vision industry, where certain applications have been limited by the slower speeds of Camera Link, high costs of 10 GigE and limited cable length of USB3 Vision. In particular, the adoption of CoaXPress in the inspection and metrology sector has led to faster image processing of more data using higher resolution cameras, and ultimately more efficient production lines. Our single, dual and quad FireBird CXP-6 frame grabbers support an 8-lane Gen2 PCI Express interface running on 32- and 64-bit Windows or Linux, and guarantee zero CPU intervention. Combined with our new software application, ActiveCapture, we now support enhanced access and control of multiple cameras and frame grabbers within a system. Each CoaXPress link offers power up to 13W and device control up to 20Mbps for camera control or triggering – for faster devices, the links can be concatenated to provide multiples of the single coax bandwidth.
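As a rough sketch of what link concatenation buys you (our own back-of-envelope calculation, assuming the standard's 8b/10b line encoding and ignoring protocol overhead):

```python
# Approximate payload bandwidth of aggregated CoaXPress links.
# Assumes 8b/10b line encoding (each data byte travels as 10 line bits)
# and ignores packet/protocol overhead, so real figures are a little lower.

def usable_mb_per_s(raw_gbps: float, links: int) -> float:
    """Approximate payload bandwidth in MB/s across concatenated links."""
    payload_gbps = raw_gbps * 8 / 10        # strip the 8b/10b coding overhead
    return payload_gbps * links * 1000 / 8  # convert Gbps to MB/s

print(usable_mb_per_s(6.25, 1))  # prints 625.0 - a single CXP-6 link
print(usable_mb_per_s(6.25, 4))  # prints 2500.0 - a quad CXP-6 configuration
```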
Innovation made affordable
2018 will see the launch of our new FireBird single CXP-6 board (with more variants to follow), bringing all the benefits of our current boards to a lower-cost frame grabber. As CoaXPress enjoys wider adoption and a growing market share, we’ve been working hard to develop a cost-effective solution and bring machine vision to even more systems. Similarly, camera manufacturers are also working to bring down the cost of cameras, such as Adimec’s new single-link CXP NORITE series.
The future of CoaXPress
CoaXPress v2.0 will introduce even greater speeds across multiple frame grabbers and cameras as CXP-10 and CXP-12 will offer up to 10 Gbps and 12.5 Gbps respectively. Additionally, the revision will introduce forward error correction (FEC) and support for 3D data. The increased speed per link will mean high-end systems could require fewer cameras and frame grabbers, making the technology more affordable to a wider audience. As Andy Wilson mentions in the Vision Systems Design article, developers will also be able to implement the new protocol efficiently using FPGA cores.
The benefit of standardizing machine vision interfaces is the resulting wide choice of cameras and frame grabbers – so users can pick and choose to create an optimal solution for every application. Whether your priority is speed, processing capacity or image resolution, engineers are enjoying unprecedented choice in hardware, software and integrations. CoaXPress is undoubtedly playing an important part in making machine vision accessible to more industry sectors, and driving efficiencies in automated inspection.
You can see details of all the machine vision standards on our dedicated webpage, and more about the CoaXPress standard on our dedicated CoaXPress Resource page. Click here to see our range of CoaXPress frame grabbers, and contact us to discuss your machine vision needs.
Active Silicon supports cutting-edge PCB inspection
March 28th, 2018
We’ve covered plenty of developments in technologies in our news stories over the past few months, all requiring printed circuit boards (PCBs) to encompass more functions, offer more capacity, and be smarter. These advancements mean that the volume of PCBs in production has grown, and the number of solder joints, complexities and potential inconsistencies in the boards has increased. The growth of surface mount technology and reduced board size further complicate manufacture. Faults in PCBs can lead to serious system failures in an end product, and include such diverse problems as insufficient or excess solder, missing components, offset or damaged parts, or incorrect parts being fitted. Automated Optical Inspection (AOI) is a method of visually inspecting PCBs to detect imperfect boards and identify them for removal from the production line and repair. Best employed after the solder step in board production to identify faults early in the manufacturing process, AOI is becoming a vital element of modern-day manufacturing and inspection.
AOI is well suited to most industrial environments, as light sources can be easily controlled and hardware does not need to be especially rugged, allowing users to choose from a wide selection of components to build their systems. Systems can utilize one or more high-definition cameras – of course, the more cameras used, the more detailed the resulting image, as more angles are covered. With components getting smaller, higher image resolutions are necessary even as inspection must speed up. This all results in the need for faster imaging, and we’re seeing an increasing number of manufacturers move to CoaXPress frame grabbers to meet the requirements of high-speed image acquisition. In addition, most modern production lines have the ability to capture 2D and 3D images, which can increase image processing time; high-speed image acquisition is again the solution to keep inspection time at the optimum level. Typically, images are acquired synchronized to the manufacturing line and lighting system – either the lighting system “strobes” to freeze the motion of the part, or the part is momentarily stopped and the camera triggered under continuous lighting.
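The strobed-acquisition approach can be sketched as a simple loop (all of the object and method names below are hypothetical placeholders for illustration, not a real frame grabber API):

```python
# Sketch of strobed image acquisition synchronized to the production line:
# wait for a part, fire a short light pulse to freeze its motion, and
# trigger the camera exposure to coincide with the pulse.

def acquire_strobed(line_sensor, strobe, camera, n_parts: int) -> list:
    """Capture one frame per part as parts pass the inspection station."""
    frames = []
    for _ in range(n_parts):
        line_sensor.wait_for_part()      # part reaches the inspection position
        strobe.fire()                    # brief pulse freezes the motion
        frames.append(camera.trigger())  # exposure synchronized to the pulse
    return frames
```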
Software running the image processing is also key. It must be able to analyze the PCB images at the rate of capture, and can be integrated into the camera(s) or run on a linked PC. Interestingly, some AOI providers, such as IVS and G2Metric, are implementing machine learning into their software, allowing inspection machines to decide for themselves whether a previously unknown defect is critical or not.
Three variations of programming an AOI system prevail. The first one requires the pre-examination of a “golden” board – a perfect example that the system can then compare other boards to – and is known as template matching. Pattern matching refers to comparisons against good and bad examples that the system has already learnt. Statistical pattern matching is a little smarter and uses statistical methodologies to decide which deviations from the norm are acceptable, and which will result in rejection. Images and data about PCBs are programmed into the AOI system to educate it to know what to look for, and the system can be up and running quickly in a vast range of production lines.
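The first of these, template matching against a “golden” board, can be sketched in a few lines (our own minimal illustration of the idea, not any vendor's algorithm):

```python
import numpy as np

# Minimal "golden board" template matching: score a board image against a
# known-good reference using normalized cross-correlation, and reject the
# board if the correlation falls below a chosen threshold.

def ncc(golden: np.ndarray, board: np.ndarray) -> float:
    """Normalized cross-correlation of two same-sized grayscale images."""
    g = golden.astype(float) - golden.mean()
    b = board.astype(float) - board.mean()
    denom = np.sqrt((g * g).sum() * (b * b).sum())
    return float((g * b).sum() / denom) if denom else 0.0

def inspect(golden: np.ndarray, board: np.ndarray, threshold: float = 0.95) -> str:
    """Pass only if the board closely matches the golden reference."""
    return "pass" if ncc(golden, board) >= threshold else "fail"
```

Pattern matching and statistical pattern matching replace the single reference with learned distributions of good and bad examples, but the comparison-and-threshold structure is the same.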
AOI in action
Active Silicon Camera Link frame grabbers and CoaXPress frame grabbers are being used by several global players in industry-leading inspection systems. Our high-performance Firebird CoaXPress frame grabbers offer a fast PCI Express 8-lane Gen2 interface, zero CPU acquisition and can be used with cables of up to 40m at 6.25 Gbps, and over 100m at 3.125 Gbps. Firebird Camera Link frame grabbers support GenICam for Camera Link cameras, and maintain low latency even when used with multiple camera applications.
Click here to view our range of frame grabbers to see which one could enhance your inspection system.
Active Silicon in Korea
March 19th, 2018
28 March will see the doors open on Automation World 2018 in Seoul, Korea, and Active Silicon’s products will be on display. The three-day show encompasses robotics, sensors, pneumatic components and all the key hardware and software associated with industrial automation. The Smart Factory Expo, Korea Vision Show and Aimex are co-located and will focus on machine vision, industrial image processing and IIoT elements. In total, 1200 booths will display the latest technologies and services in this modern industrial sector. Additionally, conference sessions over the course of the event will offer learning and discussion opportunities, and the potential to network with customers and peers will be abundant, as always.
Active Silicon are partnering with our Korean distributor, OnVision, to demo our frame grabbers. We hope you’ll be able to join them on booth 1-Q104 at the Korea Vision show to see how their expertise and our hardware could be used to benefit your machine vision systems.
Machine Vision Roadshow 2018 – Last chance to join us
March 13th, 2018
This week our Machine Vision Roadshow will visit its final destinations in Switzerland and Germany – all the locations can be seen on the website at https://www.mvroadshow.org/. Contact us now to make sure you don’t miss out on the chance to meet with experts and view the latest solutions for your machine vision system.
So far we’ve had over 100 visitors who have all given us excellent feedback on the demos and knowledge-sharing we offered on the truck. You’ll be able to see our leading range of frame grabbers as well as innovative components from Büchner Lichtsysteme, JAI, Kowa Optimed, Laser 2000, Osela, Pyramid Computer and Q.VITEC.
If you’re not able to join us on the truck then contact us to understand how we can help with cost-effective and simple solutions to benefit your business.
Active Silicon AI Series part 6: Artificial Intelligence and embedded vision revolutionizing wearable device capabilities
March 7th, 2018
We’re all becoming familiar with smaller, faster vision processing and marvel at new products being introduced to the consumer market such as phones that can be unlocked by facial recognition, advanced VR and AR in gaming and drones that can spy on our neighbours. These products are now having an impact in industrial environments too, where running Artificial Intelligence software applications on wearable devices benefits capabilities, speed and security.
AI may be brought to devices via apps in the first instance, but as embedded CPUs become smaller, cheaper and more powerful, we will see more adoption of these running on-device AI software – technology being developed by XNOR.ai is one illustration of this.
Taking vision to another dimension
Take, as an example, Microsoft’s HoloLens – a smart headset that comprises a Windows 10 computer, sensors, spatial sound and a high-definition stereoscopic 3D optical display. The 2nd-generation Holographic Processing Unit (HPU) will contain an AI chip, meaning that images can be gathered, stored, processed and interpreted more quickly on the headset itself, without the need for Wi-Fi and without the security risks of sending data to and from the cloud. Use of the headset will bring Mixed Reality Capture (MRC) to 3D design and imagery, bringing scale, proportion and perspective to a whole new level when visualizing plans and models. And, of course, it’s bound to include some unique gaming features!
Similar technology is also being employed to improve the experiences of the visually impaired. In October 2017, Orcam launched its MyEye version 2. This small and lightweight smart camera can be attached to a pair of glasses and allows those with limited or no sight to identify objects and faces, and even to read print by simply pointing at it. The device uses established optical character reading (OCR) technology to read text aloud, and applying AI algorithms to face recognition enables the wearer to tell the difference between men and women, and to identify particular people and items that have been learnt.
On the industrial side, companies such as Picavi and Vuzix are successfully offering smart glasses featuring AR and VR to aid warehouse picking and other services in the field. Picavi have tailored their product solely towards supply chain optimization, boasting savings of 30% in time spent selecting items for packing. Vuzix target a broader scope of industries; their basic-range M100 glasses offer connectivity via Micro USB, Wi-Fi and Bluetooth, but battery life is limited to a maximum of just one hour when the display, camera and high CPU loading options are used continuously.
Wearables are also highly developed in the defence sector. Body-mounted sensors and cameras monitoring soldiers’ heart rates, body temperatures, locations and surroundings are combined with AR and VR applications to allow remote assistance from more experienced soldiers or doctors. Additionally, data from wearable sensors is being manipulated using AI to create even more responsive and realistic training scenarios. Image processing technologies are being developed to better identify targets, including the use of facial recognition to distinguish human targets. Devices with vision and AI capabilities embedded mean that soldiers and security forces can operate in areas beyond the range of Wi-Fi or other connectivity. While the reality of an autonomous military is a way off yet (and, of course, rather alarming), it’s perhaps gratifying to know that the millions of dollars being invested in research in this area will benefit us all in some way, shape or form in another sector.
Fuelling the growth of on-device AI
Increased investment in products for the demanding consumer market is now leading to commercialization of technologies and making them more generally accessible and easier to install. For industry to fully embrace AI and embedded vision devices, the biggest hurdle that must be overcome is compute power; carrying out burdensome processing quickly drains the batteries of small devices. Developments that reduce power consumption while enabling faster processing are making adoption of on-device AI more realistic. For example, Qualcomm’s new SDK for its Zeroth machine intelligence platform makes it simpler for devices with its chips to run deep learning programs without needing to send data to the cloud, thereby saving power. Likewise, Bosch was recently recognised as a CES 2018 Innovation Award honouree for its ultra-low-power MEMS sensors improving battery life for wearables and drones.
Furthermore, enhanced and more widely available time-of-flight cameras are supporting more accurate depth and distance measurements for use in processes such as object and facial recognition.
On-device AI is enabling embedded vision to bring revolutionary applications to wearable technology, opening up radical new opportunities across many sectors. At Active Silicon, we’re investing in our embedded vision systems expertise to ensure we’re able to offer the latest vision systems and interface boards to our customers. These developments offer benefits to security, speed, accuracy and capability, and are playing a huge role in changing the face of machine vision as we know it.
Machine Vision is coming to a location near you
March 1st, 2018
From 6-19 March, leading technology and manufacturing companies in Germany and Switzerland will be able to meet us and our partners as our Roadshow brings machine vision to a location near you.
We’ve teamed up with experts in lighting, cameras, lenses, software and integration to get machine vision on the road and we’ll be visiting workplaces to demonstrate our leading-edge technologies. Click here to see the dates and places we’re stopping at. We can still add extra destinations and can bring our specially converted truck right to your door to discuss your requirements – full details of the Roadshow can be seen at http://www.mvroadshow.org/.
Our partners are Büchner Lichtsysteme, JAI, Kowa Optimed, Laser 2000, Osela, Pyramid Computer and Q.VITEC. Including our innovative frame grabbers, our Roadshow truck will be fully loaded with hardware, software and expertise which offer optimal solutions for your machine vision system. We want to showcase the latest technologies, advancements and applications in the field of industrial vision and image processing, and by bringing the truck to you, we’re presenting you with the opportunity to view components from the entire system all in one place and at a time that’s convenient to you. We’ll be running live demos to show you how all these elements fit together to create outstanding solutions.
Catch up with us while we’re on the road to see how our ideas could enhance your systems. The last Roadshow, in 2015, was fully booked, so don’t miss your chance to put all this expertise in front of your team.
ActiveCapture – for image acquisition, analysis and display
February 27th, 2018
We are excited to announce the launch of ActiveCapture – our latest front-end software for FireBird frame grabbers. The software application provides optimized image acquisition and display allowing the user to access and control all installed cameras and frame grabbers in the vision system in a clear and intuitive manner.
Highlights of the software include: a Feature Browser that displays and controls the GenICam features of both the frame grabber and the camera; comprehensive image display tools such as zoom and a color sampler; a real-time histogram and statistics for the selected image, or for a region within it, with a single click; a 1D Profile button showing the intensity profile of all or part of a line or column; a command line giving direct register access to the camera for low-level debugging; an Events Controller managing all asynchronous events the hardware can generate; and the ability to capture and play back image sequences.
Support for Camera Link and CoaXPress cameras
ActiveCapture is a GenICam GenTL program that can be used with cameras supporting GenICam, such as CoaXPress, and Camera Link cameras using CLProtocol. It is also designed for use with non-GenICam Camera Link cameras.
Whether you need to quickly prototype a hardware solution, evaluate a camera or demonstrate solutions to customers, ActiveCapture works with any camera and provides a simple and straightforward method to configure the system hardware, allowing control of various features of the image acquisition such as triggering and image resolution. Full specifications for ActiveCapture and the complete range of Active Silicon’s compatible frame grabbers are available on this website. Contact us to see how ActiveCapture could benefit your image processing system.
Experience state-of-the-art embedded PC-based vision systems
February 20th, 2018
Look out for our partner, ADL Embedded Solutions, demonstrating our PC/104 boards at Embedded World later this month. Designed for a range of applications, our Phoenix PCI/104e Camera Link frame grabber and FireBird Quad USB 3.0 Host Controller will be included in ADL’s embedded vision presentation on their booth in Nuremberg from 27th February. Our frame grabber comprises intelligent scatter-gather hardware which reads its instructions directly from memory without any host CPU intervention, offering faster processing for data acquired from a variety of Camera Link sources, including digital frame capture and line scan cameras. The host controller uniquely configures each pair of its four USB 3.0 ports to use a single lane Gen2 PCI Express interface, thereby eliminating inconsistencies resulting from shared bandwidth.
ADL targets applications using small form factors such as PC/104, 3.5″ boards, and custom designs, helping its customers to address challenges such as speed, size, extended temperature, power consumption, ruggedness and expandability. Combining Active Silicon’s first-class hardware with ADL’s leading design and implementation can be a winning solution for embedded vision systems. We hope you’ll be able to visit the show to find out more, and you can view our full product range here and contact us to discuss your imaging requirements.
Embedded Vision gains prominence at Embedded World 2018
February 14th, 2018
The industry’s most exciting international show for the embedded market will open its doors on 27 February in Germany. Embedded World saw over 1,000 exhibitors and 30,000 visitors meet in Nuremberg last year, and this year, with focus on the theme “Embedded goes autonomous”, the show promises to be even busier. As vision systems become more of a focal point in the sector, “Embedded Vision” has developed into a new, distinct element within the conference program, reflecting its huge growth potential.
Our partner, EKF Elektronik, will be exhibiting at the event and will be displaying our FireBird Camera Link 3U cPCI Serial frame grabber. Based on CompactPCI® technology, EKF’s product range includes robust cPCI Serial industrial computers designed for applications in harsh environments and covering extended temperature ranges. We’re delighted that our products are assisting in their success – you can read more about the solution in EKF’s Industrial Vision fact sheet. Our FireBird acquisition boards are designed for ultimate performance and reliability, providing the very fastest image acquisition without any CPU intervention using the latest FPGA families, DDR3 memory and a fast Gen2 PCI Express interface.
IPO will raise capital for STEMMER IMAGING as it plans expansion and acquisition
January 29th, 2018
We reported the sale of STEMMER IMAGING back in June, and it now looks as if the management team, which acquired a share of almost 25% in the company as part of the transaction, stand to gain as STEMMER has announced plans to make an Initial Public Offering (IPO) on the Frankfurt Stock Exchange during the first half of this year.
Parent company SI Holding GmbH (a PRIMEPULSE company which also owns AL-KO Group) currently holds all of STEMMER’s shares and will retain a minimum of 51% after the IPO capital increase is carried out and the existing shares are sold in a secondary offering. Proceeds from the placement of new shares in the IPO are expected to be around €50M.
STEMMER IMAGING has been run as a self-contained division of the AL-KO Group since its sale in 2017. It has been a major player in the European machine vision industry for over twenty years, and this move is expected to raise capital for further growth and acquisition. Watch out for geographical expansion and procurement of non-industrial machine vision technologies as STEMMER embraces the growth seen in digital machine vision, and the industry in general.
What’s trending in machine vision in 2018 and how we’re driving progress
January 23rd, 2018
2017 saw continued growth in embedded systems, and we believe this trend will continue apace in 2018. Driven by requirements for higher speeds and higher resolution, we expect to see increased implementation of consumer and off-the-shelf embedded technology across multiple sectors, including manufacturing, inspection and medical imaging. Multiprocessor system-on-chip (MPSoC) development will mature as we have seen from Xilinx with their Zynq family of SoCs and APSoCs, and we’re investing resource in developing and applying this technology.
During 2018, processing power offered by CPUs and GPUs will undoubtedly increase, allowing AI and deep learning algorithms to be implemented in more and more applications. NVIDIA’s Jetson platform is one example, making CUDA accessible to computer vision and robotics applications. We regularly write about the development of AI and how it’s affecting the machine vision industry in our AI Series of blogs.
Embracing progress – Active Silicon’s strategic direction
We’re excited about 2018; we’ve been busy through 2017 preparing several new products to bring to market. While celebrating our 30th anniversary, our overall objective remains to continue the organic growth we’ve seen over recent years. We’re going to be growing our team and will be looking for talented engineers to join our R&D area. Our profits will be reinvested into the business as we expand our embedded vision expertise.
We’re already proactive with our medical embedded products, and have passed customer audits to ISO 13485. However, we’re working towards formal certification for this standard and expect to further expand and enhance our offering in this sector.
We are seeing a clear shift towards multichannel 4K video, particularly in the medical market, and we’re working towards our first 4K video product. Additionally, growth in USB3 has changed the requirements for some of the low-end demand in the frame grabber market. In response to this, our latest embedded vision processor includes four USB ports as well as maintaining compatibility with other standards.
In this highly competitive marketplace, our first-class customer support continues to help us stand out from our competitors. Our new ActiveCapture is a front-end, out-of-the-box software application which provides enhanced features and usability for our customers, including those with non-GenICam compliant cameras.
We’ve invested in strengthening our CoaXPress frame grabber range to meet the increasing data transfer rates required in machine vision, and will be launching our latest FireBird single, dual and quad CXP-6 boards, designed to address both the lower cost volume market and high-end requirements. These new boards offer faster processing at less expense while maintaining all the benefits of our existing FireBird series. Alongside these, we’ll be working towards CoaXPress v2.0 and have CXP-10 and -12 boards under development.
Of course, one important forum to hear opinions about future trends is the VISION show in Stuttgart, and we hope to see many of our customers, partners and suppliers there to discuss the industry’s challenges and opportunities. Our 30th year is set to be our busiest yet!
News from the Operations team
January 11th, 2018
We are very pleased to welcome Richa to Active Silicon. Richa joins us in the Operations team assisting our Operations Manager, Simone, and ensuring, amongst other things, that all our products are despatched and arrive as and when ordered.
Formerly a Service Delivery Manager, Richa comes from a strong customer service background and brings experience from Telecoms and Accounting Software sectors. She is proficient at supporting both internal and external stakeholders, having been involved with customer requests, inter-departmental liaison, help guides and training.
Providing a reliable and dependable service to our customers is, of course, of paramount importance. We are delighted that Richa will further strengthen our supply chain and customer support operation to help us continue leading the way in pre- and after-sales service.
Active Silicon’s AI Series – part 5: AI and computer vision are bringing Industry 4.0 to a smart factory near youJanuary 9th, 2018
Our recent blog Industry 4.0: what does it mean for machine vision? covered the impact that the 4th Industrial Revolution is having on our sector. A major factor in the development of this so-called revolution is the adoption of Artificial Intelligence (AI) software allowing machines to learn and process information, in some cases more effectively than humans. So, what influence is AI having on modern developments? Three principal areas form the focus of current discussions: automated quality inspection, predictive maintenance and the role of robots.
Several organizations claim to offer the first software suites bringing deep learning to machine vision – for example, ViDi from Cognex in the industrial imaging sector and ZeroDefectMiner from Qualicent in the automotive, medical and aerospace sectors. They are being closely followed by companies adding machine learning capabilities to their portfolios, such as Cyth Systems’ Neural Vision and Sualab’s Vision Inspection AI solutions for textiles, leather and printing inspection. But how do they work?
Instead of the traditional method of inspection, where vision systems use cross-correlation or pattern matching to check for anomalous shapes, fill levels, irregular sizes and foreign bodies, AI-inspired algorithms and Artificial Neural Network systems can now be used to teach computers to evaluate the quality of a product in the same way that a person can – for example, to look for an unacceptable level of imperfections on the skin of a fruit, or too many flaws in a textile roll. These processes need no longer be limited to “pass” or “fail” results, but can allow finer classification of products, and even select defective items for correction where deemed possible. As you would expect, these computers can work faster than a human workforce, and without rest, allowing manufacturers to increase their yield massively. It is even possible, as a result, for production processes to be automatically reviewed and enhanced to prevent recurring faults.
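Such graded results can be sketched in a few lines of code – the class names, scores and thresholds below are hypothetical, purely to illustrate routing beyond a binary pass/fail:

```python
# Illustrative sketch (hypothetical thresholds): routing inspected items into
# graded quality classes rather than a binary pass/fail result.

def classify_item(defect_score: float) -> str:
    """Map a model's defect score (0 = flawless, 1 = severely flawed)
    to a graded quality class."""
    if defect_score < 0.1:
        return "pass"
    if defect_score < 0.4:
        return "rework"   # minor imperfections: route for correction
    return "reject"

def route_batch(scores):
    """Group a batch of inspection scores by their assigned class."""
    routed = {"pass": [], "rework": [], "reject": []}
    for idx, score in enumerate(scores):
        routed[classify_item(score)].append(idx)
    return routed

batch = [0.02, 0.35, 0.80, 0.05, 0.12]
print(route_batch(batch))
# -> {'pass': [0, 3], 'rework': [1, 4], 'reject': [2]}
```

In a real system the defect score would come from a trained network; the point is that the downstream logic can route items for correction, not just discard them.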
The game changer here is not the level of AI software being made available, but the level of expertise required to implement the new software. Cyth claim that Neural Vision can be implemented by a programmer with no machine vision experience whatsoever, and MVTec’s well established HALCON library promises a new release which will allow users to train CNNs themselves, thereby potentially increasing the number of applications incorporating AI into their inspection systems.
The practice of monitoring and repairing faults and wear in machinery before they cause a breakdown or stoppage of the production line is being made faster and more efficient using machine vision and AI software. Data and images of machinery, robotics, belts and cables captured by sensors and cameras can be recorded and processed using algorithms programmed to trigger an alert when intervention is necessary. Developments in big data and cloud computing mean that a vast amount of data can now be handled over an enormous geographical range, allowing production managers based on one continent to repair and replace production line equipment on another. Additionally, new technologies allowing more data processing to be carried out close to where the data is captured (at the edge), rather than sending it all to the cloud, mean that even more information can be handled and processed. Such programs also enable a more efficient servicing schedule for equipment, ensuring that maintenance is carried out as and when necessary, rather than simply because a machine might be due its annual check-up!
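A minimal sketch of such an alert trigger might look like the following – the window size, threshold and readings are hypothetical illustrations, not values from any real system:

```python
# Minimal sketch of a predictive-maintenance trigger (hypothetical values):
# a rolling mean of wear/vibration readings raises an alert before failure.

from collections import deque

class WearMonitor:
    def __init__(self, window: int = 5, alert_level: float = 0.7):
        self.readings = deque(maxlen=window)   # keep only the last N readings
        self.alert_level = alert_level

    def update(self, reading: float) -> bool:
        """Add a sensor reading; return True when the rolling mean
        indicates maintenance is due."""
        self.readings.append(reading)
        mean = sum(self.readings) / len(self.readings)
        return mean >= self.alert_level

monitor = WearMonitor(window=3, alert_level=0.7)
for value in [0.2, 0.3, 0.5, 0.8, 0.9]:
    if monitor.update(value):
        print(f"maintenance alert at reading {value}")
```

The rolling mean smooths out one-off spikes, so the alert fires on a sustained trend rather than a single noisy reading.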
So, AI is changing factory floors at an unprecedented rate, bringing the Industry 4.0 revolution to even more production lines and inspection facilities. But at what cost to us humans?
The role of robots
We’ve all read about robots becoming smarter and more independent, even to the extent that world domination is feared by some. Without doubt, robots powered by AI have an important role to play in the Industry 4.0 revolution. Robots have been used in industrial settings for decades, carrying out simple, repetitive tasks on a production line, or moving items around a warehouse. Historically, they have had limited intelligence and have had to work separately from humans for fear of injury. That’s all changing. As an example, Rethink Robotics has spent the last five years developing collaborative robots (cobots) for use in industrial settings. In 2017, Alicona released Tool Cobot – a fully portable industrial collaborative robot which brings 3D optical metrology to the next level. Using machine vision and smart technology, these robots operate alongside workforces, keeping production efficient and their human colleagues safe. Another example is Ocado’s Smart Platform robots, which bring goods in the warehouse to the human picker instead of the picker spending time walking to the goods. Now AI promises to make robots even more autonomous and cerebral as they begin to think and act like humans. This, of course, is great for production processes and bottom-line reporting, but what’s the impact on real people?
Two opposing camps put forward arguments here – the first maintains that robots will replace people in many jobs, from picking in a warehouse to loading and driving a delivery truck, and unemployment will inevitably rise. Add to that the perceived threat of AI enabling machines to become too clever, and it might not just be our labor pool that robots are disrupting but our everyday lives and security too.
The other side states that robots will take only the most monotonous jobs, encouraging humans to train for more skilled roles, thereby creating an enhanced working environment. Furthermore, these advocates affirm that, as robots in the Far East cost about the same to implement and run as robots in the West, we could see a shift to local production instead of the east-bound outsourcing we’ve seen in past years, electronics manufacturing being a prime example. The benefits of using a low-cost Asian workforce would be outweighed by the savings created by automating and optimizing production lines in Europe and the US. Additionally, local production shortens shipping times, helping to meet consumers’ increasing demands for priority delivery at low cost.
The key word when looking at the human versus robot argument is collaboration: collaboration between man and machine; collaboration between internal company departments with different objectives; collaboration between organizations with different expertise. If these areas cooperate successfully, benefits to all can be maximized and threats minimized.
AI and machine vision are bringing Industry 4.0 to a factory near you
Whatever the eventual outcome, combining machine vision, AI and Industry 4.0 is promising to change the engineering world as we know it. Industry has a lot to learn from the smartphone manufacturers and consumer tech organizations who are developing miniature and embedded systems at a rate of knots; opportunities to implement advanced and intelligent systems to optimize production and inspection are extensive and exciting.
Season’s greetings from Active SiliconDecember 20th, 2017
As the Christmas spirit spreads through our office and production lines, we’d like to take this opportunity to thank all our customers, suppliers and partners for their support through 2017, and wish you all an enjoyable holiday and successful 2018.
This year has brought innovative developments to our embedded system range, enhanced GPU processing compatibility to our frame grabbers and eight new faces to the Active Silicon team – it’s been a busy 12 months!
Watch this space in the New Year as we have a couple of imminent new product launches which we’re really excited about. Happy holidays everyone!
3D time-of-flight camera specialist acquired by American corporation Rockwell AutomationDecember 18th, 2017
Rockwell Automation, Inc., headquartered in Milwaukee, Wisconsin, announced the acquisition of the Scottish company Odos Imaging earlier this month.
Rockwell Automation, a major international player in the industrial automation sector, employs around 22,000 people world-wide. According to Rockwell’s vice-president Lee Lane, the acquisition allows Rockwell to build further on their portfolio of smart sensing and safety products and brings 3D time-of-flight sensor technology to industrial applications.
Odos Imaging, based in Edinburgh, specialises in the development and manufacture of advanced cameras for science and industry with core technologies in 3D time-of-flight. Ritchie Logan, Strategic Business Development Manager, commented that Odos Imaging is delighted with the acquisition and the commitment from Rockwell Automation to their products and technology. Odos Imaging will continue to market their current portfolio of 3D cameras.
Active Silicon appoints Quality and Compliance ManagerDecember 5th, 2017
We are delighted to welcome a new member to the Active Silicon team as we pursue a path of continuous improvement throughout all areas of the business.
Keith joins us with more than 15 years’ experience of quality management gained during a solid career in mechanical engineering, and will oversee all customer and supplier quality issues, working closely with our customer facing staff, Supply Chain Manager, Inspection and Production teams. He is tasked both with ensuring our current standards are impeccably met, including those associated with our ISO 9001 accreditation, and with maintaining our momentum towards ISO 13485 compliance for Medical Embedded Systems. Keith will become the focal point for all our compliance responsibilities and will enhance procedures for areas such as environmental management, information security, and health and safety matters.
As a leading supplier of imaging products for regulated industries such as the medical and life science sectors, we place quality and compliance at the heart of everything we do. You can view our certifications, accreditations and policy statements here, and contact us for more information on our products and services.
Active Silicon supports real-time GPU processingNovember 28th, 2017
APIs for direct GPU memory access enable many filter, convolution and matrix-vector operations to be performed on data from a frame grabber without it first passing through system buffers or the CPU. This makes data acquisition very fast, with very low latency, as the GPU memory is made directly accessible to the frame grabber. Modern GPUs are extremely efficient at processing images and graphics, and their parallel structure makes them particularly well suited to applications where large blocks of data need to be processed in parallel.
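As a rough illustration of the latency argument, the sketch below sums the cost of each copy an 8-bit HD frame makes on its way to the GPU; the bandwidth and CPU-overhead figures are illustrative assumptions, not measurements of any real hardware:

```python
# Back-of-envelope sketch (illustrative figures, not measured values) of why
# writing frames straight into GPU memory cuts latency: the staged path pays
# for two copies plus CPU scheduling, the direct path for a single DMA.

FRAME_BYTES = 1920 * 1080 * 1          # one 8-bit mono HD frame
PCIE_GBPS = 8.0                        # assumed usable PCIe bandwidth, GB/s

def copy_time_ms(nbytes: int, gbps: float) -> float:
    """Time to move nbytes at the given bandwidth, in milliseconds."""
    return nbytes / (gbps * 1e9) * 1e3

# staged: grabber -> host buffer -> GPU, plus assumed 0.5 ms CPU overhead
staged = 2 * copy_time_ms(FRAME_BYTES, PCIE_GBPS) + 0.5
# direct: grabber DMAs straight into GPU memory
direct = copy_time_ms(FRAME_BYTES, PCIE_GBPS)

print(f"staged path ~ {staged:.3f} ms, direct-to-GPU ~ {direct:.3f} ms")
```

Even with generous assumptions, the direct path eliminates a whole copy and the CPU's involvement, which is where the low-latency benefit comes from.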
We’ve published details on our website explaining the processes involved on both Windows and Linux, and the setup requirements for each, including a video demo of GPU processing, to help you know what’s needed and how best to get started. Have a look at our GPU solutions page and contact us with your image processing queries.
Active Silicon’s AI Series – part 4: Cloud-based FPGAs offer accelerated machine learningNovember 21st, 2017
We have established in our AI series that FPGAs are one of the key technologies in the development of AI. While DNN training may still be best carried out on a GPU, FPGAs are offering unprecedented opportunities in allowing engineers to customize and revise their systems. Previous obstacles have included a limited number of developers with the knowledge and experience necessary to make FPGAs widely appealing, but new developments placing FPGAs in the cloud, and making them available to more common languages, will inevitably encourage wide adoption.
After much anticipation, Amazon has now launched its Elastic Compute Cloud (EC2) F1 instances, first previewed last year. Using Xilinx chips, these F1 instances offer customizable cloud-based FPGAs chargeable by the hour; with no long-term commitments or up-front payments, the chips can be reprogrammed multiple times without additional cost. Amazon’s AWS cloud services support all major frameworks, attracting AI development from multiple sectors.
Similarly, Microsoft is in the process of deploying FPGAs across its Azure cloud services, using Intel Stratix 10 FPGAs. As we covered in August, Project Brainwave aims to accelerate the development of DNNs and offer more advanced machine learning to the masses. A recent announcement from Intel, that its FPGAs will be powering the Alibaba cloud, further endorses the growth of FPGAs “as a service”, suggesting it could be embraced by a wide audience.
Interestingly, Google is still providing its cloud-based AI developments via ASICs rather than FPGAs. Its Tensor Processing Units (TPUs) are custom chips designed to accelerate TensorFlow workloads alongside GPUs and CPUs; Google maintains that “Our neural net-based ML service has better training performance and increased accuracy compared to other large scale deep learning systems”.
Of course, engineers have more decisions to make than just where their technology is hosted. In addition to its FPGAs, Intel has opened doors to an array of deep learning options with products including its Loihi neuromorphic chip, Movidius Neural Compute Stick and Movidius Myriad X Vision Processing Unit (VPU) SoC. The stick brings AI into the realms of a “plug and play” add-on for end users, truly enabling access to advanced CNNs via the Caffe framework. Loihi, Intel’s first self-learning chip, professes to work like the human brain and get smarter and faster over time. Myriad X is specifically designed for combining and enhancing imaging, visual processing and deep learning. Up to 8 HD-resolution RGB cameras can be connected to the chip, and accelerators allow processing of up to 700 million pixels per second, all while meeting today’s low-power expectations.
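The headline throughput figure can be sanity-checked with simple arithmetic – 700 million pixels per second shared across eight 1080p streams implies roughly 42 frames per second per camera:

```python
# Sanity check of the quoted figures: 700 Mpixel/s shared across eight
# 1920x1080 cameras implies a sustainable per-camera frame rate.

PIXELS_PER_SECOND = 700e6
CAMERAS = 8
HD_PIXELS = 1920 * 1080   # ~2.07 Mpixels per frame

fps_per_camera = PIXELS_PER_SECOND / (CAMERAS * HD_PIXELS)
print(f"~{fps_per_camera:.0f} fps per camera")   # roughly 42 fps each
```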
Undeniably, the focus on hardware and software development specifically for augmenting Artificial Intelligence is intensifying. This means that it’s not just big-budget autonomous vehicle manufacturers and research-heavy medical applications that can benefit from AI. Machine vision with integrated AI will be able to offer levels of inspection and detection that were previously limited to human intervention. While resources to program the hardware are still costly, and systems still require a high level of training and integration, adoption of accelerated machine learning is likely to progress relatively slowly at first; but as new technologies emerge and become mainstream, wider implementation will become commonplace.
As the world of computer imaging progresses apace, we’ll keep you informed of AI developments and ensure our products are compatible with all the latest advancements, whether they’re in the cloud, on a chip, or coming to an inspection line near you.
Industry 4.0: what does it mean for machine vision?November 15th, 2017
How did we get here?
At a basic level, Industrial Revolutions change the way we make things. The 1st Industrial Revolution transformed 18th and 19th century agricultural and rural societies into an urban workforce, mastering iron, textiles and mechanized manufacturing for the first time. Further developments triggered the 2nd Revolution between 1840 and 1914, when oil and electricity became widespread, and newly available power supplies allowed mass production, leading to such inventions as the telephone, light bulb and internal combustion engine. The 3rd Industrial Revolution, commencing in the 1980s, is also referred to as the Digital Revolution, and reflects the move from analog electronic and mechanical machines to the digital devices with which we are familiar today; perhaps the most influential development has been the invention of the Internet.
The 4th Industrial Revolution is emerging as we write. Commonly known as the Industrial Internet of Things (IIoT) or Industry 4.0 (originating from a project instigated by a German government working group), the technological advancements characterizing this movement center around astonishing developments in Artificial Intelligence, advanced robotics, biotechnology, IoT and cutting-edge automation.
The progression of Industry 4.0
Industry 4.0 is moving us all towards increasingly automated and enhanced productivity – anticipated benefits include lower costs, faster processes, increased quality control and better use of resources. Most companies in the manufacturing sector (79.9%) expect to see positive effects resulting from digital transformation. Technologies teaching computers to think for themselves are becoming commonplace in the manufacturing world as Deep Learning software is being more widely mastered and adopted. Along with a maturing IoT, Big Data, Quantum computing and other factors, Industry 4.0 is set to change the way we make things for ever. “Smart factories” are in the strategic plan of many businesses – the aspiration covers, amongst other things, automated stock control, where shelves will replenish themselves, and machinery being able to identify faults and deterioration and fix them before they become a problem, minimizing, and even eradicating, down-time. While some labor unions fear job losses, advocates of Industry 4.0 assure us that staff in the future will be trained to work alongside automated machinery, relieving them of the tedious tasks and replacing them with more intellectual ones. We’ll look at this aspect in greater detail in a future blog.
What does it mean for us?
Image processing is playing a key role in Industry 4.0 – capturing image data, processing this information, and instructing other devices is a fundamental part of creating and operating smart factories. IOT and cloud computing are connecting manufacturing plants and processes like never before. Machines that can “see” are now being trained to use images to make decisions, speeding up and refining identification and inspection processes. Key recent developments mean that emerging technologies are allowing collected data to be processed and actioned with little or no human intervention, making processes vastly quicker and more accurate. Digital manufacturing encompasses predictive maintenance, condition monitoring and augmented reality, all of which are enhanced by computer vision. In addition to machine vision on a large scale, such developments as wearable devices utilizing embedded vision components are bringing imaging to an even greater range of industrial roles – as demonstrated by such products as Picavi with their “pick-by-vision” glasses.
As systems become smaller, more affordable and more reliable, we will inevitably see an increase in adoption of machine vision in all sorts of industry environments, and engineers and designers are working hard keeping up with requirements. Those involved in the machine vision industry must recognize this growing phenomenon and work to enhance our systems accordingly to enable the smooth flow of data within and between operational sites. We can expect to see more cross-company collaboration and consolidation, such as that earlier this year of ViDi Systems being acquired by Cognex, resulting in the “first ready-to-use deep learning-based software dedicated to industrial image analysis”.
At Active Silicon, we ensure all our products are compatible with the latest available software and hardware driving machine vision in the Industry 4.0 revolution. Contact us to see how our products could enhance your systems and help you stay at the leading edge of your sector.
 Impact of the Fourth Industrial Revolution on Supply Chains, World Economic Forum in collaboration with BVL International (2017, p. 5)
Active Silicon co-operates in exoplanet explorationNovember 8th, 2017
NASA regularly launches scientific balloons high up into the Earth’s atmosphere to aid research on such fundamentals as the origins of our universe, cosmic rays, black holes and other planetary and space investigations. Launched from a variety of sites from Antarctica to Sweden and Hawaii to Australia, flights generally last from a few hours to a few days; earlier this year saw the launch of a mission designed to run for more than 100 days using a Super-Pressure balloon. The balloons are generally over a million cubic meters in volume and can carry payloads up to the equivalent of three small cars.
Of course, capturing images and using this data to further scientific understanding is a primary function of many of these balloons. Active Silicon is proud to report that two of our Phoenix PC/104-plus frame grabbers will be launched using one of these balloons as part of the exoplanet research project known as PICTURE-C (Planetary Imaging Concept Testbed Using a Recoverable Experiment – Coronagraph), being undertaken by the University of Massachusetts Lowell’s Center for Space Science and Technology. The mission will use a 60cm off-axis unobscured telescope and a high-contrast coronagraph launched in a high-altitude balloon floating approximately 40km above the Earth’s surface. The aim is to directly image debris disks and exozodiacal dust around neighbouring stars in order to explore Earth-like planets orbiting Sun-like stars. This is the latest experiment in the PICTURE series, with previous sounding rockets having been successfully launched in 2015 and 2011. PICTURE-C will entail two flights, one scheduled for September 2018 and one in September 2019.
Our Phoenix frame grabbers will be used in the acquisition system of a low-order wavefront sensor, in a wavefront corrector which will modify time-varying aberrations such as pointing jitter. High-speed, low‑latency acquisition is essential to the success of the experiment – researchers are aiming for a framerate of 200Hz with a mean acquisition latency of less than 180μs – and our support team are actively helping the researchers prepare the critical equipment. The balloon’s payload used in PICTURE-C will include the Wallops Arc Second Pointer (WASP) gondola which was successfully tested in previous missions – this flexible system points scientific instruments at targets with arc-second accuracy and stability. Results from the experiment are due to be presented in January 2019 by the research team.
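Those acquisition targets can be put in context with a simple timing budget – at 200 Hz each frame period is 5 ms, so a mean latency under 180 μs consumes under 4% of each control cycle:

```python
# Timing budget for the quoted acquisition targets: 200 Hz frame rate and
# a mean acquisition latency of less than 180 microseconds.

FRAME_RATE_HZ = 200
LATENCY_BUDGET_S = 180e-6

frame_period_s = 1.0 / FRAME_RATE_HZ          # time between frames
latency_fraction = LATENCY_BUDGET_S / frame_period_s

print(f"frame period: {frame_period_s * 1e3:.1f} ms")
print(f"latency budget: {latency_fraction:.1%} of the period")
```

Keeping acquisition to a small fraction of the period leaves the bulk of each cycle for the wavefront computation and actuator update.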
Active Silicon products are well suited to space exploration projects due to their robustness and high reliability. We look forward to being part of the next exciting mission!
A host of opportunities presented at 1st Embedded VISION Europe ConferenceNovember 2nd, 2017
15 expert presentations, over 190 registered participants and dozens of exhibitors – the first Embedded VISION Europe Conference on October 12th and 13th, organized by EMVA, was very well received by the industry. Leading players including Intel, AMD, and Qualcomm gave presentations on the incredible image processing capabilities of their chips. From car and people tracking to face recognition – tasks that not long ago required large supercomputers can now be performed on compact, light-weight and low-power specialist processors.
When it comes to unsupervised machine learning, even of complex objects, different platforms are available showing impressive proficiencies that are often close to, or even beyond, human performance. Some are capable of performing both deep learning and recognition tasks on the same powerful GPU-like hardware. More compact chips with less power consumption rely on external deep learning of deep neural networks, e.g. training in the cloud, and then execute recognition tasks based on such learned models.
On the programming side of embedded vision systems, the need for lengthy development cycles has been eliminated in some cases; Mathworks showed a framework to develop advanced imaging algorithms in MATLAB, which can generate code for embedded processing platforms.
In addition to these insights into state-of-the-art embedded vision, the two half-days of the conference plus an evening reception offered lots of opportunities for networking among leading industry players. We hope for a further issue of this event format and are looking forward to attending again.
Active Silicon is your partner in Embedded Vision Systems for demanding applications in industrial manufacturing, medical, traffic, security, entertainment, and many other fields. We leverage the latest technologies that fulfill your requirements of reliability, performance, ruggedness and long-term availability. Please contact us to discuss your strategy or current project in Embedded Vision.
Synopsis of this month’s IVSM: what’s new in machine vision standards?October 31st, 2017
Last week Active Silicon participated in the International Vision Standards Meeting in Hiroshima, Japan, along with other industry professionals involved in driving machine vision. The event took place on 16th-20th October and covered discussions on all the current machine vision standards. So, what’s new?
GenICam: The GenSP proposal for describing images has moved much closer to a standard and has a new name – GenDC (Generic Data Container). The process may also result in a new version 2.0 of the GenTL standard that can supply GenDC data to an application – watch this space.
CoaXPress: The group, chaired by our CTO, Chris Beynon, worked on the limited number of outstanding issues needing resolution before version 2.0 can be released, with plans to go to ballot around the new year. V2.0 adds faster connection speeds (up to 12.5 Gbps per cable) and various enhancements to the protocol. Several new cameras were successfully tested in the Plugfest. Keep up with the latest developments here.
Camera Link: Progress made in Hiroshima means a ballot on version 2.1 will take place soon; this version resolves a number of issues with v2.0 and adds definitions for many new data formats. Part of v2.1 will be the requirement for a mandatory plugfest; Active Silicon successfully operated our frame grabbers with eight cameras in the first ever Camera Link plugfest, which took place during the meeting. Additionally, Camera Link v2.1 will support FPGAs being used to implement the Camera Link interface – particularly relevant progress when considering many recently released products and those currently in development. You can read more about the role of FPGAs in computer vision in our AI series of news stories – Part 1, Part 2 and Part 3.
Attendees were also able to enjoy the hospitality of the Japanese hosts, including indulging in local cuisine, visiting the iconic A-Bomb Dome and touring the thought-provoking Hiroshima Peace Memorial Museum.
Interested parties will meet again in Spring next year, in Frankfurt, to further the work and ensure our industry keeps up with the ever changing opportunities in machine vision. Want to know how our products could help you stay competitive? Visit our website or contact us to see how we can help you break new ground with your machine vision technology.
Great insight on the great shrinkOctober 25th, 2017
The Week magazine recently published an interesting article following the history of Moore’s Law, and the phasing out of this self-fulfilling prophecy. When the driving force of the computing industry – that the number of components in an integrated circuit will double every two years – comes to an end, where will computing head? The article outlines likely successors, among them better programming, the introduction of 3D chips and quantum computing. We think it gives great insight into the direction computing is moving as demand grows for ever smaller and more powerful devices. Read the full article here.
Without doubt, technology is moving off the desktop and towards the cloud, edge or other remote center, and our engineers are moving with the times to enable our embedded systems and frame grabbers to keep supporting the applications of the future. Contact us to see how our solutions could advance your applications.
Reproduced with kind permission from The Week, Issue 1138, 19th August 2017
Active Silicon’s AI series – part 3: Less programming and faster vision solutions with CNNsOctober 18th, 2017
It is one of the great goals of computer vision to enable machines to see and understand images like humans. In many regards, vision systems outperform humans already, as long as the task can be bounded by a limited set of rules and conditions – such as geometric measures and tolerances of manufactured parts, or color and evenness of a surface. However, the necessary algorithms require a high level of effort and expertise to be programmed, and lack the capability to abstract beyond a certain level of variance in shape and/or texture. This is where Artificial Intelligence can unleash an array of great opportunities.
Nowadays, cameras can capture images at much higher frame rates than humans and without being subject to fading concentration. In geometric measurement and 3D analysis, vision systems are already more accurate and much faster than humans. Yet, before recent breakthroughs in image processing research, most machine vision applications were solved by extracting hundreds to thousands of filter and wavelet features from the pixel matrix, selecting those features with the highest information content and providing them to a manually configured or statistically trained classifier. This was enormously time consuming and required a high level of expertise.
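As a toy illustration of that classic pipeline – hand-crafted features fed to a manually configured classifier – consider the sketch below; the feature choices, images and thresholds are invented for illustration only:

```python
# Toy version of the classic pipeline: hand-crafted features extracted from
# the pixel matrix, fed to a manually configured classifier. Feature choices
# and thresholds here are illustrative only.

def extract_features(image):
    """image: 2D list of grayscale values in [0, 255]."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    # crude edge measure: sum of absolute horizontal gradients
    edges = sum(abs(row[i + 1] - row[i]) for row in image
                for i in range(len(row) - 1))
    return {"mean": mean, "variance": variance, "edges": edges}

def classify(features):
    """Hand-tuned rule standing in for a manually configured classifier."""
    return "defect" if features["edges"] > 400 else "ok"

smooth = [[100, 102, 101], [99, 100, 101], [100, 100, 99]]       # even surface
scratched = [[100, 240, 100], [100, 245, 100], [100, 250, 100]]  # bright streak
print(classify(extract_features(smooth)))      # ok
print(classify(extract_features(scratched)))   # defect
```

Every feature and threshold here had to be chosen by hand – multiplied across hundreds of features and real image variation, this is exactly the effort the next section describes CNNs removing.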
Thus, two factors were slowing down the global adoption of machine vision techniques: Firstly, the lack of sufficiently fast, robust or computationally affordable algorithms for image feature extraction and classification. Secondly, and most importantly, the lack of computer vision developers who were capable of implementing solutions to new as well as known machine vision applications with existing algorithms.
How CNNs support vision solutions
Both obstacles are largely resolved by machine learning technologies. With the invention of so-called Convolutional Neural Networks (CNNs), elaborate manual feature extraction is no longer required. Instead, the artificial neural network autonomously learns how to analyze images correctly to achieve the desired results. CNNs are a specialized architecture of Deep Neural Network, and the approach is referred to as Deep Learning.
The mathematical model behind these multi-layered artificial neural networks is inspired by the human brain. Highly simplified, these neural networks perform their image analysis and make their classification decisions as follows: greyscale or RGB pixel values are fed into receptor neurons on the first network layer. These are connected to multiple neurons on a second layer, which again are connected to neurons on several following layers and finally to a layer of a few output neurons. In each of these millions of connections between two neurons, the signal is either amplified or damped by a weighting factor. In the learning phase of such a network, the weighting factor in each connection is optimized to maximize the recognition rate, while the number of layers and the number of neurons per layer are architectural choices tuned by the designer.
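Highly simplified again, that forward pass can be written in a few lines; the weights and biases below are arbitrary placeholders, not trained values:

```python
# Minimal sketch of the forward pass described above: pixel values enter the
# input layer, each connection applies a weighting factor, and a simple
# non-linearity damps or passes the summed signal. Weights are placeholders.

def relu(x):
    """Simple activation: damp negative signals, pass positive ones."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron + activation."""
    return [relu(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# 3 input "pixels" -> 2 hidden neurons -> 1 output neuron
pixels = [0.5, 0.2, 0.9]
hidden = layer(pixels,
               weights=[[0.4, -0.6, 0.2], [0.1, 0.8, -0.3]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.0, -0.5]], biases=[0.2])
print(output)   # a single activation value for this tiny network
```

Training consists of nudging all those weights and biases until the output neurons reliably give the desired answer for the sample images.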
Thanks to this powerful machine learning approach, system engineers today just need to train a CNN with adequate sample images, e.g. of good and bad items in quality inspection, of skin carcinoma or of fish lice. The network can then be trained, tested and put into operation quickly. These techniques accelerate the adoption of machine vision in many more applications, enable solutions to previously unsolvable problems, and reduce costs.
What part can we play?
We at Active Silicon support the advancement of artificial intelligence in machine vision as our engineers ensure our embedded systems are ready to accommodate deep learning architectures. We will be ready when AI enables enhanced complex systems for industrial, medical, scientific, traffic, or security purposes on a large scale.
Would you like to know if your imaging challenge can be solved by artificial intelligence? Please contact us and let us find the answer!
Follow our blog and social media channels to stay up to date on developments!
IVSM 2017 Fall Meeting – October 12th, 2017
Next week will see the meeting of machine vision minds in Hiroshima, Japan, for the Fall 2017 International Vision Standards Meeting, organized by the Japan Industrial Imaging Association (JIIA). From 16th to 20th October, delegates will discuss current industry standards and future projects. GenICam standardization is top of the agenda, which will also include GigE Vision, CoaXPress, Camera Link, Camera Link HS and USB3 Vision. Most of Wednesday will be dedicated to the usual Plug Fest, enabling testing of compatibility and experimentation. The meeting is supported by the global G3 group (AIA, EMVA, JIIA, VDMA and CMVU). You can see more about Machine Vision Standards here.
We’ll be watching developments closely, especially in relation to this year’s decision of the AIA Board of Directors to overturn the previous IVSM vote to make GenTL mandatory in Camera Link. This was discussed heatedly at the Spring IVSM so it will be interesting to see if any changes are made this month.
Our CTO, Chris Beynon, will be chairing the CoaXPress sessions and we’ll keep you informed about the progress of Version 2. In addition, this summer the G3 Future Standards Forum created a new EMVA Working Group which will focus on advancements and potential standardizations in Embedded Vision, and we’ll be following news from here as well.
Fog = Edge + Cloud! Are you still in the know about trends in computing for vision? – October 10th, 2017
Imagine a future manufacturing facility built on the concepts of the (industrial) Internet of Things (IoT): thousands of actuators, thousands of sensors, dozens or even hundreds of cameras, and one center with enormous computational power for processing all the data and controlling all processes in the facility.
Although highly appealing from an IT-management perspective, this architecture requires enormous data bandwidth and poses challenging latency and real-time constraints. The image data from all the new high-speed, high-resolution machine vision cameras would contribute the largest share of the network traffic. Furthermore, in security or traffic applications, sending image data to the cloud for processing can expose sensitive information such as faces or number plates to unwanted access and manipulation, while encrypting and decrypting entire images is computationally expensive.
These constraints are driving a countermovement to approaches that rely entirely on cloud computing: Edge Computing. The term refers to an architecture in which, in most cases, embedded systems analyze data close to its source. Embedded vision systems can drastically reduce network traffic, e.g. when they perform image analysis tasks such as good part/bad part decision making, number plate recognition, face recognition or high-level feature extraction right after image acquisition by the camera. Only the essential results are then transmitted over the network, and sensitive data can be encrypted. This requires less bandwidth, reduces latency and jitter in actuator control, and eases the computational demands on central computing and control units. Because image processing, unlike many other data analysis tasks in industrial processes and control, benefits greatly from parallel computing, embedded vision systems can be specifically designed to solve imaging problems much faster, with lower-cost hardware and significantly lower power consumption than any generic cloud computing center.
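To see why this matters for bandwidth, consider a toy comparison between shipping a raw frame to the cloud and shipping only the edge-extracted result. The resolution, bit depth and result format below are illustrative assumptions, not figures from any particular system.

```python
import json

# Raw frame: a hypothetical 1920x1080 8-bit greyscale image.
width, height, bytes_per_pixel = 1920, 1080, 1
raw_frame_bytes = width * height * bytes_per_pixel  # about 2 MB per frame

# After on-device analysis, only the decision and some metadata leave
# the edge node (the field names here are made up for illustration):
result = {"camera": "line-3", "frame": 104892, "verdict": "bad_part",
          "defect_xy": [412, 77]}
edge_payload_bytes = len(json.dumps(result).encode())

# Network traffic shrinks by several orders of magnitude per frame.
print(raw_frame_bytes // edge_payload_bytes)
```

The same reasoning explains the latency benefit: the decision is made at the camera, so the control loop never waits on a round trip to a data center.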
However, in a large manufacturing plant, Edge Computing can hardly render a central server farm redundant. This is where the term Fog Computing comes in. It describes a distributed computing architecture in which Edge Computing is applied to every relevant client in a network, and each client delivers high-level data to a central cloud computing center for further processing, statistical analysis and storage. This hybrid concept combines the best of Edge and Cloud Computing and is expected to become the dominant setup in complex Industry 4.0/Industrial IoT scenarios.
Would you like to apply Edge Computing to your imaging-based devices or machines? Active Silicon can provide you with powerful embedded vision solutions, available quickly and at surprisingly low cost thanks to our versatile hardware platforms. Come and visit our experts at Embedded VISION Europe on Oct. 12 and 13 in Stuttgart, Germany.
October 2017 Newsletter – Event Focus – October 9th, 2017
Active Silicon to the (mountain) rescue – October 3rd, 2017
At Active Silicon there’s no such thing as a quiet period but when three of our adventurous colleagues set off on a hiking trip to the beautiful mountains of Swedish Lapland this summer for a bit of re-energizing time out, the weekend turned out to be more exciting than they had anticipated.
The team of Emile, Rich and Chris (along with a Swedish friend) spent four days hiking and climbing, including the 2097m peak of Kebnekaise, in sub-zero temperatures and at some points, deep snow. They were fully prepared and equipped for the weekend, unlike the unlucky hiker they had to rescue along the way. Having lost the trail and fallen through the snow, the Swedish walker was near-hypothermic when she was spotted by the Active Silicon team, who brought her to safety and were able to call off the official rescue helicopter. Our engineers have been the subject of many articles in the trade press in the past, but this is their first national newspaper coverage! It all goes to prove the importance of preparation and resolve – aspects in which we pride ourselves both in and out of the office 🙂
Innovation in action – view our latest advancements at Embedded VISION Europe – September 28th, 2017
Embedded VISION Europe is approaching fast, and we’re preparing to head to Stuttgart to join over 150 other delegates at the EMVA’s first conference focused solely on the embedded sector. The conference program will cover hardware and software developments as well as industry standards and, of course, current hot topics such as Deep Learning and miniaturization.
Our team will be in the exhibition area to talk you through the latest developments in image acquisition and processing, and, among our other embedded products, we will be proudly showcasing our USB3 Vision Processing Unit. We will demonstrate live simultaneous acquisition and display from four USB3 Vision cameras. The unit processes the image stream in real-time and provides several data output options, including 3G-SDI.
The USB3 Vision Processing Unit has been developed for industrial and medical use. It is currently in production for use by one of our clients leading the way in computer vision assisted surgery. Internally the VPU consists of one of our COM Express carrier cards fitted with a high performance Intel i7. A PCIe/104 expansion slot allows for flexibility in system design and the acquisition of video data from a variety of sources. With all the standard PC interfaces available, this embedded system can be readily adapted for many embedded vision applications.
Active Silicon’s AI series – part 2: Artificial Intelligence and machine vision: the good, the bad and the ugly – September 19th, 2017
Plenty of books and movies have been created around the benefits and dangers of Artificial Intelligence (AI) since the first computers began controlling complex systems back in the 1950s. Recently, Tesla and SpaceX entrepreneur Elon Musk and Facebook founder Mark Zuckerberg engaged in a very public argument about the fundamental risks and opportunities of AI, triggered by Facebook’s report that it had to stop an experiment in which autonomous chatbots had developed their own inscrutable language by reinterpreting the meaning of English vocabulary.
The dark side was highlighted just last month by an open letter to the UN from Elon Musk and 115 other specialists across 26 countries, calling for an outright ban on autonomous weapons. The UK government, for one, appears to have listened and is adopting policies not to develop or use fully autonomous weapons.
Most essays on AI emphasize both its wonderful opportunities and its life-threatening risks. When IBM’s Deep Blue proved in 1997 that human intuition and experience from thousands of chess matches could no longer outperform a machine, it wasn’t only the philosophers who became anxious about the power of this technology. Today, software frameworks for machine learning are publicly available (e.g. https://www.tensorflow.org/), so each individual developer is responsible for the safety of their own experiments, while political regulation is largely absent and probably ineffective.
However, at Active Silicon, we believe there are great opportunities being created by machine learning in conjunction with imaging; with new approaches in Deep Learning, known machine vision applications can be implemented much faster and previously insuperable problems can be solved. While following these developments closely, we expect engineers to take their responsibilities seriously and ensure the ethical utilization of any technological advancements.
Would you like to know more about the opportunities created by artificial intelligence in machine vision? Then please visit our blog regularly in the upcoming weeks as we publish more updates, or simply subscribe to our newsletter.
Bringing CCD performance to CMOS cameras – September 13th, 2017
Active Silicon partners closely with a number of camera manufacturers and specialists in scientific imaging for medical and life science applications. Our range of FireBird Camera Link frame grabbers offers advanced functionality and reliable operation, allowing camera manufacturers to reach new limits in, amongst other things, microscopy. Epifluorescence microscopy presents a particular imaging challenge due to low light levels, low signal-to-noise ratios and the fact that the emitted light often fades during the microscopy process. In fluorescence microscopy, a specimen is illuminated with light of a specific wavelength, triggering fluorescence of certain components. Typically, biological specimens don’t fluoresce themselves; rather, specific structures in the specimen are dyed with chemicals, called fluorochromes, that fluoresce when excited by the light, making it possible to identify very specific cellular components and impurities to a greater degree than with other microscopic methods. It is possible to dye different structures with dyes emitting different colors, as can be seen in the accompanying image.
The picture shows the egg chamber of a Drosophila. The fruit fly lends itself particularly well to studying cell function and development in biomedical research, in this particular image the development of eggs (oogenesis) was studied. The DNA in the individual cells is shown in blue, the F-actin cytoskeleton in red and the mRNA at the transition to the main body area in green. The fluorochrome dyes used were DAPI, Rhodamine, and GFP.
A major advantage of modern CMOS cameras for microscopy is the ability to operate in low light environments, which means the cells are less likely to suffer deterioration from photobleaching. The combination of high quantum efficiency and low noise is allowing advances in imaging beyond the capabilities of CCD cameras, and enabling improved research on cell development over time.
One of our customers has been able to develop a digital CMOS camera with twice the speed, three times the field of view and drastically less noise than the market-leading CCD cameras. Connected to our Camera Link frame grabber, the CMOS camera transfers 4-megapixel, 16-bit images to a host PC at 100 frames/sec. The real-time processing and high data-rate performance match those of more expensive CCD units, offering a more affordable solution for bright-field, fluorescence and a range of other microscopy techniques.
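As a quick sanity check on the bandwidth these figures imply (a back-of-the-envelope calculation, not a vendor specification):

```python
# Sustained data rate for 4 Mpixel frames at 16 bits per pixel, 100 fps.
megapixels = 4
bytes_per_pixel = 2          # 16-bit pixels
frames_per_second = 100

mb_per_second = megapixels * 1_000_000 * bytes_per_pixel * frames_per_second / 1e6
print(mb_per_second)  # 800.0 MB/s streamed to the host PC
```

A sustained rate in this range is what makes the frame grabber's real-time acquisition and DMA performance matter.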
Other applications of this camera include super-resolution microscopy, TIRF microscopy, ratio imaging, FRET, high-speed Ca2+ imaging, real-time confocal microscopy and light sheet microscopy.
Active Silicon is proud to support our customers developing these, and other, revolutionary imaging processes.
Active Silicon’s Engineer proves precision is key – September 6th, 2017
Richard Brown, one member of our team of Software Engineers, likes to make sure he’s on target. At the British Small-bore Shooting Championships this month Richard earned himself a place on the reserve list for the England team. Putting him amongst the top 12 national shooters from a field of over 350 entrants, he proved his accuracy and drive for precision, something he also shows in his everyday work with us. Well done Richard!
Active Silicon’s AI series – part 1: Applying Deep Learning to FPGAs – August 31st, 2017
Recent news from Microsoft has again brought the development of applying Deep Learning to FPGAs into the headlines. Microsoft is using Intel’s FPGAs (formerly Altera) combined with their own FPGA-based deep-learning platform, Project Brainwave, to enable the acceleration of deep neural networks (DNNs). The speed at which developments within the industry are progressing means that bringing Deep Learning to embedded systems is massively on the increase in a wide variety of sectors, from autonomous vehicles to medical research. With scalability as one of the determining success factors, the world is observing such developments keenly.
In terms of what’s currently in more widespread use, NVIDIA offers their own Deep Learning SDK to power GPU-accelerated machine learning applications for embedded systems and both cloud-based and on-site data centers. Image recognition, driver assistance programs, life sciences and even speech recognition are listed among the applications benefitting from reduced processing times and increased accuracy. AMD has launched its Radeon Instinct MI25 Server Accelerators which, along with its GPUs and software platforms, are designed to meet the challenges of high-performance neural network learning.
Figures suggest that Google’s TensorFlow software library is the most widely adopted Deep Learning framework, mainly due to its intensive internal development and open source accessibility. It can run on one or more CPUs or GPUs with a single API, although it is not yet commercially available on FPGAs. Also in development is TensorFlow Lite, a toolkit for mobile devices, which follows hot on the heels of Facebook’s Caffe2Go framework.
Elsewhere, Greece’s Irida Labs is bridging the gap between cameras and the human eye by bringing visual perception to an extended range of devices. This is being achieved by developing computer vision software, and utilizing image processing and machine learning techniques made for any CPU, GPU or DSP/ASP platform. At Embedded VISION Europe in Stuttgart in a few weeks’ time, Irida Labs’ CEO and Co-founder, Vassilis Tsagaris, will present a case study on using Deep Learning to advance food product identification. Active Silicon will be exhibiting at the show and we’re really excited about this opportunity to hear and share the latest developments in this area.
Over the past few months our team of innovative engineers has been watching progress involving FPGAs closely, to see how it can benefit our customers through faster and more accurate image recognition in our next-generation embedded systems. We’re looking forward to engaging in the discussions in Stuttgart, and keeping an eye on the advancements in general.
Improving on excellence – August 23rd, 2017
Active Silicon is delighted to welcome its newest software engineer to our growing team. Stuart MacLean joins us to lead the development of new embedded technology in the area of 3D visualization and hardware compression technology for HD and 4K video. This technology is primarily targeting next generation computer vision assisted surgery and will first be available in our second generation variant of our USB3 Vision Processing Unit.
A highly experienced C++ engineer, Stuart brings 20 years of technical knowledge and commercial awareness, gained while developing geoscience software in the oil and mining sector. His more recent roles have introduced him to embedded systems and robotics, which has nurtured his love for physics.
Our engineers work tirelessly to continually improve our products, and extending our range of embedded systems with high reliability and long product life is just one of the areas we’ve invested in this year. Contact us for more information on how our innovative imaging and embedded solutions along with exceptional support can enhance your machine vision applications.
Visit us at Embedded VISION Europe – August 8th, 2017
A brand new tradeshow focusing on embedded vision will open its doors on 12-13 October in Stuttgart, Germany. The Embedded VISION Europe show is organized by the European Machine Vision Association (EMVA) and Messe Stuttgart. Active Silicon will be exhibiting our latest embedded systems for the machine vision industry and we look forward to meeting you there.
For us the event is an important opportunity to share knowledge from multiple sectors as delegates from a broad spectrum of industries gather in Germany. The conference program will cover hardware and software developments as well as industry standards and, of course, current hot topics such as Deep Learning and miniaturization. Our embedded systems are often developed for specific OEM applications and bring our leading-edge image acquisition and processing to a variety of industries, including medical devices, broadcast, gaming, surveillance and manufacturing. Come and visit us in the exhibition area to see and discuss what our experts can do for your business.
A clear view of our sun – August 3rd, 2017
Active Silicon’s Phoenix CoaXPress frame grabber has helped enable a break-through in the development of advanced adaptive optics, according to Dirk Schmidt, assistant scientist at the USA’s National Solar Observatory (NSO) and project scientist for the international MCAO team.
The multiple layers of atmospheric turbulence caused by the mixing of air masses with different temperatures present a serious challenge when observing any object in space, including the study of the sun. To overcome this problem, researchers have been advancing adaptive optics, a method that applies one or more flexible mirrors to compensate for the distortion of the incoming light waves. Recently a groundbreaking new optical device was developed with an ultra-fast vision system and three deformable mirrors at its heart for use with a high-resolution telescope.
The cameras in the vision system deliver more than 1500 frames per second at 992 x 992 pixels, and together with our CoaXPress frame grabbers they have enabled researchers at the NSO to guide a system of three deformable mirrors that change shape and position in order to correct the aberrations in the wave path. The mirrors are placed at three different altitudes, and when used in combination capture distortion-free images. Schmidt explains: “The [frame grabber] in this application is used in the wavefront sensor, which measures at fast speed the optical correction we need to apply with deformable mirrors. The speed of the image acquisition in this sensor is traditionally our bottleneck, and the number of pixels we can get per time is one (maybe even the) major limitation to us. For this reason, we always look for the fastest camera and interface on the market.”
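For a sense of scale, here is a rough estimate of the data rate these figures imply. The article does not state the camera's bit depth, so 8 bits per pixel is assumed purely for illustration.

```python
import math

# Pixel throughput implied by the figures above.
pixels_per_second = 992 * 992 * 1500          # roughly 1.48 Gpixel/s
gbit_per_s = pixels_per_second * 8 / 1e9      # raw image data rate, 8-bit assumed

# A single CoaXPress CXP-6 link carries 6.25 Gbit/s, so a multi-link
# frame grabber is needed to sustain a stream of this size.
links_needed = math.ceil(pixels_per_second * 8 / 6.25e9)
print(round(gbit_per_s, 1), links_needed)  # prints: 11.8 2
```

Whatever the exact bit depth, the stream comfortably exceeds a single coax link, which is why multi-link CoaXPress acquisition matters in this application.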
This multi-conjugate adaptive optics (MCAO) device at the Goode Solar Telescope has tripled the size of the corrected field of view compared to previous single mirror systems. The system, funded by the National Science Foundation, is the result of decades of research and development supported by the NSO, New Jersey Institute of Technology’s Big Bear Solar Observatory, and the Kiepenheuer Institute for Solar Physics (Germany). Understanding solar activity has a vital role to play in being able to prepare for power surges and disruption to satellites, GPS and communication systems resulting from solar storms.
Active Silicon is proud to be part of this ground-breaking development and we look forward to our continued work with the NSO and NJIT in bringing even clearer imaging to solar research. Our latest interaction involves integrating our FireBird Quad CoaXPress frame grabber.
Are you interested in learning more about the benefits of the MCAO device? Check out the following link with a short video: https://cuna.nso.edu/clear/index.php/2017/01/10/clear-demonstration/
Investing in the future of our industry – July 21st, 2017
Two new faces will grace the Active Silicon UK head office this summer as Jamie and Haley join us for work placements.
Jamie has completed his first year studying Electronic Engineering at Southampton University and Haley is at the end of her second year studying Physics at Imperial College London. They will be helping us with software library management, assisting us with research on firmware development and supporting us with PCB testing.
While we hope they will gain lots of valuable experience in the image processing industry from us, we’re looking to learn from them too. As an open and transparent organisation, we welcome contributions from all areas, and we see this partnership as an investment in both the future of our company’s development and a chance to share our expertise with the younger generation. We hope they have an enjoyable and beneficial time with us!
July 2017 Newsletter – GPU processing in imaging – July 21st, 2017
The July issue of Active Silicon’s newsletter is now available to view here. This month we look at GPU processing in imaging and the benefits it offers to applications with high image data rates. Our case study focuses on MWA-Nova and why they chose to implement one of our Camera Link frame grabbers into their Cine Film Scanner. All our FireBird Camera Link frame grabbers and CoaXPress frame grabbers support GPU processing and you can read more here.
New software engineer strengthens Active Silicon’s R&D Resources – July 18th, 2017
Our new Junior Software Engineer, Ioannis Hadjicharalambous, is now a month into his role at Active Silicon. Ioannis joined us from a technology company where he gained several years of experience working with C#, C++ and Java. He has an MEng degree in Electronic Engineering from the University of Surrey and will be using his skills here working on ActiveCapture and release testing, amongst other things.
Active Silicon offers a range of software to support its vision hardware products across several operating systems. This is one of the most responsive and flexible support offerings in the industry and Ioannis will be involved in designing and implementing these crucial solutions, and developing new ways to keep our products at the forefront of machine vision technology.
If only they knew what we can do… – July 12th, 2017
Active Silicon has extended its Marketing Communications Team and welcomes Natalie Ryan as its newest member. Natalie will be working on strengthening our customer database and reaching an even greater audience with updates on our frame grabbers, camera interfaces and embedded vision solutions. Coming from a background in marketing, communications and event management, she will be looking to share our successes with new potential customers all over the world.
As Active Silicon approaches its 30th birthday in 2018, raising awareness of what we do, and the unique solutions that we offer, will be a top priority. Machine vision is becoming standard in an increasing number of industry sectors, and Natalie will be helping to identify new partners who could benefit from our range of leading imaging products.
Wilhelm Stemmer puts his creation in the hands of AL-KO and the current management team – July 7th, 2017
The founder of the pan-European imaging distributor, Stemmer Imaging, has sold his company to AL-KO AG. AL-KO is a supplier of automotive technology, gardening implements and air-conditioning technology with around 4,000 employees operating in 45 locations. AL-KO AG acquired about 75% of the company shares, while the current management team of Christoph Zollitsch and Martin Kerstig now hold close to 25%. The deal came into force on June 30th, 2017.
Stemmer reports that it will form a self-contained division of the AL-KO Group, adding a major new segment alongside the existing air conditioning, gardening technology and electronics businesses. AL-KO sees Stemmer as an investment in its positioning in the Industry 4.0 environment.
Conference Roundup – EMVA Business Conference – July 4th, 2017
Last week Active Silicon was pleased to participate in the 15th EMVA Business Conference, where a hundred registered delegates, speakers and thought-leaders assembled in Prague.
The European Machine Vision Association (EMVA) is a not-for-profit organisation founded in 2003 to promote the development and use of machine vision technology. The annual conference offers the opportunity for European and global professionals, experts, analysts and strategists from the industry to meet their peers, discuss and learn from exemplary practice within the sector.
One of the keynote speakers, Adam Kingl from the London Business School, gave a stimulating talk as he explained the likely impact of the changing requirements of Generation Y – those in the labour market who move jobs every couple of years and are expecting a better work-life balance and more flexible working. With the machine vision industry requiring a high level of technical expertise, Adam offered some tips on retaining and managing the best workforce.
An exciting new technology was introduced by Luca Verre from French manufacturer Chronocam. Chronocam is developing machine imaging that works more like the human retina. Using CMOS technology alongside new processing methods, this will be an interesting development to follow!
Adimec’s Chief Scientist, Jochem Herrmann, shared his views on the direction the image processing market is taking. For machine vision to move out of its niche marketplace and into the mainstream, he believes that more processing will be done away from the sensor using small but powerful MPSoC (multi-processor system-on-chip) embedded devices, which also run the application itself.
The conference agenda also included a debate on the threats and opportunities presented by the Asian market. With almost 10% of participants coming from Asia, the event highlighted the growing influence this region is having on the sector. However, Frank Grube, CEO of Allied Vision, explained that the availability of support and proximity to the end customer are the most important factors when servicing the low-cost, high-volume market, and therefore European production is unlikely to be affected much by Asian imports.
The 2017 Young Professional Award was awarded to Boaz Arad, recognising his work “Sparse Recovery of Hyperspectral Signal from Natural RGB Images”. His work makes hyperspectral imaging attainable with standard RGB cameras, without the expensive, more complicated systems that traditional solutions require.
Alongside the conference programme delegates engaged in B2B meetings, which offered a very productive way to meet with business partners and potential customers. More socialising was possible at the river cruise through the historic city on Thursday, and a trip to the Museum of Public Transport and dinner on Friday night. The image above shows Colin Pearce, Active Silicon’s CEO, in front of a beautiful historic tram.
More details about the event can be viewed at http://www.emva.org/
A Welcome to Alex, our new Materials Controller – June 29th, 2017
We are very pleased to welcome Alex Lopes to our Purchase and Supply Chain Management team. Alex joins us from a packing machinery manufacturer where he was responsible for logistics and stock control. Outside the office Alex is passionate about bikes, cars and rugby, having played for a number of years for Reading RFC. He now shares his expertise in planning and improving efficiencies with the Active Silicon team.
Alex’s role as Materials Controller at Active Silicon supports the critical supply chain function of the business, helping to safeguard the continuity of product supply – especially important for embedded systems with a long product life. At Active Silicon we pride ourselves in maintaining full control of all components used in the manufacturing process to ensure quality, reliability and availability.
New Vision Solution for Automation in Food Production and Sorting at the LASER World of PHOTONICS, June 26 – 29, in Munich – June 22nd, 2017
Machine vision is becoming an increasingly important subject in the huge global photonics market as cameras and image processing systems grow ever more powerful.
Visit our partner’s booth, Laser 2000 (B3.103), to experience a highly advanced machine vision system based on a high-speed color line scan camera from JAI (Sweep+ SW-2000Q-CL-80) and our Camera Link 80-bit (Deca) frame grabber FireBird FBD-1xCLD-2PE4. Color line scan applications are especially relevant for quality inspection and sorting of fruit, vegetables and pastries, as well as raw materials and waste for recycling purposes.
The camera from JAI features 4 CMOS line scan sensors behind a prism for maximum sensitivity and color reproduction. A near infra-red (NIR) channel makes it possible to capture NIR simultaneously with the RGB light spectrum. With its 80-bit (Deca) configuration the camera offers the highest possible bandwidth available with the globally adopted and field-proven Camera Link interface.
Active Silicon’s frame grabber is specially designed for this interface with proven long term reliability. While our full-height PC model is utilized in this demo, the same functionality and features are available from our low-profile frame grabbers for embedded PCs and 2U rackmount chassis, which are also showcased at the booth.
Please get in touch with Laser 2000’s machine vision consultants on-site or – if you can’t make it to the Munich trade fair – find all the details about the demonstrated frame grabber here: https://www.activesilicon.com/products/firebird-camera-link-frame-grabber-1xCLD-2PE4/
15th EMVA Business Conference, 22 – 24 June 2017 in Prague – June 20th, 2017
This year’s Business Conference of the European Machine Vision Association (EMVA) is being held in the Czech capital, Prague.
Starting Thursday this week over 100 registered participants will hear from high-level speakers delivering insightful keynote speeches on various subjects including economy, management, machine vision and related technologies.
One of the highlights will be a panel discussion between the CEOs of the industrial camera OEMs Basler, Allied Vision, Baumer and Adimec. They will discuss their strategies for competing with low-cost suppliers from Asia versus occupying market niches to maintain and grow their businesses.
Stay tuned for breaking news from Prague by visiting this news blog regularly, or follow us on:
Please make sure you are also subscribed to our YouTube Channel.
The Next Big Thing in Engineering?June 13th, 2017
In the UK right now everyone interested in great engineering is looking at the finalists of the prestigious MacRobert Award, presented each year by the Royal Academy of Engineering to the UK’s most exciting engineering innovations. This Award is renowned for spotting the “next big thing in engineering”. This year’s finalists are Darktrace, Raspberry Pi and Vision RT, all three already global players.
Darktrace is recognised for its self-learning cyber defense. Its Enterprise Immune System technology works in a similar way to the human immune system. With the help of AI algorithms and unsupervised machine learning, the system learns what is normal within a network. It uses this understanding to identify anomalies and react to emerging threats such as ransomware, data theft or unauthorized access, and has been shown to work very effectively.
Raspberry Pi Foundation has redefined home computing with their easy-to-use and inexpensive mini-computers. Raspberry Pi has been developed to encourage young people to learn computer coding and programming. It can now be found outside the target market as a fully functional computer; though small in size and very low in price, Raspberry Pi is also being used in areas like robotics, electronics R&D and scientific research.
The third finalist, Vision RT, developed a surface-guided radiation therapy technology. A 3D stereo camera system continuously tracks, in real time, the skin surface of a patient lying on a treatment table and compares it to the ideal position with an accuracy of better than 1 mm. If the patient’s movement exceeds a certain threshold, the system signals the treatment delivery system to pause radiation. Pinpoint accuracy in radiotherapy improves treatment efficiency, as well as patient comfort and safety.
The winner will be announced on 29 June 2017 at the Academy Awards Dinner in London in front of an audience of top engineers, business leaders, politicians and journalists.
Update: The winner of the MacRobert Award 2017 is Raspberry Pi with its highly innovative, easy-to-use, credit card-sized computers. The worthy winner has several fans among the Active Silicon team: a few of our engineers use the Raspberry Pi privately and even for certain test set-ups at work.
Application Focus – June issueJune 12th, 2017
In the June issue of Active Silicon’s newsletter we talk about high-speed imaging in astronomy, in particular about “Lucky Imaging”. Lucky imaging is a method used with Earth-based telescopes to acquire high-res images of astronomical objects. Read more about Lucky Imaging, the project MOSCAM and about the vision system involved that includes Active Silicon’s FireBird Camera Link frame grabber and cabling in the June issue. To get our latest news, please sign up to our newsletter.
Keeping Memories – The Technology Behind Modern Film PreservationJune 8th, 2017
Whether it is a historical film in an archive or just a family film from earlier days, getting an old film into a digital format is an important step to increase accessibility of the film, allow easy reproduction and ultimately it is a way to save a film for future generations. Cine Film Scanners from MWA-Nova are world-renowned for their high standard in design and engineering. Some of the challenges and methods to provide world class results are discussed below.
After the film is carefully cleaned and professionally repaired, it is inserted into the scanner. A gentle sprocketless capstan transport ensures the original film is moved safely, and a laser system tracks the film so that every variation in position, e.g. through shrinkage of the film, can be compensated for. The MWA cine scanner processes images and magnetic or optical sound simultaneously.
MWA cine scanners are color-calibrated against a TAF reference. To eliminate scratches and enable further color correction, a diffuse light source driven by a high-energy RGB LED strobe array offers freely adjustable color balance.
A high-speed sensor captures high-resolution images of each film frame. The raw sensor data is processed in real time at a rate of around 25 frames per second. The images have a resolution of around 5K (20 MP) and the user can choose from various output formats. The real-time processing is done on a GPU in a downstream computer system rather than within the image sensor. In this way, the processing can take advantage of evolving computer technology, and the scanner system can be shipped with different sensor technology as required.
The image acquisition in the cine film scanner is performed by a Camera Link frame grabber from Active Silicon. The reliability, high performance and ease of integration of Active Silicon’s card led to its selection – read more about our Camera Link frame grabber product range on our website.
Interested in film and historical film documents? Pinewood film studios, the home of James Bond, are Active Silicon’s direct neighbour. View their seven minute documentary of the “Pinewood Film Studios Open Day” from 1977!
The May Issue of Active Silicon’s “PRODUCT FOCUS” Newsletter is outMay 30th, 2017
The focus of this issue is our FireBird Camera Link cPCI Serial frame grabber. Fast acquisition, comprehensive I/O, GenICam compatibility and extended temperature support make this card flexible, easy to integrate and fit for demanding applications. Read the newsletter to learn more.
Line Scan and High Speed Imaging Specialist Chromasens Joins Lakesight TechnologiesMay 25th, 2017
After the acquisition of the camera OEMs Tattile and Mikrotron, Lakesight Technologies is integrating Chromasens into the group. Chromasens, based in Constance, Germany, has about 60 employees, mostly working in R&D, and annual sales of around 10 million Euros. Both managing directors, Martin Hund and Markus Schnitzlein, retain their roles at Chromasens and are also represented on the Board of Directors of Lakesight. Chromasens is looking forward to benefitting from extensive synergies such as expanded sales forces and more resources for the development of innovative products.
All three camera OEMs, Tattile, Mikrotron, and Chromasens are offering line-scan and high-speed cameras. Multiple machine vision systems worldwide are already getting the best out of imaging technologies by combining their cameras with Active Silicon’s Camera Link or CoaXPress frame grabbers. We are looking forward to seeing new machine vision solutions enabled through the interoperation of innovative cameras and our high-performance image acquisition cards.
Lakesight aims to create a global leading platform in the machine vision sector. According to Ambienta, Lakesight’s private equity backer, Chromasens fits perfectly within the group and is complementary on all levels, from the product portfolio to the sales channels, and the R&D capabilities. Ambienta claims to have the world’s largest pool of capital to invest in businesses that will improve resource efficiency from an environmental perspective and will benefit large industry sectors such as energy, food production and healthcare. Machine Vision is certainly a key technology to meet these aims.
A Sum up of the Spring IVSM – Good Progress and Some ControversyMay 22nd, 2017
Around 80 experts from vision companies around the world met up recently in Natick, near Boston MA, for the Spring International Vision Standards Meeting (IVSM).
The week started with the GenICam meeting, where a key decision was to formally agree to add GenSP as a module of GenICam. GenSP will allow standards such as CoaXPress to use and transfer new image formats, such as 3D image data, in a consistent way throughout the industry. Significant work has taken place to get from an early proposal at the previous IVSM, to a much more complete version now.
On Wednesday there was the “PlugFest”, where companies work together to ensure interoperability of their products. Several new cameras were successfully tested with Active Silicon’s FireBird CoaXPress frame grabber.
The meeting for the Camera Link standard had unusually high attendance, following the decision of the AIA Board of Directors to overturn the previous IVSM vote to make GenTL mandatory. Strong feelings were expressed at the meeting, but it looks unlikely that the Board will reverse its controversial decision.
The CoaXPress roadmap was discussed under the chairmanship of Chris Beynon, Active Silicon’s CTO. Version 2 of the standard is close to completion, so the meeting concentrated on resolving the outstanding topics, ready to go to ballot in late summer.
Artwork at the hosting company, Mathworks, allowed some inspired photos to be taken of the attendees, including Active Silicon’s attendees Chris Beynon and Emile Dodin.
The next IVSM starts 16 October 2017 in Hiroshima, Japan. Read more about the organisation of the key vision standards.
A Big Welcome to our New FPGA/VHDL Design EngineerMay 18th, 2017
Matt Bridges has joined Active Silicon as an FPGA design engineer. He recently arrived in the UK from South Africa, where he studied and then worked in Cape Town.
Matt’s previous company designed the computers for the MeerKAT telescope, a pathfinder project for the Square Kilometre Array (SKA). Matt was involved in the FPGA design of high-speed interfaces used in these computers, which process the large amounts of radio frequency data sent from the telescope’s many dishes.
A big welcome to Matt at Active Silicon! We are happy to have another experienced FPGA engineer on board. Field-programmable gate array (FPGA) technology is at the heart of many of our products, and its reach into new and innovative areas continues to grow.
IVSM – Spring 2017May 8th, 2017
COMING UP: International Vision Standards Meeting (IVSM) in Natick, MA, USA, May 8-12.
The international vision standardization community meets for its spring meeting in the United States. This time the event is hosted by MathWorks and takes place at the company’s headquarters in Natick, close to Boston. On the agenda are the definition of roadmaps and the further development of the machine vision standards GenICam, Camera Link, CoaXPress, GigE Vision and USB3 Vision. The Future Standards Forum also takes place, in which future standardization projects building on technologies such as OPC Vision or the MIPI Camera Serial Interface are discussed.
Stay tuned to our social media channels to learn about the outcomes of this conference. For more information on Global Vision Standards see our Machine Vision Standards page.
Active Silicon Sponsors International Youth Hockey TournamentMay 4th, 2017
Congratulations to the hockey team HC Den Bosch JC1, which came 4th in the “World Youth Hockey Tournament” held at the end of April in Zoetermeer, The Netherlands. The connection with the hockey team is via Frans Vermeulen, Active Silicon’s Business Development Manager, whose son plays in the team. This international tournament for boys and girls between 12 and 14 years old provides a great opportunity to compete and socialize with top teams from other countries.
The UK’s New Machine Vision Conference & Exhibition, April 27th, Milton Keynes, UKApril 19th, 2017
The UK Industrial Vision Association (UKIVA) is inviting machine vision users, researchers and interested professionals to its new Machine Vision Conference & Exhibition on April 27th 2017 at the Arena MK in Milton Keynes, UK.
Over 50 presenters from leading machine vision enterprises will share insights into the latest developments in the industrial imaging world. Please check out the program.
Are you based in the UK or a neighboring country? Then please come and join us: it’s a great opportunity to learn about innovative machine vision technologies and applications, and to meet manufacturers and other users in this field. The conference and exhibition is free to attend and offers free parking, plus breakfast for early-bird delegates.
If you are already using or considering embedded systems for vision applications, then we may have a talk of interest to you: “Embedded Vision: Hardware Architectures and Implementation” at 2:30 pm in the Vision Innovation track. Our CEO, Colin Pearce, will present state of the art, flexible and powerful hardware architectures based on novel chipsets and design paradigms, which minimize time and cost for the development of embedded vision solutions.
We are very much looking forward to seeing you in Milton Keynes! And please don’t hesitate to pre-arrange a meeting with our team.
Embedded Vision Systems – Paving the Way to More, Faster and Cheaper Machine VisionApril 7th, 2017
The application of imaging in industrial manufacturing, medical devices, traffic, transportation, logistics, life sciences and research is often a challenge, despite the great technical advances over the last 10 years. Multiple hardware components, such as optics, cameras, cabling, data acquisition, processing and storage units, need to be considered, and must cope with high data rates and computationally intensive algorithms. Thus, classic vision systems are often based on high-performance PCs.
When just a few further requirements come into play, however, embedded vision systems become the best, if not the only solution:
Constraints in space, temperature, or mechanical robustness
Embedded vision systems allow for much higher spatial integration of data acquisition, processing, storage and output components. Further, unlike PC systems designed for IT or consumers, embedded systems can be optimized for robustness against extreme temperatures, vibrations and mechanical shocks through careful selection of parts, connectors, PCBs and manufacturing processes. Active Silicon has more than 25 years of experience in the integration of electronic components for challenging applications in defense, marine, space, medical, and automotive.
Embedded systems also have the advantage that components can be specifically selected with long-term availability in mind. Careful supply-chain management at Active Silicon ensures that embedded systems typically retain the same form, fit and function for at least 10 years.
Processing speed and power consumption
Image processing is characterized by high data volumes: each image is a large 2D pixel matrix with millions to tens of millions of entries, each pixel carrying up to 16 bits of greyscale or 24 bits or more of color information. Additionally, cameras with high frame rates deliver dozens, hundreds or even thousands of images per second.
Even high-end CPUs are not capable of applying moderately complex algorithms to this amount of data. Modern GPUs are a solution, yet power-hungry and costly. Instead, the latest Field-Programmable Gate Arrays (FPGAs) can be optimized for parallel signal processing and are thus ideally suited to images and video streams. Despite comparatively low clock rates and very low power consumption, they can outperform general-purpose GPUs and CPUs, depending on the algorithm. The latest integrated architectures combining a CPU and an FPGA provide efficient implementation options for algorithms with both parallel and serial processing needs.
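To put numbers on these data volumes, a quick back-of-the-envelope calculation is useful. The sketch below is illustrative only – the resolution, bit depth and frame rate are assumed figures, not the specification of any particular camera:

```python
def video_data_rate_gbps(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video data rate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# Example: a 4096 x 3072 (~12.6 MP) sensor, 8-bit mono, 100 frames per second
rate = video_data_rate_gbps(4096, 3072, 8, 100)
print(f"{rate:.1f} Gbps")  # prints "10.1 Gbps" of raw pixel data
```

Rates of this order must be moved and processed continuously, which is why parallel architectures such as FPGAs, processing many pixels per clock cycle, handle them so efficiently.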
As embedded vision systems are customized to the requirements of the specific application, the selection of the right components and the optimization of production processes can reduce manufacturing costs considerably.
Active Silicon’s ready-to-use embedded solutions and modular system designs allow for short time to market and significantly lower development costs.
New Product Launched: Customizable Vision Processing Unit with four USB3 Vision inputsApril 3rd, 2017
The Vision Processing Unit (VPU) acquires image data from up to four USB3 Vision cameras, processes the image stream in real-time and provides several options for image and data output. Our latest embedded vision product is now available for medical and industrial OEMs who seek to integrate multiple cameras in their systems. Read the full product announcement.
Automation World Korea – Automation, Machine Vision, Smart FactoryMarch 28th, 2017
It’s time again for the annual Automation World – Korea’s leading exhibition for automation and smart factory technology starts today at Coex in Seoul, Korea. For three days, visitors will experience the latest technologies in automation systems, sensors, machine control, motion controllers, robots and machine vision, as well as exciting new developments in areas such as smart factory industrial software systems, Big Data and the Industrial IoT. There will also be opportunities for visitors to expand their knowledge by attending accompanying seminars and conferences.
Lucky Star Gazing – High-Speed Imaging in AstronomyMarch 23rd, 2017
Although stars are not exactly whizzing about in the sky, high-speed cameras provide an important tool for gaining high-res images of astronomical objects.
Most telescopes are ground-based, and the resolution of images they capture is degraded by the distortion light undergoes as it passes through several kilometres of turbulent atmosphere. The result is a much lower resolution than can be achieved by a space-based instrument such as the Hubble Space Telescope. One method to overcome the blurring effects of atmospheric turbulence is “Lucky Imaging”. Images are taken with a high-speed camera using exposure times short enough (100 ms or less) that changes in the Earth’s atmosphere during the exposure are negligible. If thousands of images are taken, a number of frames are likely to catch the object in sharp focus, simply because there is a chance of the atmosphere being momentarily calm during the short exposure of a “lucky” frame. By selecting the very best images, for example the top 1%, and combining them into a single image by shifting and adding the individual short exposures, “Lucky Imaging” can reach the diffraction limit – the best resolution possible with a particular instrument – in this case a 2.4 m aperture telescope.
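The shift-and-add step can be sketched in a few lines of Python with NumPy. This is a minimal illustration of the idea, not an astronomy pipeline: the sharpness score (peak brightness over total flux) and the whole-pixel alignment on the brightest pixel are simplifying assumptions, whereas real Lucky Imaging software uses more robust quality metrics and sub-pixel registration.

```python
import numpy as np

def lucky_image(frames, keep_fraction=0.01):
    """Stack the sharpest fraction of short-exposure frames by
    shifting each one onto its peak and averaging the result."""
    # Simple sharpness proxy: how concentrated the light is in each frame
    scores = [f.max() / f.sum() for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:n_keep]

    h, w = frames[0].shape
    cy, cx = h // 2, w // 2
    stacked = np.zeros((h, w))
    for i in best:
        # Locate the brightest pixel and shift it to the image centre
        py, px = np.unravel_index(np.argmax(frames[i]), frames[i].shape)
        stacked += np.roll(np.roll(frames[i], cy - py, axis=0), cx - px, axis=1)
    return stacked / n_keep
```

With thousands of input frames and keep_fraction=0.01, only the 1% “lucky” exposures contribute to the final image.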
MOSCAM is an astronomy project using “Lucky Imaging”. It is a cooperation between Sheffield University in the UK and NARIT, the National Astronomical Research Institute of Thailand, with the aim of discovering faint companions of stars in our solar neighborhood.
For this project, a high-speed Camera Link camera is mounted on the telescope in the Thai National Observatory, situated on Thailand’s highest mountain, Doi Inthanon. For optimal image acquisition, Active Silicon’s FireBird Camera Link 80-bit frame grabber was chosen, along with a 10 m passive Camera Link cable, also supplied by Active Silicon, which was critical to the project. The system performs perfectly at the maximum 80-bit (Deca) acquisition speed, even within the sometimes electrically noisy environment.
Active Silicon is a specialist in Camera Link solutions. We can provide acquisition hardware, optimal cabling, and in cooperation with our trusted partner companies, find the right camera for you.
German Industrial PC and IT Specialist EKF Featuring Compact PCI Serial Industrial Rack at Embedded World 2017March 14th, 2017
With close to 1,000 exhibitors and over 30,000 visitors, Embedded World in Nuremberg is the place to go for all players in the embedded systems field.
From March 14 to 16, the German specialist for industrial PCs, EKF Elektronik GmbH, will exhibit its CompactPCI (cPCI) Serial based industrial rack SRS-3201-BLUBOXX in hall 3, stand 155. This highly versatile industrial PC is available with Active Silicon’s FireBird Camera Link 3U cPCI Serial Frame Grabber for challenging imaging applications with high-speed and high-resolution cameras.
Visit the booth of EKF Elektronik GmbH (hall 3, stand 155) and experience how demanding vision projects can be realized with compact industrial grade PC systems.
Our Managing Director, Colin Pearce, and our Business Development Manager, Frans Vermeulen, will be visiting the show and would be happy to meet you. Please contact us if you would like to arrange an appointment.
ADL Embedded Solutions with Dedicated Vision Area at Embedded World 2017March 9th, 2017
The leading international trade show on embedded electronics, systems and solutions opens its halls from March 14 to 16 in Nuremberg, Germany.
Embedded PC specialist ADL Embedded Solutions are dedicating a special area of their stand 1-554 to Embedded Vision. There, ADL will also showcase Active Silicon’s Phoenix PC/104e Camera Link frame grabber and the FireBird Quad USB 3.0 Host Controller. Both interface cards have been thoroughly tested and approved for use in ADL’s PC and embedded PC systems. As well as offering uncompromised performance and reliability, the cards have proven especially well suited to embedded environments thanks to their low power consumption and heat dissipation.
Visit the booth of ADL Embedded Solutions (hall 1, stand 554) and experience the state of the art of embedded PC based vision systems.
Our Managing Director, Colin Pearce, and our Business Development Manager, Frans Vermeulen, will also visit the show and would be happy to meet you. Please contact us if you would like to arrange an appointment.
PRODUCT FOCUS Newsletter out – Embedded Vision and what we can do for youMarch 8th, 2017
The March issue of our PRODUCT FOCUS newsletter is out, discussing our COM Express based embedded vision systems. Active Silicon designs and manufactures custom embedded systems, predominantly for OEM applications such as medical devices, industrial automation or remote monitoring. Read the newsletter to see what we can do for your application.
67th Sanremo Music Festival Broadcast to Millions via State-of-the-Art Video EquipmentFebruary 27th, 2017
The Italian public-service TV station RAI broadcast the 67th annual Sanremo Music Festival, held February 7 to 11, to millions of viewers in Italy and across Europe. Newcomers and old stagers performed previously unreleased songs. The best performances were determined by a distinguished jury in combination with public televoting.
Optimal sound, sophisticated video effects and absolute reliability are key requirements for the technical equipment used in live broadcasting. Hence, the organizers of the Sanremo Music Festival relied on Phoenix Dual-HD-SDI frame grabbers and dedicated drivers from Active Silicon, in combination with the Mac-based live video software package CatalystPM.
Are you pushing the envelope of modern video broadcasting or just looking for the right solution for your next video system? Learn more about our HD-SDI frame grabbers for video broadcasting systems here.
Embedded Vision Systems – The future is System-on-ChipFebruary 14th, 2017
From mobile devices to cybernetics and the Industrial Internet of Things (IIoT) – the applications for embedded vision systems are enormous, though traditionally constrained by size, power and cost.
As Thomas Rademacher describes in his latest article in Vision Systems Design magazine, the decision to migrate to a small “System on Chip” (SoC) approach is driven by volume as well as performance. New technology on the market, combining powerful ARM processors with FPGA (Field Programmable Gate Array) logic on a single chip, provides a compelling solution for small, low-cost embedded vision systems.
Embedded Vision specialist Active Silicon is proud to pre-announce its upcoming release of such an embedded systems platform based on the Xilinx Zynq technology. Look out for our new product range “BlueBird Embedded”.
With our new embedded platform, we will be able to provide solutions for specialized vision systems as well as general-purpose embedded control systems, and, importantly, shorten the development cycle to timescales not previously achievable for System-on-Chip solutions.
If you would like to know more or to discuss embedded vision and systems in general, please contact our solution experts and we’d be happy to help.
Where there is a Camera, there is a Frame GrabberJanuary 31st, 2017
In his latest article on Novus Light, Dave Wilson reports on PCI Express as a potential advancement in camera interface technologies. At the Vision Show 2016, cameras with built-in PCIe interfaces were on display, available with a copper cable interface of up to 7 meters in length or – more costly – fiber link cables running from the camera to a PCIe interface card in the PC. With PCIe Gen3 x8, a stunning nominal bandwidth of 64 Gbps becomes achievable.
Only a few applications will benefit from the high data throughput, and this approach has its price. On the one hand, the cost of the overall system increases due to more expensive camera hardware, specialized cabling and a dedicated PCIe interface card. Given that a frame grabber is essentially an interface card itself, to the user the physical setup is the same. On the other hand, the setup can be considered to have the intelligence of the frame grabber embedded into the camera. This, however, implies that no standardized machine vision camera interface, such as CoaXPress or Camera Link, is available, and the system developer would need to stick to the same camera brand to avoid major re-designs in hardware and software.
Although in the early days of machine vision frame grabbers were complicated to use, interoperated with only a certain range of certified cameras and were quite costly themselves, times have changed for the better. Thanks to enormous advancements in standards like GenICam and CoaXPress, cameras and frame grabbers supporting the same standard are fully compatible and work plug-and-play. CoaXPress, for example, relies on low-cost coax interconnects, which can operate reliably over many dozens or even hundreds of meters. In the upcoming release of the new CoaXPress standard, up to 50 Gbps of nominal bandwidth will become available via four aggregated links.
Consequently, if your application needs extremely high bandwidth beyond 25 Gbps right now, and you can afford the additional costs and dependency on a single camera OEM, PCIe is one of the options you have today. In all other cases, dedicated and well-adopted machine vision interface standards such as Camera Link or CoaXPress are the usual path. When selecting components for these standards, however, we advise ensuring the long-term availability of the components and evaluating the expertise of your vendor of choice by requesting in-depth application consulting.
High-Speed & High-Resolution CMOS Cameras leverage CoaXPressJanuary 24th, 2017
With his latest article in the Vision Systems Design magazine, Andy Wilson gives an introduction to the CoaXPress standard and a comprehensive market overview of cameras and interface boards including achievable bandwidths and frame rates at maximum resolution.
CoaXPress (CXP) is an internationally adopted camera-to-computer interface standard that fulfills most requirements of modern imaging applications. The biggest advantages of CXP are probably its bandwidth and its cabling. The bandwidth: up to 6.25 Gbps over a single standard coax cable, while several cables can be aggregated to provide, for example, 25 Gbps of video data bandwidth over four CXP cables. As a reference, this enables the transmission of 12-megapixel images at 190 frames per second (8 bits per pixel). The cameras can also be powered via the coax data cable. The cabling: relatively low-cost cables can be used over long distances. CXP-1 at 1.25 Gbps allows cable lengths of up to 130 m, while CXP-6 at 6.25 Gbps can still be operated over 40 m of purely passive cabling. This is more than enough for most industrial automation, surveillance and intelligent traffic systems. Thanks to its 20 Mbps upstream bandwidth, CXP also allows real-time control of camera settings.
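The arithmetic behind the quoted example is easy to check. The sketch below treats 25 Gbps as the nominal aggregate link rate of four CXP-6 cables; note that CoaXPress uses 8b/10b line encoding, so the usable payload is somewhat below the nominal figure, which is why the raw video payload needs comfortable headroom:

```python
# Nominal aggregate bandwidth of four CXP-6 links
links, link_rate_gbps = 4, 6.25
nominal_gbps = links * link_rate_gbps          # 25.0 Gbps

# Raw video payload for 12 MP, 8 bits per pixel, 190 frames per second
megapixels, bits_per_pixel, fps = 12, 8, 190
payload_gbps = megapixels * 1e6 * bits_per_pixel * fps / 1e9

print(f"{payload_gbps:.2f} of {nominal_gbps} Gbps nominal")  # prints "18.24 of 25.0 Gbps nominal"
assert payload_gbps < nominal_gbps             # fits within four CXP-6 links
```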
And the story goes on – the new CoaXPress v2 standard will provide 10 Gbps and 12.5 Gbps of bandwidth via a single cable.
As one of the founding members of the CoaXPress standard, Active Silicon is always at the leading edge of its development, offering a range of CoaXPress interface cards with up to four links as well as compact interface kits for the renowned SONY FCB block cameras.
Meet Eileen at the A3 Business ForumJanuary 17th, 2017
Meet Eileen Zell, director of our North American operations, along with other key players in the machine vision, robotics and automation industry at the A3 Business Forum in Orlando, January 18-20, 2017.
Updated version of Active Silicon’s LabVIEW Driver available!January 12th, 2017
The LabVIEW software package is popular with developers in machine vision and imaging in general, allowing, amongst other things, rapid application development and prototyping. Active Silicon’s dedicated LabVIEW driver ensures full compatibility of our interface boards with National Instruments’ LabVIEW software.
With Active Silicon’s LabVIEW driver, the user gains easy access to the functionality of our FireBird acquisition boards within the LabVIEW graphical programming environment. It allows the user to snap and grab images into LabVIEW’s IMAQ environment, where the images can be further processed as required. In addition, advanced programmers can utilize functions within the PHX Library API, as the LabVIEW driver provides an interface into the Active Silicon PHX Software Development Kit.
The LabVIEW driver was recently updated for use with modern high-speed cameras via support for the optional Vision Development Module (VDM). Developing high-speed vision applications has never been easier!
If you have any questions, don’t hesitate to contact our support team.
Merry Christmas and Happy Holidays!December 22nd, 2016
Many thanks to all our customers, partners and suppliers for a successful year. Have a great Christmas with friends and family and a peaceful break. Best wishes from everyone at Active Silicon.
Already a traditional event, the staff from our Headquarters near London got together on “Christmas Jumper” day, December 16th, to celebrate Christmas with a “Secret Santa” followed by a Christmas lunch at the local pub.
We look forward to 2017 being another good year, with further business growth, and to supporting our customers with some exciting new products and technologies.
Another Big Fish About to be Swallowed by a WhaleDecember 19th, 2016
US-based Teledyne Technologies and the British company e2v have reached agreement on a recommended cash offer by Teledyne to e2v shareholders at a 47% share price premium. A significant drop in e2v’s share price, following disappointing half-yearly results announced in early November, created the acquisition opportunity.
In the machine vision market, e2v is known for its high-end image sensors and camera solutions, much like the machine vision division of Teledyne. However, as Teledyne’s President and CEO, Robert Mehrabian, puts it, there is minimal product overlap and in fact e2v is highly complementary to Teledyne. Both companies are strong in space and astronomy imaging, as well as RF devices, but each with a differing product range.
If accepted by shareholders, the deal should be completed in the first half of 2017 with a transaction value expected to be around £627 million.
From High-Speed Imaging to Embedded SystemsDecember 9th, 2016
Did you miss out on visiting the VISION 2016 trade fair in Stuttgart, Germany? No problem – this video by Vision Systems Design magazine gives you the essence.
In the video, our CEO, Colin Pearce, gives an introduction to Active Silicon and an insight into our latest product innovations, followed by a tour of the various live product demonstrations at the booth, the first being the quad-channel CoaXPress interface card for image acquisition from multiple high-speed cameras.
The next demo showcases one of our embedded systems products to acquire and process video streams from up to four USB3 Vision cameras, the USB3 Vision Processing Unit.
The third demo shows our support for NVIDIA’s GPUDirect for Video technology using our frame grabbers and de-bayering of up to 2 billion pixels per second on the GPU with no CPU load.
As embedded vision becomes increasingly relevant to applications across many industries, Active Silicon is offering its high-end Camera Link frame grabbers in embedded form factors as well as standard PC formats. The final demo in the video shows an application requiring precise high-speed triggering with our new 3U CompactPCI Serial Camera Link frame grabber.
And the Winner is…November 24th, 2016
A big thank you to everyone who participated in our Prize Draw at VISION 2016 in Stuttgart, Germany at the beginning of November.
The winner of the iPad Air 2 WiFi 128GB was …
Heng Wei Chang of Delta Electronics, Inc. (www.deltaww.com), Taiwan.
Congratulations from the Active Silicon team!
For total fairness and transparency, we filmed the draw! YouTube: https://youtu.be/2HFEP-rc1uo
Active Silicon Appoints New Partner in JapanNovember 22nd, 2016
Active Silicon Appoints New Partner in Japan for Enhanced Local Sales and Customer Support
Our customers in Japan can now benefit from a local sales and service partner of Active Silicon. Tokyo-based company Forte Solutions Asia Ltd, run by Alex Bird, is now the local point of contact when it comes to leading edge Camera Link and CoaXPress interface cards and other high-end imaging products including our camera interface range for Sony block cameras and Embedded Vision Systems.
You can reach Forte Solutions on
Alex and his team are looking forward to supporting you!
VISION 2016 – A Great Success!November 17th, 2016
With almost 10,000 visitors from 58 countries and more than 400 exhibitors, VISION 2016 in Stuttgart, Germany set new records. The team at Messe Stuttgart once again did a marvellous job of ensuring that this event provides excellent business and networking opportunities for imaging and machine vision companies like ourselves.
At the show there was significant interest in our embedded systems and high-speed acquisition cards, reflecting industry trends. The three very busy show days were concluded with our prize draw – and a new high-spec iPad will shortly be on its way to the lucky winner in Taiwan.
Each day, the business networking didn’t stop when the show ended, but continued into the evening, accompanied by good food, German beer and wine, at various after show events – including the highlight of the week – the “VISION Wonderland” extravaganza – with ice-skating!
New CoaXPress Website is LiveNovember 7th, 2016
CoaXPress – The World’s Leading Interface Standard for High-Speed Imaging
What is CoaXPress? The concept was first demonstrated at VISION 2008, in Stuttgart, Germany. Then at the same show a year later the CoaXPress Consortium, with Active Silicon as a key member, won the Vision Award for technical innovation. Two years later in 2011, CoaXPress became an international standard and is hosted by the Japanese Industrial Imaging Association (JIIA).
The CoaXPress website has just been given a makeover and is available at www.coaxpress.com, now containing useful resource links plus a list of suppliers of CoaXPress products and technology.
Questions? As one of the original developers of the technology, and today playing a leading role in the CoaXPress standards working group, we would be pleased to help you integrate CoaXPress technology into your applications and gain the competitive edge enjoyed by many of our customers. Please contact us.
Take a Closer Look at our Live Demonstrations at the VISION Show!November 4th, 2016
We have put together some great demos to showcase our products and technologies – we have USB3 Vision cameras running with our new Embedded Systems platform; high-performance GPU processing at 2 GPixels/sec; multiple asynchronous cameras with single-card acquisition; and a CompactPCI Linux embedded system using our new 3U cPCI Serial card.
So, come and take a closer look at booth 1H52 and if you can’t make the show, watch our news feeds for video clips of the demos!
Win an iPad at Active Silicon’s BoothNovember 1st, 2016
Yet another great benefit for getting in touch with Active Silicon – as well as free expert advice, there is now also a chance to win an iPad Air 2 WiFi 128GB.
There is just one week to go until the global machine vision industry gathers in Stuttgart, Germany to exhibit its latest innovations. Take the opportunity to experience high-speed and high-resolution imaging taken to new levels at Active Silicon’s booth, 1H52.
Get free insights into the technology trends with Camera Link, CoaXPress and interfaces in general; learn about today’s ground-breaking performance levels of Embedded Vision Systems and what it takes to make these readily available for your technical and commercial advantage.
And – as if that isn’t enough – you can enter our free prize draw for a new Apple iPad Air 2 WiFi 128GB by placing your business card in the Active Silicon Prize Draw Box at booth 1H52. The draw will take place at the end of the show and the winner informed by email.
Come and see us at booth 1H52 from Nov. 8 to 10 at VISION 2016 in Stuttgart, Germany.
See and Experience High-Speed and Embedded ImagingOctober 20th, 2016
See and experience high-speed and embedded imaging at the VISION tradeshow, booth 1H52.
Every two years, the world’s leading manufacturers in machine vision, scientific, medical and other imaging disciplines gather at a single spot: the VISION tradeshow in Stuttgart, Germany.
On Nov. 8 – 10 we are happy to welcome you to this outstanding opportunity to find the right partners in imaging and learn about the latest innovations in this rapidly developing field of technology. Get to know the team at Active Silicon and experience the astounding performance that the latest high-speed camera setups and embedded vision systems can achieve! In our live demos at booth 1H52 we will showcase our frame grabbers, GPU enhanced imaging and our COM Express based embedded Vision Processing Unit.
Do you have a need for fast and detailed image capture with high reliability and customizable to your specific requirements?
Meet us at the show and take the opportunity to obtain in-depth introductions to the technical features and application scenarios of our frame grabbers with CoaXPress, Camera Link, HD-SDI and LVDS interfaces, our Sony block camera interface kits or our finest selection of Embedded Vision Systems.
Progress on the Vision Standards FrontOctober 20th, 2016
More than 80 experts from vision companies around the world met up last week in Liège for the Autumn International Vision Standards Meeting (IVSM).
The GenICam standard, designed to enable plug & play interoperability between the various hardware and software parts of a vision system, celebrated its 10th birthday with a brightly coloured cake! GenICam is now focused on extending its reach to embedded systems, as well as a new proposal to standardize image and data streaming, called GenSP.
For the Camera Link standard, the debate focused on the scope of version 3. It was agreed by vote that GenTL will be mandatory for full GenICam compliance in v3. Active Silicon was one of the companies that strongly supported this requirement.
The CoaXPress roadmap was discussed under the chair of Chris Beynon, Active Silicon’s CTO. Good progress was made on several topics including faster operating speeds for version 2 of the standard, which is scheduled for release next summer.
The week was wrapped up with the indispensable “PlugFest”, where companies work together to ensure interoperability of their products. Several new cameras were successfully tested with Active Silicon’s FireBird CoaXPress and Camera Link frame grabbers.
The next IVSM starts 8 May 2017 in Boston, USA.
Biggest Company Acquisition in Machine VisionOctober 11th, 2016
The biggest company acquisition in the Machine Vision field to date: FLIR buys Point Grey for $253 million in cash.
Industrial and scientific imaging is an international and booming market. FLIR Systems, a world leader in thermal imaging, is about to expand its presence in visible-spectrum cameras, while at the same time benefiting from the interface technology expertise of Point Grey.
On October 3rd, 2016, FLIR Systems announced that it has reached agreement to acquire Point Grey Research, one of the leading machine vision camera manufacturers. The transaction is expected to be completed by the end of 2016.
IVSM Autumn 2016October 7th, 2016
COMING UP: Next International Vision Standards Meeting (IVSM) in Liège, Oct. 10-14
GenICam, Camera Link, CoaXPress, GigE Vision, USB3 Vision – five technical standards which make the lives of machine vision engineers easier every day and accelerate innovation in the global imaging industry. Yet standards themselves need to evolve. Active Silicon actively supports the continuous development of CoaXPress, Camera Link and GenICam for high-speed imaging and easy interoperation of cameras, frame grabbers, cables and software libraries.
Twice a year we meet up with imaging experts from all over the world to drive the standardization process forward. This autumn, the IVSM stops over in Liège, Belgium. Stay tuned to our social media channels to learn about the outcomes.
For more information on Global Vision Standards see our Machine Vision Standards page.
Camera Link Frame Grabber in 3U CompactPCI Form FactorOctober 5th, 2016
Active Silicon’s new FireBird FBD-1XCLD-3CPCIS-2PE4 is a fully-fledged Camera Link frame grabber in the 3U cPCI Serial format. On a PCB size of just 160 mm x 100 mm this acquisition card captures image data from Camera Link cameras – Base, Dual-Base, Medium, Full or Deca (80 bit, 85 MHz) configurations. Learn more about the advanced features of this compact board for high reliability systems in embedded cPCI Serial setups here: FireBird CL 3U cPCI Serial
Changing the Face of Machine VisionSeptember 27th, 2016
New article by Greg Blackman about the rise of embedded vision and its ubiquitous deployment for machine vision in industrial automation and our everyday lives.
Read here about the opportunities and challenges: IMVE article.
With embedded technology becoming more affordable, the advantages of a tailor-made system that is compact, low-power and highly reliable are apparent. Interested in embedded systems and looking for a competent and reliable partner…one that can offer a short time to market as well as long-term availability?
See some examples of embedded systems we have designed and now manufacture: Embedded Systems by Active Silicon.
The New Issue of PRODUCT FOCUS is OutSeptember 19th, 2016
Tough Mudder 2016August 30th, 2016
What did you do on August 20, 2016?
Saturday, rain, strong winds… the perfect day to face a real challenge!
Brian, JP, Chuck, Alex and Andrew, all engineers at Active Silicon, completed the Tough Mudder challenge course in Cirencester Park (https://toughmudder.co.uk/events/2016-south-west). Over the 12 mile course they had to conquer 25 obstacles requiring well-coordinated teamwork, swim and dive through freezing water, crawl under electric wires, climb, sprint, carry each other – just to name a few of the treats! After 2.5 grueling hours in mud and rain the five Tough Mudders romped home in good spirits looking forward to the next one!
Fast and Reliable Breast Cancer DiagnosisAugust 24th, 2016
Fast and Reliable Breast Cancer Diagnosis – with High-speed and High-resolution Imaging
The latest Digital Slide Scanners from one of the world’s leading scientific camera manufacturers deliver rapid and automatic super-high-resolution scans of laboratory slides. The fastest models scan up to 100 slides per hour at a scanning resolution of 0.46 μm at 20x magnification. The Slide Scanner adjusts the objective lens to focus each image in real time and even allows 3D scans of thicker samples via an image-stacking feature.
By analyzing for the presence and frequency of certain types of tumour marker proteins in a histological sample, it is possible to detect cancer in its early stages. For example, HER2, a human epidermal growth factor receptor, has been shown to play an important role in the development and progression of certain aggressive types of breast cancer. Via ImmunoHistoChemistry (IHC) the HER2 receptor can be visualized as darkly stained areas in the cell walls of cancerous tissue as shown in the accompanying microscope image. The information gained with the high-resolution images from the Digital Slide Scanner provides critical information for treatment planning.
All image acquisition in the Digital Slide Scanners mentioned above is performed using fast and reliable Camera Link frame grabbers from Active Silicon. We are proud to contribute to this life-saving technology.
High Speed and High Resolution Imaging with Advanced Frame GrabbersAugust 17th, 2016
New article by Andrew Wilson about latest trends in frame grabbers (page 33):
Frame grabbers offer extended interface capabilities
Faster cameras, high-speed and emerging interface standards were on show at this year’s Vision Show in Boston.
Frame grabbers are getting smaller and faster and, in the case of Active Silicon’s frame grabbers, more affordable for a wider range of applications.
Looking for GenICam compliant Camera Link and CoaXPress frame grabbers? Have a look at our frame grabber selection.
The New Low Profile Camera Link Frame Grabber from Active SiliconJune 30th, 2016
Sleek and slender, the low profile FireBird 1xCLD-2PE4L is designed for use in small embedded PC enclosures and rackmount cases where full height PC cards are not suitable. It supports all configurations from Base, Dual-Base, Medium, Full and Deca (80-bit), at clock rates up to 85 MHz.
Like all FireBird grabbers the FBD-1xCLD-2PE4L uses Active Silicon’s proprietary “ActiveDMA” technology that provides the very fastest image acquisition without CPU intervention. Full compliance with the GenICam standard allows smooth integration with all major imaging software libraries and a broad set of cameras. In addition, the proven PHX SDK is available for Windows, Linux, QNX and Mac OS X.
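For a feel of what those Camera Link configurations mean in practice, here is a rough back-of-envelope sketch of the peak data rates at the card’s maximum 85 MHz pixel clock. The bits-per-clock figures assume standard 8-bit taps; this is illustrative arithmetic, not vendor specification.

```python
# Peak Camera Link payload rates at an 85 MHz pixel clock
# (illustrative only; assumes 8-bit taps per the standard configurations).
CLOCK_MHZ = 85  # maximum pixel clock supported by the card

CONFIG_BITS = {
    "Base": 24,    # 3 taps x 8 bits
    "Medium": 48,  # 6 taps x 8 bits
    "Full": 64,    # 8 taps x 8 bits
    "Deca": 80,    # 10 taps x 8 bits
}

def peak_mb_per_s(bits_per_clock, clock_mhz=CLOCK_MHZ):
    """Peak payload rate in MB/s: bits per clock cycle times clock rate."""
    return bits_per_clock / 8 * clock_mhz

for name, bits in CONFIG_BITS.items():
    print(f"{name}: {peak_mb_per_s(bits):.0f} MB/s")
```

At the top end, the 80-bit Deca configuration works out to 850 MB/s, which is what makes a zero-CPU DMA path like ActiveDMA worthwhile.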
Contact us for more information!
SONY FCB Camera Turned into a Broadcast-Quality HD-SDI CameraJune 29th, 2016
This plug’n’play kit turns every SONY FCB EV7100 into a broadcast-quality HD-SDI camera, e.g. for the monitoring of surgical procedures.
The Active FCB-EV-HD-SDI is a complete and cost-effective interface solution to provide real-time HD-SDI video from the SONY EV series of block cameras. Take full advantage of the high-definition digital video provided by the FCB-EV block cameras with the help of this kit! Among other applications, the SONY FCB EV7100, combined with the Active FCB-EV-HD-SDI module, has recently been successfully integrated into a surgical lighting system for the monitoring, recording and teaching of surgical procedures.
What applications of a SONY FCB EV-Series camera with our HD-SDI module come to your mind?
Please let us know and contact us!
Active Silicon’s PRODUCT FOCUS NewsletterJune 22nd, 2016
Active Silicon recently started to share selected product news with its customers, partners and subscribers via its PRODUCT FOCUS newsletter.
Our latest newsletter informed our subscribers about our Active Silicon VisionPro Driver which allows our frame grabbers to be used with one of the leading machine vision software libraries – Cognex VisionPro and the Cognex Designer software suite.
Read this newsletter and subscribe for future news here.
Food for Thought at Recent EMVA ConferenceJune 15th, 2016
Inspiring food for thought at the recent EMVA conference in Edinburgh: Dr. Albert Theuwissen introduced ways to utilize the abundant number of pixels in the latest CMOS sensors, as well as better trade-offs between sensitivity and colour fidelity; Dr. Claus Risager of Blue Ocean Robotics presented a successful franchise business model to keep up with the enormous demand and speed of innovation in robotics; and professional actor Michael Rickwood gave a closing keynote on how to banish viewers’ boredom in public presentations.
Alongside the superb dinner at The Hub, Edinburgh’s iconic gothic landmark, live Scottish music and the opportunity to join in and learn some famous Scottish dances had the attendees cheering and chatting in a laid-back atmosphere. The next EMVA Business Conference will be held in Prague, Czech Republic, on June 22, 2017.
Eileen is a Certified Vision ProfessionalJune 7th, 2016
The director of our North American operations, Eileen Zell, has successfully passed the AIA’s exam to become a Certified Vision Professional.
At Active Silicon we value both academic knowledge and hands-on practical experience in Machine Vision for all our employees, which in turn allows them to provide the best possible support to our customers prior to and after purchase of our frame grabbers, embedded vision systems or custom imaging solutions.
14th EMVA Business ConferenceJune 2nd, 2016
Just one more week to go until the 14th EMVA Business Conference, 9 – 11 June 2016 in Edinburgh
EMVA announced that over 100 participants have registered again, and high-level speakers will give insightful keynote speeches on subjects ranging from economics and management to machine vision and related technologies.
New CCD Image Sensor for Flat Panel InspectionMay 24th, 2016
CCD image quality with 47MP @ 7fps for flat panel inspection and wherever high image resolution is key
The KAI47051 CCD image sensor with 47MP resolution and up to 7fps at 12-bit pixel depth is the new flagship of ON Semiconductor. Where previously multiple low-resolution cameras had to be arranged in a matrix to create high-resolution images, e.g. for the inspection of display panels, solar panels, organic tissue or other objects, a single camera utilizing the KAI47051 does the job. Several camera OEMs have already designed this sensor into cameras with Camera Link and CoaXPress interfaces to cope with the data rate.
Active Silicon’s FireBird frame grabbers simplify system design with their full support of the GenICam standard. By direct routing of the incoming pixel stream to the GPU using NVIDIA’s GPUDirect, Active Silicon’s SDK enables real-time processing.
Do you have any questions on how to build up high-resolution imaging systems with single or multiple cameras? Just contact our technical experts now!
Welcome Keith to Our IT TeamMay 13th, 2016
A big welcome to Keith Sickelmore to our IT team!
From customer relationship management to production planning in a highly technical environment, an efficient, reliable and secure IT system is a must – and allows us to focus on what we do best: products and solutions which solve our customers’ needs. With Keith aboard we are setting the stage for further enhancement of our internal infrastructure.
AIA Vision Show 2016 – BostonApril 28th, 2016
Advanced Imaging products and Embedded Systems, Boston, May 3-5 2016
Active Silicon invites you to THE Vision Show, Hynes Convention Center, Boston MA, May 3-5. Get inspired by the latest imaging innovations at our booth #922, and receive first-hand advice on how to build your next Machine Vision solution in manufacturing and logistics automation, quality inspection, security, defense, bio-science, motion analysis and so many other application areas.
Engineers of high-speed imaging systems will be interested to hear Chris Beynon’s presentation on the update of the CoaXPress camera interface standard at the Vision Standards User Group Meeting on May 4th, 2 pm.
Vienna Eurovision Song Contest Powered by Active SiliconApril 20th, 2016
Over 1 Billion TV Viewers Enjoyed the Vienna Eurovision Song Contest Powered by Active Silicon.
With a TV audience of over 1 billion global viewers, the Eurovision Song Contest (ESC) is one of the biggest TV events of the year. In 2016 the ESC finals show takes place in Stockholm, Sweden, May 14.
Besides the impressive stage equipment and international live broadcasting, the voting from 25 countries is one of the great technical challenges. The best image quality is a must, minimal latency is paramount, and failure is absolutely not an option.
At the ESC 2015 in Vienna, an Active Silicon Phoenix HD-SDI frame grabber visualized the voting results in HD resolution. In this setup, as in many other professional broadcasting applications, the Active Silicon cards give absolute reliability, shortest latency possible and the highest image quality.
HD-SDI allows data transfer rates of up to 1.484 Gbps and direct on-screen visualization based on low-latency signal pass-through. The wide feature set for broadcasting supports the parallel processing of multiple video streams, and several I/Os enable easy communication with peripherals.
The Active Silicon SDK is appreciated by developers for its ease of integration. In one-off projects like the ESC this guarantees an effective system implementation on time and on budget.
Active Silicon at SPIE Defence + Commercial SensingApril 15th, 2016
40 years of SPIE DSS (now SPIE DCS): attend the leading global sensing and imaging event for defense and commercial applications.
Active Silicon will be exhibiting for the 13th year running at the SPIE DCS in Baltimore, USA, April 17 to 21, booth 743. We will be showcasing our embedded systems architecture based around the COM Express standard, as well as our frame grabber range designed for industry, medical and defense applications. All our products are designed for long product life, both in terms of supply and support in demanding applications.
Host Controller Card to Reliably Operate Four USB 3.0 Cameras SimultaneouslyApril 5th, 2016
A system setup with two to four USB 3.0 cameras usually requires dedicated host controller hardware to seamlessly transfer data into the PC’s system memory.
Active Silicon’s new FireBird Quad USB 3.0 controller, in PCIe/104 format, supports four USB 3.0 ports arranged as two ports per host controller, with each controller having its own PCI Express x1 Gen2 interface to give a combined total data throughput of 10 Gbps. The USB 3.0 host controllers used are the proven Renesas μPD720202.
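The quoted 10 Gbps is simply the sum of the two raw PCIe lane rates; a short sketch of the arithmetic, with the usual 8b/10b line-coding overhead of PCIe Gen2 factored in as an assumption, looks like this:

```python
# Throughput arithmetic for the two-controller layout described above.
# Figures are illustrative: 5 Gbps is the raw PCIe Gen2 x1 lane rate,
# and the 8b/10b factor is the standard Gen2 line-coding overhead.
PCIE_GEN2_RAW_GBPS = 5.0
CONTROLLERS = 2

raw_total = CONTROLLERS * PCIE_GEN2_RAW_GBPS       # headline figure
effective_total = raw_total * 8 / 10               # after 8b/10b coding

print(f"raw: {raw_total} Gbps, effective: {effective_total} Gbps")
```

So each pair of USB 3.0 ports shares one 5 Gbps lane, and the combined raw figure matches the 10 Gbps quoted above.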
Would you like to learn more about how to set up imaging systems with multiple USB 3.0 cameras? Please contact us!
Quality is Paramount – Active Silicon Passed ISO 9001 AuditMarch 31st, 2016
Recognition of Further Improved Procedures at Active Silicon by ISO 9001 Auditors
As a trusted supplier to the industrial, medical and aerospace technology markets, the quality of products and solutions is paramount at Active Silicon.
Recently we successfully passed our annual ISO 9001 audit by the accredited certification body ISOQAR. We have further enhanced internal processes since the last audit a year ago, and this was well received by the auditors.
The best way to experience the benefit of a quality-driven supplier is to evaluate its products and services.
Results of the IVSM in Kyoto – Spring 2016March 23rd, 2016
Results of the last International Vision Standards Meeting (IVSM) in Kyoto
Experts from leading companies in the global imaging industry came together for the latest IVSM in March in Kyoto, Japan. They worked on enhancements to the primary machine vision interface standards, from GenICam through CoaXPress and Camera Link.
In the CoaXPress meetings, chaired by Active Silicon CTO Chris Beynon, excellent progress was made on the outstanding issues for the next main release of the standard, version 2.0. This version will include higher speeds of 10 and 12.5 Gbps per coax link as well as additional features.
Further work also took place on Camera Link with the key points of version 3.0 discussed, including a lively debate on making GenICam GenTL mandatory – a requirement encouraged by Active Silicon.
The GenICam meeting was in a more reflective mode, with GenICam 3.0 now released and the 10th anniversary of the standard this year. The meeting worked on “blue-sky” thinking of what should be on the wish-list for version 4.0.
At the Plugfest, Active Silicon verified the operation of their frame grabbers with new CoaXPress cameras from Adimec, CIS and JAI.
How to De-Bayer 2 Billion Color Pixels per SecondMarch 16th, 2016
At last week’s Korean Vision Show, as part of Automation World 2016, our FireBird Quad CoaXPress frame grabber demonstrated its acquisition performance with a Mikrotron 25 MP color camera providing 80 fps. The processing of this vast amount of image data requires the highly parallel processing capabilities of GPUs. Even just the write and read operations on the system RAM before delivering the data to the RAM of a graphics card would overload modern CPUs and bus architectures.
Featuring NVIDIA’s GPUDirect for Video, the FireBird frame grabber can effectively bypass the system RAM and deliver the incredible amount of 2 billion Bayer encoded pixels per second directly to the GPU for real-time de-Bayering, scaling and displaying on the screen without any load on the CPU.
Active Silicon’s YouTube ChannelMarch 8th, 2016
Have you checked out Active Silicon’s YouTube channel?
We have videos about the company, technologies and applications. And please subscribe if you don’t want to miss our upcoming video clips!
Our New Website is Launched!February 29th, 2016
We are excited to announce the launch of our new website!
With a bright new feel and uncluttered design, we wanted to make our new site easy to navigate, user friendly and be able to provide information quickly. Content includes information on Active Silicon, our products and services as well as the latest news and events.
Please check it out at
on your desktop PC as well as on your smartphone or tablet and tell us what you think.
Send any feedback to email@example.com
IVSM Spring 2016 in KyotoFebruary 25th, 2016
COMING UP: International Vision Standards Meeting (IVSM) in Kyoto, March 7-11.
The Japan Industrial Imaging Association (JIIA) is hosting the next meeting of leading experts from the industrial vision industry around the world. With the support of the global G3 group (AIA, CMVU, EMVA, JIIA, VDMA), the twice-yearly meeting is held to enhance the standardization of camera interface protocols and the software standards used.
Active Silicon continues to strongly support this key effort to further enhance vision systems. As technical chair of the CoaXPress committee, our CTO Chris Beynon will work on the new 2.0 version of the CoaXPress standard, which will offer doubled bandwidth of up to 12.5 Gbps per lane. Active Silicon also contributes to the GenICam and Camera Link standards.
In addition, Active Silicon will provide its GenICam compliant Camera Link and CoaXPress frame grabbers to the plug-fest, where the interoperability of cameras, cables and host interfaces is tested.
Meet Us at Stemmer Tech Forum in Silverstone and StockholmFebruary 18th, 2016
The UK and Nordic Machine Vision Technology Forums feature up to 50 lectures with topical and relevant content, an exhibition of leading machine vision suppliers, as well as many opportunities for networking.
Active Silicon will be present at the exhibition and our team will be happy to discuss your current challenges in Machine Vision. We will be showcasing innovations in CoaXPress and Camera Link frame grabbers as well as embedded vision systems. https://www.activesilicon.com/products/
Please contact us if you wish to pre-arrange a 1-to-1 meeting at our exhibition booth.
Mikrotron Acquired by AmbientaFebruary 10th, 2016
Mikrotron, the German high-speed camera manufacturer, has been acquired by Ambienta, a leading European private equity fund headquartered in Milan. Mikrotron, together with Tattile, a vision systems provider purchased by Ambienta in 2012, will form the core of Lakeside Technologies, Ambienta’s consolidation project aimed at building a major European player in the machine vision sector with global reach.
Active Silicon’s CoaXPress and Camera Link frame grabbers are compatible with Mikrotron cameras, and the combination of camera and frame grabber is used in many high-speed vision systems.
Two Experienced Test Engineers Joined Our TeamFebruary 4th, 2016
And Active Silicon continues to grow… we are pleased to announce that two experienced Test Engineers have recently joined our Production Department – a big welcome to Luminita Pintilie and Brian Pereira! With their commitment to quality and attention to detail they foster the core values of our company.
Lumi and Brian will be involved with the test specification phase of products moving from R&D into production as well as the routine testing of products in full production such as our frame grabbers and embedded PCs, etc.
CoaXPress Goes DragchainsJanuary 27th, 2016
CoaXPress is a widely adopted global standard for industrial high-speed and high-resolution cameras, offering a bandwidth of up to 6 Gbps per cable with the option to aggregate multiple cables. Among many other advanced technical features, this interface uses coaxial cables and, as well as achieving very high data speeds, provides simultaneous power and uplink communication.
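To illustrate the aggregation option, here is a rough sketch of the bandwidths involved, assuming the CXP-6 line rate of 6.25 Gbps per coax link and the standard 8b/10b line coding (the ~6 Gbps usable figure quoted above is the raw rate rounded down):

```python
# Aggregate bandwidth sketch for CoaXPress link aggregation.
# Assumes the CXP-6 line rate (6.25 Gbps per link) with 8b/10b coding.
CXP6_LINE_RATE_GBPS = 6.25

def aggregate_gbps(links, line_rate=CXP6_LINE_RATE_GBPS):
    """Raw line rate summed over the aggregated links."""
    return links * line_rate

def payload_gbps(links, line_rate=CXP6_LINE_RATE_GBPS):
    """Usable rate after 8b/10b line-coding overhead."""
    return links * line_rate * 8 / 10

print(aggregate_gbps(4), "Gbps raw over four links")
print(payload_gbps(4), "Gbps payload over four links")
```

A four-link CXP-6 camera therefore reaches a raw 25 Gbps, which is the upper figure typically quoted for high-speed CoaXPress systems.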
Now, for the first time, coaxial cables designed for use in dragchains have been successfully evaluated and qualified for CoaXPress by Active Silicon’s engineers. These abrasion-resistant and oil-resistant cables are specified to an excellent bend radius of 60-125 mm and a test period of a minimum of 2-4 million double strokes.
Dragchains are used in many industrial applications, such as “pick and place” machines in electronics assembly, or for the inspection of manufactured parts using moving cameras.
Are you involved in machine vision using moving cameras? Please contact our team.
A Big Welcome to Our New Application Support SpecialistJanuary 19th, 2016
A big welcome to Graham Gibbons as a lead member of our support team – Graham joins Active Silicon as our Application Support Specialist.
Graham brings 20 years of experience as a field support engineer in imaging and high-tech industries, and will be focused on maintaining and improving our rapid and effective support strategy. Responsiveness combined with technical excellence is the support team’s core value. Feel free to contact Graham and introduce yourself!
Webcast on Camera InterfacesJanuary 7th, 2016
CoaXPress / GigE / USB3 – free webcast about the benefits and applications of three major camera interfaces in Machine Vision – by Imaging and Machine Vision Europe (IMVE).
Max Larin (Ximea), John Phillips (Pleora) and Chris Beynon (Active Silicon), also the co-author of the CoaXPress standard, present key features, application scenarios and an insight into upcoming performance enhancements of the global interface standards for industrial cameras.
While CoaXPress far exceeds the maximum bandwidth and cable length of Camera Link, it remains as easy to use as the consumer interfaces USB 3.0 and GigE. This is particularly useful for OEMs and system integrators whose software platforms already utilize the GenICam standard. CoaXPress users additionally benefit from real-time control and data transmission between camera and frame grabber. With a wide variety of camera, frame grabber and cable suppliers supporting the standard, customers can choose the right solution from low-end/low-cost equipment up to high-speed cameras delivering as much as 25 Gbps, while benefiting from high-flex cables – particularly useful in robotic/motion applications. The roadmap to version 2.0 of the standard promises even greater speeds and many additional benefits.
The free webcast can be viewed here.
Would you like to discuss with our experts in person? Please contact us any time.
Active Silicon Wishes a Happy Holiday SeasonDecember 21st, 2015
With many new faces at Active Silicon, the Christmas office party gets bigger and bigger! To celebrate another successful year we all met at an exclusive hotel, enjoyed a festive dinner and a great night out.
From Active Silicon, we would like to wish you a happy and peaceful Holiday Season and New Year! We would like to thank all our customers, partners and suppliers for their business, trust and support. With this in mind we are looking forward to an exciting New Year with our growing team and the enthusiasm to deliver innovative products and excellent support.
Stemmer Technology Forum in the Netherlands and FranceDecember 16th, 2015
Over 200 visitors attended the recent Technology Forums organized by Stemmer Imaging in Eindhoven and Paris.
We enjoyed the opportunity to get involved, demonstrating and discussing new technologies and ideas in high-end imaging with OEMs, camera manufacturers and system integrators.
As a result of excellent organization, there was plenty of time to engage with experts, end-users and representatives of the many companies present. No doubt everyone left having gained some knowledge and feeling inspired about their next machine vision project.
The Stemmer Technology Forums roadshow moves next to Silverstone in the UK on March 3rd and then on to Solna near Stockholm in Sweden on March 8th 2016.
We are looking forward to seeing you there – please get in touch with us today to reserve an appointment in advance!
AMS Announced Purchase of CMOSISDecember 15th, 2015
CMOSIS and AMS – a perfect fit for further innovation in high-end imaging?
AMS, the Austria-based provider of high-performance sensors and analog solutions, has recently announced the purchase of CMOSIS, the pioneer of high-speed and high-resolution image sensors.
At a purchase price of €220m, AMS expects additional annual sales of €60m as well as synergies between CMOSIS’ intellectual property in image sensor design and AMS’ technological competencies and manufacturing capabilities.
Today, CMOSIS uses third-party service providers to manufacture their image sensors. By early 2018 AMS plans to take over the manufacturing of image sensors in their own custom-built production plant in upstate New York.
As an expert in high-end imaging components and solutions, Active Silicon sees opportunities for cost reduction as well as fast and major advances in image sensor technology at CMOSIS with the support of AMS. We will be happy to provide the appropriate backend support for future sensors with even higher frame rates and resolutions.
Read more about the acquisition of CMOSIS and its subsidiary AWAIBA by AMS here.
Sony FCB H11 and Its Successor the FCB EV7100December 3rd, 2015
Sony’s HD block camera, the FCB H11, is famous for its superior image quality thanks to Sony’s Exmor CMOS sensor technology and a high-quality zoom lens.
The successor to the FCB H11 is the FCB EV7100, which uses the IMX136 image sensor (1/2.8” format, 2.4 MP) featuring very high sensitivity and dynamic range. The image quality of the EV7100 and multiple special features, such as moiré reduction, spotlight and sunlight compensation, de-fogging, low delay, and a wide dynamic range mode offering 130 dB, make it ideal for any outdoor, low-light or even medical monitoring application.
Our Active FCB interface modules make the integration of a broad range of FCB cameras an easy process. The interface boards have a typical footprint of just 46 x 42 mm and can simply be mounted at the rear of the camera with the supplied mounting kit. Their broadcast-quality HD-SDI output can be integrated seamlessly into various system architectures, and full access to the video and lens control settings is provided.
Even though Sony discontinued the FCB H11, thousands of installations use this camera worldwide. Customers of Active Silicon can rely on the long-term availability of interface boards for the H11, for its successor the EV7100 and for various other FCB cameras.
Would you like to save time and resources when integrating Sony FCB block cameras? Please contact us right here – our team of technical experts is looking forward to helping.
IT Systems Engineer VacancyDecember 1st, 2015
Active Silicon, a specialist technology company in the global imaging market, is looking to recruit an IT Systems Engineer to maintain, manage and deliver reliable IT solutions to our team of highly qualified engineers, as well as to all other business areas within the company.
The ideal candidate will be highly competent and confident in their ability to solve problems, as well as able to grasp the big picture and manage many different tasks in parallel.
Further details can be found here.
Article on CoaXPress Frame GrabbersNovember 26th, 2015
Since its release in 2009, CoaXPress has become the dominant standard in high-speed and high-resolution imaging. This is not surprising when glancing at the specification: a video downlink at 6.25 Gbps, the option to aggregate several links to multiply this bandwidth, data transmission, camera control and power supply over a cost-effective coax cable of up to 100 meters in length, and great interoperability of all components thanks to GenICam.
In his latest article, Dave Wilson from Novus Light Technologies Today explains the state of the art of CoaXPress frame grabbers. He concludes that, because of the high level of standardization and the ease of use, frame grabbers have become commodity items.
Active Silicon is proud to be one of the founding members of CoaXPress and is driving its further development as a leading member of the standardization committee. As one of the first suppliers of CXP frame grabbers, our products have proven their performance, reliability and ease of integration over more than five years in industrial, medical and scientific applications. While standardized hardware components may seem quickly replaceable, Active Silicon ensures full customer satisfaction through individual advice from technical application experts, short lead times and a commitment to the long-term availability of our products.
Link to our range of CoaXPress frame grabbers.
Article on High-Speed ImagingNovember 10th, 2015
High-speed imaging used to be the privilege of well-funded research departments and institutions. Greg Blackman of IMVE explains how the latest advances in CMOS sensor technology and industrial camera interfaces enable systems based on off-the-shelf components that deliver, for example, 500 frames per second at 4-megapixel resolution. These new technologies, their relatively low price point and their ease of use, thanks to international standards like CoaXPress, open up new and interesting industrial applications with significant business potential.
Read the full article here.
Are you in high-speed imaging already or do you feel inspired by the new opportunities?
We are happy to support you in the development of your system with our wealth of experience, latest know-how and the right hardware solutions at hand.
New Camera Link Frame GrabberNovember 3rd, 2015
Do you have a requirement for high-speed imaging? Or multi-camera setups with tough real‑time requirements?
Active Silicon, with its broad range of CoaXPress and Camera Link frame grabbers, is the universal partner for virtually any demanding machine vision application.
The latest addition to our series is the high-performance, yet cost-effective Camera Link frame grabber, the FireBird 1xCLD-2PE4. It supports all configurations from Base and Dual-Base to 80-bit over 10 taps (Deca). In addition to its two Mini CL (HDR/SDR) connectors with PoCL, it offers a comprehensive D‑sub connector with opto-isolated TTL and RS-422 input and output control lines mounted on the same end panel.
Like all FireBird grabbers, the FDB-1xCLD-2PE4 integrates smoothly with all major imaging software libraries and a broad set of cameras through full compliance with the GenICam standard. In addition, the proven PHX SDK is available for Windows, Linux, QNX and Mac OS X.
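To put the supported Camera Link configurations in perspective, the peak throughput of each mode follows directly from its bits-per-clock width and the 85 MHz maximum pixel clock defined by the Camera Link specification. The short sketch below is illustrative only; the configuration names and bit widths come from the standard, while the helper function is our own.

```python
# Illustrative: nominal peak throughput of common Camera Link
# configurations, computed from bits transferred per clock cycle
# at the 85 MHz maximum pixel clock defined by the standard.
CLOCK_MHZ = 85  # maximum Camera Link pixel clock in MHz

configs = {
    "Base": 24,           # 3 taps x 8 bit per clock
    "Medium": 48,
    "Full": 64,
    "Deca (80-bit)": 80,  # 10 taps x 8 bit per clock
}

def throughput_mb_s(bits_per_clock, clock_mhz=CLOCK_MHZ):
    """Peak payload rate in MB/s (1 MB = 1e6 bytes)."""
    return bits_per_clock * clock_mhz * 1e6 / 8 / 1e6

for name, bits in configs.items():
    print(f"{name:14s} {throughput_mb_s(bits):6.0f} MB/s")
# Base works out to 255 MB/s and Deca to 850 MB/s.
```

This is why the step from Base to Deca more than triples the usable bandwidth on the same cabling concept.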
Our technical experts are happy to support you in the design of the right solution for your imaging task. Contact us right away…
Global Interface Standards Discussed at IVSM ChicagoOctober 29th, 2015
Experts from leading companies in the global imaging industry came together for the latest semi-annual International Vision Standards Meeting (IVSM) in mid-October in Chicago. They discussed the new and extended versions of the primary machine vision interface standards, from GenICam through CoaXPress and Camera Link.
The specification of the upcoming CoaXPress 2.0 standard has been detailed in key aspects. Semiconductor manufacturers said they will bring out the first chips supporting the increased bandwidth of 10 and 12.5 Gigabit per second towards the end of 2016.
Other important news from the Chicago meeting is that Camera Link, one of the older machine vision standards, will be actively enhanced. Among various improvements, new standard specifications will likely make the support of GenICam mandatory.
Another essential event at each IVSM is the “plug-fest”, where manufacturers of cameras, cables, frame grabbers and imaging software test the interoperability of components based on the different standards. The plug-fest once again confirmed the seamless interoperation and leading performance of our frame grabbers. The GenICam support of the Camera Link grabbers was especially well received. Frame grabbers from Active Silicon were successfully tested with current production cameras as well as prototypes from AlliedVision, Basler, Baumer, e2v, JAI and others.
Technology Forum – Start in MunichOctober 21st, 2015
The European Vision Technology Forum tour of Stemmer Imaging starts tomorrow near Munich. At the Active Silicon booth, we are showing our latest frame grabber and camera interface innovations.
Our experts look forward to advising system developers requiring, for example, high speed, long-term availability, real-time performance or ruggedized design. Feel free to contact us in advance to arrange an appointment.
Autumn IVSM 2015 in ChicagoOctober 8th, 2015
After a successful International Vision Standards Meeting (IVSM) in Spring 2015 hosted by Active Silicon in London, the meeting moves on to Chicago, IL. It takes place from Oct. 12 to 16, this time hosted by Components Express.
Chris Beynon, technical chair of the CoaXPress standardization committee, is looking forward to the finalization of version 1.2 of the CoaXPress standard. The committee is also working on version 2.0, which will double the data rate from 6.25 Gbps to 12.5 Gbps per cable; as in previous versions, multiple links can be operated in parallel.
Furthermore, Active Silicon will contribute its GenICam compliant Camera Link and CoaXPress frame grabbers to the plug-fest, where the interoperability of cameras, cables and host interfaces is tested.
How to Benefit From the Latest CMOS Global Shutter SensorsOctober 6th, 2015
After the success of its first CMOS global shutter sensor, the IMX174, Sony has released the next two major models of its Pregius family: the IMX250 (5 MP, 100 fps) and the IMX252 (3.2 MP, 120 fps). With the same pixel size and format, the IMX250 is the ideal replacement for its CCD predecessor, the ICX625, which provided just 15 fps.
ON Semiconductor has also released its new Python 5000 (5.3 MP, 100 fps) and Python 2000 (2.3 MP, 230 fps) image sensors. These sensors deliver a video stream of up to 6.3 Gbps at 12-bit pixel depth.
None of the IT/consumer-based camera interfaces such as GigE or USB 3.0 can cope with this amount of data. System engineers need to use interfaces such as CoaXPress (CXP) or Camera Link (CL), which have been specifically developed for machine vision. With scalable speeds from 600 MBytes/s per link, CoaXPress offers plenty of bandwidth to support the resolution and frame rates of these latest sensors and future generations.
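The arithmetic behind this claim is easy to check. The following back-of-the-envelope sketch (illustrative, our own helper function) computes the raw output of a Python 2000-class sensor from the figures above and compares it with nominal interface line rates, before any protocol overhead:

```python
# Illustrative: raw data rate of the Python 2000 sensor
# (2.3 MP, 230 fps, 12-bit pixels) versus common interface bandwidths.
def data_rate_gbps(megapixels, fps, bits_per_pixel):
    """Raw sensor output in Gbps (1 Gbps = 1e9 bit/s)."""
    return megapixels * 1e6 * fps * bits_per_pixel / 1e9

rate = data_rate_gbps(2.3, 230, 12)
print(f"Python 2000: {rate:.2f} Gbps")  # ~6.35 Gbps

# Nominal line rates for comparison (before protocol overhead):
interfaces = {"GigE": 1.0, "USB 3.0": 5.0,
              "CXP-6 (1 link)": 6.25, "CXP-6 (4 links)": 25.0}
for name, gbps in interfaces.items():
    print(f"{name:16s} {'OK' if gbps >= rate else 'too slow'}")
```

Note that even a single CXP-6 link falls just short of the raw 6.35 Gbps once coding overhead is considered, which is exactly why CoaXPress allows several links to be aggregated.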
Active Silicon, a pioneer in machine vision since 1988 and one of the founding members of the CoaXPress Consortium (which produced the CoaXPress standard), offers a wide range of frame grabbers for Camera Link and CoaXPress. Please contact our technical team to find out what we can do for you…
Frame Grabber Market OverviewOctober 2nd, 2015
Frame grabbers are a key requirement in the growing imaging market. The latest special issue of the German magazine inVISION on industrial cameras and interface technologies offers a market overview on CoaXPress frame grabbers (page 70) and Camera Link frame grabbers (page 76).
Applications with high resolution, high frame rates and/or real-time constraints become possible only with the right frame grabber technology.
Find out more in this issue (in German).
Solutions in Vision – VSD Readers SurveySeptember 23rd, 2015
What’s one of the biggest concerns in the design of machine vision systems?
The magazine Vision Systems Design (VSD) has surveyed over 600 industry system integrators and the answer is fairly clear: Product Obsolescence.
Get the full article here including the statement by our CEO, Colin Pearce, who describes how Active Silicon approaches the challenge of obsolescence to enable product lifetimes of 20 years or more for medical and industrial applications.
The Dubai Overpayment ScamSeptember 10th, 2015
The scammers’ tricks have been known for more than five years now, presumably with some success, since this particular scam continues. Just recently, Active Silicon was targeted by one of their less convincing attempts.
Following this, we would like to raise awareness of this particular fraud tactic amongst our friends, customers and suppliers in our industry and beyond. It is known as the Dubai Overpayment Scam.
When you receive a good-sized order after a rather brief set of email exchanges, it is best to take the view that “if it sounds too good to be true, it probably is” – especially when the customer…
a) claims to work for a Dubai company, yet he is using a personal type email address.
b) does limited bargaining and doesn’t seem that interested in technical details of the product.
c) places an order and asks for your bank details in order to pay.
d) offers to arrange the shipment himself.
If you accept this order, you are likely to be overpaid by a large amount via a fraudulent cheque deposited by their local agent.
The customer will then ask you to refund the difference, hoping that you may be fooled by seeing uncleared funds in your bank account…
Fortunately for us, after a minute or two of excitement, we declined the order and never heard back from them.
Welcome to New Software EngineerSeptember 8th, 2015
We are pleased to welcome new employee Alex Fagrell to the software application development team at Active Silicon’s UK headquarters.
Alex has expertise in application development and image processing and will work on software applications to support our imaging product range.
GenICam Guarantees Flexibility and Easy IntegrationAugust 27th, 2015
Looking for smooth camera integration and universal interoperability for your vision system? Stick to GenICam compliant cameras and frame grabbers!
GenICam provides a unified application programming interface (API) for video acquisition, regardless of vendor, feature set or implementation details. It is an international standard, hosted by the EMVA, specified as part of common industrial camera standards such as CoaXPress, and is now also available with Camera Link.
Active Silicon’s FireBird frame grabbers are provided with GenICam drivers and ensure the fast and simple integration of CoaXPress or Camera Link compliant cameras with our frame grabbers and major image processing software libraries. Find out more on our website and see our GenICam solutions datasheet.
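The core idea behind this interoperability can be sketched in a few lines. The class below is a deliberately simplified stand-in, not the real GenApi API: in GenICam, each camera describes its features in a standard XML file, and software addresses them by standardized names (per the Standard Features Naming Convention) such as "Width" or "ExposureTime", regardless of vendor.

```python
# Illustrative sketch only (not the real GenApi API): a minimal
# stand-in for a GenICam feature tree, showing why vendor-neutral
# feature names let the same application code drive any camera.
class NodeMap:
    """Minimal mock of a GenICam node map."""
    def __init__(self, features):
        self._features = dict(features)

    def get(self, name):
        return self._features[name]

    def set(self, name, value):
        if name not in self._features:
            raise KeyError(f"Camera does not expose feature {name!r}")
        self._features[name] = value

# Two cameras from different vendors expose the same standard names,
# so the application loop below works unchanged with either.
cam_a = NodeMap({"Width": 2048, "Height": 1088, "ExposureTime": 10000.0})
cam_b = NodeMap({"Width": 4096, "Height": 3072, "ExposureTime": 5000.0})

for cam in (cam_a, cam_b):
    cam.set("ExposureTime", 2000.0)  # identical call for both vendors
    print(cam.get("Width"), cam.get("ExposureTime"))
```

In a real system the node map is built automatically from the camera's XML description and enforces types, ranges and access modes; the mock above only conveys the naming principle.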
GPUDirect for VideoAugust 13th, 2015
Back in the days when USB 2.0, FireWire and Gigabit Ethernet were the primary interfaces of industrial cameras, the bandwidth for video transmission was the bottleneck of professional image processing systems.
Today, with bandwidths of up to 25 Gbps over 4-link CoaXPress, even the latest CPUs are overwhelmed by the enormous amount of data. However, many filtering, convolution and matrix-vector operations can easily be performed on modern GPUs.
With Active Silicon’s frame grabbers and NVIDIA’s CUDA programming model for parallel computing, it is possible to leverage the GPUDirect™ for Video technology, which bypasses the CPU and optimizes the transfer of video frames from our grabber to the GPU of NVIDIA Tesla™ and Quadro™ cards. Our well documented API and SDK sample code allow for the hassle-free integration of parallel computing techniques on standard computer hardware. Thus, system developers can fully benefit from the high resolution and frame rate of modern industrial cameras and minimize computation time and hardware costs.
Learn more about NVIDIA’S GPUDirect for Video here.
Benefit from CoaXPressJuly 28th, 2015
High-speed inspection of semiconductor panels, sorting of recyclable materials and motion analysis in wind channels – modern machine vision applications like these require an easy-to-use camera interface for the reliable transmission of high image resolutions at fast frame rates with real-time control features.
Active Silicon recognized this demand back in 2009. The solution is CoaXPress, the most advanced machine vision standard, fully dedicated to the needs of demanding imaging applications: a 6.25 Gbps transfer rate on a single link, 25 Gbps and beyond on multiple links, cable lengths of up to 200 m, deterministic latency, and the transmission of power, video, control and trigger signals over low-cost cables.
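These headline figures translate directly into achievable frame rates. The sketch below (our own illustrative helper, using the nominal 6.25 Gbps line rate quoted above) estimates the upper bound on frame rate for a given resolution over an aggregated connection; real systems lose some bandwidth to 8b/10b coding and protocol overhead, so treat the result as a ceiling, not a guarantee.

```python
# Illustrative: upper bound on frame rate for an n-link CoaXPress
# connection at a given resolution, from the nominal 6.25 Gbps line
# rate (ignores 8b/10b coding and protocol overhead).
LINK_GBPS = 6.25

def max_fps(megapixels, bits_per_pixel, links):
    """Nominal maximum frames per second over `links` CXP-6 links."""
    bandwidth_bps = links * LINK_GBPS * 1e9
    frame_bits = megapixels * 1e6 * bits_per_pixel
    return bandwidth_bps / frame_bits

# A 4 MP, 8-bit camera on a 4-link (25 Gbps) connection:
print(f"{max_fps(4, 8, 4):.0f} fps upper bound")  # ~781 fps
```

This is consistent with the off-the-shelf high-speed systems mentioned elsewhere on this page, which deliver around 500 fps at 4-megapixel resolution with realistic overheads.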
Our CTO, Chris Beynon, is one of the main authors of the CoaXPress standard and Chair of the Technical Committee. Learn more about the triggering and real-time capabilities of CoaXPress from this video here.
Engineers of machine vision systems can build on Active Silicon’s five years of experience with a leading-edge implementation of the CoaXPress standard in our frame grabbers.
Long-term product life and supply as well as personal support before and after your purchase – these are just some of the additional benefits we provide to our global customers.
Which of your imaging applications could benefit from the rich capabilities of CoaXPress? Find out and get in touch with us today!
Webcast on Machine Vision Standards OnlineJuly 20th, 2015
How the latest Machine Vision Standards will affect you – webcast now available for on-demand viewing.
On July 15th, five experts involved in the International Machine Vision Standards gave a round-up of the latest developments of CoaXPress, USB3 Vision, GigE Vision, Camera Link and Camera Link HS. Chris Beynon, CTO at Active Silicon, contributed to the updates of the CoaXPress standard and the benefits for modern imaging applications.
More than 200 registrants have shown their interest already. http://www.imveurope.com/webcasts/
Research Project “Eyes of Things”July 15th, 2015
The EU-funded research project “Eyes of Things” has started developing new concepts for embedded vision systems. The researchers focus in particular on the requirements of Cyber-Physical Systems (CPS) as elements of the Internet of Things. CPSs need sensors to serve their purpose, e.g. monitoring and controlling the physical world, as in a production line of the emerging Industry 4.0.
Active Silicon has a long history in the design of embedded vision systems for various applications and is looking forward to the publication of the first results of the project.
Active Silicon Passed This Year’s ISO9001 AuditJune 7th, 2015
Active Silicon is pleased to announce it has passed its ISO 9001 audit, carried out by external auditor ISOQAR, themselves accredited by the UK Accreditation Service.
“With regard in particular to our activities in the industrial and medical markets, it is important to have an external audit by a reputable ISO auditor,” explained Colin Pearce, Active Silicon’s CEO. “It is one aspect of many which demonstrates we are serious about quality.” The annual audit includes the auditor spending time with people in all departments to ensure that day-to-day practice is consistent with the Quality Manual.
New Article About Custom Embedded SystemsMay 7th, 2015
New article about opportunities and benefits of Customized Embedded Vision Solutions
Imaging is a key technology in numerous applications, including medical devices, pharmaceutical packaging and industrial quality inspection. These systems require compact and reliable electronics with long-term availability – the characteristics of industrial embedded systems. This article by Active Silicon’s CEO provides an introduction to typical applications, the challenges and the added value of embedded solutions with integrated machine vision interfaces and image processing functionality.
Successful IVSM Spring 2015May 1st, 2015
The Spring 2015 International Vision Standards Meeting (IVSM) was hosted by Active Silicon in London last week (April 27th to May 1st).
The event was a great success, with over 70 participants from 49 different companies in the machine vision industry meeting to discuss current standards, including Camera Link, CoaXPress, USB3 Vision and GigE Vision, as well as options for future standards.
The social highlight of the week was the Group Dinner at the Royal Institution of Great Britain, sponsored by Novus Light Technologies Today. The evening provided an opportunity to discuss progress during the week in the grand surroundings of a building famous for great scientific achievements.
Active Silicon to Host the International Vision Standards Meeting (IVSM) April 13th, 2015
The standardization of hardware and software interfaces forms the basis of smooth interoperability between cameras, cabling, interface cards and software libraries. In addition, machine vision standards and compliant components dramatically simplify and accelerate the development of imaging systems, to the benefit of the industry as a whole.
With a long track record of standards involvement and leading-edge product development, we are proud to host the upcoming International Vision Standards Meeting (IVSM) from April 27th to May 1st in London. During workshops throughout the week, more than 70 leading experts from the global machine vision industry will discuss and decide on improvements and new releases of standards such as GigE Vision (GEV), USB3 Vision, CoaXPress and Camera Link, as well as discussing options for future standards in general.
The social highlight of the week will be a group dinner held at the prestigious Royal Institution of Great Britain, surrounded by artifacts from the history of science and engineering, such as 19th-century optics and Faraday’s actual lab.
Visit Active Silicon at SPIE DSS in BaltimoreApril 4th, 2015
Active Silicon will be exhibiting for the 12th year running at SPIE DSS in Baltimore, USA, April 21-23, booth 749.
We will showcase, for example, our COM Express embedded vision architecture, which integrates any type of video acquisition along with standard interfaces such as USB 3.0, HDMI, GigE and eSATA, plus expansion options using PCI Express. The mezzanine standard allows a variety of third-party processor modules to be fitted to the custom carrier card.
We offer these embedded vision solutions with especially long product life cycles suitable for various military and other demanding applications.