New-Tech Europe Magazine | August 2017

reVISION Stack: Accelerating your Embedded Vision development

Nick Ni and Adam Taylor

Embedded Vision is ubiquitous across a range of industries and applications, from ADAS and guided robotics to medical imaging and augmented reality. The breadth of embedded vision penetration across multiple market segments is staggering. In most of these applications the downstream image processing pipeline is very similar. This downstream pipeline contains functions such as image sensor / camera interfacing and reconstruction of the image in a format suitable for further processing. Commonly used algorithms within downstream processing are colour reconstruction (Bayer filter), colour space conversion and noise reduction.
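To make this concrete, the short sketch below chains these three downstream stages together using standard OpenCV calls; the input file name, the Bayer code and the simple Gaussian filter are assumptions made for illustration rather than details taken from any particular design.

```cpp
// Minimal sketch of the "downstream" pipeline stages described above,
// using plain OpenCV (no hardware acceleration). File names, the Bayer
// code and the filter size are illustrative assumptions only.
#include <opencv2/opencv.hpp>

int main()
{
    // Raw single-channel Bayer frame as it might arrive from the sensor.
    cv::Mat raw = cv::imread("raw_bayer_frame.png", cv::IMREAD_GRAYSCALE);
    if (raw.empty())
        return -1;

    // 1. Colour reconstruction: demosaic the Bayer pattern into a BGR image
    //    (the BayerBG code is assumed; match it to the real sensor layout).
    cv::Mat bgr;
    cv::cvtColor(raw, bgr, cv::COLOR_BayerBG2BGR);

    // 2. Colour space conversion, e.g. to YUV for later processing stages.
    cv::Mat yuv;
    cv::cvtColor(bgr, yuv, cv::COLOR_BGR2YUV);

    // 3. Noise reduction: a small Gaussian blur stands in for whatever
    //    denoising the application actually requires.
    cv::Mat denoised;
    cv::GaussianBlur(yuv, denoised, cv::Size(3, 3), 0);

    cv::imwrite("processed_frame.png", denoised);
    return 0;
}
```

The sketch above is plain processor-side code; in a flow such as reVISION, functions of this kind are the sort that can be offloaded to programmable logic for acceleration.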

It is the application-specific algorithms where the differences between applications become apparent, and implementing these is where the embedded vision developer expends significant time and effort. These application algorithms are often complex to implement, using techniques such as object detection and classification, filtering and computational operations. Increasingly, these application algorithms are developed using open source frameworks like OpenCV and Caffe, which enable the Embedded Vision developer to focus on implementing the algorithm itself: using the pre-defined functions and IP contained within removes the need to start from scratch and significantly reduces development time.
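As an illustration of this application side, the sketch below loads a pre-trained Caffe classification model through OpenCV's dnn module (available from OpenCV 3.3) and runs a single frame through it; the model file names, the 224x224 input size and the mean values are placeholders typical of ImageNet-style networks, not details of any specific application.

```cpp
// Illustrative sketch of an application-level algorithm: image classification
// with a pre-trained Caffe model loaded through OpenCV's dnn module.
// All file names and network parameters below are placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <cstdio>

int main()
{
    // Load a Caffe network definition and its trained weights (placeholder names).
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt",
                                                 "model.caffemodel");

    cv::Mat frame = cv::imread("processed_frame.png");
    if (frame.empty() || net.empty())
        return -1;

    // Convert the frame into the 4-D blob the network expects
    // (224x224 input and these mean values are typical ImageNet-style choices).
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(224, 224),
                                          cv::Scalar(104, 117, 123));
    net.setInput(blob);

    // Forward pass: the output is a vector of per-class scores.
    cv::Mat scores = net.forward();

    // Pick the highest-scoring class.
    cv::Point classId;
    double confidence;
    cv::minMaxLoc(scores.reshape(1, 1), nullptr, &confidence, nullptr, &classId);
    std::printf("class %d, confidence %.3f\n", classId.x, confidence);
    return 0;
}
```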

Depending upon the application, the challenge faced by the designer is not only how to implement the desired algorithms; the Embedded Vision developer must also address the challenges posed by both the application and its environment while considering future market trends. These challenges and trends include processing and decision making at the edge, as Embedded Vision applications are increasingly autonomous and cannot depend upon a connection to the cloud. One example is vision-guided robots, which must process and act on information gleaned from their sensors to navigate within their environment. Many applications also implement sensor fusion, fusing several different sensor modalities to provide an enhanced understanding of the environment and further aid decision-making, bringing with it increased processing demands. Due to the rapid evolution of both sensors and image processing algorithms, the system must also be upgradable to support the latest requirements of the product roadmap. The rise of autonomous and remote applications also brings with it the challenges of efficient power dissipation and security to prevent unauthorized modification attempts. To address these challenges,
