New-Tech Europe | February 2019

Re-Defining Embedded Vision: Smart Imaging for the Vision of Things
Darren Bessette, FRAMOS

Embedded Vision has been a buzzword in the imaging industry for quite a while. Unquestionably, there is huge potential for Embedded Vision to change industries' business models, to take vision to the next level, and to allow devices to see and think in all industrial and consumer markets. But how is this different from classic vision technology? How can all industries, and virtually every device and every "thing", leverage and benefit from the embedded Vision of Things?

The Internet of Things (IoT) creates the swarm intelligence of holistic systems by connecting all devices to one another so that they can interact accordingly. Embedded Vision technologies provide the eyes and the brain power (AI) for autonomous decision making without any human interaction, empowering the Vision of Things (VoT) to act intelligently within the Internet of Things.

What differentiates Embedded Vision from Classic Vision?

Regular vision systems are mainly built with a camera connected to a host PC over a known data interface. The system is usually separated into the machine that does the work and the controlling process that does the inspection. The processing of the video stream and images is mostly outsourced and often needs user interaction for validation and/or decision making. A surveillance application may recognize people, but a security officer still needs to validate any abnormal occurrence to determine if it is a threat that requires an immediate response. In comparison, a security-based Embedded Vision application would be able to assess the threat itself, determine that it involves a person of interest, and alert the authorities to react accordingly. In this case, the vision technology inside the device, a complete system with intelligent on-board processing, is able to provide an appropriate response without any human operator oversight. Embedded Vision is not only part of the device, it is its smart eye.
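To make the contrast concrete, the following minimal sketch shows what such an all-in-one loop could look like on an embedded board: capture, analysis and the alert decision all happen on the device, with no operator in the loop. It is purely illustrative and assumes Python with OpenCV and a local camera at index 0; the HOG people detector, the detection check and the send_alert hook are placeholder choices, not FRAMOS code or a specific product implementation.

# Illustrative on-device "smart eye" loop (assumption: Python + OpenCV on an
# embedded board with a camera at index 0; not FRAMOS or product code).
import cv2

def send_alert(frame, boxes):
    # Hypothetical alert hook: a real device might notify authorities over a
    # network API; here we only store the frame and log locally.
    cv2.imwrite("alert_frame.jpg", frame)
    print("ALERT: %d person(s) detected, frame stored." % len(boxes))

def main():
    # OpenCV's built-in HOG people detector stands in for the on-board
    # intelligence, so capture, analysis and decision all stay on the device.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # on-board camera, no host PC in the loop
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Analyze the frame locally instead of streaming it to an operator.
            boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
            # Decide autonomously: any detection triggers the alert path.
            if len(boxes) > 0:
                send_alert(frame, boxes)
    finally:
        cap.release()

if __name__ == "__main__":
    main()

The point of the sketch is structural: the decision is taken on the device itself, which is what separates the embedded approach from a classic camera-plus-host-PC setup.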

In its entirety, Embedded Vision minimizes or removes human interaction within the imaging pipeline and allows machines to make their own decisions by capturing, analyzing and interpreting the data all in one.

From a developer's standpoint, classic vision systems were mostly built to support numerous verticals with a multitude of possible tasks to be programmed. This broad variety is the main reason for the large off-board processing resources they require. Embedded Vision tends to be more laser-focused in its applications: it is designed for a specific task. This "purpose-built" approach opens new possibilities and frees processing resources that can be used for neural intelligence algorithms. From the vision manufacturer's perspective, there is no need to provide a one-size-fits-all product that covers every possible use case; the manufacturer can instead specialize and focus development on the "how" of a specific system, which is later customized by the OEM developer to satisfy unique requirements.

