
Figure 1: Example applications (top: facial detection and classification; bottom: optical flow)

in the processing algorithm. This bottleneck grows as the frame rate and resolution of the image increase.

This bottleneck is removed when the solution is implemented using a Zynq-7000 or Zynq UltraScale+ MPSoC device. These devices allow the designer to implement the image processing pipeline within the programmable logic (PL) of the device, creating a true parallel image pipeline in which the output of one stage is passed directly to the input of the next. This allows for a deterministic response time, reduced latency and a power-optimal solution.
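
To make the pipeline structure concrete, below is a minimal high-level synthesis sketch in C++ of a two-stage streaming pipeline; the stage functions, pixel type and threshold value are illustrative assumptions rather than code from the article. Each stage streams its result directly into the next, and the DATAFLOW directive lets the stages run concurrently, which is the source of the deterministic, low-latency behaviour described above.

```cpp
#include "hls_stream.h"
#include "ap_int.h"

typedef ap_uint<8> pixel_t;

// Stage 1: example point operation (invert each pixel).
void stage_invert(hls::stream<pixel_t> &in, hls::stream<pixel_t> &out, int n) {
    for (int i = 0; i < n; i++) {
#pragma HLS PIPELINE II=1
        out.write(pixel_t(255) - in.read());
    }
}

// Stage 2: example point operation (binary threshold).
void stage_threshold(hls::stream<pixel_t> &in, hls::stream<pixel_t> &out, int n) {
    for (int i = 0; i < n; i++) {
#pragma HLS PIPELINE II=1
        pixel_t p = in.read();
        out.write(p > 128 ? pixel_t(255) : pixel_t(0));
    }
}

// Top level: DATAFLOW runs both stages in parallel, so the output of one
// stage is passed to the input of the next as pixels arrive.
void image_pipeline(hls::stream<pixel_t> &src, hls::stream<pixel_t> &dst, int n) {
#pragma HLS DATAFLOW
    hls::stream<pixel_t> tmp("tmp");
    stage_invert(src, tmp, n);
    stage_threshold(tmp, dst, n);
}
```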

The use of the PL to implement the image processing pipeline also brings with it a wider interfacing capability than traditional CPU/GPU SoC approaches, which come with fixed interfaces. The flexible nature of the PL I/O structures allows for any-to-any connectivity, enabling industry-standard interfaces such as MIPI, Camera Link and HDMI. This flexibility also enables bespoke legacy interfaces to be implemented, along with the ability to upgrade to support the latest interface standards. Use of the PL also enables the system to interface with multiple cameras in parallel.

What is critical, however, is the ability to implement the application algorithms without the need to rewrite all the high-level algorithms in a hardware description language such as Verilog or VHDL. This is where the reVISION™ stack comes into play.
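
For context, the high-level code in question is typically written against a framework such as OpenCV (the framework choice, algorithm and file names below are illustrative assumptions; the article does not name them). The sketch shows the style of code a developer would want to keep, rather than re-express in Verilog or VHDL:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Load a frame; in a live system this would come from a camera.
    cv::Mat frame = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;

    // A typical high-level vision call: horizontal Sobel edge detection.
    cv::Mat edges;
    cv::Sobel(frame, edges, CV_8U, 1, 0);

    cv::imwrite("edges.png", edges);
    return 0;
}
```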

reVISION Stack

The reVISION stack enables developers to implement computer vision and machine learning techniques using the same high-level frameworks and libraries when targeting the Zynq-7000 and Zynq UltraScale+ MPSoC. To enable this, reVISION combines a wide range of resources enabling platform, application and algorithm development. As such, the stack is aligned into three distinct levels:

1. Platform Development - This is the lowest level of the stack and is the one on which the remaining layers are built. This layer provides the platform definition for the SDSoC™ tool.

2. Algorithm Development - The middle layer of the stack provides support for implementing the required algorithms. This layer also provides support for accelerating both image processing functions and machine learning inference engines into the programmable logic (a minimal sketch of such an accelerated function follows this list).

3. Application Development - The highest layer of the stack provides support for industry-standard frameworks. These allow for the development of the application.
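
As referenced under the algorithm development layer above, the sketch below shows how a function might be marked up for acceleration into the programmable logic under the SDSoC tool flow. The access_pattern pragma is a documented SDSoC directive; the function itself and the frame size are illustrative assumptions.

```cpp
// Candidate for hardware acceleration: when this function is selected in
// SDSoC, the tool moves it into the PL and generates the data movers
// implied by the pragma below.
#pragma SDS data access_pattern(in:SEQUENTIAL, out:SEQUENTIAL)
void threshold_accel(const unsigned char in[1920 * 1080],
                     unsigned char out[1920 * 1080]) {
    for (int i = 0; i < 1920 * 1080; i++) {
#pragma HLS PIPELINE II=1
        // Simple binary threshold; one pixel per clock once pipelined.
        out[i] = (in[i] > 128) ? 255 : 0;
    }
}
```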
