
Figure 1: HSA provides a unified view of fundamental computing elements, allowing a programmer to write applications that seamlessly integrate CPUs with GPUs while benefiting from the best attributes of each.

Figure 2: Unibap’s mission-critical stereo Intelligent Vision System (IVS) with 70 mm baseline features advanced heterogeneous processing. Extensive error correction is enabled on the electronics, particularly on the integrated AMD G-Series SoC and Microsemi SmartFusion2 FPGA.

sensors and the compute units. These systems not only provide high speed and high resolution to compete with human vision, they also provide accurate spatial information on where landmarks or objects are located. To achieve this, stereoscopic vision is the natural choice. Industrial applications for this type of stereoscopic vision system can be found, for example, in item-picking from unsorted bins. Mounted on a robot arm, a vision system can carry out ‘visual servoing’ at 50 fps and identify the most suitable item to pick while the gripper of the robot arm is approaching the bin. This makes scanning, which can take a couple of seconds, and reprogramming the robot arm superfluous. Autonomous cars are another obvious application for vision technologies, as is a whole range of domestic robot applications.
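Stereoscopic vision recovers depth from the disparity between the two camera images. As a minimal sketch (not Unibap’s actual implementation), the standard pinhole stereo relation can be written as follows; the 70 mm baseline matches the IVS described above, while the focal length in pixels is an assumed illustrative value:

```python
def depth_from_disparity(disparity_px, baseline_mm=70.0, focal_px=1000.0):
    """Depth (in mm) from stereo disparity under the pinhole model: Z = f * B / d.

    baseline_mm: distance between the two cameras (70 mm, as in Figure 2).
    focal_px:    focal length expressed in pixels (illustrative value).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# An object matched with a 35-pixel disparity lies 2 m from the camera pair.
print(depth_from_disparity(35.0))  # -> 2000.0 (depth in mm, i.e. 2 m)
```

Note that depth resolution degrades with distance: the same one-pixel disparity step covers a much larger depth range for far objects than for near ones, which is why a wider baseline improves long-range accuracy.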

The artificial visual cortex

So how does this process work in detail? The first stages of information handling are strictly localized to each pixel, and are therefore executed in an FPGA. Common to all machine vision is the fact that color cameras capture in RGB (the pixels are Red, Green and Blue), just like the human eye, but this representation is not well suited to accurate image calculations. Thus, RGB first has to be converted into HSI (Hue, Saturation and Intensity). Rectifying the image to compensate for distortion in the lenses is the next necessary step. Following this, stereo matching can be performed between the two cameras. These steps are executed within an FPGA that supports the x86 processor. All the following calculations are application-specific and best executed on the integrated, highly flexible, programmable x86 processor platform, which has to fulfill quite challenging tasks to understand and interpret the content of a picture.
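The RGB-to-HSI conversion mentioned above can be sketched per pixel as follows; this is the standard textbook formula, not Unibap’s FPGA implementation:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one pixel from normalized RGB (each in [0, 1]) to HSI.

    Returns (hue in degrees, saturation in [0, 1], intensity in [0, 1]).
    """
    intensity = (r + g + b) / 3.0
    if intensity == 0.0:
        return 0.0, 0.0, 0.0                    # pure black: hue/saturation undefined
    saturation = 1.0 - min(r, g, b) / intensity
    denom = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0.0:
        return 0.0, saturation, intensity       # achromatic pixel: hue undefined
    theta = math.degrees(math.acos(0.5 * ((r - g) + (r - b)) / denom))
    hue = 360.0 - theta if b > g else theta     # angle measured from the red axis
    return hue, saturation, intensity

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0 deg, saturation 1, intensity 1/3
```

In an FPGA this same arithmetic is applied to every pixel in parallel as the data streams in, which is exactly why the per-pixel stages sit there rather than on the x86 cores.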

To understand how complex these tasks are, it is necessary to appreciate that interpreting picture content is extremely difficult for software programmers and that, until recently, the human visual cortex has been superior to computer technology. These days, however, technological advancements are, quite literally, changing the game. An excellent example of computer technology improvement is Google’s AlphaGo computer, which managed to beat the world’s best Go player, and this was achieved by executing neural network algorithms. Is this really so revolutionary? Haven’t we seen neural network algorithms in the recent past? Indeed we have. Neural networks are not new; they are just one of many AI (Artificial Intelligence) methods. Although exactly this kind of network was already considered very promising in the nineties, progress came to a halt, partly due to a lack of compute power and partly due to problems with training networks with too many hidden layers. Today, with far more computing power in all the basic technologies, the approach is even more promising: recent methods use even more layers in building neural networks, and the term deep learning now means a neural network with many more layers than were used previously. Plus, the heterogeneous system architecture of modern SoCs allows deep-learning algorithms to be used efficiently (e.g. with the deep learning framework Caffe from Berkeley).
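To make the “many more layers” idea concrete: a deep network is essentially a chain of simple layers, each applying weights, a bias and a nonlinearity to the previous layer’s output. A minimal sketch in plain Python, with arbitrary illustrative weights rather than a trained model:

```python
def relu(x):
    """Common nonlinearity in deep networks: pass positives, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: out_j = relu(sum_i weights[j][i] * inputs[i] + biases[j])."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Feed the input through each layer in turn; 'deep' simply means many such layers."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two tiny layers with illustrative weights: 2 inputs -> 2 hidden units -> 1 output.
net = [
    ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]),
    ([[1.0, 1.0]], [0.0]),
]
print(forward([1.0, 2.0], net))
```

Each layer is just multiply-accumulate work over many independent units, which is why such networks map so well onto the parallel compute units that HSA exposes alongside the CPU cores.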

x86 technology is also interesting for intelligent stereoscopic machine vision systems due to its optimized streaming and vector instructions, developed over a long period of time, and its very extensive and mature ecosystem of software, vision system algorithms and drivers. Plus, new initiatives like Shared Virtual Memory (SVM) and the Heterogeneous System Architecture (HSA) now offer an additional important companion technology to x86 systems by increasing the raw throughput capacity needed for intelligent machine vision.

HSA enables efficient use of all resources

With the introduction of the latest generation of AMD SoCs, a hardware ecosystem is now in place which accelerates artificial intelligence

58 | New-Tech Magazine Europe