Vision C5 DSP for Standalone Neural Network Processing
Paul McLellan, Cadence

I pointed out recently that although La La Land is a romance, the movie opens with cars. The semiconductor industry is like that, too—no matter which way you turn, it is automotive. It may not show yet in manufacturing volume and revenue, since automotive is only about 10% of the market. However, the newer parts of automotive, those associated with autonomous driving, have ~30% growth rates, which by the rule of 70 is close to doubling every two years (70/30 ≈ 2.3 years). There are several really big changes, such as automotive Ethernet or security, which I won't discuss today. But probably the biggest change is the need for vision processing.

There are two separate reasons that this is such a big change. Firstly, vision processing has to be done on-vehicle. The amounts of data are insanely large, too large to upload to the cloud for processing. But more fundamentally, a vehicle cannot require network connectivity to decide whether a light is green or red, or whether that thing ahead is a pedestrian or a mailbox.
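To get a feel for just how large, here is a rough back-of-envelope sketch in Python. The resolution, frame rate, and camera count are illustrative assumptions of mine, not figures from the article or from any vendor; real systems compress the video, but the raw numbers make the point.

```python
# Back-of-envelope: raw video bandwidth of a camera-based driving rig.
# All parameters below are illustrative assumptions, not published specs.

BYTES_PER_PIXEL = 3          # 8-bit RGB
WIDTH, HEIGHT = 1920, 1080   # assume 1080p sensors
FPS = 30                     # assume 30 frames per second
CAMERAS = 8                  # assume a surround-view array of 8 cameras

bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * CAMERAS
tb_per_hour = bytes_per_sec * 3600 / 1e12

print(f"{bytes_per_sec / 1e6:.0f} MB/s of raw pixels")  # ~1493 MB/s
print(f"{tb_per_hour:.1f} TB per hour of driving")      # ~5.4 TB/hour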

This is a level of computation that cars have never required before, so it is a challenge for the automotive semiconductor ecosystem. The traditional suppliers don't understand high-performance processors and leading-edge processes. The mobile semiconductor ecosystem does, but it doesn't understand automotive reliability and only recently heard the magic number 26262.

(For more on ISO 26262, see my recent post "The Safest Train Is One That Never Leaves the Station". For an introduction to convolutional neural nets (CNN), see "Why Is Google So Good at Recognizing Cats?". Also, last year Cadence ran a seminar in Vegas that I wrote up in a full week of posts here, starting on Monday with "Power Efficient Recognition Systems for Embedded Applications".)

The second reason is vision processing itself. If you go back only a few years, vision processing was algorithmic, with the focus of research on edge-detection algorithms, building 3D models from 2D data, and so on. Now the whole field has switched to convolutional neural nets (CNN). And it is not just vision processing that has gone neural; a lot of the decision processing has, too. Arguably, vision processing has advanced more in the last two to three years than since...cue dramatic music...the dawn of time.
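The shift is less mysterious than it sounds, because both eras rest on the same primitive: a hand-designed edge detector and a CNN layer are both 2D convolutions, and the difference is whether a researcher picks the kernel values or training data does. The toy sketch below (plain NumPy, random data, nothing from any production stack) shows the two side by side.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a grayscale image (valid cross-correlation,
    which is what deep-learning frameworks call 'convolution')."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Algorithmic era: a hand-designed Sobel kernel that responds to vertical edges.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# CNN era: the same operation, but the kernel values are learned from labeled
# data (and thousands of them are stacked into layers). Random weights here
# stand in for trained ones.
learned = np.random.randn(3, 3)

image = np.random.rand(8, 8)         # toy grayscale image
print(conv2d(image, sobel_x).shape)  # (6, 6)
print(conv2d(image, learned).shape)  # (6, 6)
```

That shared multiply-accumulate pattern is also why dedicated DSPs can accelerate both the old algorithmic workloads and the new neural ones.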

Embedded Vision Summit

Today
