
Cadence presented a lot of information about the OpenVX standard, and how it has complete support on the Tensilica Vision P5 and P6 cores. See my earlier post See Further by Standing on the Shoulders of...OpenVX.
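For anyone who hasn't used OpenVX, a minimal sketch of what the API looks like may help (my example, not Cadence's code): it builds a one-node graph, here just a 3x3 Gaussian blur on a placeholder 640x480 image, verifies it once, and then executes it.

    /* Minimal OpenVX sketch: build, verify, and run a one-node graph.
     * Image size and choice of kernel are illustrative placeholders. */
    #include <VX/vx.h>
    #include <stdio.h>

    int main(void)
    {
        vx_context ctx = vxCreateContext();
        vx_graph graph = vxCreateGraph(ctx);

        /* 640x480 8-bit grayscale input and output images */
        vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);

        /* One processing node: 3x3 Gaussian blur */
        vx_node blur = vxGaussian3x3Node(graph, in, out);

        /* The runtime checks and optimizes the graph once... */
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);   /* ...then it can run repeatedly */
        else
            printf("graph verification failed\n");

        vxReleaseNode(&blur);
        vxReleaseImage(&in);
        vxReleaseImage(&out);
        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }

The graph model is the point: because the whole pipeline is declared up front, an implementation such as a Vision P5/P6 port can fuse and schedule the nodes on the DSP rather than dispatching each kernel individually.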

Linley also had some information on specific automotive processors:

Mobileye's EyeQ3 dominates ADAS vision processors today. EyeQ4 is rated at 2 trillion operations per second (TOPS) at just 3W. EyeQ5 is expected to sample in 2018, with production in 2020, delivering 12 TOPS at 3W. One interesting wrinkle that Linley didn't mention is that the EyeQx designs are MIPS-based (I don't think Intel was a MIPS licensee, and the future of MIPS is unclear with Apple moving away from Imagination GPUs).

NVIDIA is developing a single-chip version of its DRIVE PX 2, called Xavier, which combines 8 custom CPUs, a 512-shader Volta GPU delivering >3 TFLOPS, and a new integer-only 30 TOPS vision processor, all in a 30W power budget (it is sampling late this year and could be in 2020 cars).

NXP has a reference design called BlueBox, with a vision-processing chip, an 8-core Cortex-A57, and a 40W power budget. Qualcomm is expected to boost R&D in this area. I covered BlueBox in passing in the DVCon Europe keynote.

Renesas has a new automotive platform called Autonomy, although Linley didn't mention it. That's because it was announced between the conference and my writing this post; that's how fast things are moving.

Lexus Lane Valet

It's way past April 1, so a bit late for a prank video, but Lexus came up with a new feature for advanced driver automation: its lane valet.

The story of the day was not some earth-shattering announcement, but instead that it was really a bit boring. It was boring because everyone said the same thing. That in itself is a story. The future is going to be cars with lots of sensors (lots of cameras, because they are cheap, some radar, some lidar) and high-performance chips that perform the sensor fusion, do the vision recognition, and handle the policy aspects of driving.
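As a toy illustration of the sensor-fusion part (my sketch, with made-up numbers, not anything presented at the conference): given a camera and a radar each producing a noisy estimate of the distance to the car ahead, the standard move is to weight each reading by the inverse of its variance, which is the static one-dimensional case of a Kalman filter.

    /* Toy 1-D sensor fusion: combine two noisy distance estimates by
     * inverse-variance weighting (the static case of a Kalman update).
     * All numbers are made up for illustration. */
    #include <stdio.h>

    /* Fuse two measurements with variances var_a and var_b. */
    static double fuse(double a, double var_a, double b, double var_b)
    {
        double w_a = 1.0 / var_a;
        double w_b = 1.0 / var_b;
        return (w_a * a + w_b * b) / (w_a + w_b);
    }

    int main(void)
    {
        double cam_dist   = 52.0, cam_var   = 4.0;  /* camera: cheap, noisier   */
        double radar_dist = 50.0, radar_var = 1.0;  /* radar: better at ranging */

        /* Radar's lower variance pulls the estimate toward its reading. */
        double fused = fuse(cam_dist, cam_var, radar_dist, radar_var);
        printf("fused distance estimate: %.2f m\n", fused);  /* prints 50.40 m */
        return 0;
    }

A real system fuses full object tracks over time rather than single range readings, but the principle is the same: trust each sensor in proportion to how reliable it is for the quantity being estimated.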

A decade ago, every presentation in EDA opened with a graph illustrating the design gap, before going on to show how whatever product was being presented would close it. Today, every automotive presentation opens with a picture showing the complexity of future automotive electronics, and the day offered a whole selection of them.

Linley's opening keynote gave a good high-level overview of the space. He started off talking about how autonomous technology drives many markets, such as planes and drones. But really it is all about cars (and trucks, but they are mostly just big cars). He covered a lot of the basics, such as the SAE autonomous mode levels (summarized in the sketch after the list below), which I have covered in numerous posts here already. Since Linley Group talks to a lot more people than I do, it is interesting to see what he considers the timescales for introduction:

- Level 3 vehicles to debut this year in high-end ($50K+) vehicles and in trucks
- Level 3 BOM cost will drop below $5K by 2022, and the market may be lubricated by reductions in insurance cost
- Level 4 vehicles in 2022 in high-end brands and commercial vehicles (taxis/Uber)
- True Level 5 may take 10 years to develop
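For reference, the SAE levels these timescales refer to run from 0 to 5. A minimal encoding, with one-line paraphrases of SAE J3016:

    /* SAE J3016 driving-automation levels, paraphrased in one line each. */
    enum sae_level {
        SAE_L0 = 0,  /* no automation: driver does everything                  */
        SAE_L1,      /* driver assistance: steering OR speed assisted          */
        SAE_L2,      /* partial: steering AND speed, driver still monitors     */
        SAE_L3,      /* conditional: system drives, driver takes over on request */
        SAE_L4,      /* high: no driver needed within a defined domain         */
        SAE_L5       /* full: no driver needed anywhere                        */
    };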

I think that everything may happen faster than this, since progress is being made so fast. It is not much more than a decade ago that autonomous vehicles couldn't go ten miles and required so much electronics that they needed extra air conditioners on the roof. Deep learning has only become dominant in the last five years, perhaps fewer. Fully autonomous trucks have been in use in open-cast mining for some years. Planes can land themselves, although the automotive people all claim that there is several orders of magnitude more code in a high-end car than in a plane. That may be true, but there is also probably a reason we let 15-year-olds behind the wheel of a car but not a 777.
