
in recent years the main driving force in popularizing artificial intelligence.

Architecture: Choosing the correct tools

The application requirements and constraints are what drive the specification of the final product that incorporates an Artificial Intelligence-related algorithm. These are related to robustness, inference time, hardware resources and quality of service. This is especially true when considering edge deployment and choosing an appropriate embedded platform.

Robustness is the accuracy of the model's output and its ability to generalize, i.e. the likelihood of computing a correct output while avoiding overfitting. Typically, the more complex the model (deeper, with more layers) and the richer the dataset, the more robust the model tends to be.
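
The overfitting half of that statement is easy to check in practice; as a hedged illustration (the accuracies and the gap threshold below are made-up numbers, not from the article), one compares training accuracy against held-out validation accuracy:

```python
def generalization_gap(train_acc: float, val_acc: float) -> float:
    """Difference between training and validation accuracy.

    A large gap means the model memorized its training data
    (overfitting) instead of learning to generalize.
    """
    return train_acc - val_acc

# Illustrative numbers only: 0.99 on the training set but 0.80
# on unseen data is a classic overfitting signature.
gap = generalization_gap(train_acc=0.99, val_acc=0.80)
if gap > 0.05:  # the acceptable gap is application-specific
    print(f"possible overfitting: gap = {gap:.2f}")
```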

Defining a desired inference time is entirely dependent on the application. In some cases, for example in automotive, it is crucial for safety reasons to get a response from a machine vision system in under a millisecond. This is not the case for a sensor fusion system with slow-changing measurements, where one could infer only once every minute or so. Inference speed depends on model complexity: more layers correspond to more computations, and that results in longer inference time. This can be offset by selecting and using more powerful compute resources, e.g. embedded GPUs, DSPs or neural accelerators, with OpenCL kernels to fully utilize the available resources.
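
The article does not say how such a latency budget is verified; a minimal sketch of the usual approach is to time repeated forward passes of whatever inference callable the platform provides (the dummy model below is a placeholder, not a real workload):

```python
import time
import statistics

def median_latency_ms(infer, sample, warmup=10, runs=100):
    """Median wall-clock time of one inference call, in ms.

    Warm-up runs are discarded so caches, JIT compilation and
    lazy initialization do not skew the measurement.
    """
    for _ in range(warmup):
        infer(sample)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(sample)
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)

# Placeholder standing in for a real model's forward pass:
dummy_infer = lambda x: sum(v * v for v in x)
print(f"{median_latency_ms(dummy_infer, list(range(1000))):.3f} ms")
```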

In addition, the model's memory footprint grows with the number of neurons and weights. Each weight is a number that must be stored in memory. To reduce the size of the model, and often to address hardware specifics, one can convert the weights from floats or doubles to integers instead.
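
As a hedged sketch of that float-to-integer conversion, here is the simplest symmetric 8-bit scheme written with NumPy; production toolchains (e.g. TensorRT, mentioned later) use considerably more sophisticated calibration:

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 plus one float scale factor.

    Each weight shrinks from 4 bytes (float32) to 1 byte; the
    original value is approximated as int8_value * scale.
    """
    scale = max(float(np.abs(weights).max()), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
print("memory:", w.nbytes, "->", q.nbytes, "bytes")  # 4x smaller
print("max abs error:", float(np.abs(w - dequantize(q, scale)).max()))
```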

Quality of service and reliability of the system depend on the deployment model. In a cloud-based approach, the need for a network connection can make the system unreliable. What happens if the server is unreachable? A decision must still be made. In such cases, the edge may be the only viable solution, e.g. in autonomous cars or isolated environments.
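
A common way to reconcile the two deployment models, sketched here with hypothetical names (cloud_infer and local_infer are placeholders, not any specific API), is to try the cloud first and fall back to an on-device model so that a decision is always made:

```python
def robust_predict(sample, cloud_infer, local_infer, timeout_s=0.2):
    """Prefer the (typically larger, more accurate) cloud model,
    but fall back to the on-device model when the server cannot
    be reached in time, so the system keeps producing decisions."""
    try:
        return cloud_infer(sample, timeout=timeout_s)
    except (TimeoutError, ConnectionError):
        return local_infer(sample)

# Demo with stand-ins; here the "cloud" is unreachable.
def cloud_infer(sample, timeout):
    raise ConnectionError("server unreachable")

local_infer = lambda sample: "decision from on-device model"
print(robust_predict([1.0, 2.0], cloud_infer, local_infer))
```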

It is also essential to understand that Machine Learning-based algorithms are inherently probabilistic systems: the output is a likelihood with a certain degree of uncertainty. However, for many use cases, the accuracy and reliability of predictions made by AI systems already exceed those of humans.

Whether the system designer should consider a 90% or 99% probability to be high enough depends on the application and its requirements.
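
In code, that requirement usually becomes a confidence threshold on the model's output distribution; a small sketch (the logits and the 0.99 threshold are illustrative, not from the article):

```python
import math

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide(logits, threshold=0.99):
    """Accept the top class only if its probability clears the
    application-specific threshold; otherwise defer (to a human
    operator, a safe default, or a more expensive model)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return best, probs[best]
    return None, probs[best]  # deferred: confidence too low

label, p = decide([2.0, 0.5, 9.0])
print("accepted class:", label, "probability:", round(p, 4))
```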

Finally, when considering appropriate hardware and software, a designer should realize that the difficulty of development and the scalability of certain solutions may differ.

AI is not new to Arrow Electronics, but we believe that now is the time to drive this technology bottom-up, meaning that we need to address all available options and fit them to customer demand and requirements.

In September 2018, Arrow Electronics and NVIDIA signed a global agreement to bring the NVIDIA® Jetson™ Xavier™, a first-of-its-kind computer designed for AI, robotics and edge computing, to companies worldwide to create next-generation autonomous machines.

Jetson Xavier — available as a developer kit that customers can use to prototype designs — is supported by comprehensive software for building AI applications.

This includes the NVIDIA JetPack™ and DeepStream SDKs, as well as the CUDA®, cuDNN and TensorRT™ software libraries. At its heart is the new NVIDIA Xavier processor, which provides more computing capability than a powerful workstation and comes in three energy-efficient operating modes.

The tech specs for the Jetson AGX Xavier include a 512-core Volta GPU with Tensor Cores and an 8-core ARM

Picture No. 1: NVIDIA Jetson Xavier System On Module and Development Kit
