
v8.2 64-bit CPU, 8MB L2 + 4MB L3; Memory: 16GB 256-bit LPDDR4x | 137GB/s; Storage: 32GB eMMC 5.1; DL Accelerator: (2x) NVDLA engines; Vision Accelerator: 7-way VLIW Vision Processor; Encoder/Decoder: (2x) 4Kp60 | HEVC / (2x) 4Kp60 | 12-bit support; Size: 105 mm x 105 mm as Deployment Module (Jetson AGX Xavier).

Data & Training: get the right answer

Data is the true currency of Artificial Intelligence. By collecting, processing and analyzing data, companies can gain important and meaningful insights into business processes and human behavior, or recognize patterns. No wonder many internet-based companies like Google or Amazon invest so heavily in storing and processing the data they have access to. In deep learning, datasets are used to train neural networks. In general, the larger the dataset, the better the accuracy and the more robust the model. To make the model even less susceptible to environmental factors (sunlight, dirt on lenses, noise, vibration, etc.), the data is typically augmented, for instance by rotating images, cropping, or adding artificial noise.
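The three augmentations mentioned above can be sketched in a few lines of NumPy. This is only an illustration of the idea; real pipelines normally use a framework's built-in augmentation tools, and the crop ratio and noise level below are arbitrary choices.

```python
import numpy as np

def augment(image, rng):
    """Produce simple augmented variants of one image (H x W, values in [0, 1])."""
    variants = []
    # Rotation: a 90-degree multiple stands in for arbitrary-angle rotation.
    variants.append(np.rot90(image, k=rng.integers(1, 4)))
    # Random crop to 80% of each dimension.
    h, w = image.shape
    ch, cw = int(h * 0.8), int(w * 0.8)
    y, x = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    variants.append(image[y:y + ch, x:x + cw])
    # Artificial Gaussian noise, clipped back into the valid range.
    noisy = image + rng.normal(0.0, 0.05, size=image.shape)
    variants.append(np.clip(noisy, 0.0, 1.0))
    return variants

rng = np.random.default_rng(0)
img = rng.random((32, 32))
rotated, cropped, noisy = augment(img, rng)
```

Each variant is then added to the training set alongside the original, so the network never sees an image only in its "clean" form.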

There are different approaches to training a model; briefly, these are supervised, unsupervised and reinforcement learning. In the first, the dataset is labeled and, for image classification, consists of pairs of images and labels. The image is forward propagated through the model's layers, each layer adding a bit more abstraction, to finally produce the classification value. The output is compared to the label, and the error is then backpropagated from the end to the start to update the weights. In unsupervised learning, the dataset is unlabeled and the model finds patterns on its own. Reinforcement learning is best explained with the example of a video game: the goal is to maximize the score by taking a sequence of actions and responding to feedback from the environment, for instance performing a series of consecutive control decisions to move from one place to another.
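The supervised loop described above (forward propagate, compare to the label, backpropagate the error, update the weights) can be shown with the smallest possible "network": a single sigmoid layer trained on a toy labeled dataset. The dataset and hyperparameters here are invented purely for illustration.

```python
import numpy as np

# Toy labeled dataset: 2-D inputs X and binary labels y (supervised learning).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # label follows a simple rule

w = np.zeros(2)   # trainable weights
b = 0.0           # trainable bias
lr = 0.5          # learning rate

def forward(X, w, b):
    # Forward propagation: one linear layer followed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(200):
    p = forward(X, w, b)   # model output
    err = p - y            # compare output to the labels
    # Backpropagation: gradient of the cross-entropy loss w.r.t. w and b.
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

accuracy = ((forward(X, w, b) > 0.5) == (y > 0.5)).mean()
```

A deep network repeats the same pattern, only with many layers chained together and the error propagated backwards through all of them.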

Deployment and Inference: the unsolved challenge

Most training of deep neural networks typically takes place on large GPUs. When it comes to inference, i.e. forward propagation through the neural network to obtain a prediction or classification on a single sample, various platforms can be used. Depending on the requirements, it is possible to deploy and run models on devices like Cortex-M, Cortex-A with GPUs or neural accelerators, FPGAs or specialized ASICs. These obviously vary in processing power, energy consumption and cost. The tricky part is how to deploy a model efficiently and easily. Models are typically trained using deep learning frameworks like TensorFlow or Caffe, and must then be converted to a format that can be run by the inference engine on the edge device, for example the Open Neural Network Exchange format (ONNX) or a plain file with weights for ARM CMSIS-NN on Cortex-M. To optimize further, the weights may be pruned (removing values close to zero), quantized (moving from float32 to integer) or compressed.
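The two optimizations mentioned, pruning and quantization, are easy to sketch on a raw weight array. This is a minimal illustration, assuming magnitude-based pruning and symmetric linear int8 quantization; production toolchains (e.g. framework converters) implement more sophisticated variants, and the 0.05 threshold is an arbitrary example value.

```python
import numpy as np

def prune(weights, threshold=0.05):
    """Magnitude pruning: zero out weights whose absolute value is small."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization from float32 to int8 plus a scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.2, size=1000).astype(np.float32)

w_pruned = prune(w)                    # sparse: zeros compress well
q, scale = quantize_int8(w_pruned)     # 4x smaller than float32
w_restored = dequantize(q, scale)
```

Pruned weights compress well because runs of zeros are cheap to store, and int8 storage alone cuts the model size by 4x while integer arithmetic is typically much cheaper on small cores and accelerators.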

Finally, the heavy lifting on the device is done by an inference engine. It is mainly up to vendors to provide support for the target processors and components for frameworks like OpenCL or OpenCV. Unfortunately, the market at the moment is very fragmented: we see various proprietary SDKs and tools, and no single standard for how to deploy and infer on the edge. What is promising is that, with standards like ONNX, there is increasing interest in the industry in standardization.

Conclusion: the Edge is getting smarter

Artificial Intelligence has been one of the biggest trends in recent years. For edge devices, the key obstacles to adoption are a lack of understanding and the difficulty of deploying and running models. As suppliers compete to attract customers and establish their solutions as the go-to standard, Arrow has the unique opportunity to understand the different approaches of our partners and recognize where different platforms may be most useful for our customers. We are using our expertise in Artificial Intelligence to aid customers and demystify edge computing.

Łukasz Grzymkowski, AI/ML Software Engineer, Arrow Electronics

46 | New-Tech Magazine Europe