Figure 4: Caffe Flow Integration
Machine learning in reVISION
reVISION integrates with
Caffe, providing the ability
to implement machine learning
inference engines. This integration
with Caffe takes place at both
the algorithm development and
application development layers.
The Caffe framework provides
developers with a range of libraries,
models and pre-trained weights
within a C++ library, along with
Python™ and MATLAB® bindings.
This framework enables the user
to create networks and train
them to perform the operations
desired, without the need to start
from scratch. To aid reuse, Caffe
users can share their models via
the Model Zoo, which provides
several network models that can
be implemented and updated
for a specialised task if desired.
These networks and their weights
are defined within a prototxt file;
when the network is deployed in the
machine learning environment, it is
this file that defines the inference engine.
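By way of illustration, a minimal deploy-style prototxt might look like the following; the network name, layer names, and shapes here are purely illustrative, not taken from a real model:

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 16 kernel_size: 3 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "relu1"
}
```

Caffe parses such a file to construct the network graph layer by layer; in the reVISION flow, this same description is what configures the inference engine.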
Within reVISION, this integration
makes implementing machine
learning inference engines
as easy as providing a prototxt file;
the framework handles the rest.
This prototxt file is then used to
configure the processing system
and the hardware optimised
libraries within the programmable
logic. The programmable logic is
used to implement the inference
engine and contains functions such
as Conv, ReLU, Pooling and more.
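As a point of reference, the ReLU and pooling stages that the hardware-optimised libraries accelerate can be sketched in plain Python; the small feature map below is invented for the example:

```python
def relu(x):
    """ReLU activation: clamp negative values in a 2-D feature map to zero."""
    return [[max(0, v) for v in row] for row in x]

def max_pool_2x2(x):
    """2x2 max pooling with stride 2: keep the largest value in each window."""
    return [
        [max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
         for j in range(0, len(x[0]) - 1, 2)]
        for i in range(0, len(x) - 1, 2)
    ]

# A made-up 4x4 feature map, e.g. one channel of a convolution output.
fmap = [[1, -2, 3, 0],
        [-1, 5, -3, 2],
        [0, 1, -1, 4],
        [2, -2, 0, -5]]

pooled = max_pool_2x2(relu(fmap))  # -> [[5, 3], [2, 4]]
```

In the programmable logic these stages run as deeply pipelined hardware rather than nested loops, but the data flow is the same.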
The number representation used
within a machine learning
inference engine also
plays a significant role in its
performance. Machine learning
applications are increasingly using
more efficient, reduced precision
fixed point number systems, such
as INT8 representation. Moving
to reduced precision fixed point
number systems incurs no
significant loss in accuracy
when compared with a traditional
32-bit floating point (FP32) approach.
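A minimal sketch of symmetric INT8 quantisation shows why the accuracy loss is small: the rounding error per weight is bounded by half the quantisation step. The weight values below are invented for the example:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantisation of FP32 weights to INT8 (sketch).

    The largest weight magnitude maps to 127; every other weight is
    scaled, rounded, and clamped to the signed 8-bit range.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate FP32 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9, -0.33]   # made-up FP32 weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# max_err never exceeds scale / 2, i.e. half a quantisation step.
```

Production flows typically quantise per layer or per channel and may retrain to recover the last fraction of accuracy, but the principle is the one shown here.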
As fixed point arithmetic is
also considerably easier to
implement than floating point,
this move to INT8 provides
more efficient, faster solutions in
some implementations. This use
of fixed point number systems is
ideal for implementation in
programmable logic (PL), and
reVISION provides the ability to
work with INT8 representations in
the PL. These INT8 representations
enable the use of dedicated
DSP blocks within the PL. The
architecture of these DSP blocks
enables up to two concurrent INT8
Multiply Accumulate operations
to be performed when using the
same kernel weights. This yields
not only a high-performance
implementation, but also one
with reduced power
dissipation. The flexible nature of
programmable logic also enables
easy implementation of further
reduced precision fixed point
number representation systems as
they are adopted.
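The dual-MAC technique can be illustrated in software: two INT8 activations are packed into one wide operand so that a single multiply by the shared weight produces both products at once. This is a simplified sketch of the packing idea, not the exact DSP block implementation:

```python
def packed_dual_mac(a, b, w):
    """Compute a*w and b*w with one wide multiply, sharing weight w.

    a, b, w are signed 8-bit values. Activation a is shifted 18 bits
    above b so the two products land in disjoint fields of the wide
    result; the low field is sign-extended and subtracted out to
    recover the high product exactly.
    """
    packed = (a << 18) + b          # pack both activations into one operand
    p = packed * w                  # single multiplication: a*w*2^18 + b*w
    low = p & 0x3FFFF               # lower 18 bits hold b*w ...
    if low >= 1 << 17:              # ... once sign-extended from 18 bits
        low -= 1 << 18
    high = (p - low) >> 18          # remaining bits hold a*w
    return high, low

print(packed_dual_mac(5, -7, 3))   # -> (15, -21), i.e. (5*3, -7*3)
```

An 18-bit field is enough because an INT8 x INT8 product spans at most 16 bits; on hardware, accumulating many such packed products lets one wide DSP multiplier sustain two MACs per cycle when the weight is shared.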
Conclusion
reVISION provides developers with
the ability to leverage the capability
provided by Zynq-7000 and Zynq
UltraScale+ MPSoC devices.
This is especially true as
developers do not need to be
specialists to implement algorithms
in programmable logic. These
algorithms and machine learning
applications can be implemented
using high-level industry standard
frameworks, reducing the
development time of the system.
This allows the developer to deliver
a system which provides increased
responsiveness, is reconfigurable,
and is power-optimised.
For more information, please visit:
http://www.xilinx.com/products/design-tools/embedded-vision-zone.html
New-Tech Magazine Europe | 27