at relative CPU time, this time we have recorded the relative GPU vertex jobs time. Again, the smaller the better: regular stereo in blue and multiview in red. The savings are immediately visible on this chart, as we are no longer computing most of the shader twice.
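The shape of that saving can be sketched with a toy cost model: the view-independent part of the vertex shader (model-space transforms, skinning) can be shared between the two eyes, while only the view-dependent projection must run once per view. The cost weights below are made-up illustrative numbers, and whether a driver can actually share this work depends on the implementation.

```python
# Toy per-vertex GPU cost model, illustrating why multiview can reduce
# vertex processing time. The 0.8/0.2 split is an assumption for the
# sake of the example, not a measured figure.

VIEW_INDEPENDENT = 0.8   # fraction of the shader shared between views
VIEW_DEPENDENT = 0.2     # fraction that must run once per view

def stereo_vertex_cost(num_vertices, num_views=2):
    # Regular stereo re-runs the whole shader for every view.
    return num_vertices * num_views * (VIEW_INDEPENDENT + VIEW_DEPENDENT)

def multiview_vertex_cost(num_vertices, num_views=2):
    # Multiview shares the view-independent part across views.
    return num_vertices * (VIEW_INDEPENDENT + num_views * VIEW_DEPENDENT)

ratio = multiview_vertex_cost(100_000) / stereo_vertex_cost(100_000)
# With these weights the multiview vertex workload is 60% of stereo's.
```

The closer a shader is to fully view-independent, the larger the saving; a shader that is entirely view-dependent would see none.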
Wrap it up
From our measurements, multiview is the perfect extension for CPU-bound applications, in which you can expect between 40% and 50% improvement. If your application is not yet CPU bound, multiview should still not be overlooked, as it can also somewhat improve your vertex processing time at a very limited cost.
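The CPU-side saving comes from draw-call submission: regular stereo walks the scene and submits every draw twice, once per eye, while multiview submits each draw once and lets the GPU broadcast it to both views. A minimal sketch of that difference (the call representation here is a placeholder, not a real GL API):

```python
# Count draw submissions for regular stereo vs. multiview rendering.

def stereo_submit(scene, num_views=2):
    calls = []
    for view in range(num_views):
        for mesh in scene:
            calls.append((view, mesh))  # one draw call per mesh, per view
    return calls

def multiview_submit(scene):
    # With multiview, one call covers every layer of the framebuffer's
    # texture array, so the scene is walked only once.
    return [("all_views", mesh) for mesh in scene]

scene = ["terrain", "cube_0", "cube_1"]
stereo_calls = stereo_submit(scene)        # 6 submissions
multiview_calls = multiview_submit(scene)  # 3 submissions
```

Halving the submissions halves the driver and application scene-traversal work, which is exactly the part that dominates in a CPU-bound frame.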
It is noteworthy that multiview renders to an array of textures inside a framebuffer, so the result is not directly ready for the front buffer. You first need to render the two views side by side. This composition step is mandatory, but in most cases the time it takes is small compared to the rendering time and can thus be neglected. Moreover, this step can be integrated directly into the lens deformation or timewarp process.
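Conceptually, the composition step just places the two layers of the texture array next to each other in one output buffer. The sketch below models images as row-major lists of rows purely for illustration; a real engine would do this with a trivial GPU blit or fullscreen quad, or fold it into the distortion pass as noted above.

```python
# CPU-side sketch of the side-by-side composition of the two multiview
# layers into a single frame ready for scan-out or lens correction.

def compose_side_by_side(left, right):
    assert len(left) == len(right), "both eyes must have the same height"
    # Concatenate each pair of rows: left-eye pixels, then right-eye pixels.
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Two tiny 2x2 "eye buffers" with distinct pixel values.
left_eye = [["L", "L"], ["L", "L"]]
right_eye = [["R", "R"], ["R", "R"]]
frame = compose_side_by_side(left_eye, right_eye)
# frame is 2 rows of 4 pixels each: L L R R
```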
Multiview Applications
The obvious way, and the one already discussed in this article, is to use multiview in your VR rendering pipeline: both of your views are then rendered using the same draw calls onto a shared framebuffer. If we think outside the box, though, multiview opens up a whole new field in which we can innovate.
Foveated Rendering
Each year sees our device screen
Fig. 6: Relative GPU time between multiview and regular stereo. The smaller the better, with the number of cubes on the x-axis and the relative time on the y-axis. Multiview in red, and regular stereo in blue.
Fig. 7: Example of an application using foveated rendering
Fig. 8: A different reflection for each eye, demonstrated here in Ice Cave VR
50 l New-Tech Magazine Europe