
Fig. 1: Stereo camera setup

We can clearly see a pattern here: we are issuing two draw calls and sending the same geometry twice. While Vertex Buffer Objects can mitigate the latter, doubling the draw calls remains a major issue, as it adds significant overhead on your CPU. That is where multiview kicks in: it allows you to render the same scene from multiple points of view with a single draw call.
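
To make the pattern concrete, here is a rough sketch of such a classic stereo loop, with one full draw pass per eye (all names and the surrounding setup are illustrative, not taken from the article):

#include <GLES3/gl3.h>

/* Illustrative per-eye state. */
typedef struct {
    GLint   viewport[4];      /* x, y, width, height of this eye's half */
    GLfloat view_matrix[16];  /* column-major view matrix for this eye  */
} Eye;

/* Classic stereo: the same geometry is drawn once per eye,
 * i.e. two draw calls and two uniform updates per frame. */
static void draw_stereo_classic(const Eye eyes[2], GLuint program,
                                GLint u_view_loc, GLuint vao,
                                GLsizei index_count)
{
    glUseProgram(program);
    glBindVertexArray(vao);
    for (int e = 0; e < 2; ++e) {
        glViewport(eyes[e].viewport[0], eyes[e].viewport[1],
                   eyes[e].viewport[2], eyes[e].viewport[3]);
        glUniformMatrix4fv(u_view_loc, 1, GL_FALSE, eyes[e].view_matrix);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
    }
}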

Multiview Double Action Extension

Before going into the details of the expected improvements, I would like to have a quick look at the code needed to get multiview up and running.

Multiview currently exists in two major flavours: OVR_multiview and OVR_multiview2. While they share the same underlying construction, OVR_multiview restricts the use of the gl_ViewID_OVR variable to the computation of gl_Position. This means you can only use the view ID in the vertex shader's position computation; if you want to use it inside your fragment shader, or in other parts of your shaders, you will need OVR_multiview2.
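
For example, a common debugging trick is to tint each eye a different colour from the fragment shader, something plain OVR_multiview would not allow; a minimal sketch of such a multiview2 fragment shader:

#version 300 es
#extension GL_OVR_multiview2 : require

// Debug fragment shader: tint each eye a different colour.
// Plain OVR_multiview would reject this use of gl_ViewID_OVR;
// OVR_multiview2 lifts the restriction.
precision mediump float;
out vec4 fragColor;

void main()
{
    // View 0 = left eye, view 1 = right eye.
    fragColor = (gl_ViewID_OVR == 0u) ? vec4(1.0, 0.6, 0.6, 1.0)
                                      : vec4(0.6, 1.0, 0.6, 1.0);
}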

As antialiasing is one of the key requirements of VR, multiview also comes in a version with multisampling, called OVR_multiview_multisampled_render_to_texture. This extension is built against the specifications of OVR_multiview2 and EXT_multisampled_render_to_texture.

Some devices might only support some of the multiview extensions, so remember to always query your OpenGL ES driver before using one of them. This is the code snippet you may want to use to test if OVR_multiview is available in your driver:
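
The original snippet is not reproduced here; a minimal check of the extension string could look like this:

#include <string.h>
#include <GLES3/gl3.h>

/* Returns non-zero if the driver advertises OVR_multiview.
 * Note: this plain substring test also matches GL_OVR_multiview2 and
 * the multisampled variant; a stricter check would tokenise the string. */
static int has_multiview(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, "GL_OVR_multiview") != NULL;
}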

In your code, multiview manifests itself on two fronts: during the creation of your frame buffer and inside your shaders, and you will be amazed at how simple it is to use.
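
The original listings are not reproduced here; as a rough sketch of the frame buffer side, the colour attachment becomes a two-layer texture array (one layer per eye) attached with glFramebufferTextureMultiviewOVR instead of glFramebufferTexture2D. All names below are illustrative, and on many platforms the OVR entry point must first be loaded through eglGetProcAddress:

#include <GLES3/gl3.h>
#include <GLES2/gl2ext.h>   /* declares glFramebufferTextureMultiviewOVR */

/* Sketch of a multiview frame buffer with a two-layer colour attachment. */
static GLuint create_multiview_fbo(GLsizei width, GLsizei height)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     tex,
                                     0,    /* mip level              */
                                     0,    /* base view index        */
                                     2);   /* number of views (eyes) */
    return fbo;
}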

That is more or less all you need to change in your engine code. More or less, because instead of sending a single view matrix uniform to your shader, you need to send an array filled with the different view matrices.
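
As a hedged illustration, the upload might look like this, assuming a uniform array named u_viewMatrices (the name is illustrative and must match the one declared in your shader):

#include <GLES3/gl3.h>

/* Upload both eye view matrices in one call. `matrices` holds two
 * column-major 4x4 matrices back to back. */
static void set_view_matrices(GLuint program, const GLfloat matrices[32])
{
    GLint loc = glGetUniformLocation(program, "u_viewMatrices[0]");
    glUseProgram(program);
    glUniformMatrix4fv(loc, 2, GL_FALSE, matrices);
}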

Now for the shader part:
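
The original listing is not shown here, but a minimal multiview vertex shader along these lines could look as follows (uniform and attribute names are illustrative):

#version 300 es
#extension GL_OVR_multiview : require

// Tell the compiler how many views this shader renders to.
layout(num_views = 2) in;

// One view matrix per eye; gl_ViewID_OVR picks the matrix for the
// view currently being rendered. A per-eye projection array would
// work the same way.
uniform mat4 u_viewMatrices[2];
uniform mat4 u_projection;
uniform mat4 u_model;

in vec3 a_position;

void main()
{
    gl_Position = u_projection
                * u_viewMatrices[gl_ViewID_OVR]
                * u_model
                * vec4(a_position, 1.0);
}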

Simple, isn’t it?

Multiview will automatically run the shader multiple times, incrementing gl_ViewID_OVR so that it corresponds to the view currently being processed.

For more in-depth information on how to implement multiview, see the sample code and article "Using Multiview Rendering".

Why use Multiview?

Now that you know how to implement multiview, I will try to give you some insight into what kind of performance improvements you can expect.

The Multiview Timeline

Before diving into the numbers, let’s discuss the theory.

In this timeline, we can see how our CPU-GPU system interacts in order to render a frame using regular stereo. For more in-depth information on how GPU scheduling works on Mali, please see Peter Harris’ blogs.

First the CPU works to get all the information ready, then the vertex jobs are executed and finally the fragment jobs. On this timeline the light blue bars are all the jobs related to the left eye, the dark blue to the right eye and the orange to the composition (rendering our two eyes side by side into a buffer).

In comparison, this is the same
