5 Putting It All Together: Integrating Head Tracking and 3D Rendering
This chapter covers
· building a simple scene with elementary lighting and camera controls, displayed on a conventional screen.
· modifying our scene to split its image for stereoscopic viewing, reading the offset of each eye from the Rift’s user settings.
· modifying our scene to display on the Rift, targeting the Rift display and enabling distortion.
· enabling head tracking, producing a fully immersive Rift experience.
· timewarp, Oculus’ advanced technique for increasing immersion by updating distortion at (almost literally) the last millisecond.
Let’s take stock of all the aspects of computer graphics we’ve seen in the book so far, because now it’s time to put them into play. We’ll build up a complete example in this chapter, from basic rendering to advanced Riftiness. The scene itself is going to be very simple—just a cube on a stick in space—but artistic skill in scene design isn’t the focus here.
Let’s start with the basics. To render a 3D scene using Direct3D or OpenGL for a conventional monitor, you need a number of elements (sketched in code after this list):
· A view matrix to position the camera within the scene
· A projection matrix to define the view frustum[1], which contains the field of view and aspect ratio
· Shaders, which transform scene geometry into view geometry and from there into real pixels
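As a concrete illustration, here is a minimal sketch of those three elements in C++. It assumes the GLM math library (a common companion to OpenGL, though not the only option); the specific camera position, field of view, and shader source are illustrative values, not taken from the chapter's example code.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// View matrix: places the camera at (0, 1, 5), looking at the origin.
// (Illustrative values; any eye/target/up will do.)
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, 1.0f, 5.0f),   // eye position
    glm::vec3(0.0f, 0.0f, 0.0f),   // point the camera looks at
    glm::vec3(0.0f, 1.0f, 0.0f));  // up direction

// Projection matrix: defines the view frustum, here with a 60-degree
// vertical field of view and a 16:9 aspect ratio.
glm::mat4 projection = glm::perspective(
    glm::radians(60.0f),  // vertical field of view
    16.0f / 9.0f,         // aspect ratio
    0.1f, 100.0f);        // near and far clipping planes

// Vertex shader: transforms scene geometry into clip space; the GPU
// then rasterizes the result into real pixels.
const char* vertexShaderSource = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 projection;
    uniform mat4 view;
    uniform mat4 model;
    void main() {
        gl_Position = projection * view * model * vec4(position, 1.0);
    }
)";
```

Every variation we build in this chapter, stereoscopic, distorted, or head-tracked, is ultimately a matter of manipulating these same pieces: which view and projection matrices we feed the shaders, and where the resulting pixels end up.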