CS 184: COMPUTER GRAPHICS



Lecture #18 -- Wed 4/1/2009.

Testing Your Skills:  Phong Shading Calculations



You are processing the shape on the left with Phong interpolation. The computed dot products between the averaged vertex normals and the light direction are as indicated. Compute the resulting brightness values at the indicated points A, B, C, and D,
assuming ka = ks = 0;   kd = 0.5.
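(As a sanity check on the formula, not the answer: with ka = ks = 0, the illumination equation reduces to

    brightness = kd * (N . L).

So a hypothetical point where the interpolated, re-normalized normal gives N . L = 0.8 would come out at 0.5 * 0.8 = 0.4, and any point with N . L <= 0 comes out at 0. Remember that Phong interpolation interpolates the vertex normals across the face and re-normalizes them before taking the dot product.)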


PREPARATION FOR MIDTERM (next Wednesday!)

The time to prepare for the midterm exam is NOW -- not Tuesday night!
So, what should you be doing?  See the online info on exam preparation.
Review quizzes. Review notes.  Think through topic list.
Prepare your one sheet of notes.
Follow up on stuff that is still unclear after the weekend and ask questions in the discussion/review sessions,
or in my office hours on Monday  (Friday I will be out of town).


Subdivision Curves & Surfaces

see: Powerpoint slides

Implementing texture mapping on CC-subdivision surfaces of arbitrary genus would be a nice part of a final project.
A good introductory booklet on Splines with interactive demos:
"Interactive Curves and Surfaces," (with Multimedia Tutorial on CAGD), A. Rockwood and P. Chambers, Morgan Kaufman Publishers, Inc.
[ This will be used in CS 284 in Fall 2009. ]


The Classical Rendering Pipeline

The main transformation steps

Scene Hierarchy --> Rendering Hierarchy
Put the Camera node "above" the World node.
Use inverse transformations whenever you need to move "upwards" in the scene tree.
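As a reminder of the algebra behind this (a sketch, independent of any particular matrix library): if the path from the World node down to the camera node applies the matrices M1, M2, ..., Mk in that order, then moving "upwards" along that same path applies

    (M1 * M2 * ... * Mk)^-1  =  Mk^-1 * ... * M2^-1 * M1^-1.

For the rigid transformations typical of a scene tree, each individual inverse is cheap: a rotation inverts to its transpose, and a translation by t inverts to a translation by -t.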

Finding the proper sequence ...
We need to think about the exact order in which we want to do all the necessary operations:
-- culling, backface elimination, clipping, shading, rasterizing ...
General principles:
-- Do least expensive, most work-saving steps first.
-- Don't throw away information you may need later.
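A minimal C++-style sketch of one ordering consistent with these two principles (every function name here is a hypothetical placeholder, not a prescribed API):

    // Cheapest, most work-saving tests come first; rasterizing comes last.
    for (Object& obj : scene) {
        if (outsideViewVolume(obj.boundingBox))   // cull whole objects early
            continue;
        for (Polygon& poly : obj.polygons) {
            transformToCameraSpace(poly);         // one compound matrix per instance
            if (isBackface(poly))                 // cheap dot-product test
                continue;
            clipToViewVolume(poly);               // may split the polygon
            shadeVertices(poly);                  // still needs the normals -- keep them!
            rasterize(poly);                      // the most expensive step
        }
    }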

One possible practical 3D Rendering Pipeline

Some issues that need special consideration:

Viewing / Rendering

Rendering means taking a snapshot of a part of the World from the point of view of the eye or the camera.
I.e., we ask the question: "What does the world look like from the point of view of the camera?"
The key parameters of camera placement are its position (3 DOF) and orientation (3 DOF); ( These are the 6 DOF of a rigid body in 3D ).
This can be conveniently specified with a Look_at Transformation:
-- the position/origin of the new system (eye),
-- a view reference point that will lie on the -n-axis (vrp),
-- and an up vector that should project onto the v-axis in the uv-plane (up).

All this defines the View Reference Coordinate System (VRCS):
The VRCS has its origin at the camera lens,
its n-axis pointing through the lens straight into the camera,
its v-axis pointing (typically) upwards,
and its u-axis at right angles to both other axes, so as to form a right-handed coordinate system.
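A small sketch of how the three look-at inputs determine this frame (hand-rolled vector math in C++; eye, vrp, and up are the parameters named above):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                         a.z*b.x - a.x*b.z,
                                         a.x*b.y - a.y*b.x}; }
    Vec3 normalize(Vec3 a) {
        double len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
        return {a.x/len, a.y/len, a.z/len};
    }

    // The vrp lies on the -n-axis, so n points from the vrp back toward the eye.
    // u = up x n and v = n x u then complete a right-handed frame with v near "up".
    void buildVRCS(Vec3 eye, Vec3 vrp, Vec3 up, Vec3& u, Vec3& v, Vec3& n) {
        n = normalize(sub(eye, vrp));
        u = normalize(cross(up, n));
        v = cross(n, u);   // already unit length, since n and u are orthonormal
    }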

For the rendering process, we transform the parts of the world to be rendered into this new reference frame, and then project onto the image plane.

The desired transformation into the VRCS can most easily be computed by modifying the scene hierarchy so that the camera becomes its "root."
We then calculate the way the World lies in the camera system by inverting the compound matrix string that leads from the world to the camera.
Now every instanced polygon in the scene can be described in the framework of the camera with a single compound matrix,
and we can easily determine whether it can be seen and how it would appear to the camera.
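In matrix terms, a sketch of this idea (Mat4, inverse(), and the chain matrices are placeholders for whatever matrix type the renderer uses; a short chain is assumed just for illustration):

    Mat4 cameraToWorld = M1 * M2 * M3;               // downward path: World --> camera
    Mat4 worldToCamera = inverse(cameraToWorld);     // the Reverse Camera Path

    // An instanced polygon reached via its own downward chain N1 * N2 now needs
    // only a single compound matrix to land in the camera's frame:
    Mat4 objectToCamera = worldToCamera * N1 * N2;
    Vec4 pCam = objectToCamera * pObject;            // vertex in VRCS coordinates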

A technique similar to this Reverse Camera Path will be used when we have to deal with the individual illuminations produced by one or more light sources:
We will make each one in turn temporarily the root of the hierarchy and determine how each polygon appears in that special reference coordinate system for a particular light, so that we can determine how much light from that source ends up on each polygon.
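Sketched with the same placeholder style as above, the per-light version of the trick might read:

    // Invert each light's downward path once; reuse it for every polygon.
    for (Light& light : lights) {
        Mat4 worldToLight = inverse(light.pathFromWorld);
        for (Polygon& poly : polygons) {
            // Express the polygon in the light's frame to evaluate the N . L terms.
            Mat4 polyToLight = worldToLight * poly.objectToWorld;
            accumulateIllumination(light, transform(polyToLight, poly));
        }
    }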

Projections (~ "batch processing" of all the operations that were done with individual rays in ray casting).

In the simplest case we may use a Parallel-Projection Camera
In this case our 3D to 2D transformation is simply to ignore the z-coordinate values, once we have found the properly oriented VRCS.
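In homogeneous matrix form this "ignore z" step is simply (a sketch; the z row is zeroed only when producing the 2D image, since depth information is still needed for visibility decisions):

    [ x' ]   [ 1 0 0 0 ] [ x ]
    [ y' ] = [ 0 1 0 0 ] [ y ]
    [ 0  ]   [ 0 0 0 0 ] [ z ]
    [ 1  ]   [ 0 0 0 1 ] [ 1 ]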

More often we will use a Perspective Projection
In this case additional parameters to describe the camera are needed
"Focal length" --> determines the opening angle of viewing pyramid;
"Film or light sensor geometry" --> positioning and size of the imaging plane and the window of interest; also front and back clipping planes.
These camera parameters can be described with 6 numbers -- specifying a 3D "world window box"
-- a rectangle in the plane z = -1, and 2 z-values for clipping planes (these will get normalized to the back and front faces of the canonical half-cube).
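Those 6 numbers might be collected as follows (a hypothetical struct, purely to make the parameter count concrete):

    struct CameraParams {
        double umin, umax;   // horizontal extent of the window rectangle at z = -1
        double vmin, vmax;   // vertical extent of the window rectangle at z = -1
        double zfront;       // front clipping plane, e.g. z = -0.1
        double zback;        // back clipping plane,  e.g. z = -100.0
    };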
This leads to a "Unified" Camera Model
-- If the center of that view rectangle lies on the -z-axis (-n-axis), we get a symmetrical view volume (else we get a somewhat slanted view).
-- A slanted view in parallel projection allows us to do oblique projections (this may require a shear transformation to get such a view volume into the canonical viewing box).
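That shear might look like this (a sketch: for a window centered at (cx, cy) in the z = -1 plane, choosing a = cx and b = cy slides that center onto the -z-axis without touching depth):

    [ 1 0 a 0 ]      x' = x + a*z
    [ 0 1 b 0 ]      y' = y + b*z
    [ 0 0 1 0 ]      z' = z
    [ 0 0 0 1 ]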


NEXT TIME ...

Mathematics of Planar Geometric Projections

How do the coordinates coming from an original object get changed during the projection step?
In the perspective case, the size of the image depends on the distance between camera and original.
See: Camera Specifications and Mapping of the Viewing Volume into the Canonical Half-Cube.
Rather than just carrying out projections, we do a full 3D->3D transformation
that produces the same effect on the x- and y- coordinate values but also preserves the relative ordering of the geometry in the z-direction.
This is called the perspective transformation.
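As a preview (conventions vary; this sketch assumes a symmetric frustum whose window is the square [-1,1] x [-1,1] in the plane z = -1):

    [ 1  0  0  0 ] [ x ]   [   x     ]                        x' = x / (-z)
    [ 0  1  0  0 ] [ y ] = [   y     ]   after dividing by    y' = y / (-z)
    [ 0  0  a  b ] [ z ]   [ a*z + b ]   w = -z:               z' = -a - b/z
    [ 0  0 -1  0 ] [ 1 ]   [  -z     ]

with a and b chosen so that the front and back clipping planes map onto the corresponding faces of the canonical half-cube. Since z' = -a - b/z is monotone in z for z < 0 (given the appropriate sign of b), the relative depth ordering is preserved, which is exactly the property claimed above.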

AND MORE ON PERSPECTIVE...


Reading Assignments:

Study:   Shirley, 2nd Ed:  Ch 9; Ch 12.1-12.7.


Programming Assignment 8: due (electronically submitted) before Saturday 4/11, 11pm  ==> Can be done in pairs!

In-class Mid-term-Exam: WED 4/8, 2:40-4:00pm



Page Editor: Carlo H. Séquin