# Warm-up Exercise:

## List several causes that can produce a non-uniform illumination on an object in a scene.

E.g., will a directional light shining on a flat surface always cause uniform illumination?

Will it always cause uniform apparent brightness?

Can you think of a case where a directional light creates uniform illumination on a curved surface?


# Lecture #10 -- Wed 2/25/2009.

## Crucial Concepts from Last Lecture:

The Sphere-World of Assignment #5:
How to move up and down this scene hierarchy.
Move rays from the camera all the way to the unit spheres at the leaves of the tree, and do the intersection tests there.
Then move the found intersection point and the normal at that point back up to the WORLD coordinate system to cast secondary rays.
Need to calculate proper reflection and refraction angles.
Later we will start moving the camera around this world.
==> Build a robust transformation data structure and become proficient with it.
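The down-and-up traversal above can be sketched like this (a hypothetical sketch; the row-major 4x4 matrix layout and function names are assumptions, not the Assignment #5 framework's actual API):

```python
import math

def transform_point(M, p):
    # apply 4x4 matrix M to a point p = (x, y, z), assuming w = 1
    x, y, z = p
    return tuple(M[i][0]*x + M[i][1]*y + M[i][2]*z + M[i][3] for i in range(3))

def transform_direction(M, d):
    # directions have w = 0, so the translation column is ignored
    x, y, z = d
    return tuple(M[i][0]*x + M[i][1]*y + M[i][2]*z for i in range(3))

def intersect_unit_sphere(origin, direction):
    # solve |o + t*d|^2 = 1 for the smallest positive t, or return None
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - 1.0
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0*a)
    return t if t > 0.0 else None
```

The idea: transform the ray's origin with `transform_point` and its direction with `transform_direction` using the leaf's WORLD-to-object matrix, intersect against the unit sphere there, then move the hit point back up with the forward matrix (and the normal with the inverse transpose, as discussed below).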

## Extension of the Scene Description format

```
(sphere id
  (radius radius_float)
  (material material_id)
)

(material id
  (color  color_triple )
  (ka ka_float)     # diffuse reflection coefficient for ambient light (hack!)
  (kd kd_float)     # diffuse reflection coefficient
  (ks ks_float)     # specular reflection coefficient, aka "kr"
  (ksp ksp_float)   # specular angle fall-off
  (ksm ksm_float)   # metalness
  (kt kt_float)     # transmission coefficient
  (ktn ktn_float)   # refractive index
)

(camera id
  (perspective  0|1 )   # (perspective 0) means parallel projection
  (l l_float)           # left   boundary of window in the image/near-clipping plane
  (r r_float)           # right  boundary of window in the image/near-clipping plane
  (b b_float)           # bottom boundary of window in the image/near-clipping plane
  (t t_float)           # top    boundary of window in the image/near-clipping plane
  (n n_float)           # sets the -z coordinate of the image plane, and of the near-clipping plane
  (f f_float)           # sets the far clipping plane; this is typically not used for raytracing
)

(light id
  (type  lighttype_flag)
  (color  color_triple )
  (falloff  falloff_float)                 # falloff exponent for point- and spot-lights
  (angularfalloff  angularfalloff_float)   # exponent on cosine for spot-lights
)
# by default, localized lights are positioned at (0,0,0),
# and directed lights are shining in the direction (0,0,-1).
```
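A purely hypothetical instance of this format might look as follows (the exact literal syntax, e.g. of `color_triple`, is an assumption here and may differ from the actual assignment format):

```
(material mat0
  (color (0.8 0.2 0.2))
  (ka 0.1) (kd 0.6) (ks 0.3) (ksp 30.0) (ksm 0.5) (kt 0.0) (ktn 1.0)
)

(sphere sph0
  (radius 1.0)
  (material mat0)
)
```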

## Finishing up Remarks on 3D Transformations:

Most 3D transformation matrices can readily be derived from the 2D versions: add an additional row and column to all matrices to make them 4x4.
For a detailed view of the matrices, see [Shirley, Chapter 6.2].

Scaling ==> trivial; just add the 3rd coordinate.

Translation -- not really a linear transform! ==> need to resort to homogeneous coordinates again.
A shear transformation in 4-dimensional space has the effect of a net translation in the plane w=1.
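For instance (a minimal sketch, row-major 4x4 lists; helper names are made up for illustration):

```python
def translation(tx, ty, tz):
    # 4x4 homogeneous translation: a shear of 4D space that shifts the w=1 plane
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply(M, v):
    # multiply a 4x4 matrix by a homogeneous 4-vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
```

Note that a point (w=1) is moved by (tx, ty, tz), while a direction vector (w=0) passes through unchanged.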

Rotation around one coordinate axis ==> just like 2D, involving only those two axes.    Cyclically move through the xy-, yz-, and zx-planes to get all three cases.
Positive rotations are CCW when looking down the corresponding rotation axis (right hand rule: thumb in axis direction ==> fingers point in positive rotation sense).
(Angles in the "matrix machinery" should be expressed in radians; at the user interface level, degrees are more convenient).
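The three cases, as a hedged sketch (pure Python, angles in radians, right-hand rule as above):

```python
import math

def rot_x(a):
    # rotate about x: acts only on the y-z plane
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0,   c,  -s, 0.0],
            [0.0,   s,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def rot_y(a):
    # rotate about y: acts on the z-x plane (note the cyclic sign placement)
    c, s = math.cos(a), math.sin(a)
    return [[  c, 0.0,   s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [ -s, 0.0,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def rot_z(a):
    # rotate about z: acts on the x-y plane, CCW when looking down +z
    c, s = math.cos(a), math.sin(a)
    return [[  c,  -s, 0.0, 0.0],
            [  s,   c, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]
```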

Rotations around an arbitrary axis deserve some extra attention. They are useful for building kinematic scenes, e.g., with wheels spinning around skewed axes.
3 conceptual approaches:

1.)  Classical approach: build a compound transformation in the following way:
--  move the rotating object so that its rotation axis goes through origin;
--  turn rotation axis into the z-axis (in a 2-step rotation around z-axis and y-axis);
--  apply the desired rotation around the current z-axis;
--  rotate the axis back to its former orientation (inverse of the 2 rotations in step two);
--  move object back to its original position (inverse of original translation).
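The five steps above can be collapsed into one compound matrix; here is a minimal pure-Python sketch (function and helper names are assumptions, not the framework's API):

```python
import math

def matmul(A, B):
    # 4x4 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1.0,0.0,0.0,tx],[0.0,1.0,0.0,ty],[0.0,0.0,1.0,tz],[0.0,0.0,0.0,1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c,0.0,s,0.0],[0.0,1.0,0.0,0.0],[-s,0.0,c,0.0],[0.0,0.0,0.0,1.0]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c,-s,0.0,0.0],[s,c,0.0,0.0],[0.0,0.0,1.0,0.0],[0.0,0.0,0.0,1.0]]

def rotate_about_axis(p, a, beta):
    """Rotation by angle beta about the axis through point p with direction a."""
    ax, ay, az = a
    phi = math.atan2(ay, ax)                    # longitude of the axis
    theta = math.atan2(math.hypot(ax, ay), az)  # angle between axis and z-axis
    # rightmost factor is applied first: translate to origin, align axis with z,
    # rotate about z, then undo the alignment and the translation
    M = translation(*p)
    for step in (rot_z(phi), rot_y(theta), rot_z(beta),
                 rot_y(-theta), rot_z(-phi),
                 translation(-p[0], -p[1], -p[2])):
        M = matmul(M, step)
    return M
```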

2.)  Another way to achieve the above effect is to use the "change of basis" matrix.
Define a local coordinate system with its z-axis aligned with the rotation axis.
Do a coordinate transform that expresses the world coordinates in this new coordinate system, in which the desired rotation can be done easily (around the z-axis).
(This can be achieved with a matrix that describes the X-, Y-, Z- axis unit vectors as the first 3 columns, and the coordinates xO, yO, zO of the origin as the fourth.)
After doing the desired rotation, do a change of basis back to the old coordinate system.
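The change-of-basis matrix and its inverse can be sketched like this for an orthonormal frame (pure Python; helper names are made up for illustration):

```python
def frame_matrix(x, y, z, o):
    # columns are the frame's axis unit vectors X, Y, Z and its origin O
    return [[x[0], y[0], z[0], o[0]],
            [x[1], y[1], z[1], o[1]],
            [x[2], y[2], z[2], o[2]],
            [0.0,  0.0,  0.0,  1.0]]

def frame_inverse(x, y, z, o):
    # for an orthonormal frame, no general matrix inversion is needed:
    # transpose the rotation part and back-translate the origin
    def dot(u, v):
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    return [[x[0], x[1], x[2], -dot(x, o)],
            [y[0], y[1], y[2], -dot(y, o)],
            [z[0], z[1], z[2], -dot(z, o)],
            [0.0,  0.0,  0.0,  1.0]]
```

The full rotation is then frame_matrix(...) times the easy z-rotation times frame_inverse(...).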

3.)  A third way is to use vector algebra to decompose the vector from the origin to the point to be transformed
into two components: one parallel to the rotation axis, and the other perpendicular to it.
Only that second component needs to be rotated, and this can be done in a plane perpendicular to the rotation axis.
The overall transformation matrix then results from a composition of these vector components (see lab notes).

Mathematically, this results in the following formulation (assuming a is an axis through the origin):
First, create the matrix A which is the linear transformation that computes the cross product of the vector a with any other vector, v.

a × v = ( a_y·v_z − a_z·v_y ,  a_z·v_x − a_x·v_z ,  a_x·v_y − a_y·v_x )ᵀ = A·v,

with   A = |   0   −a_z   a_y |
           |  a_z    0   −a_x |
           | −a_y   a_x    0  |

Now, the rotation matrix for a rotation by angle b around the axis a can be written in terms of A as:   Q = e^(Ab) = I + A·sin(b) + A²·[1 − cos(b)]     ---- This is also known as the Rodrigues Formula.
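A direct transcription of this formula (a minimal sketch; assumes a is a unit vector):

```python
import math

def cross_matrix(a):
    # the matrix A such that A @ v == a x v
    ax, ay, az = a
    return [[0.0, -az,  ay],
            [ az, 0.0, -ax],
            [-ay,  ax, 0.0]]

def rodrigues(a, b):
    # Q = I + A sin(b) + A^2 (1 - cos(b)), for a unit axis a and angle b
    A = cross_matrix(a)
    A2 = [[sum(A[i][k] * A[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    s, c1 = math.sin(b), 1.0 - math.cos(b)
    return [[I[i][j] + s * A[i][j] + c1 * A2[i][j] for j in range(3)]
            for i in range(3)]
```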

The framework that the TAs are building will do all this work for you !
( later in the course: Quaternions!).

Inverse transformations are also straightforward; see  [Shirley, Chapter 6.4].

SVD still works the same way: i.e., any arbitrary affine 3D transformation can also be expressed as a compound matrix of pure rotations and non-uniform scalings.
The  eigenvectors of a matrix M still correspond to those vectors that do not change direction when M is applied,
(e.g. vectors parallel to the rotation axis when M is an arbitrary 3D rotation).

Transformation of normal vectors deserves again some extra attention. (You need that to compute the proper illumination intensities and reflection angles.)
The proper transformation matrix is the transpose of the inverse matrix, i.e., (M⁻¹)ᵀ [Shirley, Chapter 6.2.2].
An intuitive explanation why that makes some sense (Thanks to Jimmy Andrews):
Let's assume that M is the compound matrix that inserts a leaf node into WORLD.
Doing an SVD on that matrix will decompose it into the form R1·S·R2.
The inverse of a compound matrix:  (A·B·C)⁻¹ = C⁻¹·B⁻¹·A⁻¹      ===>   Thus,  M⁻¹ = R2⁻¹·S⁻¹·R1⁻¹
The transpose of a compound matrix:  (A·B·C)ᵀ = Cᵀ·Bᵀ·Aᵀ    ===>   Thus,  (M⁻¹)ᵀ = (R1⁻¹)ᵀ·(S⁻¹)ᵀ·(R2⁻¹)ᵀ = R1·S⁻¹·R2
(since the inverse transpose of a rotation is the rotation itself, and S is diagonal).
Thus, (M⁻¹)ᵀ leaves all rotation components untouched, but inverts the non-uniform scaling component.
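A tiny numerical check of this claim (a made-up example using a diagonal scale, so its inverse transpose is just the reciprocal scale):

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# a non-uniform scale and its inverse transpose (diagonal ==> just reciprocals)
M      = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
M_invT = [[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

tangent = [1.0, -1.0, 0.0]   # lies in the surface
normal  = [1.0,  1.0, 0.0]   # perpendicular to the tangent

# transforming the normal by M itself breaks perpendicularity,
# while the inverse transpose preserves it:
wrong = dot(matvec(M, tangent), matvec(M, normal))       # nonzero
right = dot(matvec(M, tangent), matvec(M_invT, normal))  # zero
```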

## Distribution Ray-tracing -- improving resolution quality, reducing aliasing

Shooting only a single ray per pixel can result in objectionable aliasing effects; these can be reduced by shooting multiple rays per pixel and averaging the returned (r,g,b) intensities.
In the simplest case, each pixel is sampled according to a regular grid (i.e. 2, 3, or 4 samples per pixel edge). This is equivalent to assuming that you have a screen in which the number of pixels in the horizontal and vertical directions is increased by an integer multiple.
In a more sophisticated program, the individual super-sampling rays would be randomized in their positions within a pixel.
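A minimal sketch of such jittered super-sampling (names are illustrative; n*n samples on a regular grid, each randomly displaced within its own sub-cell):

```python
import random

def jittered_samples(px, py, n):
    """n*n sample positions inside pixel (px, py): a regular n-by-n grid,
    with each sample jittered uniformly within its own sub-cell."""
    samples = []
    for i in range(n):
        for j in range(n):
            samples.append((px + (i + random.random()) / n,
                            py + (j + random.random()) / n))
    return samples
```

The renderer would trace one ray through each sample position and average the returned (r,g,b) values into the pixel's final color.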

This one basic trick -- shooting several distributed rays and then averaging the results -- can simulate several useful effects:
Fuzzy Reflections
Depth of Field
Motion Blur