CS184 Lecture 15 summary

Texture mapping

Texture mapping is the process of adding surface detail to a 3D object using a 2D image file. The image file is normally in JPEG, GIF, or PNG format.

In the simplest case, you can think of the texture image as being "wrapped" onto the object's surface. In VRML, the wrapping process is different for each basic shape. Here is a sample texture from the "VRML Sourcebook", and here are examples from that book of the texture mapped onto a cube, cone, cylinder, and sphere. In these examples, several copies of the texture are wrapped onto the object; for the cube, for example, there is one copy of the texture image on each face.

More interesting cases are the IndexedFaceSet and the ElevationGrid. For these surfaces, the texture map is projected orthographically onto the surface "from below". For the IndexedFaceSet, "below" is defined by taking the smallest dimension of its bounding box as the vertical. Here are examples of an IndexedFaceSet with the image used to texture map it, and an ElevationGrid with the image used to map it.
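
The idea behind this default mapping can be sketched as follows. This is only a rough Python illustration of the projection described above, not the exact algorithm a VRML browser uses; the function name and the scaling into the 0..1 range are assumptions.

    def default_texture_coords(vertices):
        """Rough sketch: project the vertices of an IndexedFaceSet onto the
        plane spanned by the two largest bounding-box dimensions, treating the
        smallest dimension as the vertical, and scale into the 0..1 range.
        (Degenerate bounding boxes are not handled.)"""
        mins = [min(v[i] for v in vertices) for i in range(3)]
        maxs = [max(v[i] for v in vertices) for i in range(3)]
        sizes = [maxs[i] - mins[i] for i in range(3)]
        # Sort the axes by bounding-box extent, largest first; the axis with
        # the smallest extent plays the role of "below"/vertical.
        s_axis, t_axis, _ = sorted(range(3), key=lambda i: sizes[i], reverse=True)
        coords = []
        for v in vertices:
            s = (v[s_axis] - mins[s_axis]) / sizes[s_axis]
            t = (v[t_axis] - mins[t_axis]) / sizes[t_axis]
            coords.append((s, t))
        return coords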

For extrusion shapes, one axis of the texture image runs along the spine, while the other wraps around the cross-section. Here is an example shape (a donut) and the texture image used to map it.
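
As a rough sketch of this parameterization (the function name and the even spacing are assumptions for illustration; the actual VRML Extrusion rules also cover details such as end caps):

    def extrusion_texture_coords(num_spine, num_cross):
        """One texture axis runs along the spine, the other wraps around the
        cross-section; both are spread evenly over the 0..1 range."""
        coords = []
        for i in range(num_spine):            # position along the spine
            for j in range(num_cross):        # position around the cross-section
                s = j / (num_cross - 1)       # wraps around the cross-section
                t = i / (num_spine - 1)       # runs along the spine
                coords.append((s, t))
        return coords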

VRML also supports animated textures. These can be very useful for simulating small shape changes such as facial animation, or this whirlpool example.

Texture Coordinates

The texture mapping schemes described above provide a limited range of possibilities for texturing objects, especially for complex shapes like the ElevationGrid or IndexedFaceSet. Texture coordinates provide a way to explicitly control how the texture image is mapped onto the shape.

For texture coordinates, we assume that the texture image has X and Y coordinates in the range 0 to 1. Each vertex of the surface (ElevationGrid or IndexedFaceSet) is assigned an explicit coordinate (xi, yi) in the texture image. Points on the surface derive their color from the corresponding points in the texture image. A face of the object maps to a region with the corresponding number of sides in the texture image. We will assume the face is triangular, or has been triangulated. The color of any point inside the face is found by linearly interpolating the texture coordinates of the vertices that bound the face, and looking up that point in the texture image.
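
As a concrete picture of the data involved, here is a small Python stand-in (the field names are made up for illustration; in VRML the texture coordinates and their per-face indices live in the TextureCoordinate node and the texCoordIndex field):

    mesh = {
        # 3D vertex positions of a unit square, split into two triangles.
        "points":     [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
        # Explicit (x, y) texture coordinates, one per vertex, in the 0..1 image.
        "tex_points": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
        # Each face lists vertex indices and, in parallel, texture-coordinate indices.
        "faces":      [(0, 1, 2), (0, 2, 3)],
        "tex_faces":  [(0, 1, 2), (0, 2, 3)],
    }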

To put this another way, a point on a triangular face maps to a point in the texture image with the same barycentric coordinates, relative to the 3 vertices that bound the face. Let's explore those for a minute:

Barycentric Coordinates

For a triangular face with vertices a, b, and c in 2D or 3D, it is possible to express any point p inside the face as a linear combination

p = u a + v b + w c

where u, v, and w lie between 0 and 1 and sum to 1. The tuple (u, v, w) is referred to as the barycentric coordinates of the point p.
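
One common way to compute these coordinates in 2D is via signed triangle areas: the weight of each vertex is the area of the sub-triangle opposite it, divided by the area of the whole triangle. A minimal Python sketch (the same idea extends to 3D using cross products):

    def barycentric(p, a, b, c):
        """Return (u, v, w) with p = u*a + v*b + w*c and u + v + w = 1,
        for 2D points given as (x, y) tuples."""
        def signed_area(p0, p1, p2):
            return 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                          (p2[0] - p0[0]) * (p1[1] - p0[1]))
        total = signed_area(a, b, c)
        u = signed_area(p, b, c) / total   # weight of vertex a
        v = signed_area(a, p, c) / total   # weight of vertex b
        w = signed_area(a, b, p) / total   # weight of vertex c
        return u, v, w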

To compute the texture image coordinates of an arbitrary point p in a triangular face a, b, c, we first determine the barycentric coordinates (u, v, w) of p. Then we look up the texture coordinates a', b', c' of a, b, c and compute the point

p' = u a' + v b' + w c'

The color of this point in the texture image determines the color of p.
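
Putting the two steps together, here is a hedged sketch that reuses the barycentric() helper sketched above (so it assumes 2D points); the image is assumed to be a plain list of rows of RGB tuples indexed as image[row][col], which is an assumption for illustration rather than any particular library's format.

    def texture_color(p, a, b, c, ta, tb, tc, image):
        """Color of point p inside triangle (a, b, c), whose vertices carry
        texture coordinates ta, tb, tc (each an (x, y) pair in 0..1)."""
        u, v, w = barycentric(p, a, b, c)
        # Interpolate the vertices' texture coordinates with the same weights.
        x = u * ta[0] + v * tb[0] + w * tc[0]
        y = u * ta[1] + v * tb[1] + w * tc[1]
        # Convert the 0..1 texture coordinates to a pixel and read its color,
        # taking y = 0 to be the bottom row of the image.
        height, width = len(image), len(image[0])
        col = min(int(x * width), width - 1)
        row = min(int((1.0 - y) * height), height - 1)
        return image[row][col]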

This mapping scheme provides a lot of generality in mapping from texture images to shapes. It is particularly useful for mappings that involve a large change in shape. It is also possible to reuse a patch of the texture image across multiple object faces, or to have the two sides of an edge map to different edges in the texture image. This is useful, for example, in mapping a sphere (example).

Here is a square face mapped to a subset of the texture image. For a non-triangular face, the coordinates would normally be determined by barycentric coordinates within a triangulation of the face.
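
For instance, texture coordinates like the following (illustrative values only) would map a square face onto just the central portion of the texture image rather than the whole image:

    square_points    = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]                  # 3D corners of the face
    square_texcoords = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]      # central quarter of the image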