Data-Driven Geometry, Lighting and Materials


Realistic image synthesis requires accurate models for object geometry, illumination and material properties. Today, these are often the limiting factor in realism, and we therefore often acquire them from the real world, using methods commonly referred to as inverse or image-based modeling and rendering. The first challenge is developing efficient and robust acquisition methods. However, even if we do acquire measured geometry, illumination and materials, it is difficult to work with them, since they often represent unstructured high-dimensional data. Hence, a challenge is determining compact structured representations. Finally, we seek to develop efficient Monte Carlo rendering algorithms that can incorporate measured illumination or reflectance for rendering. A new project is to consider the challenge of different resolutions and develop multiscale representations of appearance.

Primary Current Participants
Ravi Ramamoorthi
Aner Ben-Artzi (graduated May 2007)
Peter Belhumeur
James Davis (UC Santa Cruz)
Jinwei Gu
Charles Han
Jason Lawrence (Princeton, now UVA)
Dhruv Mahajan
Wojciech Matusik (MERL, now Adobe)
Shree Nayar
Diego Nehab (Princeton, now MSR)
Pieter Peers (ICT)
Szymon Rusinkiewicz (Princeton)
Todd Zickler (Harvard)
Acquisition

Our goal has been to acquire reflectance under much less structured conditions than previously possible. We have shown how to acquire material properties under complex lighting using our signal-processing framework. More recently, we have developed image-based rendering techniques for spatially-varying reflectance that require far fewer input images than previous methods. Most recently, we have used dilution in water to easily acquire the scattering properties of many materials in the single-scattering regime, and have developed a new theory of compressive structured light to efficiently acquire inhomogeneous participating media, including dynamic scenes.
A Signal-Processing Framework for Inverse Rendering: Siggraph 01, pages 117-128

This is the most mathematical paper in this line of work; it derives the theory for the general 3D case with arbitrary isotropic BRDFs, and applies the results to the practical problem of inverse rendering under complex illumination.
Full Paper:     gzipped PS (3.7M)    PDF (1M)    Talk:    PPT (1.3M)    
SIGGRAPH 2002 Course Notes: Acquiring Material Models Using Inverse Rendering


Reflectance Sharing: Image-Based Rendering from a Sparse Set of Images PAMI Aug 06, pages 1287-1302; EGSR 05, pages 253-264
We develop the theoretical framework and practical results for image-based rendering of spatially-varying reflectance from a very small number of images. In doing so, we trade off some spatial variation of the reflectance for an increased number of angular samples.

Paper:     EGSR 05 (PDF)     Video (83M)     PAMI 06 (PDF)
Acquiring Scattering Properties of Participating Media by Dilution SIGGRAPH 06, pages 1003-1012.
We present a simple device and technique for robustly estimating the properties of a broad class of participating media that can be either (a) diluted in water such as juices or beverages, (b) dissolved in water such as powders and sugar/salt crystals, or (c) suspended in water, such as impurities.

Paper:     PDF   
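The key observation behind the dilution approach can be sketched numerically: in the single-scattering regime, extinction scales linearly with concentration, so transmittance measurements of several diluted samples determine the per-unit-concentration coefficient by a straight-line fit in log space. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical dilution series: concentrations, path length, and the true
# per-unit-concentration extinction coefficient (all values invented).
c = np.array([0.05, 0.10, 0.20])          # dilution concentrations
d = 0.1                                   # optical path length (m)
sigma_unit = 80.0                         # true extinction per unit concentration

# Beer-Lambert transmittance of each diluted sample.
T = np.exp(-sigma_unit * c * d)

# Optical density is linear in concentration; the slope recovers sigma_unit.
fit = np.polyfit(c, -np.log(T) / d, 1)[0]
print(fit)
```

Because the synthetic data are noise-free, the fitted slope matches the true coefficient exactly; with real measurements the linear fit averages out noise across the dilution series.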
Compressive Structured Light for Recovering Inhomogeneous Participating Media ECCV 2008.
Recovering dynamic inhomogeneous participating media is a significant challenge in vision and graphics. We introduce a new framework of compressive structured light, where patterns are emitted to obtain a line integral of the volume density at each camera pixel. The framework of compressive sensing is then used to recover the density from a sparse set of patterns.

Paper:     PDF     Video (25M)
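The recovery step can be sketched in 1D. Assume a column of voxel densities that is sparse (few occupied voxels), probed by fewer coded patterns than voxels, with each camera pixel reporting a line integral. Zero-mean random patterns stand in for the paper's actual codes, and orthogonal matching pursuit stands in for the l1 optimization of compressive sensing; none of this is the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 32, 3                       # voxels, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = rng.standard_normal((m, n))           # emitted pattern matrix (stand-in)
y = A @ x_true                            # per-pixel line integrals

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery."""
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_rec = omp(A, y, k)
print(np.linalg.norm(x_rec - x_true))
```

With m = 32 measurements for n = 64 unknowns, the sparsity prior is what makes the underdetermined system solvable.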
Compressive Light Transport Sensing ACM Transactions on Graphics 28(1), Article 3, pages 1-18, Jan 2009.
In this article we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing for sparse signals. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting interpixel coherency relations. Additionally, we design new nonadaptive illumination patterns that minimize measurement noise.

Paper:     PDF
Structured Representations and Editing

Once we have acquired reflectance, we need to find structured representations that are compact, highly accurate, and easily editable. In 2006, we made significant progress on this problem, with four papers at SIGGRAPH. Our first approach, inverse shade trees, uses a new linear hierarchical matrix factorization algorithm to create intuitive, editable decompositions of spatially-varying reflectance (SVBRDFs). A related project explores editing of measured and analytic BRDFs in complex illumination with cast shadows (more recent work extends this to editing a full global illumination solution). We have also made the first comprehensive study of time-varying surface appearance (TSVBRDFs), with a novel nonlinear space-time appearance factorization. Finally, we have developed an efficient compact representation for heterogeneous subsurface scattering (BSSRDFs).
Inverse Shade Trees for Non-Parametric Material Representation and Editing SIGGRAPH 06, pages 735-745.
We develop an inverse shade tree framework of hierarchical matrix factorizations to provide intuitive, editable representations of high-dimensional measured reflectance datasets of spatially-varying appearance. We introduce a new alternating constrained least squares framework for these decompositions that preserves the key properties of linearity, positivity, sparsity and domain-specific constraints. The SVBRDF is decomposed into 1D curves and 2D maps that are easily edited.

Paper:     PDF    Video (24M)
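The flavor of such a decomposition can be sketched with standard nonnegative matrix factorization, here via Lee-Seung multiplicative updates as a simple stand-in for the paper's alternating constrained least squares. The data matrix is a synthetic toy: rows index surface points, columns index light/view directions, and the rank-3 structure plays the role of a few basis materials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an SVBRDF data matrix of exact low rank.
W_true = rng.uniform(0, 1, (40, 3))       # 3 hypothetical basis materials
H_true = rng.uniform(0, 1, (3, 25))       # their reflectance curves
M = W_true @ H_true

def nmf(M, rank, iters=500, eps=1e-9):
    """Multiplicative-update NMF: keeps both factors nonnegative."""
    W = rng.uniform(0.1, 1, (M.shape[0], rank))
    H = rng.uniform(0.1, 1, (rank, M.shape[1]))
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(M, 3)
err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)
print(f"relative reconstruction error: {err:.4f}")
```

Positivity is what makes the factors physically interpretable (reflectance cannot be negative), which is why the paper insists on constrained rather than unconstrained factorizations.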
Real-Time BRDF Editing in Complex Lighting SIGGRAPH 06, pages 945-954.
Inverse shade trees develop structured non-parametric curve-based BRDF representations, which allow data-driven editing, but only under point lighting. In this project, we develop the theory and algorithms that, for the first time, allow users to edit these BRDFs in real time, designing materials in their final placement in a scene with complex natural illumination and cast shadows.

Paper:     PDF (20M)     Video (59M)    
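The enabling observation can be sketched in a few lines (all numbers are made up): with geometry, lighting, and shadow visibility frozen, each pixel's radiance is linear in the coefficients of the curve-based BRDF representation, so the scene collapses to one precomputed transport matrix. Editing the BRDF curve then re-renders with a single matrix-vector product per frame.

```python
import numpy as np

rng = np.random.default_rng(8)

n_pixels, n_coeffs = 1000, 64
T = rng.uniform(0.0, 1e-3, (n_pixels, n_coeffs))   # hypothetical precomputed transport
c = rng.uniform(0.0, 1.0, n_coeffs)                # editable BRDF curve coefficients

image = T @ c                                      # interactive re-render

# Linearity is what makes edits cheap: blending two BRDF edits
# blends the rendered images.
c2 = rng.uniform(0.0, 1.0, n_coeffs)
blended = T @ (0.5 * c + 0.5 * c2)
```

The hard part, which the paper solves and this sketch assumes away, is precomputing T under complex environment lighting with cast shadows.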
Time-Varying Surface Appearance: Acquisition, Modeling and Rendering SIGGRAPH 06, pages 762-771.
We conduct the first comprehensive study of time-varying surface appearance, including acquisition of the first database of time-varying processes like burning, drying and decay. We then develop a nonlinear space-time appearance factorization that allows easy editing or manipulation such as control, transfer and texture synthesis. We demonstrate a variety of novel time-varying rendering applications.

Paper:     PDF     Video QT (64M)     Video AVI (46M)    
A Compact Factored Representation of Heterogeneous Subsurface Scattering SIGGRAPH 06, pages 746-753.
Heterogeneous subsurface scattering in translucent materials is one of the most beautiful but complex appearance effects. We acquire spatial BSSRDF datasets using a projector, and develop a novel nonlinear factorization that separates a homogeneous kernel from heterogeneous discontinuities.

Paper:     PDF (11M)


Time-Varying BRDFs IEEE Transactions on Visualization and Computer Graphics 13(3), pages 595-609, 2007.
The properties of virtually all real-world materials change with time, causing their BRDFs to be time-varying. In this work, we address the acquisition, analysis, modeling and rendering of a wide range of time-varying BRDFs, including the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements, and the variation of the model parameters with time is analyzed. Each category exhibits interesting and sometimes non-intuitive parameter trends. These trends are then used to develop analytic time-varying BRDF (TVBRDF) models.

Paper:     PDF     Video (49MB)


A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination ACM Transactions on Graphics 27(2), Article 13, pages 1-13. Presented at SIGGRAPH 2008.
We develop a mathematical framework and practical algorithms to edit BRDFs with global illumination in a complex scene. A key challenge is that light transport for multiple bounces is non-linear in the scene BRDFs. We address this by developing a new bilinear representation of the reflection operator, deriving a precomputed polynomial multi-bounce tensor framework, and reducing the complexity of further bounces.

Paper:     PDF     Video (7M)
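Why multi-bounce editing is polynomial can be shown with a one-pixel toy (the operators below are made-up random tensors): the direct bounce is linear in the BRDF coefficient vector c, while a path reflecting off two surfaces contributes a bilinear term. Precompute the operators once, and editing reduces to re-evaluating the polynomial.

```python
import numpy as np

rng = np.random.default_rng(7)

k = 8
a = rng.uniform(0, 1, k)          # hypothetical precomputed one-bounce operator
B = rng.uniform(0, 1, (k, k))     # hypothetical precomputed two-bounce tensor

def radiance(c):
    # Degree-2 polynomial in the BRDF coefficients: linear direct term
    # plus a bilinear second-bounce term.
    return a @ c + c @ B @ c

c = rng.uniform(0, 1, k)
# Doubling the BRDF does not double the radiance: the transport is
# non-linear in the scene BRDFs, exactly the challenge the paper addresses.
print(radiance(2 * c) / radiance(c))
```

Higher bounces would add higher-degree terms, which is why the paper works to reduce the complexity of further bounces.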

Rendering: Monte Carlo Sampling

Having acquired illumination (environment maps) and measured BRDFs, we still need to be able to use them for image synthesis. A related project deals with interactive rendering. Here, we focus on Monte Carlo sampling for more traditional global illumination. The challenge is that we need to stratify and importance sample high-dimensional measured datasets. While Monte Carlo sampling is a mature area in both statistics and computer graphics, this problem has not been addressed before. We have developed effective techniques for importance sampling both illumination and materials.
Structured Importance Sampling of Environment Maps Siggraph 03, pages 605-612
We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map.

PDF     Video    
Efficient Shadows from Sampled Environment Maps Journal of Graphics Tools 11(1), pages 13-36, 2006.
We evaluate various possibilities and show how coherence can be used to speed up shadow testing with environment maps by an order of magnitude or more.

Paper:     PDF   
Efficient BRDF Importance Sampling Using a Factored Representation Siggraph 04, pages 494-503
We introduce a Monte Carlo Importance sampling technique for general analytic and measured BRDFs based on a new BRDF factorization.

PDF (8M)
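Once a BRDF is written as a product of low-dimensional factors, each factor is a small 2D table that can be sampled exactly. The sketch below shows the standard marginal/conditional CDF construction on a made-up table; the paper's contribution, not reproduced here, is the factorization that makes such tables small and accurate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 2D factor table (peaked, like a glossy lobe), normalized.
F = rng.uniform(0, 1, (16, 32)) ** 4
F /= F.sum()

marg = F.sum(axis=1)                                # marginal over rows
cdf_rows = np.cumsum(marg)
cdf_cols = np.cumsum(F / marg[:, None], axis=1)     # per-row conditional CDFs

def sample(n):
    # Sample a row from the marginal, then a column from that row's
    # conditional; clip guards against floating-point CDF endpoints.
    i = np.minimum(np.searchsorted(cdf_rows, rng.uniform(0, 1, n)),
                   F.shape[0] - 1)
    u = rng.uniform(0, 1, n)
    j = np.array([min(np.searchsorted(cdf_cols[r], v), F.shape[1] - 1)
                  for r, v in zip(i, u)])
    return i, j

i, j = sample(100_000)
emp = np.bincount(i * 32 + j, minlength=16 * 32).reshape(16, 32) / 100_000
print(np.abs(emp - F).max())    # empirical frequencies approach F
```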

Adaptive Numerical Cumulative Distribution Functions for Efficient Importance Sampling EGSR 05, pages 11-20
Importance sampling high-dimensional functions like lighting and BRDFs is increasingly important, but a direct tabular representation has storage cost exponential in the number of dimensions. By placing samples non-uniformly, we show that we can develop compact CDFs that enable new applications like sampling from oriented environment maps and multiple importance sampling.

Paper:     PDF   
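The baseline the paper improves on is the direct tabular approach: bin the target function uniformly, build a discrete CDF, and invert it by binary search. A minimal 1D sketch, with sin(x) standing in for a measured lighting or BRDF slice:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(0, np.pi, 257)            # uniform bin edges
centers = 0.5 * (x[:-1] + x[1:])
f = np.sin(centers)                       # unnormalized target function

pdf = f / f.sum()
cdf = np.cumsum(pdf)

def sample(n):
    # Inverse-CDF sampling via binary search; clip guards the endpoint.
    u = rng.uniform(0, 1, n)
    idx = np.minimum(np.searchsorted(cdf, u), centers.size - 1)
    return centers[idx]

s = sample(200_000)
print(f"sample mean {s.mean():.3f} (target mean of sin on [0, pi] is pi/2)")
```

In high dimensions this uniform table grows exponentially, which is exactly the storage problem the paper's adaptive, non-uniform sample placement addresses.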

Multiscale Appearance

Scale is one of the most important, but often neglected, aspects of appearance throughout the processing pipeline. As we zoom into the earth from space, its appearance varies dramatically depending on whether we view it at the scale of a continent, a city, a street, or a room. Even simple zooms in and out of computer graphics models can lead to widely varying appearance, giving rise to the need for proper filtering, representation and synthesis algorithms. At SIGGRAPH 2007 and 2008, we presented some of the first methods for multiscale (properly filtered) normal maps and multiscale texture synthesis.


Frequency Domain Normal Map Filtering SIGGRAPH 07, article 28, pages 1-11.
While mipmapping textures is commonplace, accurate normal map filtering remains a challenging problem because of nonlinearities in shading: we cannot simply average nearby surface normals. In this paper, we show analytically that normal map filtering can be formalized as a spherical convolution of the normal distribution function (NDF) and the BRDF. This leads to accurate multiscale normal map representations that preserve properly filtered appearance across a range of scales. Introducing the von Mises-Fisher distribution and spherical EM into graphics also enables the use of high-frequency materials.

Paper:     PDF     Video (103M)     Very Cool Trailer (MOV 54M)
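The core point, that a filtered texel must store a distribution of normals rather than one averaged normal, can be sketched by fitting a single von Mises-Fisher (vMF) lobe to the normals in a footprint. This uses the standard closed-form concentration approximation of Banerjee et al., not the paper's full spherical EM for mixtures of lobes.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_vmf(normals):
    """Fit one vMF lobe: mean direction mu and concentration kappa."""
    m = normals.mean(axis=0)               # mean resultant vector
    R = np.linalg.norm(m)                  # its length encodes the spread
    mu = m / R
    kappa = R * (3 - R**2) / (1 - R**2)    # Banerjee et al. approximation
    return mu, kappa

# Toy footprint: 500 unit normals jittered around +z (a bumpy, mostly
# flat patch of a normal map).
n = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal((500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)

mu, kappa = fit_vmf(n)
print(mu, kappa)
```

A rougher footprint yields a shorter mean resultant vector and hence a lower kappa, which is exactly the information plain normal averaging throws away.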
Multiscale Texture Synthesis SIGGRAPH 08.
The appearance of many textures changes dramatically with scale. By using an exemplar graph with a few small single-scale exemplars and modifying a standard parallel synthesis method, we develop the first multiscale texture synthesis algorithm. The method is simple, can be implemented in real time on the GPU, and (with a simple recursive graph) even enables infinite zooms into an image.

Paper:     PDF     Video (175M)

Geometry

While most of our focus has been on reflectance, we have also conducted some research on new geometry acquisition frameworks, and techniques that bridge the gap between active and passive acquisition (Spacetime Stereo). These methods also allow acquisition of geometry under uncontrolled unstructured illumination. We have also developed ways to robustly create compact structured generative models from unstructured range data. More recently, we have shown how to efficiently combine shape and normals for precise 3D geometry, and developed a new viewpoint-coding scheme that uses multiple viewpoints instead of spatial and temporal codes for structured light.
Creating Generative Models from Range Images Siggraph 99, pages 195-204
We have explored the creation of high-level parametric models from low-level range data. Our model-based approach is relatively insensitive to noise and missing data and is fairly robust.

Full Paper:     PS (2.5M)    PDF (1.5M)

Spacetime Stereo: A Unifying Framework for Depth from Triangulation CVPR 03, pages II-359--II-366; PAMI Feb 05, pages 296-302
We propose a common framework, spacetime stereo, which unifies many previous depth from triangulation methods like stereo, laser scanning, and coded structured light. As a practical example, we discuss a new temporal stereo technique for improved shape estimation in static scenes under variable illumination.

Paper:     CVPR 03 ,     PAMI 05
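The unifying idea is that the stereo matching window can extend over time as well as space: temporal intensity variation (for example, changing illumination) disambiguates matches that a purely spatial window cannot. A minimal 1D-plus-time sketch on synthetic data, scoring candidate disparities by sum-of-squared differences over the spacetime window:

```python
import numpy as np

rng = np.random.default_rng(6)

T, W = 5, 7                                   # frames, spatial window width
left = rng.uniform(0, 1, (T, 64))             # synthetic left scanlines over time
true_disp = 9
# Right view sees each point shifted left by its disparity:
# right[:, x - d] = left[:, x].
right = np.roll(left, -true_disp, axis=1)

x = 30                                        # pixel being matched

def cost(d):
    # SSD over the full spacetime window (all T frames, W pixels).
    return np.sum((left[:, x:x + W] - right[:, x - d:x - d + W]) ** 2)

best = min(range(16), key=cost)
print(best)
```

Collapsing T to 1 recovers ordinary spatial stereo; collapsing W to 1 recovers purely temporal matching, as in laser scanning or coded structured light.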

Efficiently Combining Positions and Normals for Precise 3D Geometry Siggraph 05, pages 536-543
We show how depth and normal information, such as from a depth scanner and from photometric stereo, can be efficiently combined to remove the distortions and noise in both, producing very high quality meshes for computer graphics.

Paper:     PDF
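The principle can be sketched in 1D with synthetic data: depth measurements are noisy but unbiased, while normals (slopes, in 1D) capture fine detail accurately. Fusing them as the least squares problem min ||z - d||^2 + lam ||D z - g||^2, with D a finite-difference operator, removes the high-frequency depth noise. This is a toy stand-in, not the paper's mesh algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
z_true = 0.1 * np.sin(4 * np.pi * x)

d = z_true + 0.02 * rng.standard_normal(n)           # noisy depth scan
x_mid = 0.5 * (x[:-1] + x[1:])
g = 0.1 * 4 * np.pi * np.cos(4 * np.pi * x_mid)      # accurate slopes (from normals)

D = (np.eye(n, k=1) - np.eye(n))[:-1] / h            # forward differences
lam = 1.0
# Normal equations of the combined least squares problem.
z = np.linalg.solve(np.eye(n) + lam * D.T @ D, d + lam * D.T @ g)

rmse = lambda a: np.sqrt(np.mean((a - z_true) ** 2))
print(f"depth-only RMSE {rmse(d):.4f}  fused RMSE {rmse(z):.4f}")
```

The depth term anchors the low frequencies (slopes alone leave an unknown offset), while the slope term supplies the detail: each sensor contributes the band where it is reliable.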


Viewpoint-Coded Structured Light CVPR 2007.
We introduce a theoretical framework and practical algorithms for replacing time-coded structured light patterns with viewpoint codes, in the form of additional camera locations. Current structured light methods typically use log(N) light patterns, encoded over time, to unambiguously reconstruct N unique depths. We demonstrate that each additional camera location may replace one frame in a temporal binary code.

Paper:     PDF
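The pattern-count arithmetic from the abstract is simple enough to state in code: a temporal binary code needs ceil(log2 N) projected patterns to disambiguate N depth labels, and the paper shows each additional camera viewpoint can replace one of those frames. The helper below is illustrative, not from the paper.

```python
import math

def patterns_needed(n_depths, extra_cameras=0):
    """Frames of binary structured light needed after adding extra viewpoints."""
    return max(math.ceil(math.log2(n_depths)) - extra_cameras, 0)

print(patterns_needed(1024))        # 10 frames with the baseline setup
print(patterns_needed(1024, 3))     # 7 frames once 3 extra cameras are added
```

Trading frames for cameras is what makes the approach attractive for dynamic scenes, where long temporal codes break down.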


Last modified: Wed May 13 15:27:19 PDT 2009