3DDI Project Abstract: |
Acquisition research includes a novel real-time depth camera design, made possible by recent advances in large arrays of tunable lasers. This scanner should ultimately provide depth images at typical image resolutions (500 by 500 pixels) in real time. For large environments (e.g., building exteriors) in daylight, where the scanner is not feasible, ordinary photographs from multiple views provide input data for 3d reconstruction. Previous work by one PI on constructing texture-mapped 3d models from multiple views will be extended to depth-camera data.
Modeling research covers radiance modeling (shading, shadows, and mutual illumination) and motion modeling. Previous work by two PIs on radiance modeling will be extended to efficient yet accurate global illumination models. Our physical simulation method, which incorporates full-contact physics, generates the most physically realistic motion among current simulation methods.
Three-dimensional rendering for holographic or lenticular displays requires frame-rate generation of many views. Rendering research will include further enhancements to the "optic flow" hardware renderer invented by one of the PIs, as well as special hardware to support the volumetric display.
We will study 3d representation using three display types: (i) an improved lenticular (specular-optical) display; (ii) a holographic display using acousto-optic modulators; and (iii) a volumetric scattering display using cholesteric liquid crystals. Of these, two prototypes have already been constructed, and one is to be developed in this project. The lenticular display and the holographic display are prototypes at the Spatial Imaging lab at MIT. The third is a volumetric self-occluding scattering display to be developed at UC Berkeley.
We will research several applications of 3d technology, namely telemedicine, collaboration, and crisis simulation.