MURI Visualization Seminar
Friday, May 9, 4:00pm, 306 Soda Hall
Michael Gleicher
(formerly of Apple Research Laboratories)
Animation for the Rest of Us
Abstract:
With computer animation, anything is possible: desk lamps jump, tigers change
into cars, and dinosaurs rip open sport-utility vehicles. However, the time,
skill, and talent required to create animation mean that very few can express
themselves in the medium. Just as a better paintbrush does not eliminate the
need for art classes, simply refining existing computer animation tools and
techniques is not enough.
In this talk, I will describe some projects that aim to make it easier for the
rest of us to create animation and special effects. In each, we recast some
part of an animation task as a non-linear constrained optimization problem, so
we can use the machine to do some of the work. All of these projects were done
in the graphics research group at Apple Computer:
- Motion Adaptation and Editing: animated motion tends to be very special
purpose and not reusable: it is almost always a specific character performing a
specific action. We developed methods for repurposing motions, with the intent
that they could enable clip-motion libraries. I will describe a constraint-based
approach to motion adaptation/editing that attempts to preserve as much
of the original motion as possible while creating a new motion that meets new
needs. This requires solving a single (large) constrained optimization
problem over the entire motion. However, with a bit of care, it is possible
to solve these spacetime-constraint problems fast enough to provide
real-time motion editing, even on 3D motion-captured data.
- Generating animation from performance: given observations of an actor
(not necessarily a human one) performing some motion, how do we make a
graphical model perform the same action? This project developed ways to
more automatically process motion capture data.
- Projective tracking and registration: this work developed a method to
watch groups of pixels as they moved from frame to frame in a video sequence.
Because the technique determines a proper projective transformation between
frames, the motion can be reconstructed. I will show how this can be used to
create "virtual graffiti," where we can paint on one frame of a video, and
have the changes propagated to later frames.
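The "virtual graffiti" effect rests on the fact that projective transformations compose: if each frame-to-frame motion is a 3x3 homography, chaining them maps paint from the first frame into any later one. Below is a small sketch of that propagation step; the homography values are made up for illustration (in the actual system they would come from the tracker).

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 projective transform (homography)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                    # back to Euclidean

# Hypothetical per-frame homographies: H01 maps frame 0 to frame 1,
# H12 maps frame 1 to frame 2.
H01 = np.array([[1.0, 0.02, 5.0],
                [0.0, 1.01, -3.0],
                [1e-4, 0.0, 1.0]])
H12 = np.array([[0.99, 0.0, 4.0],
                [0.01, 1.0, 2.0],
                [0.0, 2e-4, 1.0]])

graffiti = np.array([[100.0, 200.0], [120.0, 200.0]])  # painted in frame 0
in_frame_2 = apply_homography(H12 @ H01, graffiti)     # composed transform
```

Propagating through the composed matrix gives the same result as warping frame by frame, which is what lets a single painted frame carry through an entire sequence.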