Every so often, someone asks us about projection mapping a person. Or a dress. Or a dancer. And the answer is usually: “Sure, as long as they only move very, very slowly.”
It’s not difficult to calibrate a camera to a projector. A computer reads the camera feed, finds the part of the frame that matches the subject, and sends that outline to the projector, which projects it back precisely onto the subject.
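To make the "find the subject, extract its outline" step concrete, here's a minimal sketch using only NumPy. It assumes the subject is simply brighter than the background (a hypothetical simplification; a real pipeline would use proper segmentation) and marks outline pixels as subject pixels with at least one non-subject neighbor:

```python
import numpy as np

def subject_outline(frame, threshold=128):
    """Segment the subject by brightness and return a boolean edge mask.

    Hypothetical sketch: assumes the subject is brighter than the background.
    """
    mask = frame > threshold
    # A pixel is "interior" if it and all four of its neighbors are in the
    # subject; the outline is subject pixels that are not interior.
    interior = mask.copy()
    interior[1:, :] &= mask[:-1, :]
    interior[:-1, :] &= mask[1:, :]
    interior[:, 1:] &= mask[:, :-1]
    interior[:, :-1] &= mask[:, 1:]
    return mask & ~interior
```

That edge mask (or a polygon fit to it) is what gets handed to the projector side of the pipeline.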
The problem is that the pipeline of data from the camera to the projector is relatively slow. So this works when the subject is stationary, or moving only slowly, but with faster movements, the alignment breaks down.
Sometimes that lag works as a stylistic choice. But often it is not the effect that is desired.
On many of the projection mapping projects we implement at WorldStage, we do use cameras to track an object. But rather than simply looking for a deforming shape in the camera image, we use the camera to detect the position of a particular point, or beacon, on a “rigid body.” A rigid body is a structure, like a square or a car, that may change orientation but maintains a known shape. So we track a set of such points and use them to calculate the position of the body in space so that we can project onto it.
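Recovering a rigid body's pose from a handful of tracked beacons is a well-studied problem. One standard approach (a Kabsch/Procrustes fit, shown here as an illustrative sketch rather than our production method) finds the rotation and translation that best map the body's known reference points onto the observed beacon positions:

```python
import numpy as np

def rigid_pose(reference, observed):
    """Estimate R, t such that observed ~ reference @ R.T + t.

    Classic Kabsch/Procrustes fit: center both point sets, then take the
    SVD of their cross-covariance to get the best-fit rotation.
    """
    ref_c = reference - reference.mean(axis=0)
    obs_c = observed - observed.mean(axis=0)
    H = ref_c.T @ obs_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking in (det must be +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (reference.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = observed.mean(axis=0) - R @ reference.mean(axis=0)
    return R, t
```

Because the body's shape is known, even three or four beacons are enough to pin down where the whole structure is.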
A “soft body,” by contrast, is a structure that may change shape, like a human or a sheet of fabric falling through space. One difference between rigid body tracking and soft body tracking is that a rigid body usually follows relatively predictable Newtonian physics in its movements, while a soft body may not. Most of the time a rigid body will accelerate and rotate smoothly, which means that we can actually predict where it is likely to be 50 or 80 milliseconds in the future. Even though there may be latency in our graphics pipeline to the projector, we can anticipate where our subject will be when our graphics are projected, and we can better align the mapped graphics to the moving subject.
We were recently approached with a request to projection map a piece of fabric, so the usual question arose: Will our system be fast enough to project onto this moving subject? We decided to try something new.
We’d like to easily track a set of points around the perimeter of our subject. Fortunately, we know quite a lot about the subject that we wish to track. I know that if I start from a central point on the subject and search outward in any direction, I’ll encounter the edge of the subject exactly once. (It doesn’t curve back in on itself anywhere; in geometric terms, the shape is “star-shaped” with respect to that point.)
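That property makes the edge search trivial: march outward from the center along a fan of rays and record the last pixel on each ray that is still inside the subject. A sketch of that radial search over a binary subject mask (hypothetical helper, not our production code):

```python
import numpy as np

def radial_edge_points(mask, center, n_rays=64):
    """March outward from `center` along n_rays directions and record the
    last pixel still inside the subject mask. This works because each ray
    crosses the outline exactly once (star-shaped subject)."""
    h, w = mask.shape
    cy, cx = center
    points = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        r, edge = 0.0, (cy, cx)
        while True:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break
            edge = (y, x)
            r += 0.5
        points.append(edge)
    return points
```

A nice side effect: because ray k always points in the same direction, the k-th edge point in one frame naturally corresponds to the k-th edge point in the next, which is exactly what motion prediction needs.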
For example, Form A works for this, while Form B does not. For our experiment, I know that our subject is closer to Form A.
So the trick here is that I will start at roughly the center of my subject, search out in every direction, and find a set of points on the edges. I will treat these points the way we treat beacons on rigid bodies: by applying motion prediction to them. Then we will project back onto the subject the PREDICTED positions, so that our projection better aligns with the actual location of the subject by the time the graphics arrive.
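Putting the pieces together, the per-frame step can be sketched as a small helper that applies constant-velocity prediction to each edge point independently, relying on point k in one frame matching point k (the same ray) in the previous frame. The function name and signature here are illustrative, not from our actual system:

```python
def predict_outline(prev_points, curr_points, dt, lookahead=0.066):
    """Per-point constant-velocity prediction over matched edge points:
    point k in curr_points corresponds to point k in prev_points,
    because both came from the same search ray."""
    return [(cy + (cy - py) / dt * lookahead,
             cx + (cx - px) / dt * lookahead)
            for (py, px), (cy, cx) in zip(prev_points, curr_points)]
```

The predicted outline, rather than the raw camera outline, is what gets rasterized and sent to the projector.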
I tested this at The Lab on a small scale. Take a look.
It may be that we can use this technique to better map projections onto human subjects in the future. Stay tuned.