
Character Animation with Kinect Motion Capture

February 19, 2013

Last time, I gave an overview of how we can capture 3D photo-quality people for augmented reality executions. One of the challenges going forward is that once we capture them, how do we animate them? Again, our number one constraint is that we are always fighting the time budget. Animating people by hand in a realistic manner is out of the question, so that means we have to turn quickly to Kinect motion capture.

For those who don’t know what motion capture is: you get your actors to dress up in those silly black suits in a studio and then record their movements in 3D.

Black suits and white balls for traditional motion capture

The little white balls are what the computer uses to record movement in 3D space


Problem solved, right? Well, not exactly. There are three main problems we’ve found with motion capture: jitter, fidelity, and cost. Most of the data that comes out of motion capture systems tends to have a lot of noise, which shows up as jitter in your animations.

Now this is usually a pretty bad problem, because cleaning up jitter by hand is a very painstaking process. There are a couple of automated techniques we can use, but that leads to the second problem: fidelity. Usually, to reduce that jitter, you smooth out (or average) the animation. This makes it less jittery, but the downside is that you lose the high-frequency data, which is where finely detailed movement lives. For example, if you smoothed out all the jitter, the subtle movement of a character shifting their weight from one leg to the other would be completely lost. A subtle head nod would be lost. Our thinking is that this may or may not be a problem for body movement, but it would definitely be a showstopper for facial capture, where humans can detect sub-millimeter facial movements. We’re going to have to tackle that in a separate, later post.
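To make that tradeoff concrete, here’s a minimal sketch (Python, not our actual pipeline) of the kind of smoothing we’re talking about: a plain moving average over one joint’s animation curve. The wider the averaging window, the less jitter survives, but genuine subtle motion gets averaged away right along with it.

```python
import numpy as np

def smooth_curve(samples, window=15):
    """Moving-average filter over one joint's animation curve.

    samples: 1-D array of rotation values (degrees), one per frame.
    window:  number of frames to average; wider = smoother,
             but subtle motion (weight shifts, head nods) is lost too.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the curve its original length (edges get zero-padded)
    return np.convolve(samples, kernel, mode="same")

# Fake capture of a subtle head nod: a slow 2-degree dip plus sensor noise.
frames = np.arange(90)                          # 3 seconds at 30 fps
nod = -2.0 * np.sin(frames / 90.0 * np.pi)      # the real, subtle motion
noisy = nod + np.random.normal(0.0, 1.5, 90)    # mocap jitter on top

smoothed = smooth_curve(noisy)
# The jitter is mostly gone -- but some of the nod goes with it, and a
# sharper motion (a quick glance, a weight shift) would fare far worse.
```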

To get rid of both of these problems completely, we would turn to a feature-film motion capture studio. Problem? Money. A high-end motion capture rig can easily cost six figures. Since we don’t have that kind of cash, we had to get creative.

Kinect To The Rescue

Kinect Motion Capture Rig

Turns out, this little company called Microsoft came out with depth-sensing cameras that can track human skeletal data. So we had an idea: maybe we could use these Kinect cameras to do motion capture for our 3D characters. Cost-wise, a Kinect setup is only a couple of hundred dollars instead of a couple of hundred thousand, so that’s a win. But we also knew that the resolution of the Kinect is quite low (it samples at 30 Hz and only captures SD resolution). So, with bated breath, Cory Strassburger from Alienna and I forged ahead to see what kind of quality we could extract from these systems.
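For context, here’s roughly the shape of what the sensor hands you (a hypothetical sketch in Python; the real Kinect SDK and OpenNI APIs differ, but the essence is the same): about twenty named joints with 3D positions, thirty times a second, each with a confidence score.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    name: str          # e.g. "head", "left_hand", "right_knee"
    x: float           # position in meters, relative to the sensor
    y: float
    z: float           # depth -- the axis that carries most of the sensor noise
    confidence: float  # how sure the tracker is (occluded limbs score low)

@dataclass
class SkeletonFrame:
    timestamp: float       # seconds; frames arrive at roughly 30 Hz
    joints: list[Joint]    # one entry per tracked joint (~20 on Kinect v1)

def usable(frame: SkeletonFrame, min_conf: float = 0.5) -> bool:
    """Cheap sanity check: skip frames where the tracker lost too many joints."""
    good = sum(1 for j in frame.joints if j.confidence >= min_conf)
    return good >= len(frame.joints) * 0.8
```

Brekel works directly from a skeletal stream like this; iPi Soft goes a level deeper and solves its own skeleton from the raw depth images, which is part of why their results differ so much below.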

Kinect Mocap Software

Turns out, we’re not the first ones to have this idea. Which is awesome, because it meant we didn’t have to write our own software to interface with the drivers. We scoured the web and tried out two different systems: iPi Soft and Brekel.

Our Fancy Mocap Studio

Results from initial Kinect motion capture test

Cory checking out our mocap test data

Initial Kinect Motion Capture Results

Conclusion: results indeterminate. The key takeaway was that, at this stage and with our current setup, the mocap was good enough for rapid blocking or coarse animation. For a video-game type of experience, it would be fine. But we’re reaching for the triple gold: feature-film quality, in real time, in 3D. So we’re still going to need to experiment some more. Here’s our breakdown from a day of testing…

iPi Soft

  • Gets the most hi-fi data out of the Kinect, because they implement their own tracking algorithm
  • Not real time: on a semi-decent computer it processes about one frame per second, so a 30-second take shot at 30 fps is roughly 15 minutes of solving
  • No hand animation
  • Can do head tracking in all orientations
  • Jitter is minimal, but subtle motions are not possible
  • Calibration took about 30 minutes to figure out, and even then we didn’t get it perfect
  • Might be good enough for background characters

Brekel

  • Real time, because it uses the skeletal data coming straight out of the Kinect
  • Lower fidelity than iPi Soft, as it does not do any head tracking
  • Can be rigged pretty easily into MotionBuilder/Maya so that you can drive a character in real time (see the sketch after this list)
  • Ideal for rapid prototyping or blocking out animation
  • Minimal calibration

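To give a feel for what “rigged in real time” means, here’s a hedged sketch (Brekel actually streams into MotionBuilder through its own plugin; the joint and bone names below are made up for illustration): each incoming skeleton frame gets copied, joint by joint, onto the character’s bones.

```python
# Minimal sketch of real-time retargeting, assuming some source of
# per-frame joint rotations (e.g. a network stream from a capture app).
# The stream, rig.set_rotation, and the names below are hypothetical
# stand-ins, not a real API.

KINECT_TO_RIG = {
    "spine":          "Character_Spine",
    "head":           "Character_Head",
    "left_shoulder":  "Character_L_Arm",
    "right_shoulder": "Character_R_Arm",
    # ...one entry per tracked joint
}

def apply_frame(rig, frame):
    """Copy each tracked joint's rotation onto its mapped rig bone."""
    for joint_name, rotation in frame.items():  # rotation: (rx, ry, rz) Euler degrees
        bone = KINECT_TO_RIG.get(joint_name)
        if bone is not None:
            rig.set_rotation(bone, rotation)

def run(rig, stream):
    """Main loop: at the Kinect's 30 Hz this is cheap enough to feel live."""
    for frame in stream:  # each frame is a dict: {joint_name: (rx, ry, rz)}
        apply_frame(rig, frame)
```

Because no solving happens here (the Kinect’s own skeletal tracker has already done the work), the latency is basically just the sensor’s frame time, which is why this path is real time and the iPi Soft path is not.
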
So this approach didn’t meet our high expectations but still provided some useful tools that we’ve now added to our toolbox.

2 Comments

    Gabriel

    June 8, 2014

    How many devices did you use for mocap? I saw some videos on YouTube the other day of a guy getting decent results using two Kinects.
    Are you willing to test again using the second-gen Kinect? Please do try and keep us in the loop!

    Senza Peso’s #1 fan. (Congrats!)


      Cory Strassburger

      June 9, 2014

      Thanks Gabriel! No mocap on this one; it’s stereo-shot footage mapped onto billboards.
