AGRoL, Avatars GRow Legs at Meta

I love the progress Meta are making (https://dulucas.github.io/agrol/) on animating the lower body from just a VR headset and hand controllers. Personally, I would love to use this as a poor man's mocap (motion capture) option, to create custom animation clips for characters.
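
To make the setup concrete, here is a minimal sketch of the shape of the problem: three tracked 6-DoF device poses go in, a full-body pose comes out. All the names here (predict_full_body, dummy_model, NUM_BODY_JOINTS) are my own illustration, not Meta's actual API, and the 22-joint SMPL-style skeleton is an assumption based on what is common in this line of work.

```python
# Hypothetical sketch of the sparse-tracking problem AGRoL tackles:
# headset + two controllers in, a full-body pose out.
import numpy as np

NUM_BODY_JOINTS = 22  # assumed SMPL-style skeleton


def predict_full_body(head, left_hand, right_hand, model):
    """Map three tracker poses (4x4 transforms) to per-joint rotations."""
    sparse = np.stack([head, left_hand, right_hand])  # shape (3, 4, 4)
    # A real system would feed a window of recent frames, not a single
    # frame, and `model` would be a trained network (the AGRoL paper
    # uses a diffusion model).
    return model(sparse)  # -> (NUM_BODY_JOINTS, 3) axis-angle rotations


# Stand-in model so the sketch runs end to end.
def dummy_model(sparse_poses):
    return np.zeros((NUM_BODY_JOINTS, 3))


identity = np.eye(4)
full_pose = predict_full_body(identity, identity, identity, dummy_model)
print(full_pose.shape)  # (22, 3)
```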

There are a number of camera-based mocap solutions around these days (such as move.ai and deepmotion.com), as well as frameworks such as MediaPipe from Google, but I have generally found VR headsets and hand controllers to be more robust, with less jitter. Sure, it probably won't be useful for a person climbing over a pile of rocks in a scene, but I prefer to create quick-and-dirty animations first, only going to more complex approaches if the quick one is not good enough. It is a matter of efficiency for me: since this is a hobby project, time is limited, and I don't want to have to set up the multi-camera room that move.ai recommends for each animation clip I need.

Reading the paper is also a reminder of the difference between learning how to write effective prompts for ChatGPT and doing the data science behind the underlying models! The linked paper has enough mathematical notation to bring back haunting memories from my PhD many years ago. It is also a reminder of all the great datasets already out there, if you know where to look. For example, AIST++ has 5 hours of dance moves by professional dancers! (Terms and conditions: not for commercial use without written permission; https://google.github.io/aistplusplus_dataset/factsfigures.html)

So I look forward to this Meta work turning up as a library that others can use.

