Why I am excited for Mocopi

Mocopi (I assume a play on “Motion Copy”) is a relatively low-cost motion capture device announced by Sony for release in Q1 2023. There has been a lot of Twitter noise about the product, with some promising demonstrations, particularly from the VTuber community. Do I think it will be an amazing product for everyone? No, but I am still looking forward to it. Here are my reasons why.

My Goals

I am creating an animated cartoon as a hobbyist, and learning a lot along the way. Mocopi interests me because I think it will fit nicely into the way I already work as an indie creator making these animations.

My primary goal is to create content efficiently. My secondary goal is to have fun and learn the technology along the way. Okay, I admit it, the second goal often overtakes the first, but I rationalize that on the efficiency front: if technology means I can create faster, then it’s a win. I am investing up front for ultimate efficiency.

But I would love the day when I can just write text and an AI turns my script and written scene directions into characters moving around in the scene in an entertaining way. I think this is very possible, but would take a larger investment than I can afford today. So I watch the AI Art scene with interest, but shy away from jumping in myself. So, motion capture it is for now.

My Tool Chain

I am using Unity (the game engine) with the Sequences package to organize my shots into a hierarchy of Timelines. It does not meet my needs exactly, but it is overall a big timesaver. I then use pre-existing animation clips as much as possible (e.g. for walking and sitting), and weave in custom mocap recordings for specific shots where I don’t have a suitable animation clip.
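
For the curious, here is a very rough sketch, in Unity C#, of the kind of structure this gives me: a master Timeline whose Control Track clips each play a per-shot Timeline. In practice the Sequences package builds all of this through its own UI; the script below is only to illustrate the idea, and the shot names and helper method are made up.

    // Hypothetical sketch: one master Timeline, one Control Track clip per shot.
    // In my real project the Sequences package creates this structure for me.
    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Timeline;

    public static class ShotTimelineSketch
    {
        // Adds one shot to the master timeline as a Control Track clip that
        // plays the shot's own PlayableDirector (its nested Timeline).
        public static void AddShot(PlayableDirector masterDirector,
                                   GameObject shotDirectorObject,
                                   string shotName, double start, double duration)
        {
            var master = (TimelineAsset)masterDirector.playableAsset;

            // One Control Track per shot keeps the master timeline readable.
            var track = master.CreateTrack<ControlTrack>(null, shotName);

            var clip = track.CreateClip<ControlPlayableAsset>();
            clip.displayName = shotName;
            clip.start = start;
            clip.duration = duration;

            // Point the Control clip at the GameObject holding the shot's director.
            var control = (ControlPlayableAsset)clip.asset;
            masterDirector.SetReferenceValue(control.sourceGameObject.exposedName,
                                             shotDirectorObject);
        }
    }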

Why Unity? I know it, there is an active VTuber community, and I can extend it with scripts. Unreal Engine looks pretty good too though.

Finger and Face Tracking

Mocopi does not track finger movements or facial expressions. Isn’t that bad? Well, personally, none of the tools I have used have done a great job here. So I use “override” tracks in Unity Timelines with pose animation clips and animate them that way. It’s reasonably efficient and I can edit the timing later easily.
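
As a concrete illustration, here is a minimal sketch, in Unity C#, of what that override setup looks like if you build it from a script rather than in the Timeline editor (which is what I actually do). The track and parameter names are placeholders, not anything from a real project.

    // Hypothetical sketch: a base Animation Track carrying the body mocap,
    // plus a child ("override") track layering a hand/face pose clip on top.
    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Timeline;

    public static class PoseOverrideSketch
    {
        public static void AddPoseOverride(PlayableDirector director,
                                           Animator characterAnimator,
                                           AnimationClip bodyMocapClip,
                                           AnimationClip handPoseClip,
                                           double poseStart, double poseDuration)
        {
            var timeline = (TimelineAsset)director.playableAsset;

            // Base track: the full-body mocap recording, bound to the character.
            var bodyTrack = timeline.CreateTrack<AnimationTrack>(null, "Body Mocap");
            bodyTrack.CreateClip(bodyMocapClip);
            director.SetGenericBinding(bodyTrack, characterAnimator);

            // A child Animation Track acts as an Override Track: while its clip
            // is active it takes priority over the base animation (an AvatarMask
            // on the track can limit that to just the hands or face).
            var overrideTrack = timeline.CreateTrack<AnimationTrack>(bodyTrack, "Hand Poses");
            var poseClip = overrideTrack.CreateClip(handPoseClip);
            poseClip.start = poseStart;
            poseClip.duration = poseDuration;
            // Retiming the pose later is just a matter of moving/resizing this clip.
        }
    }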

Other motion capture solutions

I have tried a number of different tracking solutions: Leap Motion (now UltraLeap) cameras to track fingers, webcam-based body tracking (I like VSeeFace myself), and VR headsets and controllers. They are pretty good, and I do use them all at times. But they also have limitations. For example, Leap Motion tracks fingers, but it is camera based, so it has to be able to see your hands; it frequently gets things wrong, or my hands drift out of frame, and so on. VR gives me the most precise movement tracking, but I have to keep putting the headset on and off, and while wearing it I cannot see my computer screen as I record. It’s annoying.

Mocopi

This is why, for me, Mocopi is interesting. I can sit at my desk, looking at my computer screen, with Mocopi turned on. There is no VR headset blocking my eyes. There are no field-of-view or lighting concerns, which matter for camera-based solutions. I can leave it on while I work, with minimal interference.

But what is the tracking quality like? I don’t know! That is a big question for me. I guess I will have to wait and see, but the initial demo clips on Twitter look promising. (Demos on Twitter never lie, right?)

Competition

I should point out that there is another similar product already on the market, SlimeVR, which I only became aware of through the Mocopi discussions. If I had known of it sooner, I might have bought it. But I am hoping a Sony-backed product will see a larger market share, and hence attract more active surrounding development. So I am going to hold off and wait for the Mocopi release.

Conclusion

I am very interested in Mocopi because I think it will fit well with the way I am already working. Hopefully it will give me a low-cost motion capture solution (important as a hobbyist!) that I can use while still working at my normal computer (no VR headset to put on and take off). I am not a VTuber, but I borrow a lot of their technology. While not as cool as AI, I think it is still a lot faster than most of the animation workflows I see today, where hand positions, movements, and so on are adjusted individually. That takes a lot of skill and a lot of time to do well.

