MaskedMimic: Using Masked Motion Inpainting to Unify Physics-Based Character Control
MaskedMimic is a unified controller for physically simulated humanoids. From simple user-defined intents, the system can produce a broad variety of movements across diverse terrains. This work demonstrates a number of applications, including path following, object interaction, generating full-body motion from partial joint target positions, responding to joystick steering, interpreting text commands, and even combining these modalities to perform text-stylized path following.
Motion Tracking
Motion tracking is the first application. Here, MaskedMimic is given a set of target joint positions and/or orientations and must produce a full-body motion that satisfies these constraints. Related tasks include VR tracking, which aims to reconstruct realistic full-body motion from the sensors on a VR headset and hand controllers, and scene retargeting, which aims to replicate a reference motion in a new scene.
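To make this interface concrete, here is a minimal sketch of how such sparse tracking constraints could be represented in Python. The class and field names (JointTarget, TargetFrame) are hypothetical illustrations, not MaskedMimic's actual API.

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class JointTarget:
    """An optional constraint on a single joint at a single frame."""
    position: Optional[np.ndarray] = None  # (3,) world-space xyz; None = unconstrained
    rotation: Optional[np.ndarray] = None  # (4,) wxyz quaternion; None = unconstrained


@dataclass
class TargetFrame:
    """A possibly partial pose constraint: only the listed joints are constrained."""
    time: float  # when the constraint should be satisfied, in seconds
    joints: dict[str, JointTarget] = field(default_factory=dict)


# Full-body tracking constrains every joint in each mocap frame;
# sparse tracking supplies targets for only a handful of joints.
frame = TargetFrame(
    time=0.5,
    joints={
        "pelvis": JointTarget(position=np.array([0.0, 0.0, 0.9])),
        "head": JointTarget(
            position=np.array([0.0, 0.0, 1.7]),
            rotation=np.array([1.0, 0.0, 0.0, 0.0]),
        ),
    },
)
```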
Full-Body Tracking
The approach can recreate motion capture recordings, originally captured on flat ground, across a variety of irregular terrains.
Sparse Tracking
The approach can recreate realistic full-body movements from constraints on only a fraction of the joints. Demonstrated here are a cartwheel motion tracked from head and hand constraints alone (similar to VR tracking) and a jogging motion tracked from head-only constraints.
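Continuing the hypothetical TargetFrame/JointTarget sketch above, the VR-style case simply constrains three joints and leaves the rest to the model:

```python
# Head-and-hands constraints, as a VR headset and two controllers would supply.
# All other joints are left unspecified (masked out); the controller must
# inpaint a plausible full-body motion that satisfies these three targets.
vr_frame = TargetFrame(
    time=0.0,
    joints={
        "head": JointTarget(
            position=np.array([0.1, 0.0, 1.65]),
            rotation=np.array([1.0, 0.0, 0.0, 0.0]),
        ),
        "left_hand": JointTarget(position=np.array([0.3, 0.25, 1.1])),
        "right_hand": JointTarget(position=np.array([0.3, -0.25, 1.1])),
    },
)
```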
Tasks: Goal Engineering
The system can also handle user-defined constraints, an approach known as goal engineering. From a simple, intuitive specification of what the user wants the character to accomplish, the approach produces a motion that satisfies it.
Locomotion
MaskedMimic can produce a wide range of locomotion behaviors from constraints on the head position (x, y, z) and orientation (w, x, y, z quaternion) alone. To facilitate long-term planning, the controller is given a single distant target frame in addition to a set of near-term frames.
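Below is a minimal sketch of assembling such a head-only goal sequence, again reusing the hypothetical classes above; the horizon lengths and helper name are illustrative assumptions, not values from the paper.

```python
def head_goal_sequence(path, dt=1.0 / 30.0, near_frames=10, far_time=3.0):
    """Build head-only goals: several near-term frames plus one distant frame.

    `path(t)` is assumed to return a (position, quaternion) pair for the head
    at time t. The near frames shape the immediate motion, while the single
    distant frame gives the controller a long-horizon objective.
    """
    goals = []
    for i in range(1, near_frames + 1):
        t = i * dt
        pos, rot = path(t)
        goals.append(TargetFrame(time=t, joints={"head": JointTarget(pos, rot)}))
    far_pos, far_rot = path(far_time)
    goals.append(
        TargetFrame(time=far_time, joints={"head": JointTarget(far_pos, far_rot)})
    )
    return goals


# Example: steer the head along a straight line at roughly walking speed.
line = lambda t: (np.array([1.4 * t, 0.0, 1.7]), np.array([1.0, 0.0, 0.0, 0.0]))
goals = head_goal_sequence(line)
```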
Object Interaction
Object-specific behaviors can be learned from motions that involve objects in the scene. Shown here are the motions MaskedMimic produces when instructed to "interact with that object." Conditioned on the object's state, MaskedMimic generates interaction motions that respect the object's physical properties.
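As an illustration, the conditioning signal for such an interaction might contain only the object's state; the ObjectGoal structure below is a hypothetical sketch, not the paper's actual representation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ObjectGoal:
    """Hypothetical conditioning signal for 'interact with that object'.

    Only the object's state is provided; the controller must infer a fitting
    interaction (e.g. sit on a chair, step onto a box) from that state.
    """
    position: np.ndarray      # (3,) object position in the character's local frame
    rotation: np.ndarray      # (4,) wxyz orientation quaternion
    bbox_extents: np.ndarray  # (3,) half-extents of the object's bounding box
```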
Abstract
Crafting a single, versatile physics-based controller that can bring interactive characters to life across diverse scenes is an exciting frontier in character animation. An ideal controller should support diverse control modalities, such as text instructions, scene information, and sparse target keyframes.
While prior work has developed physically simulated, scene-aware control models, these systems have largely focused on controllers that each specialize in a narrow set of tasks and control modalities.
This paper introduces MaskedMimic, a novel approach that formulates physics-based character control as a general motion inpainting problem. The key insight is that a single unified model can synthesize motions from partial (masked) motion descriptions, such as masked keyframes, objects, text descriptions, or any combination thereof. This is achieved by training on motion-tracking data with a scalable training scheme that can effectively exploit diverse motion descriptions to produce coherent animations.
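To illustrate the idea, here is a minimal sketch of per-sample random masking of the conditioning inputs; the probabilities and mask structure are illustrative assumptions, not the paper's actual training scheme.

```python
import numpy as np


def sample_condition_mask(num_frames, num_joints, p_frame=0.5, p_joint=0.5,
                          p_text=0.5, p_object=0.5, rng=np.random):
    """Randomly hide parts of a fully specified motion description.

    The model is trained to reconstruct the full-body motion from whatever
    remains visible, so at inference time it can accept any user-supplied
    subset of constraints.
    """
    # Which future keyframes are visible at all.
    frame_mask = rng.random(num_frames) < p_frame
    # Within each visible keyframe, which joints carry constraints.
    joint_mask = rng.random((num_frames, num_joints)) < p_joint
    joint_mask &= frame_mask[:, None]
    # Whole modalities (text caption, scene object) are dropped independently.
    text_visible = bool(rng.random() < p_text)
    object_visible = bool(rng.random() < p_object)
    return joint_mask, text_visible, object_visible
```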
Trained in this way, the method yields a physics-based controller with an intuitive control interface for every behavior of interest, without the need for laborious reward engineering.
The resulting controller supports a wide variety of control modalities and enables seamless transitions between distinct tasks. By unifying character control with motion inpainting, MaskedMimic produces versatile virtual characters that can perform diverse motions and react dynamically to complex scenes, enabling more immersive and interactive experiences.
Results
A single unified MaskedMimic controller can be trained on numerous input modalities to generate a broad range of behaviors. The videos that follow demonstrate some of the behaviors MaskedMimic enables; every video was produced with the same underlying controller.