LAMA: Human motion data to realistic complex 3D model actions
Tags: Paper and LLMs, 3D, 3D effects, 3D holographic display
- Pricing Type: Open Source
LAMA: Using only human motion capture data, it synthesizes high-quality, realistic 3D human actions, including locomotion, scene interaction and manipulation, in complex 3D environments.
In addition to scene interaction, LAMA can also simulate how people manipulate objects, such as picking up, putting down or moving objects.
It can help us understand how people move and interact in complex environments, providing a reference for designing more human-centered spaces.
LAMA's working principle is based on the following key technologies and methods:
Synthesizing human-scene interactions in real 3D environments has always been a challenging research problem due to its complexity and diversity. For example, there are many objects in a real 3D environment, which makes motion synthesis very complex. Furthermore, the spatial layout of 3D environments and the diversity of human interaction behaviors make generalization of synthesis difficult.
The goal of the LAMA (Locomotion-Action-Manipulation) system is to synthesize natural and reasonable long-term human movements that occur in complex indoor environments. The key motivation of LAMA is to build a unified framework covering daily actions such as movement, scene interaction and object manipulation. This means that LAMA can help simulate how people move, interact with scenes and manipulate objects in complex environments.
Technical details: Unlike existing methods that require supervision with motion data “paired” with a 3D scene, LAMA formulates the problem as test-time optimization. This means that LAMA optimizes the motion for each given scene at synthesis time, reusing existing human motion capture data, rather than relying on motion data pre-paired with 3D environments.
To achieve this goal, LAMA utilizes a reinforcement learning framework combined with a motion matching algorithm. Reinforcement learning helps the model make appropriate decisions in various scenarios, while motion matching ensures that synthesized actions stay close to real human motion. In addition, LAMA uses a manifold-learning-based motion editing framework to cover the variations that arise in interactions and manipulations.
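At its core, motion matching is a nearest-neighbor search over a database of motion-capture frame features. The sketch below is an illustrative assumption, not LAMA's implementation: the 12-dimensional random features stand in for real descriptors such as root velocity, foot positions and future trajectory samples.

```python
import numpy as np

# Hypothetical feature database: one row per mocap frame
# (in practice: root velocity, foot positions, trajectory samples).
rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 12))  # 1000 frames, 12 features

def motion_match(query, database):
    """Return the index of the database frame whose features are
    closest (Euclidean distance) to the query features."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))

query = database[42] + 0.01   # a query very close to frame 42
best = motion_match(query, database)  # -> 42
```

Each synthesis step, the controller builds a query from the character's current state and desired trajectory, retrieves the best-matching mocap frame, and plays it back, which is what keeps the output looking like real captured motion.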
Projects and demos: https://jiyewise.github.io/projects/LAMA/
Paper: https://arxiv.org/abs/2301.02667
Application scenarios
1. Virtual reality and augmented reality: In VR and AR applications, LAMA can be used to generate a user’s virtual avatar, allowing it to interact naturally with the virtual environment.
2. Film and animation production: In film and animation production, LAMA can be used to automatically generate the movements of background characters, thereby saving production time and costs.
3. Computer games: In computer games, LAMA can be used to generate the actions of non-player characters (NPCs) to make their behavior look more real and natural.
4. Robotics and autonomous driving: LAMA can be used to simulate and predict people’s behavior in complex environments, thereby helping robots and autonomous vehicles better interact with humans.
5. Architecture and interior design: By simulating how people behave in specific environments, LAMA can provide architects and interior designers with valuable feedback to help them design more human-centered spaces.
Features and Related Projects
LAMA (Locomotion-Action-Manipulation): LAMA is a method designed to synthesize natural and realistic long-term human movements in complex indoor environments using human motion capture data[1]. It excels in generating lifelike motions in various challenging scenarios, outperforming previous approaches[6]. This technology has applications in fields such as computer graphics, animation, and robotics.
LaMa (Resolution-robust Large Mask Inpainting): LaMa is a project related to large mask inpainting, focusing on resolution-robust techniques. It is available on GitHub under the Apache-2.0 license and has garnered significant interest with 6.2k stars and 693 forks[2]. The project is a valuable resource for those working on image inpainting and related areas.
Jiye Lee: Jiye Lee is a PhD student in the Department of Computer Science and Engineering at Seoul National University. She is associated with projects like LAMA for synthesizing human movements in complex 3D environments[3]. Her work contributes to advancing the field of computer graphics and animation.
LAMA by Facebook Research: There is also a project named LAMA by Facebook Research, which appears to be related to language model analysis. This project provides code and vectors that can be used in downstream tasks[5]. It may be of interest to researchers and developers working with natural language processing and language models.
Lama (Teaching Language): Lama is used as a teaching language in a compiler course, and it can be accessed on GitHub[7]. It seems to be a tool for educational purposes, particularly for learning about compiler construction.
LAMA by mpi2 (Automatic Phenotyping Tool for 3D Images): Another LAMA project is an automatic phenotyping tool for 3D images. It was developed by Neil Horner in the Bioimage Informatics Group at MRC Harwell UK[10]. This tool is likely relevant to researchers working with 3D image analysis and phenotyping.