Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence

Computer Vision & MultiModal AI
Published: arXiv:2508.13139v1
Authors

Ling-Hao Chen, Yuhong Zhang, Zixin Yin, Zhiyang Dou, Xin Chen, Jingbo Wang, Taku Komura, Lei Zhang

Abstract

This work studies the challenge of transferring animations between characters whose skeletal topologies differ substantially. While retargeting techniques have advanced over the decades, transferring motions across diverse topologies remains underexplored. The primary obstacle lies in the inherent topological inconsistency between source and target skeletons, which prevents the establishment of straightforward one-to-one bone correspondences. In addition, the current lack of large-scale paired motion datasets spanning different topological structures severely constrains the development of data-driven approaches. To address these limitations, we introduce Motion2Motion, a novel, training-free framework. Simple yet effective, Motion2Motion works with only one or a few example motions on the target skeleton, by accessing a sparse set of bone correspondences between the source and target skeletons. Through comprehensive qualitative and quantitative evaluations, we demonstrate that Motion2Motion achieves efficient and reliable performance in both similar-skeleton and cross-species skeleton transfer scenarios. The practical utility of our approach is further evidenced by its successful integration in downstream applications and user interfaces, highlighting its potential for industrial use. Code and data are available at https://lhchen.top/Motion2Motion.

Paper Summary

Problem
The paper addresses the long-standing computer animation problem of transferring a motion from a character with one skeletal topology to another character with a different topology. This is a challenging task, especially for complex characters such as those rigged with skirts or hair.
Key Innovation
The key innovation is the introduction of Motion2Motion, a novel, training-free framework that enables cross-topology motion transfer with sparse correspondence. The framework assumes only minimal data availability (a few-shot setting) and a sparse joint correspondence between source and target skeletons. This allows for meaningful transfer while avoiding the need for large-scale annotation.
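To make the few-shot, sparse-correspondence setting concrete, the sketch below shows one way such a mapping could be represented and applied. This is an illustrative Python sketch under assumed interfaces, not the authors' implementation; SPARSE_BONE_MAP, the bone names, and transfer_matched_bones are all hypothetical.

```python
import numpy as np

# Hypothetical sparse correspondence: maps a few source bone names
# to target bone names. Target bones not listed here have no source
# counterpart and are left to be filled in by other means.
SPARSE_BONE_MAP = {
    "LeftUpLeg": "FrontLeftHip",
    "RightUpLeg": "FrontRightHip",
    "Spine": "Torso",
    "Head": "Head",
}

def transfer_matched_bones(source_motion, target_rest_pose, bone_map):
    """Copy per-frame rotations only for the sparsely corresponded bones.

    source_motion: dict bone_name -> (T, 4) quaternion track
    target_rest_pose: dict bone_name -> (4,) rest quaternion
    Returns a partial target motion; unmatched bones keep their rest
    rotation and must be completed separately (e.g., by drawing on the
    example motions available on the target skeleton).
    """
    num_frames = next(iter(source_motion.values())).shape[0]
    # Initialize every target bone to its rest rotation for all frames.
    target_motion = {
        bone: np.tile(rest, (num_frames, 1))
        for bone, rest in target_rest_pose.items()
    }
    # Overwrite only the bones that have a sparse correspondence.
    for src_bone, tgt_bone in bone_map.items():
        if src_bone in source_motion and tgt_bone in target_motion:
            target_motion[tgt_bone] = source_motion[src_bone].copy()
    return target_motion
```

In this toy version, only the corresponded bones receive source rotations; completing the remaining bones plausibly, using the few example motions on the target skeleton, is the part the paper's framework actually addresses.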
Practical Impact
The practical impact is significant: the framework slots into real-world animation creation pipelines, where transferring motion between differently rigged characters is a routine need. Because it works with only minimal data, it is especially valuable in settings where paired motion data is scarce or expensive to collect.
Analogy / Intuitive Explanation
Imagine trying to retarget a dance move from a human to a robot. You wouldn't just copy the exact same movements, but rather try to capture the essence and spirit of the original dance. Motion2Motion does something similar by identifying key points (joints) on both characters' skeletons and aligning them in a way that preserves the core kinematic characteristics of the motion. In other words, it's not just about matching specific bone movements, but also understanding the underlying dynamics and intent behind the original motion. This allows for more flexible and robust motion transfer across different topologies, making it a powerful tool for animators and motion designers.
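To ground the analogy, one can imagine the "capture the essence" step as retrieving, for a window of the source motion, the window of a target example motion whose corresponded joints move most similarly. The following is again a hedged sketch, not the paper's algorithm; best_matching_window and the feature layout are assumptions.

```python
import numpy as np

def best_matching_window(source_feats, example_feats, window=16):
    """Nearest-neighbor search over sliding windows of motion features.

    source_feats, example_feats: (T, D) arrays of per-frame features
    computed from the sparsely corresponded joints (e.g., stacked
    joint velocities). Returns the start index of the example window
    closest to the first source window under an L2 metric.
    """
    query = source_feats[:window]  # (window, D) query segment
    best_idx, best_dist = 0, np.inf
    # Slide over every valid window in the example motion.
    for start in range(example_feats.shape[0] - window + 1):
        cand = example_feats[start:start + window]
        dist = np.linalg.norm(query - cand)  # Frobenius distance
        if dist < best_dist:
            best_idx, best_dist = start, dist
    return best_idx
```

A retrieval step like this would preserve the target character's native movement style while following the source motion's dynamics at the corresponded joints, which matches the "essence, not exact movements" intuition above.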
Paper Information
Categories: cs.CV
Published Date: August 2025
arXiv ID: 2508.13139v1