bytedance/x-nemo-inference

X-NeMo: Expressive Neural Motion Reenactment via Disentangled Latent Attention

Xiaochen Zhao, Hongyi Xu, Guoxian Song, You Xie, Chenxu Zhang, Xiu Li, Linjie Luo, Jinli Suo, Yebin Liu

This repository contains the video-generation code for the ICLR 2025 paper X-NeMo and the X-Portrait2 project.

arXiv Paper | Project Page

Installation

# Python 3.9, CUDA 12.2
conda create -n xnemo python=3.9
conda activate xnemo
pip install -r requirements.txt

Model

Please download the Stable Diffusion 1.5 pre-trained models (i2v-xt and img-variations), and save them under "pretrained_weights/".

Please download the X-NeMo pre-trained model from here, and save it under "pretrained_weights/".
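For reference, after both downloads "pretrained_weights/" should look roughly like the sketch below. The folder names follow the download instructions above; the exact file names inside each folder depend on the released archives, so treat this layout as an assumption and check against the contents of the downloads and the paths used in eval.sh:

```
pretrained_weights/
├── i2v-xt/               # Stable Diffusion 1.5 i2v-xt weights
├── img-variations/       # Stable Diffusion 1.5 image-variations weights
└── <x-nemo checkpoint>   # X-NeMo pre-trained model from the link above
```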

Testing

bash eval.sh

License

The use of the released code and model must strictly adhere to the respective licenses. Our code is released under the Apache License 2.0, and our model is released under the Creative Commons Attribution-NonCommercial 4.0 International Public License for academic research purposes only.

This research aims to positively impact the field of Generative AI. Any usage of this method must be responsible and comply with local laws. The developers do not assume any responsibility for any potential misuse.

🎓 Citation

If you find this codebase useful for your research, please cite it with the following BibTeX entry.

@article{zhao2025x,
  title={X-NeMo: Expressive neural motion reenactment via disentangled latent attention},
  author={Zhao, Xiaochen and Xu, Hongyi and Song, Guoxian and Xie, You and Zhang, Chenxu and Li, Xiu and Luo, Linjie and Suo, Jinli and Liu, Yebin},
  journal={arXiv preprint arXiv:2507.23143},
  year={2025}
}
