Yuxue Yang1,2, Lue Fan2, Zuzeng Lin3, Feng Wang4, Zhaoxiang Zhang1,2†
1UCAS  2CASIA  3TJU  4CreateAI  †Corresponding author
Official implementation of LayerAnimate: Layer-level Control for Animation, ICCV 2025
Videos on the project website vividly introduce our work and present qualitative results for an enhanced viewing experience.
- [25-08-22] Release the Layer Curation Pipeline, including the demo and comprehensive usage guidance.
- [25-06-26] Our work is accepted to ICCV 2025! 🎉
- [25-05-29] We have extended LayerAnimate to a DiT variant (Wan2.1 1.3B), enabling the generation of 81 frames at 480 × 832 resolution. It performs surprisingly well in the real-world domain, as shown on the project website.
- [25-03-31] Release the online demo on Hugging Face.
- [25-03-30] Release a gradio script app.py to run the demo locally. Please raise an issue if you encounter any problems.
- [25-03-22] Release the checkpoint and the inference script. We update the layer curation pipeline and add support for trajectory control, enabling flexible composition of various layer-level controls.
- [25-01-15] Release the project page and the arXiv preprint.
We have released a comprehensive pipeline for extracting motion-based layers from video sequences. The layer curation pipeline automatically decomposes videos into layers based on motion patterns; you can control the number of extracted layers by adjusting the layer capacity parameter to obtain varying levels of motion granularity.
More details can be found in the repo.
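The exact entry point and flags live in the layer curation directory of this repo; the sketch below only illustrates the shape of a typical invocation, and both the script path and the `--layer_capacity` flag name are placeholders rather than the released interface.

```bash
# Hypothetical usage sketch: script path, input path, and flag names are placeholders;
# see the layer curation guide in this repo for the actual interface.
# A higher layer capacity yields more, finer-grained motion layers.
python layer_curation/demo.py \
    --input_video __assets__/demos/sample1.mp4 \
    --layer_capacity 4 \
    --savedir outputs/layers/sample1
```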
Input Videos | Layer Results |
---|---|
sample1.mp4 | sample1_layer.mp4 |
sample2.mp4 | sample2_layer.mp4 |
sample3.mp4 | sample3_layer.mp4 |
sample4.mp4 | sample4_layer.mp4 |
git clone [email protected]:IamCreateAI/LayerAnimate.git
conda create -n layeranimate python=3.10 -y
conda activate layeranimate
pip install -r requirements.txt
pip install wan@git+https://github.com/Wan-Video/Wan2.1 # If you want to use the DiT variant.
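If you want a quick sanity check that the environment is ready before downloading the weights, the commands below are a minimal sketch; they assume requirements.txt pulls in PyTorch and that the optional Wan2.1 package is importable as `wan`.

```bash
# Minimal environment check (assumes requirements.txt installs PyTorch).
python -c "import torch; print(torch.__version__, '| CUDA available:', torch.cuda.is_available())"
# Only relevant if you installed the optional DiT dependency above.
python -c "import wan; print('wan imported successfully')"
```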
Models | Download Link | Video Size (frames × height × width) |
---|---|---|
UNet variant | Hugging Face 🤗 | 16 × 320 × 512 |
DiT variant | Hugging Face 🤗 | 81 × 480 × 832 |
Download the pretrained weights and put them in the `checkpoints/` directory as follows:
checkpoints/
├── LayerAnimate-Mix (UNet variant)
└── LayerAnimate-DiT
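One convenient way to fetch the weights is the Hugging Face CLI (shipped with huggingface_hub). The repo IDs below are placeholders; substitute the IDs behind the download links in the table above.

```bash
# Placeholder repo IDs; replace <org> with the organization/user from the table links.
huggingface-cli download <org>/LayerAnimate-Mix --local-dir checkpoints/LayerAnimate-Mix
huggingface-cli download <org>/LayerAnimate-DiT --local-dir checkpoints/LayerAnimate-DiT
```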
Run one of the following commands to generate a video from input images:
python scripts/animate_Layer.py --config scripts/demo1.yaml --savedir outputs/sample1
python scripts/animate_Layer.py --config scripts/demo2.yaml --savedir outputs/sample2
python scripts/animate_Layer.py --config scripts/demo3.yaml --savedir outputs/sample3
python scripts/animate_Layer.py --config scripts/demo4.yaml --savedir outputs/sample4
python scripts/animate_Layer.py --config scripts/demo5.yaml --savedir outputs/sample5
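To run all five released demos in one go, a simple loop over the configs works; this only batches the commands listed above and assumes nothing new.

```bash
# Batch all demo configs; results land in outputs/sample1 ... outputs/sample5.
for i in 1 2 3 4 5; do
    python scripts/animate_Layer.py --config scripts/demo${i}.yaml --savedir outputs/sample${i}
done
```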
Note that the layer-level controls are prepared in `__assets__/demos`.
You can run the demo locally by executing the following command:
python scripts/app.py --savedir outputs/gradio
Then, open the link in your browser to access the demo interface. The output video and the video with trajectory will be saved in the `outputs/gradio` directory.
Run the following command to generate a video from input images with the DiT variant:
python scripts/infer_DiT.py --config __assets__/demos/realworld/config.yaml --savedir outputs/realworld
We take the `config.yaml` in `__assets__/demos/realworld/` as an example. You can also modify the config file to suit your needs.
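A safe way to experiment is to copy the released config and edit the copy; the exact field names depend on the released `config.yaml`, so inspect it before changing anything.

```bash
# Start from the released real-world config and adapt the copy
# (available fields such as prompts, input frames, and layer controls depend on the released file).
cp __assets__/demos/realworld/config.yaml my_realworld_config.yaml
# ... edit my_realworld_config.yaml to suit your inputs ...
python scripts/infer_DiT.py --config my_realworld_config.yaml --savedir outputs/my_realworld
```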
- Release the code and checkpoint of LayerAnimate.
- Upload a gradio script to run the demo locally.
- Create an online demo in the Hugging Face Space.
- DiT-based LayerAnimate.
- Release the layer curation pipeline.
- Training script for LayerAnimate.
We sincerely thank ToonCrafter, LVCD, AniDoc, and Wan-Video for their inspiring work and contributions to the AIGC community.
Please consider citing our work as follows if you find it helpful.
@article{yang2025layeranimate,
author = {Yang, Yuxue and Fan, Lue and Lin, Zuzeng and Wang, Feng and Zhang, Zhaoxiang},
title = {LayerAnimate: Layer-level Control for Animation},
journal = {arXiv preprint arXiv:2501.08295},
year = {2025},
}