Original repository: LivePortrait, thanks to the authors for sharing
New features:
- Achieved real-time LivePortrait inference on an RTX 3090 GPU using TensorRT, reaching 30+ FPS. This is the speed for rendering a single frame, including pre- and post-processing, not just model inference.
- Seamless support for the native Gradio app, running several times faster and supporting simultaneous inference on multiple faces as well as the animal model.
- Added support for JoyVASA, which can drive videos or images with audio.
If you find this project useful, please give it a star ✨✨
- Anyone want this? Feel free to contact me.
397962532-247fec20-54b6-4496-bef4-92dff16f5180-ff.mp4
- Text-driven video, based on kokoro-82M:
file4.webp-file4.webp-kokoro-82m-tts.wav.pkl-org-audi-ff.mp4
- Audio-driven video (real-time):
s9.jpeg-s9.jpeg-865042464-1-192.wav.pkl-org-audio-ff.mp4
- Animal-driven:
8.10.2.mp4
- Multiple faces driven simultaneously:
0b6957cc-1c3b-42a6-89f9-4d4206ad4849--d14.mp4
- Option 1 (recommended): If you are a Windows user, you can directly download the integrated package.
- You need to install git first, then double-click `update.bat` to update the code.
- Double-click `scripts/all_onnx2trt.bat` to convert the ONNX files to TensorRT files.
- Double-click `webui.bat` to open the webpage, or double-click `camera.bat` to start the camera for real-time operation.
 
- Option 2: Docker. A Docker image is provided, eliminating the need to install onnxruntime-gpu and TensorRT manually.
- Install Docker according to your system
- Download the image: `docker pull shaoguo/faster_liveportrait:v3`
- Execute the command, replacing `$FasterLivePortrait_ROOT` with the local directory where you downloaded FasterLivePortrait:
```shell
docker run -it --gpus=all \
  --name faster_liveportrait \
  -v $FasterLivePortrait_ROOT:/root/FasterLivePortrait \
  --restart=always \
  -p 9870:9870 \
  shaoguo/faster_liveportrait:v3 \
  /bin/bash
```
- Option 3: Create a new Python virtual environment and install the necessary Python packages manually.
- First, install ffmpeg
- Run pip install -r requirements.txt
- Then follow the tutorials below to install onnxruntime-gpu or TensorRT. Note that this has only been tested on Linux systems.
 
- (Ignored in Docker) Install TensorRT 8.x (versions >=10.x are not compatible). Remember the installation path of TensorRT.
- (Ignored in Docker) Install the grid_sample TensorRT plugin, because the model uses a grid_sample op with 5D input, which the native grid_sample operator does not support.
- git clone https://github.com/SeanWangJS/grid-sample3d-trt-plugin
- Modify line 30 in `CMakeLists.txt` to: `set_target_properties(${PROJECT_NAME} PROPERTIES CUDA_ARCHITECTURES "60;70;75;80;86")`
- export PATH=/usr/local/cuda/bin:$PATH
- mkdir build && cd build
- `cmake .. -DTensorRT_ROOT=$TENSORRT_HOME`, replacing `$TENSORRT_HOME` with your own TensorRT root directory.
- Run `make` and note the path of the generated `.so` file. Replace `/opt/grid-sample3d-trt-plugin/build/libgrid_sample_3d_plugin.so` in `scripts/onnx2trt.py` and `src/models/predictor.py` with your own `.so` file path (a minimal loading sketch follows this list).
 
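For reference, here is a minimal sketch (not the repo's exact code) of how a custom TensorRT plugin `.so` is typically loaded from Python before building or running engines; the path below is the example path mentioned above and should be replaced with your own build output.

```python
# Sketch only: registering the grid_sample 3D plugin with TensorRT from Python.
# The plugin path is the example path from the steps above; replace it with your own.
import ctypes
import tensorrt as trt

PLUGIN_PATH = "/opt/grid-sample3d-trt-plugin/build/libgrid_sample_3d_plugin.so"
ctypes.CDLL(PLUGIN_PATH)                 # load the shared library; its plugin creator self-registers
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # make all registered plugins visible to TensorRT
```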
- Download the ONNX model files: `huggingface-cli download warmshao/FasterLivePortrait --local-dir ./checkpoints`. Convert all ONNX models to TensorRT by running `sh scripts/all_onnx2trt.sh` and `sh scripts/all_onnx2trt_animal.sh`.
- Test the pipeline using TensorRT:
```shell
python run.py \
  --src_image assets/examples/source/s10.jpg \
  --dri_video assets/examples/driving/d14.mp4 \
  --cfg configs/trt_infer.yaml
```
- To run in real-time using a camera:
```shell
python run.py \
  --src_image assets/examples/source/s10.jpg \
  --dri_video 0 \
  --cfg configs/trt_infer.yaml \
  --realtime
```
- First, download the converted ONNX model files: `huggingface-cli download warmshao/FasterLivePortrait --local-dir ./checkpoints`.
- (Ignored in Docker) If you want to use onnxruntime CPU inference, simply `pip install onnxruntime`. However, CPU inference is extremely slow and not recommended. The latest onnxruntime-gpu still doesn't support grid_sample on CUDA, but I found a branch that supports it. Follow these steps to install onnxruntime-gpu from source (a short illustration of the op that needs this follows these steps):
- `git clone https://github.com/microsoft/onnxruntime`
- `git checkout liqun/ImageDecoder-cuda`. Thanks to liqun for the grid_sample-with-CUDA implementation!
- Run the following commands to compile, changing `cuda_version` and `CMAKE_CUDA_ARCHITECTURES` according to your machine (your cuDNN version must be 8.x; 9.x is not compatible):
```shell
./build.sh --parallel \
  --build_shared_lib --use_cuda \
  --cuda_version 11.8 \
  --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda/ \
  --config Release --build_wheel --skip_tests \
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="60;70;75;80;86" \
  --cmake_extra_defines CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
  --disable_contrib_ops \
  --allow_running_as_root
```
- Install the built wheel: `pip install build/Linux/Release/dist/onnxruntime_gpu-1.17.0-cp310-cp310-linux_x86_64.whl`
 
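For context, a minimal PyTorch example of the kind of volumetric (5D-input) grid_sample call the model relies on, per the notes above; the shapes here are illustrative only, not the model's actual tensor sizes.

```python
# Illustrative only: a 5D (N, C, D, H, W) feature volume sampled with an
# (N, D, H, W, 3) grid. Per the notes above, this is the case that needs the
# patched onnxruntime build or the grid_sample 3D TensorRT plugin.
import torch
import torch.nn.functional as F

feat = torch.randn(1, 32, 16, 64, 64)        # volumetric features
grid = torch.rand(1, 16, 64, 64, 3) * 2 - 1  # normalized sampling coords in [-1, 1]
warped = F.grid_sample(feat, grid, align_corners=False)
print(warped.shape)                          # torch.Size([1, 32, 16, 64, 64])
```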
- Test the pipeline using onnxruntime:
```shell
python run.py \
  --src_image assets/examples/source/s10.jpg \
  --dri_video assets/examples/driving/d14.mp4 \
  --cfg configs/onnx_infer.yaml
```
- onnxruntime: `python webui.py --mode onnx`
- tensorrt: `python webui.py --mode trt`
- The default port is 9870. Open the webpage: http://localhost:9870/
Hotkeys for webcam mode (when the render window is in focus):

| Key | Action |
| --- | --- |
| Q | Exit |
| S | Stitching |
| Z | RelativeMotion |
| X | AnimationRegion |
| C | CropDrivingVideo |
| K, L | AdjustSourceScale |
| N, M | AdjustDriverScale |
- Code: This project is licensed under the MIT License - see the LICENSE file for details.
- Models: Any machine learning models used in this project are subject to their respective licenses. Please refer to the original model sources for license information. We do not take responsibility for model license compliance.
Changelog
- 2025/06/29: LivePortrait animal v1.1 ONNX models are available. Download from this link.
-  2024/12/22: Added API deployment (`python api.py`). For more information, please refer to the tutorial.
-  2024/12/21: Added support for Kokoro-82M, enabling text-driven video or image generation.
- Update the code: `git pull origin master` and install the latest Python dependencies with `pip install -r requirements.txt`, or simply double-click `update.bat` on Windows.
- Download the model: `huggingface-cli download hexgrad/Kokoro-82M --local-dir .\checkpoints\Kokoro-82M`.
- For Linux, install espeak-ng: `apt-get -qq -y install espeak-ng > /dev/null 2>&1`
- For Windows, refer to the manual installation instructions and configure the espeak-ng environment variables. The current read location is here; modify it if your installation path differs.
- Now you can use it normally in the "Drive Text" tab.
 
- 2024/12/16: Added support for JoyVASA, which can drive videos or images with audio. Very cool!
- Update the code, then download the models: `huggingface-cli download TencentGameMate/chinese-hubert-base --local-dir .\checkpoints\chinese-hubert-base` and `huggingface-cli download jdh-algo/JoyVASA --local-dir ./checkpoints/JoyVASA`
- After launching the webui, follow the tutorial below. When the source is a video, it's recommended to only drive the mouth movements
12.16.1.mp4
-  2024/12/14: Added pickle and image driving, as well as region driving (`animation_region`).
- Please update to the latest code. Windows users can directly double-click `update.bat` to update, but note that your local changes will be overwritten.
- Running `python run.py` now automatically saves the corresponding pickle to the same directory as the driving video, allowing for direct reuse.
- After opening the webui, you can try the new pickle and image driving, as well as the region driving (`animation_region`) features. Note that for image driving, remember to disable relative motion.
 
-  2024/08/11: Optimized paste_back speed and fixed some bugs.
- Used torchgeometry + CUDA to optimize the paste_back function, significantly improving speed (a minimal sketch of the idea follows this entry). Example: `python run.py --src_image assets/examples/source/s39.jpg --dri_video assets/examples/driving/d0.mp4 --cfg configs/trt_infer.yaml --paste_back --animal`
- Fixed issues with XPose ops causing errors on some GPUs, along with other bugs. Please use the latest Docker image: `docker pull shaoguo/faster_liveportrait:v3`
 
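As a rough illustration of the paste_back idea above (not the repo's exact implementation): the generated crop can be warped back into full-frame coordinates with torchgeometry's affine warp and alpha-blended entirely on the GPU. Tensor names, shapes, and the helper function below are assumptions made for this sketch.

```python
# Sketch only: GPU paste_back via torchgeometry's warp_affine + mask blending.
# Names and shapes are illustrative, not the repo's actual interfaces.
import torch
import torchgeometry as tgm

def paste_back_gpu(crop, M_inv, frame, mask):
    # crop:  (1, 3, Hc, Wc) generated face crop on CUDA
    # M_inv: (1, 2, 3) affine matrix mapping crop coords back to frame coords
    # frame: (1, 3, H, W) original frame on CUDA
    # mask:  (1, 1, Hc, Wc) soft blending mask for the crop
    H, W = frame.shape[-2:]
    warped = tgm.warp_affine(crop, M_inv, dsize=(H, W))        # crop -> full-frame coords
    warped_mask = tgm.warp_affine(mask, M_inv, dsize=(H, W))   # mask -> full-frame coords
    return warped_mask * warped + (1.0 - warped_mask) * frame  # alpha blend on the GPU
```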
-  2024/08/07: Added support for animal models and MediaPipe models, so you no longer need to worry about copyright issues.
- Added support for animal models.
- Download the animal ONNX files: `huggingface-cli download warmshao/FasterLivePortrait --local-dir ./checkpoints`, then convert them to TensorRT format (`sh scripts/all_onnx2trt_animal.sh`).
- Update the Docker image: `docker pull shaoguo/faster_liveportrait:v3`. To use the animal model: `python run.py --src_image assets/examples/source/s39.jpg --dri_video 0 --cfg configs/trt_infer.yaml --realtime --animal`
- Windows users can download the latest Windows all-in-one package from the release page, then unzip and use it.
- Simple usage tutorial:
 0807-3.mp4
- Using the MediaPipe model to replace InsightFace:
- For web usage: `python webui.py --mode trt --mp` or `python webui.py --mode onnx --mp`
- For local webcam: `python run.py --src_image assets/examples/source/s12.jpg --dri_video 0 --cfg configs/trt_mp_infer.yaml`
 
-  2024/07/24: Windows integration package: no installation required, one-click run, supports TensorRT and onnxruntime-gpu. Thanks to @zhanghongyong123456 for their contribution in this issue.
- [Optional] If you have already installed CUDA and cuDNN on your Windows computer, please skip this step. I have only verified on CUDA 12.2. If you haven't installed CUDA or encounter CUDA-related errors, you need to follow these steps:
- Download the installation-free Windows integration package from the release page and extract it.
- Enter `FasterLivePortrait-windows` and double-click `scripts/all_onnx2trt.bat` to convert the ONNX files, which will take some time.
- For the web demo: double-click `webui.bat` and open the webpage: http://localhost:9870/
- For real-time camera operation, double-click `camera.bat`; press `q` to stop. If you want to change the target image, run from the command line: `camera.bat assets/examples/source/s9.jpg`
 
-  2024/07/18: macOS support added (no need for Docker; Python is enough). M1/M2 chips are faster, but it's still quite slow 😟
- Install ffmpeg: `brew install ffmpeg`
- Set up a Python 3.10 virtual environment. Miniforge is recommended: `conda create -n flip python=3.10 && conda activate flip`
- Install requirements: `pip install -r requirements_macos.txt`
- Download the ONNX files: `huggingface-cli download warmshao/FasterLivePortrait --local-dir ./checkpoints`
- Test: `python webui.py --mode onnx`
 
- 2024/07/17: Added support for Docker environment, providing a runnable image.