LivePortrait tutorial: how to install LivePortrait from source code (plus a portable package)

Silicon Gamer

30/05/2025

updated 30/05/2025

1. LivePortrait introduction

LivePortrait is an open-source AI framework designed to animate static portrait photos into realistic talking-face videos. I have tested it on my local Windows PC, and the results are remarkably impressive: it clearly outperforms the majority of tools I have previously tested and evaluated.

This is a tutorial demonstrating how to install LivePortrait from source code.

 

2. Git clone the code

git clone https://github.com/KwaiVGI/LivePortrait
Jin@DESKTOP-GF0MN1S MINGW64 /c/Workstation/Python/AI
$ git clone https://github.com/KwaiVGI/LivePortrait
Cloning into 'LivePortrait'...
remote: Enumerating objects: 1063, done.
remote: Counting objects: 100% (288/288), done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 1063 (delta 256), reused 243 (delta 243), pack-reused 775 (from 2)
Receiving objects: 100% (1063/1063), 38.76 MiB | 1.62 MiB/s, done.
Resolving deltas: 100% (551/551), done.

Then you will find a new directory “LivePortrait”:

C:\Workstation\Python\AI
├ComfyUI_windows_portable
├DeepFaceLab_NVIDIA_RTX3000_series_2021_11_20
├DFL
├facefusion
├GPT-SoVITS-beta0217
├LivePortrait
├lora
├OOTDiffusion
├SadTalker
├StabilityMatrix
├stable-diffusion-webui
├stable-diffusion-webui-portable

3. prepare the Conda environment

Execute the following command in CMD (also called the Terminal):

conda create -n env_liveportrait python=3.10

I know CMD is one of the most basic concepts in programming, but some beginners occasionally ask me what it is, so here is a brief explanation: CMD (the Command Prompt) is Windows’ built-in command-line window; press Win+R, type cmd, and press Enter to open it. If you installed Anaconda or Miniconda, you can also use the “Anaconda Prompt” from the Start menu, which already has conda configured.
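Before creating the environment, it does no harm to confirm that conda is actually reachable from your prompt (the exact path and version number will differ on your machine):

where conda
conda --version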

If everything is OK, you will see:

C:\Users\Jin>conda create -n env_liveportrait python=3.10
Channels:
- defaults
- conda-forge
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

environment location: C:\Workstation\Environment\envs\env_liveportrait

added / updated specs:
- python=3.10


The following NEW packages will be INSTALLED:

bzip2 anaconda/pkgs/main/win-64::bzip2-1.0.8-h2bbff1b_6
ca-certificates anaconda/pkgs/main/win-64::ca-certificates-2025.2.25-haa95532_0
libffi anaconda/pkgs/main/win-64::libffi-3.4.4-hd77b12b_1
openssl anaconda/pkgs/main/win-64::openssl-3.0.16-h3f729d1_0
pip anaconda/pkgs/main/noarch::pip-25.1-pyhc872135_2
python anaconda/pkgs/main/win-64::python-3.10.16-h4607a30_1
setuptools anaconda/pkgs/main/win-64::setuptools-78.1.1-py310haa95532_0
sqlite anaconda/pkgs/main/win-64::sqlite-3.45.3-h2bbff1b_0
tk anaconda/pkgs/main/win-64::tk-8.6.14-h0416ee5_0
tzdata anaconda/pkgs/main/noarch::tzdata-2025b-h04d1e81_0
vc anaconda/pkgs/main/win-64::vc-14.42-haa95532_5
vs2015_runtime anaconda/pkgs/main/win-64::vs2015_runtime-14.42.34433-hbfb602d_5
wheel anaconda/pkgs/main/win-64::wheel-0.45.1-py310haa95532_0
xz anaconda/pkgs/main/win-64::xz-5.6.4-h4754444_1
zlib anaconda/pkgs/main/win-64::zlib-1.2.13-h8cc25b3_1


Proceed ([y]/n)? y


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: /

Wait several seconds

done
#
# To activate this environment, use
#
# $ conda activate env_liveportrait
#
# To deactivate an active environment, use
#
# $ conda deactivate

 

 

4. activate the conda environment

conda activate env_liveportrait

C:\Users\Jin>conda activate env_liveportrait
(env_liveportrait) C:\Users\Jin>

A Conda environment acts as an isolated container: once it is activated, everything you install and run stays inside it, so it will not conflict with other Python environments on your machine.
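You can see the isolation for yourself: with the environment activated, the first python on your PATH should be the one inside env_liveportrait (on my machine the environments live under C:\Workstation\Environment\envs, as shown in the conda output above; yours may differ):

where python
python --version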

 

5. check CUDA version

Execute a command in CMD

 nvcc -V
C:\Users\Jin>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:30:10_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
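Note that nvcc reports the version of the installed CUDA toolkit. The maximum CUDA version supported by your GPU driver is a separate number; it is shown in the top-right corner of the nvidia-smi output, and the two do not have to match exactly:

nvidia-smi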

 

6. install the corresponding torch version

Here are examples for different CUDA versions. Visit the PyTorch Official Website for installation commands if your CUDA version is not listed:

# for CUDA 11.1
pip install torch==1.10.1+cu111 torchvision==0.11.2 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
# for CUDA 11.8
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118
# for CUDA 12.1
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu121
# for CUDA 12.4 
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124

Do not forget your CUDA version and your conda environment: my CUDA version is 12.4, so I execute the following command inside env_liveportrait.

pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124

If you encounter errors during installation via pip, you can try conda instead.

conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia

This may require a slightly longer wait, depending on your network speed, computer performance, or other factors.

I only noticed later that the LivePortrait project author had noted that higher CUDA versions might cause issues. After verification, I indeed encountered these problems and ultimately had to downgrade my CUDA version.

Note: On Windows systems, some higher versions of CUDA (such as 12.4, 12.6, etc.) may lead to unknown issues. You may consider downgrading CUDA to version 11.8 for stability. See the downgrade guide by @dimitribarbot.
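Whichever torch build you end up with, it is worth verifying that it can actually see your GPU before moving on. This one-liner prints the installed torch version, whether CUDA is available, and the CUDA version torch was built against:

python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.cuda)"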

 

 

7.  install the remaining dependencies

Enter the LivePortrait root directory first, then install the requirements:

cd C:\Workstation\Python\AI\LivePortrait
pip install -r requirements.txt

If you get an error like "ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'", it most likely means you have not changed into the LivePortrait root directory first.
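Optionally, once the requirements are installed, pip check will report any packages whose declared dependencies conflict with what is actually installed in the environment:

pip check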

 

8. Download pretrained weights

pip install -U "huggingface_hub[cli]"
huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights --exclude "*.git*" "README.md" "docs"

Alternatively, you can download all pretrained weights from Google Drive or Baidu Yun. Unzip them and place them in ./pretrained_weights.

Ensure the directory structure is as follows:

pretrained_weights
├── insightface
│   └── models
│       └── buffalo_l
│           ├── 2d106det.onnx
│           └── det_10g.onnx
├── liveportrait
│   ├── base_models
│   │   ├── appearance_feature_extractor.pth
│   │   ├── motion_extractor.pth
│   │   ├── spade_generator.pth
│   │   └── warping_module.pth
│   ├── landmark.onnx
│   └── retargeting_models
│       └── stitching_retargeting_module.pth
└── liveportrait_animals
    ├── base_models
    │   ├── appearance_feature_extractor.pth
    │   ├── motion_extractor.pth
    │   ├── spade_generator.pth
    │   └── warping_module.pth
    ├── retargeting_models
    │   └── stitching_retargeting_module.pth
    └── xpose.pth
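A quick way to confirm the files landed where LivePortrait expects them is to list the folder recursively from the LivePortrait root (dir /s /b prints every file with its full path) and compare against the tree above:

dir /s /b pretrained_weights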



9. Inference

python inference.py
If the script runs successfully, you will get an output mp4 file named animations/s6--d0_concat.mp4. This file includes the following results: driving video, input image or video, and generated result.
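By default inference.py animates one of the bundled example pairs. To use your own files, the script accepts -s (source image) and -d (driving video) options, at least in the version I cloned; the paths below point at the repository’s bundled examples, so substitute your own:

# animate your own source image with your own driving video
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
# list all available options
python inference.py -h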

The basic functionality of the web interface (see the note after this list on how to start it) is straightforward to use:

  1. Upload a source image.
  2. Upload a video containing the desired movements and expressions.
  3. Click the ‘Start’ button – results will be generated within seconds.
  4. You’ll observe the facial features from the photo being seamlessly applied to the video.
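The interface described above is the Gradio app that ships with LivePortrait. Assuming your clone matches the official README, you start it from the LivePortrait root directory (inside the env_liveportrait environment) and then open the local URL it prints in your browser:

python app.py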


 

 

10. One More Thing

I know that even with the most detailed tutorials and step-by-step demonstrations, some readers will still run into various issues during installation:

  • Python packages and dependencies
  • PyTorch framework dependencies
  • NVIDIA’s CUDA stack

These three elements often become a nightmare for non-expert users, so I also provide a pre-packaged, out-of-the-box version here. Just download, unpack, and you’re ready to go.

 

LivePortrait Portable Package (Google Drive)

 
