Deep-Live-Cam

Real-time face swap and video deepfake with a single click and only a single image.

Demo GIF

Disclaimer

This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.

We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.

  • Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.

  • Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.

  • Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.

  • User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.

By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.

Exclusive v2.0 Quick Start - Pre-built (Windows)

This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU.
These pre-builts are perfect for non-technical users or those who don't have the time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually. The pre-built version runs 60 days ahead of the open-source version.

TL;DR: Live deepfake in just 3 clicks

  1. Select a face
  2. Select which camera to use
  3. Press live!

Features & Uses - Everything is in real-time

Mouth Mask

Retain your original mouth for accurate movement using the Mouth Mask feature

Face Mapping

Use different faces on multiple subjects simultaneously

Your Movie, Your Face

Watch movies with any face in real-time

Live Show

Run live shows and performances

Memes

Create Your Most Viral Meme Yet

Created using Many Faces feature in Deep-Live-Cam

Omegle

Surprise people on Omegle

Installation (Manual)

Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the prebuilt version.

This method is more likely to work on your computer, but it will be slower because it runs on the CPU.

1. Set up Your Platform

2. Clone the Repository

git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam

3. Download the Models

  1. GFPGANv1.4
  2. inswapper_128_fp16.onnx

Place these files in the "models" folder.
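A quick way to sanity-check that both files landed where the app expects them (a sketch; the .pth extension for GFPGANv1.4 is an assumption based on the usual release filename, so adjust if your download differs):

```shell
# Report which of the two required model files are present
for f in models/GFPGANv1.4.pth models/inswapper_128_fp16.onnx; do
  if [ -f "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```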

4. Install Dependencies

We highly recommend using a venv to avoid issues.

For Windows:

It is highly recommended to use Python 3.10 for Windows for best compatibility with all features and dependencies.

Automated Setup (Recommended):

  1. Run the setup script: Double-click setup_windows.bat or run it from your command prompt:

    setup_windows.bat
    

    This script will:

    • Check if Python is in your PATH.
    • Warn if ffmpeg is not found (see "Manual Steps / Notes" below for ffmpeg help).
    • Create a virtual environment named .venv (consistent with macOS setup).
    • Activate the virtual environment for the script's session.
    • Upgrade pip.
    • Install Python packages from requirements.txt. Wait for the script to complete. It will pause at the end; press any key to close the window if you double-clicked it.
  2. Run the application: After setup, use the provided .bat scripts to run the application. These scripts automatically activate the correct virtual environment:

    • run_windows.bat: Runs the application with the CPU execution provider by default. This is a good starting point if you don't have a dedicated GPU or are unsure.
    • run-cuda.bat: Runs with the CUDA (NVIDIA GPU) execution provider. Requires an NVIDIA GPU and CUDA Toolkit installed (see GPU Acceleration section).
    • run-directml.bat: Runs with the DirectML (AMD/Intel GPU on Windows) execution provider.

    Example: Double-click run_windows.bat to launch the UI, or run from a command prompt:

    run_windows.bat --source path\to\your_face.jpg --target path\to\video.mp4
    

Manual Steps / Notes:

  • Python: Ensure Python 3.10 is installed and added to your system's PATH. You can download it from python.org.
  • ffmpeg:
    • ffmpeg is required for video processing. The setup_windows.bat script will warn if it's not found in your PATH.
    • An easy way to install ffmpeg on Windows is to open PowerShell as Administrator and run:
      Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')); choco install ffmpeg -y
      
Alternatively, download a build from ffmpeg.org, extract it, and add the bin folder (containing ffmpeg.exe) to your system's PATH environment variable. Another quick PowerShell option is iex (irm ffmpeg.tc.ht).
  • Visual Studio Runtimes: If you encounter errors during pip install for packages that compile C code (e.g., some scientific computing or image processing libraries), you might need the Visual Studio Build Tools (or Runtimes). Ensure "C++ build tools" (or similar workload) are selected during installation.
  • Virtual Environment (Manual Alternative): If you prefer to set up the virtual environment manually instead of using setup_windows.bat:
    python -m venv .venv
    .venv\Scripts\activate.bat
    python -m pip install --upgrade pip
    python -m pip install -r requirements.txt
    
    (The new automated scripts use .venv as the folder name for consistency with the macOS setup).
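Whichever ffmpeg install route you take, you can confirm it is actually reachable from PATH afterwards. The snippet below is a sketch for a Git Bash or WSL shell; in plain cmd, `where ffmpeg` does the same job:

```shell
# Print the installed ffmpeg version, or a hint if it is not on PATH
command -v ffmpeg >/dev/null 2>&1 \
  && ffmpeg -version | head -n 1 \
  || echo "ffmpeg not found on PATH - see the install options above"
```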

For Linux:

# Ensure you use the installed Python 3.10
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

For macOS:

For a streamlined setup on macOS, use the provided shell scripts:

  1. Make scripts executable: Open your terminal, navigate to the cloned Deep-Live-Cam directory, and run:

    chmod +x setup_mac.sh
    chmod +x run_mac*.sh
    
  2. Run the setup script: This will check for Python 3.9+, ffmpeg, create a virtual environment (.venv), and install required Python packages.

    ./setup_mac.sh
    

    If you encounter issues with specific packages during pip install (especially for libraries that compile C code, like some image processing libraries), you might need to install system libraries via Homebrew (e.g., brew install jpeg libtiff ...) or ensure Xcode Command Line Tools are installed (xcode-select --install).

  3. Activate the virtual environment (for manual runs): After setup, if you want to run commands manually or use developer tools from your terminal session:

    source .venv/bin/activate
    

    (To deactivate, simply type deactivate in the terminal.)

  4. Run the application: Use the provided run scripts for convenience. These scripts automatically activate the virtual environment.

    • ./run_mac.sh: Runs the application with the CPU execution provider by default. This is a good starting point.
    • ./run_mac_cpu.sh: Explicitly uses the CPU execution provider.
    • ./run_mac_coreml.sh: Attempts to use the CoreML execution provider for potential hardware acceleration on Apple Silicon and Intel Macs.
    • ./run_mac_mps.sh: Attempts to use the MPS (Metal Performance Shaders) execution provider, primarily for Apple Silicon Macs.

    Example of running with specific source/target arguments:

    ./run_mac.sh --source path/to/your_face.jpg --target path/to/video.mp4
    

    Or, to simply launch the UI:

    ./run_mac.sh
    

Important Notes for macOS GPU Acceleration (CoreML/MPS):

  • The setup_mac.sh script installs packages from requirements.txt, which typically includes a general CPU-based version of onnxruntime.
  • For optimal performance on Apple Silicon (M1/M2/M3) or specific GPU acceleration, you might need to install a different onnxruntime package after running setup_mac.sh and while the virtual environment (.venv) is active.
  • Example for onnxruntime-silicon (often requires Python 3.10 for older versions like 1.13.1): The original README noted that onnxruntime-silicon==1.13.1 was specific to Python 3.10. If you intend to use this exact version for CoreML:
    # Ensure you are using Python 3.10 if required by your chosen onnxruntime-silicon version
    # After running setup_mac.sh and activating .venv:
    # source .venv/bin/activate
    
    pip uninstall onnxruntime onnxruntime-gpu # Uninstall any existing onnxruntime
    pip install onnxruntime-silicon==1.13.1   # Or your desired version
    
    # Then use ./run_mac_coreml.sh
    
    Check the ONNX Runtime documentation for the latest recommended packages for Apple Silicon.
  • For MPS with ONNX Runtime: This may require a specific build or version of onnxruntime. Consult the ONNX Runtime documentation. For PyTorch-based operations (like the Face Enhancer or Hair Segmenter if they were PyTorch native and not ONNX), PyTorch should automatically try to use MPS on compatible Apple Silicon hardware if available.
  • User Interface (Tkinter): If you encounter errors related to _tkinter not being found when launching the UI, ensure your Python installation supports Tk. For Python installed via Homebrew, this is usually python-tk (e.g., brew install python-tk@3.9 or brew install python-tk@3.10, matching your Python version).
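The Tk requirement from the last bullet can be checked up front (a sketch; either branch just prints one status line):

```shell
# Succeeds and prints the Tk version if this interpreter can import tkinter
python3 -c "import tkinter; print('Tk', tkinter.TkVersion)" 2>/dev/null \
  || echo "tkinter missing - install python-tk for your Python version"
```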

In case something goes wrong and you need to reinstall the virtual environment:

# Deactivate and remove the virtual environment
# (use .venv instead of venv if your setup created that folder)
deactivate
rm -rf venv

# Recreate the virtual environment
python -m venv venv
source venv/bin/activate

# Install the dependencies again
pip install -r requirements.txt

Run: If you don't have a GPU, you can run Deep-Live-Cam with python run.py. Note that the first run will download the required models (~300 MB).

GPU Acceleration

CUDA Execution Provider (Nvidia)

  1. Install CUDA Toolkit 12.8.0
  2. Install cuDNN v8.9.7 for CUDA 12.x (required for onnxruntime-gpu):
    • Download cuDNN v8.9.7 for CUDA 12.x
    • Make sure the cuDNN bin directory is in your system PATH
  3. Install dependencies:
pip install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
  4. Usage:
python run.py --execution-provider cuda
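If the CUDA run silently falls back to CPU, it helps to check which providers your installed onnxruntime build actually exposes. `get_available_providers()` is part of the public onnxruntime API; run this inside the activated venv:

```shell
# On a working CUDA setup the printed list should include 'CUDAExecutionProvider'
python -c "import onnxruntime as ort; print(ort.get_available_providers())" 2>/dev/null \
  || echo "onnxruntime is not installed in this environment"
```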

CoreML Execution Provider (Apple Silicon)

Apple Silicon (M1/M2/M3) specific installation:

  1. Make sure you've completed the macOS setup above using Python 3.10.
  2. Install dependencies:
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
  3. Usage (important: specify Python 3.10):
python3.10 run.py --execution-provider coreml

Important Notes for macOS:

  • You must use Python 3.10, not newer versions like 3.11 or 3.13
  • Always run with the python3.10 command, not just python, if you have multiple Python versions installed
  • If you get an error about _tkinter being missing, reinstall the tkinter package: brew reinstall python-tk@3.10
  • If you get model loading errors, check that your models are in the correct folder
  • If you encounter conflicts with other Python versions, consider uninstalling them:
    # List all installed Python versions
    brew list | grep python
    
    # Uninstall conflicting versions if needed
    brew uninstall --ignore-dependencies python@3.11 python@3.13
    
    # Keep only Python 3.10
    brew cleanup
    

CoreML Execution Provider (Apple Legacy)

  1. Install dependencies:
pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1
  2. Usage:
python run.py --execution-provider coreml

DirectML Execution Provider (Windows)

  1. Install dependencies:
pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1
  2. Usage:
python run.py --execution-provider directml

OpenVINO™ Execution Provider (Intel)

  1. Install dependencies:
pip uninstall onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino==1.15.0
  2. Usage:
python run.py --execution-provider openvino

Usage

1. Image/Video Mode

  • Execute python run.py.
  • Choose a source face image and a target image/video.
  • Click "Start".
  • The output will be saved in a directory named after the target video.

2. Webcam Mode

  • Execute python run.py.
  • Select a source face image.
  • Click "Live".
  • Wait for the preview to appear (10-30 seconds).
  • Use a screen capture tool like OBS to stream.
  • To change the face, select a new source image.

Tips and Tricks

Check out these helpful guides to get the most out of Deep-Live-Cam:

Visit our official blog for more tips and tutorials.

Command Line Arguments (Unmaintained)

options:
  -h, --help                                               show this help message and exit
  -s SOURCE_PATH, --source SOURCE_PATH                     select a source image
  -t TARGET_PATH, --target TARGET_PATH                     select a target image or video
  -o OUTPUT_PATH, --output OUTPUT_PATH                     select output file or directory
  --frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...]  frame processors (choices: face_swapper, face_enhancer, ...)
  --keep-fps                                               keep original fps
  --keep-audio                                             keep original audio
  --keep-frames                                            keep temporary frames
  --many-faces                                             process every face
  --map-faces                                              map source target faces
  --mouth-mask                                             mask the mouth region
  --video-encoder {libx264,libx265,libvpx-vp9}             adjust output video encoder
  --video-quality [0-51]                                   adjust output video quality
  --live-mirror                                            mirror the live camera display (as in a front-facing camera preview)
  --live-resizable                                         make the live camera window resizable
  --max-memory MAX_MEMORY                                  maximum amount of RAM in GB
  --execution-provider {cpu} [{cpu} ...]                   available execution provider (choices: cpu, ...)
  --execution-threads EXECUTION_THREADS                    number of execution threads
  -v, --version                                            show program's version number and exit

Looking for a CLI mode? Passing the -s/--source argument makes the program run in CLI mode.
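For example, a headless swap built only from the flags documented above (the file paths are placeholders):

```shell
# Swap face.jpg onto input.mp4 without opening the UI,
# keeping the original fps and audio of the target video
python run.py \
  --source face.jpg \
  --target input.mp4 \
  --output output.mp4 \
  --keep-fps --keep-audio \
  --execution-provider cpu
```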

Press

We are always open to criticism and ready to improve; that's why we didn't cherry-pick anything.

Credits

Stargazers

Contributions

Stars to the Moon 🚀

Star History Chart