Compare commits

12 commits: 25a961fc32 ... 585f58fbad

585f58fbad, ec6d7d2995, e791f2f18a, ea59ef7635, ba008e72eb, 348e6c424e, 5f2e54552c, d2794038f7, 521cad145d, 3d8af5180d, 2e617c9401, 37486f03e7

README.md (128 lines changed):
@@ -134,12 +134,57 @@ Place these files in the "**models**" folder.
 We highly recommend using a `venv` to avoid issues.
-For Windows:
-```bash
-python -m venv venv
-venv\Scripts\activate
-pip install -r requirements.txt
-```
+**For Windows:**
+
+It is highly recommended to use Python 3.10 for Windows for best compatibility with all features and dependencies.
+
+**Automated Setup (Recommended):**
+
+1. **Run the setup script:**
+
+   Double-click `setup_windows.bat` or run it from your command prompt:
+
+   ```batch
+   setup_windows.bat
+   ```
+
+   This script will:
+   * Check if Python is in your PATH.
+   * Warn if `ffmpeg` is not found (see "Manual Steps / Notes" below for ffmpeg help).
+   * Create a virtual environment named `.venv` (consistent with the macOS setup).
+   * Activate the virtual environment for the script's session.
+   * Upgrade pip.
+   * Install Python packages from `requirements.txt`.
+
+   Wait for the script to complete. It will pause at the end; press any key to close the window if you double-clicked it.
+
+2. **Run the application:**
+
+   After setup, use the provided `.bat` scripts to run the application. These scripts automatically activate the correct virtual environment:
+
+   * `run_windows.bat`: Runs the application with the CPU execution provider by default. This is a good starting point if you don't have a dedicated GPU or are unsure.
+   * `run-cuda.bat`: Runs with the CUDA (NVIDIA GPU) execution provider. Requires an NVIDIA GPU and the CUDA Toolkit (see the GPU Acceleration section).
+   * `run-directml.bat`: Runs with the DirectML (AMD/Intel GPU on Windows) execution provider.
+
+   Example: Double-click `run_windows.bat` to launch the UI, or run from a command prompt:
+
+   ```batch
+   run_windows.bat --source path\to\your_face.jpg --target path\to\video.mp4
+   ```
+
+**Manual Steps / Notes:**
+
+* **Python:** Ensure Python 3.10 is installed and added to your system's PATH. You can download it from [python.org](https://www.python.org/downloads/).
+* **ffmpeg:**
+  * `ffmpeg` is required for video processing. The `setup_windows.bat` script will warn if it's not found in your PATH.
+  * An easy way to install `ffmpeg` on Windows is to open PowerShell as Administrator and run:
+
+    ```powershell
+    Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')); choco install ffmpeg -y
+    ```
+
+  * Alternatively, download from [ffmpeg.org](https://ffmpeg.org/download.html), extract the files, and add the `bin` folder (containing `ffmpeg.exe`) to your system's PATH environment variable. The original README also linked to a [YouTube guide](https://www.youtube.com/watch?v=OlNWCpFdVMA) or `iex (irm ffmpeg.tc.ht)` via PowerShell.
+* **Visual Studio Runtimes:** If you encounter errors during `pip install` for packages that compile C code (e.g., some scientific computing or image processing libraries), you might need the [Visual Studio Build Tools (or Runtimes)](https://visualstudio.microsoft.com/visual-cpp-build-tools/). Ensure the "C++ build tools" (or similar) workload is selected during installation.
+* **Virtual Environment (Manual Alternative):** If you prefer to set up the virtual environment manually instead of using `setup_windows.bat`:
+
+  ```batch
+  python -m venv .venv
+  .venv\Scripts\activate.bat
+  python -m pip install --upgrade pip
+  python -m pip install -r requirements.txt
+  ```
+
+  (The new automated scripts use `.venv` as the folder name for consistency with the macOS setup.)
 For Linux:
 ```bash
 # Ensure you use the installed Python 3.10
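As a quick sanity check (a sketch, not part of the diff), you can confirm from Python that `ffmpeg` is reachable on PATH, which is essentially what `setup_windows.bat`'s warning amounts to:

```python
# Sketch: verify ffmpeg is on PATH (None means the setup script would warn).
import shutil

ffmpeg_path = shutil.which("ffmpeg")
print("ffmpeg found at:", ffmpeg_path)
```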
@@ -150,22 +195,64 @@ pip install -r requirements.txt
 **For macOS:**
-Apple Silicon (M1/M2/M3) requires specific setup:
-```bash
-# Install Python 3.10 (specific version is important)
-brew install python@3.10
-
-# Install tkinter package (required for the GUI)
-brew install python-tk@3.10
-
-# Create and activate virtual environment with Python 3.10
-python3.10 -m venv venv
-source venv/bin/activate
-
-# Install dependencies
-pip install -r requirements.txt
-```
+For a streamlined setup on macOS, use the provided shell scripts:
+
+1. **Make scripts executable:**
+
+   Open your terminal, navigate to the cloned `Deep-Live-Cam` directory, and run:
+
+   ```bash
+   chmod +x setup_mac.sh
+   chmod +x run_mac*.sh
+   ```
+
+2. **Run the setup script:**
+
+   This will check for Python 3.9+ and ffmpeg, create a virtual environment (`.venv`), and install the required Python packages.
+
+   ```bash
+   ./setup_mac.sh
+   ```
+
+   If you encounter issues with specific packages during `pip install` (especially libraries that compile C code, like some image processing libraries), you might need to install system libraries via Homebrew (e.g., `brew install jpeg libtiff ...`) or ensure the Xcode Command Line Tools are installed (`xcode-select --install`).
+
+3. **Activate the virtual environment (for manual runs):**
+
+   After setup, if you want to run commands manually or use developer tools from your terminal session:
+
+   ```bash
+   source .venv/bin/activate
+   ```
+
+   (To deactivate, simply type `deactivate` in the terminal.)
+
+4. **Run the application:**
+
+   Use the provided run scripts for convenience. These scripts automatically activate the virtual environment.
+
+   * `./run_mac.sh`: Runs the application with the CPU execution provider by default. This is a good starting point.
+   * `./run_mac_cpu.sh`: Explicitly uses the CPU execution provider.
+   * `./run_mac_coreml.sh`: Attempts to use the CoreML execution provider for potential hardware acceleration on Apple Silicon and Intel Macs.
+   * `./run_mac_mps.sh`: Attempts to use the MPS (Metal Performance Shaders) execution provider, primarily for Apple Silicon Macs.
+
+   Example of running with specific source/target arguments:
+
+   ```bash
+   ./run_mac.sh --source path/to/your_face.jpg --target path/to/video.mp4
+   ```
+
+   Or, to simply launch the UI:
+
+   ```bash
+   ./run_mac.sh
+   ```
+
+**Important Notes for macOS GPU Acceleration (CoreML/MPS):**
+
+* The `setup_mac.sh` script installs packages from `requirements.txt`, which typically includes a general CPU-based version of `onnxruntime`.
+* For optimal performance on Apple Silicon (M1/M2/M3) or specific GPU acceleration, you might need to install a different `onnxruntime` package *after* running `setup_mac.sh` and while the virtual environment (`.venv`) is active.
+* **Example for `onnxruntime-silicon` (often requires Python 3.10 for older versions like 1.13.1):**
+
+  The original README noted that `onnxruntime-silicon==1.13.1` was specific to Python 3.10. If you intend to use this exact version for CoreML:
+
+  ```bash
+  # Ensure you are using Python 3.10 if required by your chosen onnxruntime-silicon version
+  # After running setup_mac.sh and activating .venv:
+  # source .venv/bin/activate
+
+  pip uninstall onnxruntime onnxruntime-gpu  # Uninstall any existing onnxruntime
+  pip install onnxruntime-silicon==1.13.1   # Or your desired version
+
+  # Then use ./run_mac_coreml.sh
+  ```
+
+  Check the ONNX Runtime documentation for the latest recommended packages for Apple Silicon.
+* **For MPS with ONNX Runtime:** This may require a specific build or version of `onnxruntime`. Consult the ONNX Runtime documentation. For PyTorch-based operations (like the Face Enhancer or Hair Segmenter, if they were PyTorch-native and not ONNX), PyTorch should automatically try to use MPS on compatible Apple Silicon hardware if available.
+* **User Interface (Tkinter):** If you encounter errors related to `_tkinter` not being found when launching the UI, ensure your Python installation supports Tk. For Python installed via Homebrew, this is usually `python-tk` (e.g., `brew install python-tk@3.9` or `brew install python-tk@3.10`, matching your Python version).
 **In case something goes wrong and you need to reinstall the virtual environment**
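To see which accelerators an install actually exposes, a minimal check (a sketch, assuming `onnxruntime` and optionally `torch` are installed in the active `.venv`) is:

```python
# Sketch: list the ONNX Runtime execution providers the installed package
# exposes and, if PyTorch is present, whether the MPS backend is usable.
import onnxruntime as ort

print(ort.get_available_providers())  # e.g. ['CoreMLExecutionProvider', 'CPUExecutionProvider']

try:
    import torch
    # True on Apple Silicon when the installed PyTorch build supports Metal.
    print("MPS available:", torch.backends.mps.is_available())
except ImportError:
    print("PyTorch not installed; skipping MPS check.")
```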
@@ -188,7 +275,10 @@ pip install -r requirements.txt
 **CUDA Execution Provider (Nvidia)**
 1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
-2. Install dependencies:
+2. Install [cuDNN v8.9.7 for CUDA 11.x](https://developer.nvidia.com/rdp/cudnn-archive) (required for onnxruntime-gpu):
+   - Download cuDNN v8.9.7 for CUDA 11.x
+   - Make sure the cuDNN bin directory is in your system PATH
+3. Install dependencies:
 ```bash
 pip uninstall onnxruntime onnxruntime-gpu
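After installing the CUDA Toolkit and cuDNN, you can confirm that `onnxruntime-gpu` actually sees them (a sketch; if `CUDAExecutionProvider` is missing, the cuDNN `bin` directory is usually not on PATH):

```python
# Sketch: confirm the CUDA execution provider is available after setup.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)
if "CUDAExecutionProvider" not in providers:
    print("CUDA EP missing - re-check CUDA 11.8 / cuDNN v8.9.7 and your PATH.")
```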
modules/globals.py:

@@ -41,3 +41,4 @@ show_mouth_mask_box = False
 mask_feather_ratio = 8
 mask_down_size = 0.50
 mask_size = 1
+enable_hair_swapping = True  # Default state for enabling/disabling hair swapping
modules/hair_segmenter.py (new file):

@@ -0,0 +1,102 @@
+import torch
+import numpy as np
+from PIL import Image
+from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
+import cv2  # Imported for BGR to RGB conversion, though PIL can also do it.
+from typing import Optional
+
+# Global variables for caching
+HAIR_SEGMENTER_PROCESSOR = None
+HAIR_SEGMENTER_MODEL = None
+MODEL_NAME = "isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing"
+
+
+def segment_hair(image_np: np.ndarray, device: str = "cpu", hair_label_index: Optional[int] = None) -> np.ndarray:
+    """
+    Segments hair from an image.
+
+    Args:
+        image_np: NumPy array representing the image (BGR format from OpenCV).
+        device: Device to run the model on ("cpu" or "cuda").
+        hair_label_index: Optional; index of the hair label in the segmentation map.
+            If not provided, uses the model config or defaults to 2.
+
+    Returns:
+        NumPy array representing the binary hair mask.
+    """
+    global HAIR_SEGMENTER_PROCESSOR, HAIR_SEGMENTER_MODEL
+
+    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
+        print(f"Loading hair segmentation model and processor ({MODEL_NAME}) for the first time...")
+        try:
+            HAIR_SEGMENTER_PROCESSOR = SegformerImageProcessor.from_pretrained(MODEL_NAME)
+            HAIR_SEGMENTER_MODEL = SegformerForSemanticSegmentation.from_pretrained(MODEL_NAME)
+            HAIR_SEGMENTER_MODEL = HAIR_SEGMENTER_MODEL.to(device)
+            print(f"Hair segmentation model and processor loaded successfully. Model moved to device: {device}")
+        except Exception as e:
+            print(f"Failed to load hair segmentation model/processor: {e}")
+            return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)
+
+    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
+        print("Error: Hair segmentation models are not available.")
+        return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)
+
+    image_rgb = cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
+    image_pil = Image.fromarray(image_rgb)
+
+    inputs = HAIR_SEGMENTER_PROCESSOR(images=image_pil, return_tensors="pt")
+    if device == "cuda" and hasattr(HAIR_SEGMENTER_MODEL, "device") and HAIR_SEGMENTER_MODEL.device.type == "cuda":
+        inputs = {k: v.to("cuda") for k, v in inputs.items()}
+
+    with torch.no_grad():
+        outputs = HAIR_SEGMENTER_MODEL(**inputs)
+
+    logits = outputs.logits
+    upsampled_logits = torch.nn.functional.interpolate(
+        logits,
+        size=(image_np.shape[0], image_np.shape[1]),
+        mode='bilinear',
+        align_corners=False
+    )
+    segmentation_map = upsampled_logits.argmax(dim=1).squeeze().cpu().numpy().astype(np.uint8)
+
+    if hair_label_index is None:
+        hair_label_index = getattr(HAIR_SEGMENTER_MODEL, "hair_label_index", 2)
+    return np.where(segmentation_map == hair_label_index, 255, 0).astype(np.uint8)
+
+
+if __name__ == '__main__':
+    # This is a conceptual test. In a real scenario, you would load an image
+    # using OpenCV or Pillow, for example:
+    #   sample_image_np = cv2.imread("path/to/your/image.jpg")
+    #   if sample_image_np is not None:
+    #       hair_mask_output = segment_hair(sample_image_np)
+    #       cv2.imwrite("hair_mask_output.png", hair_mask_output)
+    #       print("Hair mask saved to hair_mask_output.png")
+    #   else:
+    #       print("Failed to load sample image.")
+
+    print("Conceptual test: Hair segmenter module created.")
+    # Create a dummy image for a basic test run if no image is available.
+    dummy_image_np = np.zeros((100, 100, 3), dtype=np.uint8)  # 100x100 BGR image
+    dummy_image_np[:, :, 1] = 255  # Make it green to distinguish from a black mask
+
+    try:
+        print("Running segment_hair with a dummy image...")
+        hair_mask_output = segment_hair(dummy_image_np)
+        print(f"segment_hair returned a mask of shape: {hair_mask_output.shape}")
+        # Check that the output is a 2D mask with the same H, W as the input
+        assert hair_mask_output.shape == (dummy_image_np.shape[0], dummy_image_np.shape[1])
+        # Check that the mask is binary (0 or 255)
+        assert np.all(np.isin(hair_mask_output, [0, 255]))
+        print("Dummy image test successful. Hair mask seems to be generated correctly.")
+
+        # Optionally save the dummy mask for visual confirmation:
+        # cv2.imwrite("dummy_hair_mask_output.png", hair_mask_output)
+    except ImportError as e:
+        print(f"An ImportError occurred: {e}. This might be due to missing dependencies like transformers, torch, or Pillow.")
+        print("Please ensure all required packages are installed by updating requirements.txt and installing them.")
+    except Exception as e:
+        print(f"An error occurred during the dummy image test: {e}")
+        print("This could be due to issues with model loading, processing, or other runtime errors.")
+
+    print("To perform a full test, replace the dummy image with a real image path.")
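One caveat in the module above: `getattr(HAIR_SEGMENTER_MODEL, "hair_label_index", 2)` will normally fall through to the default of 2, since the model object does not usually carry such an attribute. A more robust lookup (a sketch, assuming the checkpoint publishes an `id2label` mapping that contains a label literally named "hair") would read the index from the model config:

```python
# Sketch: derive the hair class index from the model config instead of
# relying on a hard-coded default. Assumes id2label contains a "hair" label.
def find_hair_label_index(model, default: int = 2) -> int:
    id2label = getattr(model.config, "id2label", None) or {}
    for idx, label in id2label.items():
        if str(label).strip().lower() == "hair":
            return int(idx)
    return default
```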
modules/processors/frame/face_swapper.py:

@@ -1,4 +1,4 @@
-from typing import Any, List
+from typing import Any, List, Optional, Tuple
 import cv2
 import insightface
 import threading
@@ -9,6 +9,7 @@ import modules.processors.frame.core
 from modules.core import update_status
 from modules.face_analyser import get_one_face, get_many_faces, default_source_face
 from modules.typing import Face, Frame
+from modules.hair_segmenter import segment_hair
 from modules.utilities import (
     conditional_download,
     is_image,
@@ -17,10 +18,20 @@ from modules.utilities import (
 from modules.cluster_analysis import find_closest_centroid
 import os

+# --- CONFIGURABLE PARAMETERS FOR PERFORMANCE & BLENDING ---
+BLEND_MASK_BLUR_KERNEL = (9, 9)  # Larger kernel for smoother mask edges
+BLEND_MASK_BLUR_SIGMA = 5  # Higher sigma for more feathering
+SEAMLESS_CLONE_MODE = cv2.NORMAL_CLONE  # Try cv2.MIXED_CLONE for a different effect
+PROFILE_FACE_SWAP = True  # Set to True to enable timing logs
+
 FACE_SWAPPER = None
 THREAD_LOCK = threading.Lock()
 NAME = "DLC.FACE-SWAPPER"

+# Add a face similarity threshold (90%) for live webcam swaps.
+# Only swap if the cosine similarity between source and detected face is above the threshold.
+FACE_SIMILARITY_THRESHOLD = 0.90  # Only swap if similarity > 90%
+
 abs_dir = os.path.dirname(os.path.abspath(__file__))
 models_dir = os.path.join(
     os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
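Since insightface's `normed_embedding` vectors are L2-normalized, the cosine similarity compared against this threshold reduces to a plain dot product; a minimal sketch of the gate applied later in `_process_live_target_v2`:

```python
# Sketch: with L2-normalized embeddings (insightface's normed_embedding),
# cosine similarity is simply the dot product of the two vectors.
import numpy as np

def passes_similarity_gate(source_embedding: np.ndarray,
                           target_embedding: np.ndarray,
                           threshold: float = 0.90) -> bool:
    return float(np.dot(source_embedding, target_embedding)) >= threshold
```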
@@ -61,44 +72,146 @@ def get_face_swapper() -> Any:
     with THREAD_LOCK:
         if FACE_SWAPPER is None:
             model_path = os.path.join(models_dir, "inswapper_128_fp16.onnx")
+            # Prefer GPU if available
+            providers = modules.globals.execution_providers
+            if 'CUDAExecutionProvider' in providers:
+                chosen_providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
+            else:
+                chosen_providers = providers
             FACE_SWAPPER = insightface.model_zoo.get_model(
-                model_path, providers=modules.globals.execution_providers
+                model_path, providers=chosen_providers
             )
     return FACE_SWAPPER


-def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
-    face_swapper = get_face_swapper()
-
-    # Apply the face swap
-    swapped_frame = face_swapper.get(
-        temp_frame, target_face, source_face, paste_back=True
-    )
-
-    if modules.globals.mouth_mask:
-        # Create a mask for the target face
-        face_mask = create_face_mask(target_face, temp_frame)
-
-        # Create the mouth mask
-        mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon = (
-            create_lower_mouth_mask(target_face, temp_frame)
-        )
-
-        # Apply the mouth area
-        swapped_frame = apply_mouth_area(
-            swapped_frame, mouth_cutout, mouth_box, face_mask, lower_lip_polygon
-        )
-
-        if modules.globals.show_mouth_mask_box:
-            mouth_mask_data = (mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon)
-            swapped_frame = draw_mouth_mask_visualization(
-                swapped_frame, target_face, mouth_mask_data
-            )
-
-    return swapped_frame
+def _prepare_warped_source_material_and_mask(
+    source_face_obj: Face,
+    source_frame_full: Frame,
+    matrix: np.ndarray,
+    dsize: tuple  # Built-in tuple is fine here for the parameter type
+) -> Tuple[Optional[Frame], Optional[Frame]]:
+    """
+    Prepares warped source material (full image) and a combined (face+hair) mask for blending.
+    Returns (None, None) if essential masks cannot be generated.
+    """
+    # Generate the hair mask
+    hair_only_mask_source_raw = segment_hair(source_frame_full)
+    if hair_only_mask_source_raw.ndim == 3 and hair_only_mask_source_raw.shape[2] == 3:
+        hair_only_mask_source_raw = cv2.cvtColor(hair_only_mask_source_raw, cv2.COLOR_BGR2GRAY)
+    _, hair_only_mask_source_binary = cv2.threshold(hair_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)
+
+    # Generate the face mask
+    face_only_mask_source_raw = create_face_mask(source_face_obj, source_frame_full)
+    _, face_only_mask_source_binary = cv2.threshold(face_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)
+
+    # Combine the face and hair masks
+    if face_only_mask_source_binary.shape != hair_only_mask_source_binary.shape:
+        logging.warning("Resizing hair mask to match face mask for source during preparation.")
+        hair_only_mask_source_binary = cv2.resize(
+            hair_only_mask_source_binary,
+            (face_only_mask_source_binary.shape[1], face_only_mask_source_binary.shape[0]),
+            interpolation=cv2.INTER_NEAREST
+        )
+
+    actual_combined_source_mask = cv2.bitwise_or(face_only_mask_source_binary, hair_only_mask_source_binary)
+    actual_combined_source_mask_blurred = cv2.GaussianBlur(actual_combined_source_mask, (5, 5), 3)
+
+    # Warp the combined mask and the full source material
+    warped_full_source_material = cv2.warpAffine(source_frame_full, matrix, dsize)
+    warped_combined_mask_temp = cv2.warpAffine(actual_combined_source_mask_blurred, matrix, dsize)
+    _, warped_combined_mask_binary_for_clone = cv2.threshold(warped_combined_mask_temp, 127, 255, cv2.THRESH_BINARY)
+
+    return warped_full_source_material, warped_combined_mask_binary_for_clone
+
+
+def _blend_material_onto_frame(
+    base_frame: Frame,
+    material_to_blend: Frame,
+    mask_for_blending: Frame
+) -> Frame:
+    """
+    Blends material onto a base frame using a mask.
+    Uses seamlessClone if possible, otherwise falls back to simple masking.
+    """
+    x, y, w, h = cv2.boundingRect(mask_for_blending)
+    output_frame = base_frame
+
+    if w > 0 and h > 0:
+        center = (x + w // 2, y + h // 2)
+        if material_to_blend.shape == base_frame.shape and \
+           material_to_blend.dtype == base_frame.dtype and \
+           mask_for_blending.dtype == np.uint8:
+            try:
+                # Use the configurable blur for the mask
+                blurred_mask = cv2.GaussianBlur(mask_for_blending, BLEND_MASK_BLUR_KERNEL, BLEND_MASK_BLUR_SIGMA)
+                _, mask_bin = cv2.threshold(blurred_mask, 127, 255, cv2.THRESH_BINARY)
+                output_frame = cv2.seamlessClone(material_to_blend, base_frame, mask_bin, center, SEAMLESS_CLONE_MODE)
+            except cv2.error as e:
+                logging.warning(f"cv2.seamlessClone failed: {e}. Falling back to simple blending.")
+                boolean_mask = mask_for_blending > 127
+                output_frame[boolean_mask] = material_to_blend[boolean_mask]
+        else:
+            logging.warning("Mismatch in shape/type for seamlessClone. Falling back to simple blending.")
+            boolean_mask = mask_for_blending > 127
+            output_frame[boolean_mask] = material_to_blend[boolean_mask]
+    else:
+        logging.info("Warped mask for blending is empty. Skipping blending.")
+    return output_frame
+
+
+def swap_face(source_face_obj: Face, target_face: Face, source_frame_full: Frame, temp_frame: Frame) -> Frame:
+    import time
+    face_swapper = get_face_swapper()
+    start_time = time.time() if PROFILE_FACE_SWAP else None
+    swapped_frame = face_swapper.get(temp_frame, target_face, source_face_obj, paste_back=True)
+    final_swapped_frame = swapped_frame
+
+    def do_hair_blending():
+        if not (source_face_obj.kps is not None and target_face.kps is not None and source_face_obj.kps.shape[0] >= 3 and target_face.kps.shape[0] >= 3):
+            logging.warning(
+                f"Skipping hair blending due to insufficient keypoints. "
+                f"Source kps: {source_face_obj.kps.shape if source_face_obj.kps is not None else 'None'}, "
+                f"Target kps: {target_face.kps.shape if target_face.kps is not None else 'None'}."
+            )
+            return swapped_frame
+        source_kps_float = source_face_obj.kps.astype(np.float32)
+        target_kps_float = target_face.kps.astype(np.float32)
+        matrix, _ = cv2.estimateAffinePartial2D(source_kps_float, target_kps_float, method=cv2.LMEDS)
+        if matrix is None:
+            logging.warning("Failed to estimate affine transformation matrix for hair. Skipping hair blending.")
+            return swapped_frame
+        dsize = (temp_frame.shape[1], temp_frame.shape[0])
+        warped_material, warped_mask = _prepare_warped_source_material_and_mask(
+            source_face_obj, source_frame_full, matrix, dsize
+        )
+        if warped_material is not None and warped_mask is not None:
+            out = swapped_frame.copy()
+            color_corrected_material = apply_color_transfer(warped_material, out)
+            return _blend_material_onto_frame(out, color_corrected_material, warped_mask)
+        return swapped_frame
+
+    def do_mouth_mask(frame):
+        out = frame.copy() if frame is swapped_frame else frame
+        face_mask = create_face_mask(target_face, temp_frame)
+        mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon = create_lower_mouth_mask(target_face, temp_frame)
+        out = apply_mouth_area(out, mouth_cutout, mouth_box, face_mask, lower_lip_polygon)
+        if modules.globals.show_mouth_mask_box:
+            mouth_mask_data = (mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon)
+            out = draw_mouth_mask_visualization(out, target_face, mouth_mask_data)
+        return out
+
+    if modules.globals.enable_hair_swapping:
+        final_swapped_frame = do_hair_blending()
+    if modules.globals.mouth_mask:
+        final_swapped_frame = do_mouth_mask(final_swapped_frame)
+
+    if PROFILE_FACE_SWAP:
+        elapsed = time.time() - start_time
+        logging.info(f"Face swap+blend time: {elapsed:.3f}s")
+    return final_swapped_frame


-def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
+def process_frame(source_face_obj: Face, source_frame_full: Frame, temp_frame: Frame) -> Frame:
     if modules.globals.color_correction:
         temp_frame = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)
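For readers unfamiliar with `cv2.seamlessClone`, here is a minimal standalone sketch of the Poisson-blending call that `_blend_material_onto_frame` wraps (file names are placeholders, not paths from this project):

```python
# Sketch of cv2.seamlessClone usage, independent of the diff above.
# src and dst must share dtype; the mask must be uint8 with the region to
# clone marked in white.
import cv2
import numpy as np

src = cv2.imread("source.png")      # material to paste (placeholder path)
dst = cv2.imread("background.png")  # frame to paste into (placeholder path)
mask = np.zeros(src.shape[:2], dtype=np.uint8)
mask[50:150, 50:150] = 255          # white region selects what gets cloned

x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)   # where the mask's bounding box lands in dst

blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)
```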
@@ -106,152 +219,211 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
     many_faces = get_many_faces(temp_frame)
     if many_faces:
         for target_face in many_faces:
-            if source_face and target_face:
-                temp_frame = swap_face(source_face, target_face, temp_frame)
+            if source_face_obj and target_face:
+                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
             else:
                 print("Face detection failed for target/source.")
     else:
         target_face = get_one_face(temp_frame)
-        if target_face and source_face:
-            temp_frame = swap_face(source_face, target_face, temp_frame)
+        if target_face and source_face_obj:
+            temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
         else:
             logging.error("Face detection failed for target or source.")
     return temp_frame
-def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
-    if is_image(modules.globals.target_path):
-        if modules.globals.many_faces:
-            source_face = default_source_face()
-            for map in modules.globals.source_target_map:
-                target_face = map["target"]["face"]
-                temp_frame = swap_face(source_face, target_face, temp_frame)
-        elif not modules.globals.many_faces:
-            for map in modules.globals.source_target_map:
-                if "source" in map:
-                    source_face = map["source"]["face"]
-                    target_face = map["target"]["face"]
-                    temp_frame = swap_face(source_face, target_face, temp_frame)
-    elif is_video(modules.globals.target_path):
-        if modules.globals.many_faces:
-            source_face = default_source_face()
-            for map in modules.globals.source_target_map:
-                target_frame = [
-                    f
-                    for f in map["target_faces_in_frame"]
-                    if f["location"] == temp_frame_path
-                ]
-                for frame in target_frame:
-                    for target_face in frame["faces"]:
-                        temp_frame = swap_face(source_face, target_face, temp_frame)
-        elif not modules.globals.many_faces:
-            for map in modules.globals.source_target_map:
-                if "source" in map:
-                    target_frame = [
-                        f
-                        for f in map["target_faces_in_frame"]
-                        if f["location"] == temp_frame_path
-                    ]
-                    source_face = map["source"]["face"]
-                    for frame in target_frame:
-                        for target_face in frame["faces"]:
-                            temp_frame = swap_face(source_face, target_face, temp_frame)
-    else:
-        detected_faces = get_many_faces(temp_frame)
-        if modules.globals.many_faces:
-            if detected_faces:
-                source_face = default_source_face()
-                for target_face in detected_faces:
-                    temp_frame = swap_face(source_face, target_face, temp_frame)
-        elif not modules.globals.many_faces:
-            if detected_faces:
-                if len(detected_faces) <= len(
-                    modules.globals.simple_map["target_embeddings"]
-                ):
-                    for detected_face in detected_faces:
-                        closest_centroid_index, _ = find_closest_centroid(
-                            modules.globals.simple_map["target_embeddings"],
-                            detected_face.normed_embedding,
-                        )
-                        temp_frame = swap_face(
-                            modules.globals.simple_map["source_faces"][
-                                closest_centroid_index
-                            ],
-                            detected_face,
-                            temp_frame,
-                        )
-                else:
-                    detected_faces_centroids = []
-                    for face in detected_faces:
-                        detected_faces_centroids.append(face.normed_embedding)
-                    i = 0
-                    for target_embedding in modules.globals.simple_map[
-                        "target_embeddings"
-                    ]:
-                        closest_centroid_index, _ = find_closest_centroid(
-                            detected_faces_centroids, target_embedding
-                        )
-                        temp_frame = swap_face(
-                            modules.globals.simple_map["source_faces"][i],
-                            detected_faces[closest_centroid_index],
-                            temp_frame,
-                        )
-                        i += 1
-    return temp_frame
+# process_frame_v2 needs to accept source_frame_full as well
+def _process_image_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
+    if modules.globals.many_faces:
+        source_face_obj = default_source_face()
+        if source_face_obj:
+            for map_item in modules.globals.source_target_map:
+                target_face = map_item["target"]["face"]
+                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
+    else:  # not many_faces
+        for map_item in modules.globals.source_target_map:
+            if "source" in map_item:
+                source_face_obj = map_item["source"]["face"]
+                target_face = map_item["target"]["face"]
+                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
+    return temp_frame
+def _process_video_target_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str) -> Frame:
+    if modules.globals.many_faces:
+        source_face_obj = default_source_face()
+        if source_face_obj:
+            for map_item in modules.globals.source_target_map:
+                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
+                for frame_data in target_frames_data:
+                    for target_face in frame_data.get("faces", []):
+                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
+    else:  # not many_faces
+        for map_item in modules.globals.source_target_map:
+            if "source" in map_item:
+                source_face_obj = map_item["source"]["face"]
+                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
+                for frame_data in target_frames_data:
+                    for target_face in frame_data.get("faces", []):
+                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
+    return temp_frame
+
+
+def _process_live_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
+    detected_faces = get_many_faces(temp_frame)
+    if not detected_faces:
+        return temp_frame
+
+    if modules.globals.many_faces:
+        if source_face_obj := default_source_face():
+            swapped_faces = set()
+            for target_face in detected_faces:
+                face_id = id(target_face)
+                if face_id in swapped_faces:
+                    continue
+                # Similarity check for many_faces mode
+                if hasattr(source_face_obj, 'normed_embedding') and hasattr(target_face, 'normed_embedding'):
+                    similarity = float(np.dot(source_face_obj.normed_embedding, target_face.normed_embedding))
+                    if similarity < FACE_SIMILARITY_THRESHOLD:
+                        continue  # Skip if not similar enough
+                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
+                swapped_faces.add(face_id)
+    else:  # not many_faces (apply simple_map logic)
+        if not modules.globals.simple_map or \
+           not modules.globals.simple_map.get("target_embeddings") or \
+           not modules.globals.simple_map.get("source_faces"):
+            logging.warning("Simple map is not configured correctly. Skipping face swap.")
+            return temp_frame
+
+        target_embeddings = modules.globals.simple_map["target_embeddings"]
+        source_faces_from_map = modules.globals.simple_map["source_faces"]
+
+        if len(detected_faces) <= len(target_embeddings):
+            for detected_face in detected_faces:
+                closest_centroid_index, _ = find_closest_centroid(target_embeddings, detected_face.normed_embedding)
+                if closest_centroid_index < len(source_faces_from_map):
+                    source_face_obj_from_map = source_faces_from_map[closest_centroid_index]
+                    # Similarity check for mapped faces
+                    if hasattr(source_face_obj_from_map, 'normed_embedding') and hasattr(detected_face, 'normed_embedding'):
+                        similarity = float(np.dot(source_face_obj_from_map.normed_embedding, detected_face.normed_embedding))
+                        if similarity < FACE_SIMILARITY_THRESHOLD:
+                            continue  # Skip if not similar enough
+                    temp_frame = swap_face(source_face_obj_from_map, detected_face, source_frame_full, temp_frame)
+                else:
+                    logging.warning(f"Centroid index {closest_centroid_index} out of bounds for source_faces_from_map.")
+        else:  # More detected faces than target embeddings in simple_map
+            detected_faces_embeddings = [face.normed_embedding for face in detected_faces]
+            for i, target_embedding in enumerate(target_embeddings):
+                if i < len(source_faces_from_map):
+                    closest_detected_face_index, _ = find_closest_centroid(detected_faces_embeddings, target_embedding)
+                    source_face_obj_from_map = source_faces_from_map[i]
+                    target_face_to_swap = detected_faces[closest_detected_face_index]
+                    # Similarity check for mapped faces
+                    if hasattr(source_face_obj_from_map, 'normed_embedding') and hasattr(target_face_to_swap, 'normed_embedding'):
+                        similarity = float(np.dot(source_face_obj_from_map.normed_embedding, target_face_to_swap.normed_embedding))
+                        if similarity < FACE_SIMILARITY_THRESHOLD:
+                            continue  # Skip if not similar enough
+                    temp_frame = swap_face(source_face_obj_from_map, target_face_to_swap, source_frame_full, temp_frame)
+                    # Optionally, remove the swapped detected face to prevent re-swapping if one source maps to multiple targets.
+                    # This depends on desired behavior. For now, simple independent mapping.
+                else:
+                    logging.warning(f"Index {i} out of bounds for source_faces_from_map in simple_map else case.")
+    return temp_frame
+
+
+def process_frame_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str = "") -> Frame:
+    if is_image(modules.globals.target_path):
+        return _process_image_target_v2(source_frame_full, temp_frame)
+    elif is_video(modules.globals.target_path):
+        return _process_video_target_v2(source_frame_full, temp_frame, temp_frame_path)
+    else:  # This is the live cam / generic case
+        return _process_live_target_v2(source_frame_full, temp_frame)
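The helpers above lean on `find_closest_centroid` from `modules.cluster_analysis`, which is not shown in this diff. Its assumed contract (a hypothetical sketch; the real implementation may differ) is to return the index of, and the centroid for, the embedding closest by cosine similarity:

```python
# Hypothetical sketch of find_closest_centroid's contract; the real
# implementation lives in modules.cluster_analysis and may differ.
import numpy as np

def find_closest_centroid_sketch(centroids, embedding):
    sims = [float(np.dot(np.asarray(c), np.asarray(embedding))) for c in centroids]
    best = int(np.argmax(sims))
    return best, centroids[best]
```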
 def process_frames(
     source_path: str, temp_frame_paths: List[str], progress: Any = None
 ) -> None:
+    source_img = cv2.imread(source_path)
+    if source_img is None:
+        logging.error(f"Failed to read source image from {source_path}")
+        return
+
     if not modules.globals.map_faces:
-        source_face = get_one_face(cv2.imread(source_path))
+        source_face_obj = get_one_face(source_img)  # Use source_img here
+        if not source_face_obj:
+            logging.error(f"No face detected in source image {source_path}")
+            return
         for temp_frame_path in temp_frame_paths:
             temp_frame = cv2.imread(temp_frame_path)
+            if temp_frame is None:
+                logging.warning(f"Failed to read temp_frame from {temp_frame_path}, skipping.")
+                continue
             try:
-                result = process_frame(source_face, temp_frame)
+                result = process_frame(source_face_obj, source_img, temp_frame)
                 cv2.imwrite(temp_frame_path, result)
             except Exception as exception:
-                print(exception)
+                logging.error(f"Error processing frame {temp_frame_path}: {exception}", exc_info=True)
                 pass
             if progress:
                 progress.update(1)
-    else:
+    else:  # This is for map_faces == True
+        # In map_faces=True, source_face is determined per mapping.
+        # process_frame_v2 will need source_frame_full for hair,
+        # which should be the original source_path image.
         for temp_frame_path in temp_frame_paths:
             temp_frame = cv2.imread(temp_frame_path)
+            if temp_frame is None:
+                logging.warning(f"Failed to read temp_frame from {temp_frame_path}, skipping.")
+                continue
             try:
-                result = process_frame_v2(temp_frame, temp_frame_path)
+                # Pass source_img (as source_frame_full) to process_frame_v2
+                result = process_frame_v2(source_img, temp_frame, temp_frame_path)
                 cv2.imwrite(temp_frame_path, result)
             except Exception as exception:
-                print(exception)
+                logging.error(f"Error processing frame {temp_frame_path} with map_faces: {exception}", exc_info=True)
                 pass
             if progress:
                 progress.update(1)
 def process_image(source_path: str, target_path: str, output_path: str) -> None:
+    source_img = cv2.imread(source_path)
+    if source_img is None:
+        logging.error(f"Failed to read source image from {source_path}")
+        return
+
+    target_frame = cv2.imread(target_path)
+    if target_frame is None:
+        logging.error(f"Failed to read target image from {target_path}")
+        return
+
+    # Read the original target frame once at the beginning
+    original_target_frame = cv2.imread(target_path)
+    if original_target_frame is None:
+        logging.error(f"Failed to read original target image from {target_path}")
+        return
+
+    result = None  # Initialize result
+
     if not modules.globals.map_faces:
-        source_face = get_one_face(cv2.imread(source_path))
-        target_frame = cv2.imread(target_path)
-        result = process_frame(source_face, target_frame)
-        cv2.imwrite(output_path, result)
-    else:
+        source_face_obj = get_one_face(source_img)  # Use source_img here
+        if not source_face_obj:
+            logging.error(f"No face detected in source image {source_path}")
+            return
+        result = process_frame(source_face_obj, source_img, original_target_frame)
+    else:  # map_faces is True
         if modules.globals.many_faces:
             update_status(
                 "Many faces enabled. Using first source image. Progressing...", NAME
             )
-        target_frame = cv2.imread(output_path)
-        result = process_frame_v2(target_frame)
+        # process_frame_v2 takes the original target frame for processing.
+        # target_path is passed as temp_frame_path for consistency with process_frame_v2's signature;
+        # it is used for map lookups in the video context but is less critical for single images.
+        result = process_frame_v2(source_img, original_target_frame, target_path)
+
+    if result is not None:
         cv2.imwrite(output_path, result)
+    else:
+        logging.error(f"Processing image {target_path} failed, result was None.")


 def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
modules/ui.py (172 lines changed):

@@ -105,6 +105,7 @@ def save_switch_states():
         "show_fps": modules.globals.show_fps,
         "mouth_mask": modules.globals.mouth_mask,
         "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
+        "enable_hair_swapping": modules.globals.enable_hair_swapping,
     }
     with open("switch_states.json", "w") as f:
         json.dump(switch_states, f)
@@ -129,6 +130,9 @@ def load_switch_states():
         modules.globals.show_mouth_mask_box = switch_states.get(
             "show_mouth_mask_box", False
         )
+        modules.globals.enable_hair_swapping = switch_states.get(
+            "enable_hair_swapping", True  # Default to True if not found
+        )
     except FileNotFoundError:
         # If the file doesn't exist, use default values
         pass
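With this change, `switch_states.json` gains one more key. A sketch of reading the persisted flag back (key names taken from `save_switch_states` above; other keys elided):

```python
# Sketch: the file written by save_switch_states is plain JSON, e.g.
# {"show_fps": false, "mouth_mask": false, "show_mouth_mask_box": false,
#  "enable_hair_swapping": true, ...}
import json

with open("switch_states.json") as f:
    states = json.load(f)
print(states.get("enable_hair_swapping", True))  # same default as load_switch_states
```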
@@ -284,6 +288,22 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
     )
     show_fps_switch.place(relx=0.6, rely=0.75)

+    # Hair Swapping Switch (placed below "Show FPS" in the right column)
+    segmentation_model_available = getattr(modules.globals, "segmentation_model_available", True)
+    hair_swapping_value = ctk.BooleanVar(value=modules.globals.enable_hair_swapping)
+    hair_swapping_switch = ctk.CTkSwitch(
+        root,
+        text=_("Swap Hair"),
+        variable=hair_swapping_value,
+        cursor="hand2",
+        command=lambda: (
+            setattr(modules.globals, "enable_hair_swapping", hair_swapping_value.get()),
+            save_switch_states(),
+        ),
+        state="normal" if segmentation_model_available else "disabled"
+    )
+    hair_swapping_switch.place(relx=0.6, rely=0.80)
+
     mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
     mouth_mask_switch = ctk.CTkSwitch(
         root,
@@ -306,24 +326,26 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
     )
     show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)

+    # Adjusting placement of Start, Stop, Preview buttons due to the new switch
     start_button = ctk.CTkButton(
         root, text=_("Start"), cursor="hand2", command=lambda: analyze_target(start, root)
     )
-    start_button.place(relx=0.15, rely=0.80, relwidth=0.2, relheight=0.05)
+    start_button.place(relx=0.15, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

     stop_button = ctk.CTkButton(
         root, text=_("Destroy"), cursor="hand2", command=lambda: destroy()
     )
-    stop_button.place(relx=0.4, rely=0.80, relwidth=0.2, relheight=0.05)
+    stop_button.place(relx=0.4, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

     preview_button = ctk.CTkButton(
         root, text=_("Preview"), cursor="hand2", command=lambda: toggle_preview()
     )
-    preview_button.place(relx=0.65, rely=0.80, relwidth=0.2, relheight=0.05)
+    preview_button.place(relx=0.65, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

     # --- Camera Selection ---
+    # Adjusting placement of camera selection due to the new switch
     camera_label = ctk.CTkLabel(root, text=_("Select Camera:"))
-    camera_label.place(relx=0.1, rely=0.86, relwidth=0.2, relheight=0.05)
+    camera_label.place(relx=0.1, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91

     available_cameras = get_available_cameras()
     camera_indices, camera_names = available_cameras

@@ -342,7 +364,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
         root, variable=camera_variable, values=camera_names
     )
-    camera_optionmenu.place(relx=0.35, rely=0.86, relwidth=0.25, relheight=0.05)
+    camera_optionmenu.place(relx=0.35, rely=0.91, relwidth=0.25, relheight=0.05)  # rely from 0.86 to 0.91

     live_button = ctk.CTkButton(
         root,

@@ -362,16 +384,16 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
         else "disabled"
         ),
     )
-    live_button.place(relx=0.65, rely=0.86, relwidth=0.2, relheight=0.05)
+    live_button.place(relx=0.65, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91
     # --- End Camera Selection ---

     status_label = ctk.CTkLabel(root, text=None, justify="center")
-    status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
+    status_label.place(relx=0.1, rely=0.96, relwidth=0.8)  # rely from 0.9 to 0.96

     donate_label = ctk.CTkLabel(
         root, text="Deep Live Cam", justify="center", cursor="hand2"
     )
-    donate_label.place(relx=0.1, rely=0.95, relwidth=0.8)
+    donate_label.place(relx=0.1, rely=0.99, relwidth=0.8)  # rely from 0.95 to 0.99
     donate_label.configure(
         text_color=ctk.ThemeManager.theme.get("URL").get("text_color")
     )
@@ -880,7 +902,102 @@ def create_webcam_preview(camera_index: int):
     PREVIEW.deiconify()

     frame_processors = get_frame_processors_modules(modules.globals.frame_processors)
-    source_image = None
+
+    # --- Source Image Loading and Validation (moved before the loop) ---
+    source_face_obj_for_cam = None
+    source_frame_full_for_cam = None
+    source_frame_full_for_cam_map_faces = None
+
+    if not modules.globals.map_faces:
+        if not modules.globals.source_path:
+            update_status("Error: No source image selected for webcam mode.")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+        if not os.path.exists(modules.globals.source_path):
+            update_status(f"Error: Source image not found at {modules.globals.source_path}")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+        source_frame_full_for_cam = cv2.imread(modules.globals.source_path)
+        if source_frame_full_for_cam is None:
+            update_status(f"Error: Could not read source image at {modules.globals.source_path}")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+        source_face_obj_for_cam = get_one_face(source_frame_full_for_cam)
+        if source_face_obj_for_cam is None:
+            update_status(f"Error: No face detected in source image {modules.globals.source_path}")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+    else:  # modules.globals.map_faces is True
+        if not modules.globals.source_path:
+            update_status("Error: No global source image selected (for hair/background in map_faces mode).")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+        if not os.path.exists(modules.globals.source_path):
+            update_status(f"Error: Source image (for hair/background) not found at {modules.globals.source_path}")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+        source_frame_full_for_cam_map_faces = cv2.imread(modules.globals.source_path)
+        if source_frame_full_for_cam_map_faces is None:
+            update_status(f"Error: Could not read source image (for hair/background) at {modules.globals.source_path}")
+            cap.release()
+            PREVIEW.withdraw()
+            def wait_for_withdraw():
+                if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
+                    ROOT.update_idletasks()
+                    ROOT.update()
+                    PREVIEW.after(50, wait_for_withdraw)
+            wait_for_withdraw()
+            return
+
+        if not modules.globals.source_target_map and not modules.globals.simple_map:
+            update_status("Warning: No face map defined for map_faces mode. Swapper may not work as expected.")
+            # This is a warning, not a fatal error for the preview window itself; processing will continue.
+            # No persistent loop here, as it's a warning about functionality, not a critical load error.
+    # --- End Source Image Loading ---

     prev_time = time.time()
     fps_update_interval = 0.5
     frame_count = 0
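The five error paths added above differ only in their message; a possible consolidation (a refactoring sketch, not part of the diff; `update_status`, `PREVIEW`, and `ROOT` are the module-level names used in `ui.py`) would be:

```python
# Refactoring sketch (hypothetical helper, not in the diff): one shared error
# path for the webcam preview, parameterized by the status message.
def _abort_webcam_preview(message: str, cap) -> None:
    update_status(message)
    cap.release()
    PREVIEW.withdraw()

    def wait_for_withdraw():
        # Keep pumping Tk events until the preview window is really gone.
        if PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
            ROOT.update_idletasks()
            ROOT.update()
            PREVIEW.after(50, wait_for_withdraw)

    wait_for_withdraw()
```

Each `update_status(...); cap.release(); ...; return` block would then reduce to `_abort_webcam_preview(message, cap); return`.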
@ -907,23 +1024,28 @@ def create_webcam_preview(camera_index: int):
|
||||||
)

if not modules.globals.map_faces:
    # Case 1: map_faces is False - source_face_obj_for_cam and source_frame_full_for_cam are pre-loaded
    if source_face_obj_for_cam and source_frame_full_for_cam is not None:  # Check if valid after pre-loading
        for frame_processor in frame_processors:
            if frame_processor.NAME == "DLC.FACE-ENHANCER":
                if modules.globals.fp_ui["face_enhancer"]:
                    temp_frame = frame_processor.process_frame(None, temp_frame)
            else:
                temp_frame = frame_processor.process_frame(source_face_obj_for_cam, source_frame_full_for_cam, temp_frame)
    # If source image was invalid (e.g. no face), source_face_obj_for_cam might be None.
    # In this case, the frame processors that need it will be skipped, effectively just showing the raw webcam frame.
    # The error message is already persistent due to the pre-loop check.
else:
    # Case 2: map_faces is True - source_frame_full_for_cam_map_faces is pre-loaded
    if source_frame_full_for_cam_map_faces is not None:  # Check if valid after pre-loading
        modules.globals.target_path = None  # Standard for live mode
        for frame_processor in frame_processors:
            if frame_processor.NAME == "DLC.FACE-ENHANCER":
                if modules.globals.fp_ui["face_enhancer"]:
                    temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
            else:
                temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
    # If source_frame_full_for_cam_map_faces was invalid, error is persistent from pre-loop check.

# Calculate and display FPS
current_time = time.time()
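The counters initialised before the loop (`prev_time`, `fps_update_interval`, `frame_count`) and the `current_time` read here imply FPS bookkeeping along these lines; a sketch only, since the actual overlay call sits outside this hunk (the `cv2.putText` styling is an assumption):

```python
frame_count += 1
if current_time - prev_time >= fps_update_interval:
    # Average FPS over the elapsed interval, then reset the window.
    fps = frame_count / (current_time - prev_time)
    frame_count = 0
    prev_time = current_time
    # Assumed overlay: draw the rate onto the frame about to be shown.
    cv2.putText(temp_frame, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
```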
@@ -50,7 +50,48 @@ class VideoCapturer:
                continue
        else:
            # Unix-like systems (Linux/Mac) capture method
            backend = getattr(self, "camera_backend", None)
            if backend is None:
                import os
                backend_env = os.environ.get("VIDEO_CAPTURE_BACKEND")
                if backend_env is not None:
                    try:
                        backend = int(backend_env)
                    except ValueError:
                        backend = getattr(cv2, backend_env, None)
            if platform.system() == "Darwin":  # macOS
                tried_backends = []
                if backend is not None:
                    print(f"INFO: Attempting to use user-specified backend {backend} for macOS camera.")
                    self.cap = cv2.VideoCapture(self.device_index, backend)
                    tried_backends.append(backend)
                else:
                    print("INFO: Attempting to use cv2.CAP_AVFOUNDATION for macOS camera.")
                    self.cap = cv2.VideoCapture(self.device_index, cv2.CAP_AVFOUNDATION)
                    tried_backends.append(cv2.CAP_AVFOUNDATION)
                if not self.cap or not self.cap.isOpened():
                    print("WARN: First backend failed to open camera. Trying cv2.CAP_QT for macOS.")
                    if self.cap:
                        self.cap.release()
                    if cv2.CAP_QT not in tried_backends:
                        self.cap = cv2.VideoCapture(self.device_index, cv2.CAP_QT)
                        tried_backends.append(cv2.CAP_QT)
                    if not self.cap or not self.cap.isOpened():
                        print("WARN: cv2.CAP_QT failed to open camera. Trying default backend for macOS.")
                        if self.cap:
                            self.cap.release()
                        self.cap = cv2.VideoCapture(self.device_index)  # Fallback to default
            else:  # Other Unix-like systems (e.g., Linux)
                if backend is not None:
                    print(f"INFO: Attempting to use user-specified backend {backend} for camera.")
                    self.cap = cv2.VideoCapture(self.device_index, backend)
                    if not self.cap or not self.cap.isOpened():
                        print("WARN: User-specified backend failed. Trying default backend.")
                        if self.cap:
                            self.cap.release()
                        self.cap = cv2.VideoCapture(self.device_index)
                else:
                    self.cap = cv2.VideoCapture(self.device_index)

        if not self.cap or not self.cap.isOpened():
            raise RuntimeError("Failed to open camera")
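Each step of the fallback chain above (user-specified backend, then `cv2.CAP_AVFOUNDATION`, then `cv2.CAP_QT`, then the default) repeats the same try, release, retry pattern. A condensed sketch of that pattern (`open_with_fallbacks` is an illustrative name, not part of this change):

```python
import cv2

def open_with_fallbacks(index: int, backends: list) -> "cv2.VideoCapture":
    # Try each backend in order; release any handle that failed to open,
    # then fall back to OpenCV's default backend selection.
    for be in backends:
        cap = cv2.VideoCapture(index, be)
        if cap.isOpened():
            return cap
        cap.release()
    return cv2.VideoCapture(index)

# e.g. on macOS: open_with_fallbacks(0, [cv2.CAP_AVFOUNDATION, cv2.CAP_QT])
```

Note that `VIDEO_CAPTURE_BACKEND` accepts either a numeric backend ID or the name of a `cv2` constant (e.g. `CAP_V4L2`), since values that fail `int()` fall through to `getattr(cv2, backend_env, None)`.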
@@ -19,3 +19,4 @@ onnxruntime-gpu==1.17; sys_platform != 'darwin'
tensorflow; sys_platform != 'darwin'
opennsfw2==0.10.2
protobuf==4.23.2
transformers>=4.0.0
17
run-cuda.bat

@@ -1 +1,16 @@
@echo off
set VENV_DIR=.venv

:: Check if virtual environment exists
if not exist "%VENV_DIR%\Scripts\activate.bat" (
    echo Virtual environment '%VENV_DIR%' not found.
    echo Please run setup_windows.bat first.
    pause
    exit /b 1
)

echo Activating virtual environment...
call "%VENV_DIR%\Scripts\activate.bat"

echo Starting the application with CUDA execution provider...
python run.py --execution-provider cuda %*
@@ -1 +1,16 @@
@echo off
set VENV_DIR=.venv

:: Check if virtual environment exists
if not exist "%VENV_DIR%\Scripts\activate.bat" (
    echo Virtual environment '%VENV_DIR%' not found.
    echo Please run setup_windows.bat first.
    pause
    exit /b 1
)

echo Activating virtual environment...
call "%VENV_DIR%\Scripts\activate.bat"

echo Starting the application with DirectML execution provider...
python run.py --execution-provider dml %*
@@ -0,0 +1,20 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

# Check if virtual environment exists
if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first to create the environment and install dependencies."
    exit 1
fi

echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

echo "Starting the application with CPU execution provider..."
# Passes all arguments passed to this script (e.g., --source, --target) to run.py
python3 run.py --execution-provider cpu "$@"

# Deactivate after script finishes (optional, as shell context closes)
# deactivate
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with CoreML execution provider..."
python3 run.py --execution-provider coreml "$@"
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with CPU execution provider..."
python3 run.py --execution-provider cpu "$@"
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with MPS execution provider (for Apple Silicon)..."
python3 run.py --execution-provider mps "$@"
@@ -0,0 +1,20 @@
@echo off
set VENV_DIR=.venv

:: Check if virtual environment exists
if not exist "%VENV_DIR%\Scripts\activate.bat" (
    echo Virtual environment '%VENV_DIR%' not found.
    echo Please run setup_windows.bat first to create the environment and install dependencies.
    pause
    exit /b 1
)

echo Activating virtual environment...
call "%VENV_DIR%\Scripts\activate.bat"

echo Starting the application with CPU execution provider...
:: Passes all arguments passed to this script to run.py
python run.py --execution-provider cpu %*

:: Optional: Deactivate after script finishes
:: call deactivate
@@ -0,0 +1,81 @@
#!/usr/bin/env bash

# Exit immediately if a command exits with a non-zero status.
set -e

echo "Starting macOS setup..."

# 1. Check for Python 3
echo "Checking for Python 3..."
if ! command -v python3 &> /dev/null
then
    echo "Python 3 could not be found. Please install Python 3."
    echo "You can often install it using Homebrew: brew install python"
    exit 1
fi

# 2. Check Python version (>= 3.9)
# Note: with `set -e`, a bare failing command would abort the script before a
# separate `$?` test could run, so the check is written as an `if !` guard.
echo "Checking Python 3 version..."
if ! python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 9) else 1)'; then
    echo "Python 3.9 or higher is required."
    echo "Your version is: $(python3 --version)"
    echo "Please upgrade your Python version. Consider using pyenv or Homebrew to manage Python versions."
    exit 1
fi
echo "Python 3.9+ found: $(python3 --version)"

# 3. Check for ffmpeg
echo "Checking for ffmpeg..."
if ! command -v ffmpeg &> /dev/null
then
    echo "WARNING: ffmpeg could not be found. This program requires ffmpeg for video processing."
    echo "You can install it using Homebrew: brew install ffmpeg"
    echo "Continuing with setup, but video processing might fail later."
else
    echo "ffmpeg found: $(ffmpeg -version | head -n 1)"
fi

# 4. Define virtual environment directory
VENV_DIR=".venv"

# 5. Create virtual environment
if [ -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' already exists. Skipping creation."
else
    echo "Creating virtual environment in '$VENV_DIR'..."
    python3 -m venv "$VENV_DIR"
fi

# 6. Activate virtual environment (for this script's session)
echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

# 7. Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip

# 8. Install requirements
echo "Installing requirements from requirements.txt..."
if [ -f "requirements.txt" ]; then
    pip install -r requirements.txt
else
    echo "ERROR: requirements.txt not found. Cannot install dependencies."
    # Deactivate on error if desired, or leave active for user to debug
    # deactivate
    exit 1
fi

echo ""
echo "Setup complete!"
echo ""
echo "To activate the virtual environment in your terminal, run:"
echo "  source $VENV_DIR/bin/activate"
echo ""
echo "After activating, you can run the application using:"
echo "  python3 run.py [arguments]"
echo "Or use one of the run_mac_*.sh scripts (e.g., ./run_mac_cpu.sh)."
echo ""

# Deactivate at the end of the script's execution (optional, as script session ends)
# deactivate
@@ -0,0 +1,79 @@
@echo off
echo Starting Windows setup...

:: 1. Check for Python
echo Checking for Python...
python --version >nul 2>&1
if errorlevel 1 (
    echo Python could not be found in your PATH.
    echo Please install Python 3 ^(3.10 or higher recommended^) and ensure it's added to your PATH.
    echo You can download Python from https://www.python.org/downloads/
    pause
    exit /b 1
)

:: Optional: Check Python version (e.g., >= 3.9 or >= 3.10).
:: This is a bit more complex in pure batch. For now, rely on the user having a modern Python 3.
:: The README recommends 3.10.
echo Found Python:
python --version

:: 2. Check for ffmpeg (informational)
echo Checking for ffmpeg...
ffmpeg -version >nul 2>&1
if errorlevel 1 (
    echo WARNING: ffmpeg could not be found in your PATH. This program requires ffmpeg for video processing.
    echo Please download ffmpeg from https://ffmpeg.org/download.html and add it to your system's PATH.
    echo ^(The README.md contains a link for a potentially easier ffmpeg install method using a PowerShell command^)
    echo Continuing with setup, but video processing might fail later.
    pause
) else (
    echo ffmpeg found.
)

:: 3. Define virtual environment directory
set VENV_DIR=.venv

:: 4. Create virtual environment
if exist "%VENV_DIR%\Scripts\activate.bat" (
    echo Virtual environment '%VENV_DIR%' already exists. Skipping creation.
) else (
    echo Creating virtual environment in '%VENV_DIR%'...
    python -m venv "%VENV_DIR%"
    if errorlevel 1 (
        echo Failed to create virtual environment. Please check your Python installation.
        pause
        exit /b 1
    )
)

:: 5. Activate virtual environment (for this script's session)
echo Activating virtual environment...
call "%VENV_DIR%\Scripts\activate.bat"

:: 6. Upgrade pip
echo Upgrading pip...
python -m pip install --upgrade pip

:: 7. Install requirements
echo Installing requirements from requirements.txt...
if exist "requirements.txt" (
    python -m pip install -r requirements.txt
) else (
    echo ERROR: requirements.txt not found. Cannot install dependencies.
    pause
    exit /b 1
)

echo.
echo Setup complete!
echo.
echo To activate the virtual environment in your command prompt, run:
echo   %VENV_DIR%\Scripts\activate.bat
echo.
echo After activating, you can run the application using:
echo   python run.py [arguments]
echo Or use one of the run-*.bat scripts (e.g., run-cuda.bat, run_windows.bat).
echo.
pause
exit /b 0
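The version check that the comments in step 1 defer ("a bit more complex in pure batch") could be delegated to Python itself and tested with `if errorlevel 1`, mirroring the ffmpeg check above. An illustrative gate, not part of the committed script:

```python
# Invoked as: python -c "import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)"
# Exits 0 for Python 3.10+, 1 otherwise, so a batch caller can branch on errorlevel.
import sys

sys.exit(0 if sys.version_info >= (3, 10) else 1)
```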