rehanbgmi 2025-05-25 22:37:31 +05:30 committed by GitHub
commit ff0608292d
11 changed files with 719 additions and 152 deletions

View File

@@ -150,22 +150,64 @@ pip install -r requirements.txt
**For macOS:**

For a streamlined setup on macOS, use the provided shell scripts:

1. **Make scripts executable:**
   Open your terminal, navigate to the cloned `Deep-Live-Cam` directory, and run:
   ```bash
   chmod +x setup_mac.sh
   chmod +x run_mac*.sh
   ```

2. **Run the setup script:**
   This checks for Python 3.9+ and ffmpeg, creates a virtual environment (`.venv`), and installs the required Python packages.
   ```bash
   ./setup_mac.sh
   ```
   If `pip install` fails for specific packages (especially libraries that compile C code, such as some image-processing libraries), you may need to install system libraries via Homebrew (e.g., `brew install jpeg libtiff ...`) or ensure the Xcode Command Line Tools are installed (`xcode-select --install`).

3. **Activate the virtual environment (for manual runs):**
   After setup, if you want to run commands manually or use developer tools from your terminal session:
   ```bash
   source .venv/bin/activate
   ```
   (To deactivate, simply type `deactivate` in the terminal.)
4. **Run the application:**
   Use the provided run scripts for convenience; they activate the virtual environment automatically.
   * `./run_mac.sh`: Runs the application with the CPU execution provider by default. This is a good starting point.
   * `./run_mac_cpu.sh`: Explicitly uses the CPU execution provider.
   * `./run_mac_coreml.sh`: Attempts to use the CoreML execution provider for potential hardware acceleration on Apple Silicon and Intel Macs (see the provider check after this step).
   * `./run_mac_mps.sh`: Attempts to use the MPS (Metal Performance Shaders) execution provider, primarily for Apple Silicon Macs.

   Example of running with specific source/target arguments:
   ```bash
   ./run_mac.sh --source path/to/your_face.jpg --target path/to/video.mp4
   ```
   Or, to simply launch the UI:
   ```bash
   ./run_mac.sh
   ```
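
To confirm which execution providers your installed `onnxruntime` build actually exposes (a quick check; assumes the virtual environment is active):
```bash
python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```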
**Important Notes for macOS GPU Acceleration (CoreML/MPS):**
* The `setup_mac.sh` script installs packages from `requirements.txt`, which typically includes a general CPU-based version of `onnxruntime`.
* For optimal performance on Apple Silicon (M1/M2/M3) or specific GPU acceleration, you might need to install a different `onnxruntime` package *after* running `setup_mac.sh` and while the virtual environment (`.venv`) is active.
* **Example for `onnxruntime-silicon` (often requires Python 3.10 for older versions like 1.13.1):**
The original `README` noted that `onnxruntime-silicon==1.13.1` was specific to Python 3.10. If you intend to use this exact version for CoreML:
```bash
# Ensure you are using Python 3.10 if required by your chosen onnxruntime-silicon version
# After running setup_mac.sh and activating .venv:
# source .venv/bin/activate
pip uninstall onnxruntime onnxruntime-gpu # Uninstall any existing onnxruntime
pip install onnxruntime-silicon==1.13.1 # Or your desired version
# Then use ./run_mac_coreml.sh
```
Check the ONNX Runtime documentation for the latest recommended packages for Apple Silicon.
* **For MPS with ONNX Runtime:** This may require a specific build or version of `onnxruntime`; consult the ONNX Runtime documentation. For PyTorch-based operations (such as the Face Enhancer or Hair Segmenter, where these run in native PyTorch rather than ONNX), PyTorch will automatically try to use MPS on compatible Apple Silicon hardware when it is available.
* **User Interface (Tkinter):** If you encounter errors about `_tkinter` not being found when launching the UI, ensure your Python installation supports Tk. For Python installed via Homebrew, this usually means installing `python-tk` (e.g., `brew install python-tk@3.9` or `brew install python-tk@3.10`, matching your Python version). Two quick sanity checks follow this list.
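
Quick sanity checks for the MPS and Tkinter notes above (assuming PyTorch 1.12+ and a Tk-enabled Python are installed in the active `.venv`):
```bash
# Check whether PyTorch can see the MPS backend
python3 -c "import torch; print('MPS available:', torch.backends.mps.is_available())"

# Check Tk support: this opens a small test window if _tkinter works
python3 -m tkinter
```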
**In case something goes wrong and you need to reinstall the virtual environment:**
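
A minimal recovery sketch, assuming the default `.venv` location created by `setup_mac.sh`:
```bash
# Remove the broken environment and run setup again from scratch
rm -rf .venv
./setup_mac.sh
```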

View File

@@ -41,3 +41,4 @@ show_mouth_mask_box = False
mask_feather_ratio = 8
mask_down_size = 0.50
mask_size = 1
enable_hair_swapping = True # Default state for enabling/disabling hair swapping

View File

@@ -0,0 +1,110 @@
import torch
import numpy as np
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
import cv2  # Imported for BGR to RGB conversion, though PIL can also do it.

# Global variables for caching
HAIR_SEGMENTER_PROCESSOR = None
HAIR_SEGMENTER_MODEL = None
MODEL_NAME = "isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing"

def segment_hair(image_np: np.ndarray) -> np.ndarray:
    """
    Segments hair from an image.

    Args:
        image_np: NumPy array representing the image (BGR format from OpenCV).

    Returns:
        NumPy array representing the binary hair mask.
    """
    global HAIR_SEGMENTER_PROCESSOR, HAIR_SEGMENTER_MODEL

    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
        print(f"Loading hair segmentation model and processor ({MODEL_NAME}) for the first time...")
        try:
            HAIR_SEGMENTER_PROCESSOR = SegformerImageProcessor.from_pretrained(MODEL_NAME)
            HAIR_SEGMENTER_MODEL = SegformerForSemanticSegmentation.from_pretrained(MODEL_NAME)
            # Optional: Move model to GPU if available and if other models use GPU
            # if torch.cuda.is_available():
            #     HAIR_SEGMENTER_MODEL = HAIR_SEGMENTER_MODEL.to('cuda')
            #     print("Hair segmentation model moved to GPU.")
            print("Hair segmentation model and processor loaded successfully.")
        except Exception as e:
            print(f"Failed to load hair segmentation model/processor: {e}")
            # Return an empty mask compatible with the expected output shape (H, W)
            return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)

    # Ensure processor and model are loaded before proceeding
    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
        print("Error: Hair segmentation models are not available.")
        return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)

    # Convert BGR (OpenCV) to RGB (PIL)
    image_rgb = cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
    image_pil = Image.fromarray(image_rgb)

    inputs = HAIR_SEGMENTER_PROCESSOR(images=image_pil, return_tensors="pt")
    # Optional: Move inputs to GPU if the model is on GPU
    # if HAIR_SEGMENTER_MODEL.device.type == 'cuda':
    #     inputs = inputs.to(HAIR_SEGMENTER_MODEL.device)

    with torch.no_grad():  # Important for inference
        outputs = HAIR_SEGMENTER_MODEL(**inputs)

    logits = outputs.logits  # Shape: batch_size, num_labels, height, width

    # Upsample logits to the original image size
    upsampled_logits = torch.nn.functional.interpolate(
        logits,
        size=(image_np.shape[0], image_np.shape[1]),  # H, W
        mode='bilinear',
        align_corners=False
    )
    segmentation_map = upsampled_logits.argmax(dim=1).squeeze().cpu().numpy().astype(np.uint8)

    # Label 2 is for hair in this model
    return np.where(segmentation_map == 2, 255, 0).astype(np.uint8)

if __name__ == '__main__':
    # This is a conceptual test. In a real scenario, you would load an image
    # using OpenCV or Pillow, for example:
    #     sample_image_np = cv2.imread("path/to/your/image.jpg")
    #     if sample_image_np is not None:
    #         hair_mask_output = segment_hair(sample_image_np)
    #         cv2.imwrite("hair_mask_output.png", hair_mask_output)
    #         print("Hair mask saved to hair_mask_output.png")
    #     else:
    #         print("Failed to load sample image.")
    print("Conceptual test: Hair segmenter module created.")

    # Create a dummy image for a basic test run if no image is available.
    dummy_image_np = np.zeros((100, 100, 3), dtype=np.uint8)  # 100x100 BGR image
    dummy_image_np[:, :, 1] = 255  # Make it green to distinguish from a black mask

    try:
        print("Running segment_hair with a dummy image...")
        hair_mask_output = segment_hair(dummy_image_np)
        print(f"segment_hair returned a mask of shape: {hair_mask_output.shape}")

        # Check that the output is a 2D mask with the same H, W as the input
        assert hair_mask_output.shape == (dummy_image_np.shape[0], dummy_image_np.shape[1])
        # Check that the mask is binary (0 or 255)
        assert np.all(np.isin(hair_mask_output, [0, 255]))
        print("Dummy image test successful. Hair mask seems to be generated correctly.")

        # Optionally save the dummy mask for visual confirmation:
        # cv2.imwrite("dummy_hair_mask_output.png", hair_mask_output)
    except ImportError as e:
        print(f"An ImportError occurred: {e}. This might be due to missing dependencies like transformers, torch, or Pillow.")
        print("Please ensure all required packages are installed (see requirements.txt).")
    except Exception as e:
        print(f"An error occurred during the dummy image test: {e}")
        print("This could be due to issues with model loading, processing, or other runtime errors.")

    print("To perform a full test, replace the dummy image with a real image path.")

View File

@@ -9,6 +9,7 @@ import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face
from modules.typing import Face, Frame
from modules.hair_segmenter import segment_hair
from modules.utilities import (
    conditional_download,
    is_image,
@@ -67,15 +68,133 @@ def get_face_swapper() -> Any:
    return FACE_SWAPPER

def _prepare_warped_source_material_and_mask(
    source_face_obj: Face,
    source_frame_full: Frame,
    matrix: np.ndarray,
    dsize: tuple
) -> tuple[Frame | None, Frame | None]:
    """
    Prepares warped source material (full image) and a combined (face+hair) mask for blending.
    Returns (None, None) if essential masks cannot be generated.
    """
    # Generate Hair Mask
    hair_only_mask_source_raw = segment_hair(source_frame_full)
    if hair_only_mask_source_raw.ndim == 3 and hair_only_mask_source_raw.shape[2] == 3:
        hair_only_mask_source_raw = cv2.cvtColor(hair_only_mask_source_raw, cv2.COLOR_BGR2GRAY)
    _, hair_only_mask_source_binary = cv2.threshold(hair_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)

    # Generate Face Mask
    face_only_mask_source_raw = create_face_mask(source_face_obj, source_frame_full)
    _, face_only_mask_source_binary = cv2.threshold(face_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)

    # Combine Face and Hair Masks
    if face_only_mask_source_binary.shape != hair_only_mask_source_binary.shape:
        logging.warning("Resizing hair mask to match face mask for source during preparation.")
        hair_only_mask_source_binary = cv2.resize(
            hair_only_mask_source_binary,
            (face_only_mask_source_binary.shape[1], face_only_mask_source_binary.shape[0]),
            interpolation=cv2.INTER_NEAREST
        )
    actual_combined_source_mask = cv2.bitwise_or(face_only_mask_source_binary, hair_only_mask_source_binary)
    actual_combined_source_mask_blurred = cv2.GaussianBlur(actual_combined_source_mask, (5, 5), 3)

    # Warp the Combined Mask and Full Source Material
    warped_full_source_material = cv2.warpAffine(source_frame_full, matrix, dsize)
    warped_combined_mask_temp = cv2.warpAffine(actual_combined_source_mask_blurred, matrix, dsize)
    _, warped_combined_mask_binary_for_clone = cv2.threshold(warped_combined_mask_temp, 127, 255, cv2.THRESH_BINARY)

    return warped_full_source_material, warped_combined_mask_binary_for_clone
def _blend_material_onto_frame(
    base_frame: Frame,
    material_to_blend: Frame,
    mask_for_blending: Frame
) -> Frame:
    """
    Blends material onto a base frame using a mask.
    Uses seamlessClone if possible, otherwise falls back to simple masking.
    """
    x, y, w, h = cv2.boundingRect(mask_for_blending)
    output_frame = base_frame  # Start with the base; it is modified by blending below

    if w > 0 and h > 0:
        center = (x + w // 2, y + h // 2)
        if material_to_blend.shape == base_frame.shape and \
           material_to_blend.dtype == base_frame.dtype and \
           mask_for_blending.dtype == np.uint8:
            try:
                # Note: seamlessClone modifies its dst argument in place when dst is the same
                # object as the output variable. If base_frame must stay pristine, pass a copy.
                # Here the caller already passes a copy (final_swapped_frame), so this is fine.
                output_frame = cv2.seamlessClone(material_to_blend, base_frame, mask_for_blending, center, cv2.NORMAL_CLONE)
            except cv2.error as e:
                logging.warning(f"cv2.seamlessClone failed: {e}. Falling back to simple blending.")
                boolean_mask = mask_for_blending > 127
                output_frame[boolean_mask] = material_to_blend[boolean_mask]
        else:
            logging.warning("Mismatch in shape/type for seamlessClone. Falling back to simple blending.")
            boolean_mask = mask_for_blending > 127
            output_frame[boolean_mask] = material_to_blend[boolean_mask]
    else:
        logging.info("Warped mask for blending is empty. Skipping blending.")

    return output_frame
def swap_face(source_face_obj: Face, target_face: Face, source_frame_full: Frame, temp_frame: Frame) -> Frame:
    face_swapper = get_face_swapper()

    # Apply the base face swap
    swapped_frame = face_swapper.get(temp_frame, target_face, source_face_obj, paste_back=True)
    final_swapped_frame = swapped_frame  # Initialize with the base swap; a copy is made only if needed.

    if modules.globals.enable_hair_swapping:
        if not (source_face_obj.kps is not None and
                target_face.kps is not None and
                source_face_obj.kps.shape[0] >= 3 and
                target_face.kps.shape[0] >= 3):
            logging.warning(
                f"Skipping hair blending due to insufficient keypoints. "
                f"Source kps: {source_face_obj.kps.shape if source_face_obj.kps is not None else 'None'}, "
                f"Target kps: {target_face.kps.shape if target_face.kps is not None else 'None'}."
            )
        else:
            source_kps_float = source_face_obj.kps.astype(np.float32)
            target_kps_float = target_face.kps.astype(np.float32)
            matrix, _ = cv2.estimateAffinePartial2D(source_kps_float, target_kps_float, method=cv2.LMEDS)

            if matrix is None:
                logging.warning("Failed to estimate affine transformation matrix for hair. Skipping hair blending.")
            else:
                dsize = (temp_frame.shape[1], temp_frame.shape[0])  # width, height
                warped_material, warped_mask = _prepare_warped_source_material_and_mask(
                    source_face_obj, source_frame_full, matrix, dsize
                )
                if warped_material is not None and warped_mask is not None:
                    # Make a copy only now that we are sure we will modify it for hair.
                    final_swapped_frame = swapped_frame.copy()
                    # Use final_swapped_frame for color context
                    color_corrected_material = apply_color_transfer(warped_material, final_swapped_frame)
                    final_swapped_frame = _blend_material_onto_frame(
                        final_swapped_frame,
                        color_corrected_material,
                        warped_mask
                    )

    # Mouth Mask Logic (operates on final_swapped_frame)
    if modules.globals.mouth_mask:
        # If final_swapped_frame was not copied for hair, copy it now before mouth-mask modification.
        if final_swapped_frame is swapped_frame:  # Check whether it is still the same object
            final_swapped_frame = swapped_frame.copy()

        # Create a mask for the target face
        face_mask = create_face_mask(target_face, temp_frame)
@@ -85,20 +204,21 @@ def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
        )

        # Apply the mouth area to final_swapped_frame (whether or not hair blending happened)
        final_swapped_frame = apply_mouth_area(
            final_swapped_frame, mouth_cutout, mouth_box, face_mask, lower_lip_polygon
        )

        if modules.globals.show_mouth_mask_box:
            mouth_mask_data = (mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon)
            final_swapped_frame = draw_mouth_mask_visualization(
                final_swapped_frame, target_face, mouth_mask_data
            )

    return final_swapped_frame
def process_frame(source_face_obj: Face, source_frame_full: Frame, temp_frame: Frame) -> Frame:
    if modules.globals.color_correction:
        temp_frame = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)
@@ -106,152 +226,192 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
    many_faces = get_many_faces(temp_frame)
    if many_faces:
        for target_face in many_faces:
            if source_face_obj and target_face:
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
            else:
                logging.error("Face detection failed for target/source.")
    else:
        target_face = get_one_face(temp_frame)
        if target_face and source_face_obj:
            temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
        else:
            logging.error("Face detection failed for target or source.")
    return temp_frame
# process_frame_v2 needs to accept source_frame_full as well
def _process_image_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for map_item in modules.globals.source_target_map:
                target_face = map_item["target"]["face"]
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces
        for map_item in modules.globals.source_target_map:
            if "source" in map_item:
                source_face_obj = map_item["source"]["face"]
                target_face = map_item["target"]["face"]
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    return temp_frame

def _process_video_target_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str) -> Frame:
    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for map_item in modules.globals.source_target_map:
                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
                for frame_data in target_frames_data:
                    for target_face in frame_data.get("faces", []):
                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces
        for map_item in modules.globals.source_target_map:
            if "source" in map_item:
                source_face_obj = map_item["source"]["face"]
                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
                for frame_data in target_frames_data:
                    for target_face in frame_data.get("faces", []):
                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    return temp_frame

def _process_live_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
    detected_faces = get_many_faces(temp_frame)
    if not detected_faces:
        return temp_frame

    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for target_face in detected_faces:
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces (apply simple_map logic)
        if not modules.globals.simple_map or \
           not modules.globals.simple_map.get("target_embeddings") or \
           not modules.globals.simple_map.get("source_faces"):
            logging.warning("Simple map is not configured correctly. Skipping face swap.")
            return temp_frame

        target_embeddings = modules.globals.simple_map["target_embeddings"]
        source_faces_from_map = modules.globals.simple_map["source_faces"]

        if len(detected_faces) <= len(target_embeddings):
            for detected_face in detected_faces:
                closest_centroid_index, _ = find_closest_centroid(target_embeddings, detected_face.normed_embedding)
                if closest_centroid_index < len(source_faces_from_map):
                    source_face_obj_from_map = source_faces_from_map[closest_centroid_index]
                    temp_frame = swap_face(source_face_obj_from_map, detected_face, source_frame_full, temp_frame)
                else:
                    logging.warning(f"Centroid index {closest_centroid_index} out of bounds for source_faces_from_map.")
        else:  # More detected faces than target embeddings in simple_map
            detected_faces_embeddings = [face.normed_embedding for face in detected_faces]
            for i, target_embedding in enumerate(target_embeddings):
                if i < len(source_faces_from_map):
                    closest_detected_face_index, _ = find_closest_centroid(detected_faces_embeddings, target_embedding)
                    source_face_obj_from_map = source_faces_from_map[i]
                    target_face_to_swap = detected_faces[closest_detected_face_index]
                    temp_frame = swap_face(source_face_obj_from_map, target_face_to_swap, source_frame_full, temp_frame)
                    # Optionally, remove the swapped detected face to prevent re-swapping if one
                    # source maps to multiple targets. For now, simple independent mapping.
                else:
                    logging.warning(f"Index {i} out of bounds for source_faces_from_map in simple_map else case.")
    return temp_frame

def process_frame_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str = "") -> Frame:
    if is_image(modules.globals.target_path):
        return _process_image_target_v2(source_frame_full, temp_frame)
    elif is_video(modules.globals.target_path):
        return _process_video_target_v2(source_frame_full, temp_frame, temp_frame_path)
    else:  # Live cam / generic case
        return _process_live_target_v2(source_frame_full, temp_frame)
def process_frames(
    source_path: str, temp_frame_paths: List[str], progress: Any = None
) -> None:
    source_img = cv2.imread(source_path)
    if source_img is None:
        logging.error(f"Failed to read source image from {source_path}")
        return

    if not modules.globals.map_faces:
        source_face_obj = get_one_face(source_img)  # Reuse the already-loaded source image
        if not source_face_obj:
            logging.error(f"No face detected in source image {source_path}")
            return
        for temp_frame_path in temp_frame_paths:
            temp_frame = cv2.imread(temp_frame_path)
            if temp_frame is None:
                logging.warning(f"Failed to read temp_frame from {temp_frame_path}, skipping.")
                continue
            try:
                result = process_frame(source_face_obj, source_img, temp_frame)
                cv2.imwrite(temp_frame_path, result)
            except Exception as exception:
                logging.error(f"Error processing frame {temp_frame_path}: {exception}", exc_info=True)
            if progress:
                progress.update(1)
    else:  # map_faces == True
        # With map_faces, the source face is determined per mapping; process_frame_v2 still
        # needs source_frame_full (the original source_path image) for hair blending.
        for temp_frame_path in temp_frame_paths:
            temp_frame = cv2.imread(temp_frame_path)
            if temp_frame is None:
                logging.warning(f"Failed to read temp_frame from {temp_frame_path}, skipping.")
                continue
            try:
                result = process_frame_v2(source_img, temp_frame, temp_frame_path)
                cv2.imwrite(temp_frame_path, result)
            except Exception as exception:
                logging.error(f"Error processing frame {temp_frame_path} with map_faces: {exception}", exc_info=True)
            if progress:
                progress.update(1)
def process_image(source_path: str, target_path: str, output_path: str) -> None:
    source_img = cv2.imread(source_path)
    if source_img is None:
        logging.error(f"Failed to read source image from {source_path}")
        return

    # Read the original target frame once at the beginning
    original_target_frame = cv2.imread(target_path)
    if original_target_frame is None:
        logging.error(f"Failed to read original target image from {target_path}")
        return

    result = None  # Initialize result

    if not modules.globals.map_faces:
        source_face_obj = get_one_face(source_img)  # Reuse the already-loaded source image
        if not source_face_obj:
            logging.error(f"No face detected in source image {source_path}")
            return
        result = process_frame(source_face_obj, source_img, original_target_frame)
    else:  # map_faces is True
        if modules.globals.many_faces:
            update_status(
                "Many faces enabled. Using first source image. Progressing...", NAME
            )
        # process_frame_v2 takes the original target frame for processing. target_path is
        # passed as temp_frame_path for consistency with process_frame_v2's signature; it is
        # used for map lookups in the video context but is less critical for single images.
        result = process_frame_v2(source_img, original_target_frame, target_path)

    if result is not None:
        cv2.imwrite(output_path, result)
    else:
        logging.error(f"Processing image {target_path} failed, result was None.")
def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
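
A minimal, self-contained sketch of the keypoint-driven warp-and-blend path implemented by `swap_face` and its helpers above. The frames, landmarks, and mask here are dummy values for illustration only:
```python
import cv2
import numpy as np

# Hypothetical 100x100 source/target frames and 5-point landmarks.
src_img = np.full((100, 100, 3), 200, dtype=np.uint8)
dst_img = np.full((100, 100, 3), 60, dtype=np.uint8)
src_kps = np.array([[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]], dtype=np.float32)
dst_kps = src_kps + 5  # pretend the target face is shifted slightly

# Estimate a partial affine transform (rotation + uniform scale + translation).
matrix, _ = cv2.estimateAffinePartial2D(src_kps, dst_kps, method=cv2.LMEDS)

# Warp the source material into target coordinates.
h, w = dst_img.shape[:2]
warped = cv2.warpAffine(src_img, matrix, (w, h))
mask = np.zeros((h, w), dtype=np.uint8)
mask[20:90, 20:90] = 255  # stand-in for the warped face+hair mask

# Blend with seamlessClone; fall back to a hard paste if it fails.
x, y, bw, bh = cv2.boundingRect(mask)
center = (x + bw // 2, y + bh // 2)
try:
    out = cv2.seamlessClone(warped, dst_img, mask, center, cv2.NORMAL_CLONE)
except cv2.error:
    out = dst_img.copy()
    out[mask > 127] = warped[mask > 127]
```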

View File

@@ -105,6 +105,7 @@ def save_switch_states():
        "show_fps": modules.globals.show_fps,
        "mouth_mask": modules.globals.mouth_mask,
        "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
        "enable_hair_swapping": modules.globals.enable_hair_swapping,
    }
    with open("switch_states.json", "w") as f:
        json.dump(switch_states, f)
@@ -129,6 +130,9 @@ def load_switch_states():
        modules.globals.show_mouth_mask_box = switch_states.get(
            "show_mouth_mask_box", False
        )
        modules.globals.enable_hair_swapping = switch_states.get(
            "enable_hair_swapping", True  # Default to True if not found
        )
    except FileNotFoundError:
        # If the file doesn't exist, use default values
        pass
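
A minimal standalone sketch of the persistence pattern in the two hunks above (the real functions handle many more switches; the names here are illustrative):
```python
import json

def save_state(enable_hair_swapping: bool, path: str = "switch_states.json") -> None:
    # Persist the switch value as JSON, mirroring save_switch_states above.
    with open(path, "w") as f:
        json.dump({"enable_hair_swapping": enable_hair_swapping}, f)

def load_state(path: str = "switch_states.json") -> bool:
    # A missing file or missing key both fall back to the default (True).
    try:
        with open(path) as f:
            return json.load(f).get("enable_hair_swapping", True)
    except FileNotFoundError:
        return True
```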
@@ -284,6 +288,20 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    )
    show_fps_switch.place(relx=0.6, rely=0.75)

    # Hair Swapping Switch (placed below "Show FPS" in the right column)
    hair_swapping_value = ctk.BooleanVar(value=modules.globals.enable_hair_swapping)
    hair_swapping_switch = ctk.CTkSwitch(
        root,
        text=_("Swap Hair"),
        variable=hair_swapping_value,
        cursor="hand2",
        command=lambda: (
            setattr(modules.globals, "enable_hair_swapping", hair_swapping_value.get()),
            save_switch_states(),
        ),
    )
    hair_swapping_switch.place(relx=0.6, rely=0.80)  # rely adjusted from 0.75 to 0.80

    mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
    mouth_mask_switch = ctk.CTkSwitch(
        root,
@@ -306,24 +324,26 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    )
    show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)
    # Start, Stop, and Preview buttons moved down to make room for the new switch
    start_button = ctk.CTkButton(
        root, text=_("Start"), cursor="hand2", command=lambda: analyze_target(start, root)
    )
    start_button.place(relx=0.15, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

    stop_button = ctk.CTkButton(
        root, text=_("Destroy"), cursor="hand2", command=lambda: destroy()
    )
    stop_button.place(relx=0.4, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

    preview_button = ctk.CTkButton(
        root, text=_("Preview"), cursor="hand2", command=lambda: toggle_preview()
    )
    preview_button.place(relx=0.65, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85
    # --- Camera Selection ---
    # Moved down to make room for the new switch
    camera_label = ctk.CTkLabel(root, text=_("Select Camera:"))
    camera_label.place(relx=0.1, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91

    available_cameras = get_available_cameras()
    camera_indices, camera_names = available_cameras
@@ -342,7 +362,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
        root, variable=camera_variable, values=camera_names
    )
    camera_optionmenu.place(relx=0.35, rely=0.91, relwidth=0.25, relheight=0.05)  # rely from 0.86 to 0.91
    live_button = ctk.CTkButton(
        root,
@@ -362,16 +382,16 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
            else "disabled"
        ),
    )
    live_button.place(relx=0.65, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91
    # --- End Camera Selection ---

    status_label = ctk.CTkLabel(root, text=None, justify="center")
    status_label.place(relx=0.1, rely=0.96, relwidth=0.8)  # rely from 0.9 to 0.96

    donate_label = ctk.CTkLabel(
        root, text="Deep Live Cam", justify="center", cursor="hand2"
    )
    donate_label.place(relx=0.1, rely=0.99, relwidth=0.8)  # rely from 0.95 to 0.99
    donate_label.configure(
        text_color=ctk.ThemeManager.theme.get("URL").get("text_color")
    )
@@ -880,7 +900,94 @@ def create_webcam_preview(camera_index: int):
    PREVIEW.deiconify()

    frame_processors = get_frame_processors_modules(modules.globals.frame_processors)

    # --- Source Image Loading and Validation (moved before the loop) ---
    source_face_obj_for_cam = None
    source_frame_full_for_cam = None
    source_frame_full_for_cam_map_faces = None

    if not modules.globals.map_faces:
        if not modules.globals.source_path:
            update_status("Error: No source image selected for webcam mode.")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
        if not os.path.exists(modules.globals.source_path):
            update_status(f"Error: Source image not found at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_frame_full_for_cam = cv2.imread(modules.globals.source_path)
        if source_frame_full_for_cam is None:
            update_status(f"Error: Could not read source image at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_face_obj_for_cam = get_one_face(source_frame_full_for_cam)
        if source_face_obj_for_cam is None:
            update_status(f"Error: No face detected in source image {modules.globals.source_path}")
            # Less critical than a failed read, but handled the same way so the error
            # message stays visible until the preview window is closed.
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
    else:  # modules.globals.map_faces is True
        if not modules.globals.source_path:
            update_status("Error: No global source image selected (for hair/background in map_faces mode).")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
        if not os.path.exists(modules.globals.source_path):
            update_status(f"Error: Source image (for hair/background) not found at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_frame_full_for_cam_map_faces = cv2.imread(modules.globals.source_path)
        if source_frame_full_for_cam_map_faces is None:
            update_status(f"Error: Could not read source image (for hair/background) at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        if not modules.globals.source_target_map and not modules.globals.simple_map:
            update_status("Warning: No face map defined for map_faces mode. Swapper may not work as expected.")
            # A warning about functionality, not a critical load error; processing continues
            # and no persistent error loop is entered here.
    # --- End Source Image Loading ---

    prev_time = time.time()
    fps_update_interval = 0.5
    frame_count = 0
@@ -907,23 +1014,29 @@
        )

        if not modules.globals.map_faces:
            # Case 1: map_faces is False; source_face_obj_for_cam and
            # source_frame_full_for_cam were pre-loaded before the loop.
            if source_face_obj_for_cam and source_frame_full_for_cam is not None:
                for frame_processor in frame_processors:
                    if frame_processor.NAME == "DLC.FACE-ENHANCER":
                        if modules.globals.fp_ui["face_enhancer"]:
                            temp_frame = frame_processor.process_frame(None, temp_frame)
                    else:
                        temp_frame = frame_processor.process_frame(source_face_obj_for_cam, source_frame_full_for_cam, temp_frame)
            # If the source image was invalid (e.g. no face), source_face_obj_for_cam may be
            # None; the processors that need it are skipped and the raw webcam frame is shown.
            # The error message is already persistent thanks to the pre-loop check.
        else:
            # Case 2: map_faces is True; source_frame_full_for_cam_map_faces was pre-loaded.
            if source_frame_full_for_cam_map_faces is not None:
                modules.globals.target_path = None  # Standard for live mode
                for frame_processor in frame_processors:
                    if frame_processor.NAME == "DLC.FACE-ENHANCER":
                        if modules.globals.fp_ui["face_enhancer"]:
                            temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
                    else:
                        temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
            # If source_frame_full_for_cam_map_faces was invalid, the error is persistent
            # from the pre-loop check.

        # Calculate and display FPS
        current_time = time.time()

View File

@@ -19,3 +19,4 @@ onnxruntime-gpu==1.17; sys_platform != 'darwin'
tensorflow; sys_platform != 'darwin'
opennsfw2==0.10.2
protobuf==4.23.2
transformers>=4.0.0

run_mac.sh 100644
View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

# Check if the virtual environment exists
if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first to create the environment and install dependencies."
    exit 1
fi

echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

echo "Starting the application with CPU execution provider..."
# Pass all arguments given to this script (e.g., --source, --target) through to run.py
python3 run.py --execution-provider cpu "$@"

# Deactivate after the script finishes (optional, as the shell context closes)
# deactivate

run_mac_coreml.sh 100644
View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"

echo "Starting the application with CoreML execution provider..."
python3 run.py --execution-provider coreml "$@"

run_mac_cpu.sh 100644
View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"

echo "Starting the application with CPU execution provider..."
python3 run.py --execution-provider cpu "$@"

run_mac_mps.sh 100644
View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"

echo "Starting the application with MPS execution provider (for Apple Silicon)..."
python3 run.py --execution-provider mps "$@"

setup_mac.sh 100644
View File

@@ -0,0 +1,81 @@
#!/usr/bin/env bash

# Exit immediately if a command exits with a non-zero status.
set -e

echo "Starting macOS setup..."

# 1. Check for Python 3
echo "Checking for Python 3..."
if ! command -v python3 &> /dev/null; then
    echo "Python 3 could not be found. Please install Python 3."
    echo "You can often install it using Homebrew: brew install python"
    exit 1
fi

# 2. Check Python version (>= 3.9)
# Note: the check is wrapped in `if !` so that a failing version test does not
# abort the script under `set -e` before the error message can be printed.
echo "Checking Python 3 version..."
if ! python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 9) else 1)'; then
    echo "Python 3.9 or higher is required."
    echo "Your version is: $(python3 --version)"
    echo "Please upgrade your Python version. Consider using pyenv or Homebrew to manage Python versions."
    exit 1
fi
echo "Python 3.9+ found: $(python3 --version)"

# 3. Check for ffmpeg
echo "Checking for ffmpeg..."
if ! command -v ffmpeg &> /dev/null; then
    echo "WARNING: ffmpeg could not be found. This program requires ffmpeg for video processing."
    echo "You can install it using Homebrew: brew install ffmpeg"
    echo "Continuing with setup, but video processing might fail later."
else
    echo "ffmpeg found: $(ffmpeg -version | head -n 1)"
fi

# 4. Define the virtual environment directory
VENV_DIR=".venv"

# 5. Create the virtual environment
if [ -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' already exists. Skipping creation."
else
    echo "Creating virtual environment in '$VENV_DIR'..."
    python3 -m venv "$VENV_DIR"
fi

# 6. Activate the virtual environment (for this script's session)
echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

# 7. Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip

# 8. Install requirements
echo "Installing requirements from requirements.txt..."
if [ -f "requirements.txt" ]; then
    pip install -r requirements.txt
else
    echo "ERROR: requirements.txt not found. Cannot install dependencies."
    exit 1
fi

echo ""
echo "Setup complete!"
echo ""
echo "To activate the virtual environment in your terminal, run:"
echo "  source $VENV_DIR/bin/activate"
echo ""
echo "After activating, you can run the application using:"
echo "  python3 run.py [arguments]"
echo "Or use one of the run_mac_*.sh scripts (e.g., ./run_mac_cpu.sh)."
echo ""