Compare commits

8 Commits: 615c9f05a9 ... ff0608292d

Commits (SHA1): ff0608292d, 2e617c9401, 28109e93bb, fc312516e3, 72049f3e91, 6cb5de01f8, 0bcf340217, 994a63c546

README.md (66 changes)
@@ -150,22 +150,64 @@ pip install -r requirements.txt

 **For macOS:**

-Apple Silicon (M1/M2/M3) requires specific setup:
+For a streamlined setup on macOS, use the provided shell scripts:

-```bash
-# Install Python 3.10 (specific version is important)
-brew install python@3.10
-
-# Install tkinter package (required for the GUI)
-brew install python-tk@3.10
-
-# Create and activate virtual environment with Python 3.10
-python3.10 -m venv venv
-source venv/bin/activate
-
-# Install dependencies
-pip install -r requirements.txt
-```
+1. **Make the scripts executable:**
+   Open your terminal, navigate to the cloned `Deep-Live-Cam` directory, and run:
+
+   ```bash
+   chmod +x setup_mac.sh
+   chmod +x run_mac*.sh
+   ```
+
+2. **Run the setup script:**
+   This checks for Python 3.9+ and ffmpeg, creates a virtual environment (`.venv`), and installs the required Python packages.
+
+   ```bash
+   ./setup_mac.sh
+   ```
+
+   If you encounter issues with specific packages during `pip install` (especially libraries that compile C code, such as some image-processing libraries), you may need to install system libraries via Homebrew (e.g., `brew install jpeg libtiff ...`) or ensure the Xcode Command Line Tools are installed (`xcode-select --install`).
+
+3. **Activate the virtual environment (for manual runs):**
+   After setup, if you want to run commands manually or use developer tools from your terminal session:
+
+   ```bash
+   source .venv/bin/activate
+   ```
+
+   (To deactivate, simply type `deactivate` in the terminal.)
+
+4. **Run the application:**
+   Use the provided run scripts for convenience. These scripts automatically activate the virtual environment.
+
+   * `./run_mac.sh`: Runs the application with the CPU execution provider by default. This is a good starting point.
+   * `./run_mac_cpu.sh`: Explicitly uses the CPU execution provider.
+   * `./run_mac_coreml.sh`: Attempts to use the CoreML execution provider for potential hardware acceleration on Apple Silicon and Intel Macs.
+   * `./run_mac_mps.sh`: Attempts to use the MPS (Metal Performance Shaders) execution provider, primarily for Apple Silicon Macs.
+
+   Example of running with specific source/target arguments:
+
+   ```bash
+   ./run_mac.sh --source path/to/your_face.jpg --target path/to/video.mp4
+   ```
+
+   Or, to simply launch the UI:
+
+   ```bash
+   ./run_mac.sh
+   ```
+
+**Important Notes for macOS GPU Acceleration (CoreML/MPS):**
+
+* The `setup_mac.sh` script installs packages from `requirements.txt`, which typically includes a general CPU-based version of `onnxruntime`.
+* For optimal performance on Apple Silicon (M1/M2/M3) or specific GPU acceleration, you might need to install a different `onnxruntime` package *after* running `setup_mac.sh`, while the virtual environment (`.venv`) is active.
+* **Example for `onnxruntime-silicon` (often requires Python 3.10 for older versions such as 1.13.1):**
+  The original `README` noted that `onnxruntime-silicon==1.13.1` was specific to Python 3.10. If you intend to use this exact version for CoreML:
+
+  ```bash
+  # Ensure you are using Python 3.10 if required by your chosen onnxruntime-silicon version
+  # After running setup_mac.sh and activating .venv:
+  # source .venv/bin/activate
+
+  pip uninstall onnxruntime onnxruntime-gpu  # Uninstall any existing onnxruntime
+  pip install onnxruntime-silicon==1.13.1    # Or your desired version
+
+  # Then use ./run_mac_coreml.sh
+  ```
+
+  Check the ONNX Runtime documentation for the latest recommended packages for Apple Silicon.
+* **For MPS with ONNX Runtime:** This may require a specific build or version of `onnxruntime`; consult the ONNX Runtime documentation. For PyTorch-based operations (such as the Face Enhancer or Hair Segmenter, if they are PyTorch-native rather than ONNX), PyTorch automatically tries to use MPS on compatible Apple Silicon hardware when available.
+* **User Interface (Tkinter):** If you encounter errors related to `_tkinter` not being found when launching the UI, ensure your Python installation supports Tk. For Python installed via Homebrew, this is usually `python-tk` (e.g., `brew install python-tk@3.9` or `brew install python-tk@3.10`, matching your Python version).
+
+**In case something goes wrong and you need to reinstall the virtual environment:**
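The run scripts differ only in which ONNX Runtime execution provider they request. As a rough sketch of that selection logic (the helper below is illustrative, not the project's actual code, and the provider names are assumptions except for `CPUExecutionProvider` and `CoreMLExecutionProvider`, which are standard ONNX Runtime names):

```python
def pick_provider(available, preferred=("CoreMLExecutionProvider",
                                        "MPSExecutionProvider",
                                        "CPUExecutionProvider")):
    """Return the first preferred execution provider that is actually available."""
    for provider in preferred:
        if provider in available:
            return provider
    # Fall back to CPU, which onnxruntime always ships with.
    return "CPUExecutionProvider"
```

In a real session, the `available` list would come from `onnxruntime.get_available_providers()` after the environment is set up.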
@@ -0,0 +1,46 @@
{
  "Source x Target Mapper": "Mapeador de fuente x destino",
  "select a source image": "Seleccionar imagen fuente",
  "Preview": "Vista previa",
  "select a target image or video": "elegir un video o una imagen de destino",
  "save image output file": "guardar imagen final",
  "save video output file": "guardar video final",
  "select a target image": "elegir una imagen de destino",
  "source": "fuente",
  "Select a target": "Elegir un destino",
  "Select a face": "Elegir una cara",
  "Keep audio": "Mantener audio original",
  "Face Enhancer": "Potenciador de caras",
  "Many faces": "Varias caras",
  "Show FPS": "Mostrar fps",
  "Keep fps": "Mantener fps",
  "Keep frames": "Mantener frames",
  "Fix Blueish Cam": "Corregir tono azul de video",
  "Mouth Mask": "Máscara de boca",
  "Show Mouth Mask Box": "Mostrar área de la máscara de boca",
  "Start": "Iniciar",
  "Live": "En vivo",
  "Destroy": "Borrar",
  "Map faces": "Mapear caras",
  "Processing...": "Procesando...",
  "Processing succeed!": "¡Proceso terminado con éxito!",
  "Processing ignored!": "¡Procesamiento omitido!",
  "Failed to start camera": "No se pudo iniciar la cámara",
  "Please complete pop-up or close it.": "Complete o cierre el pop-up",
  "Getting unique faces": "Buscando caras únicas",
  "Please select a source image first": "Primero, seleccione una imagen fuente",
  "No faces found in target": "No se encontró una cara en el destino",
  "Add": "Agregar",
  "Clear": "Limpiar",
  "Submit": "Enviar",
  "Select source image": "Seleccionar imagen fuente",
  "Select target image": "Seleccionar imagen destino",
  "Please provide mapping!": "¡Por favor, proporcione un mapeo!",
  "At least 1 source with target is required!": "Se requiere al menos una fuente con un destino.",
  "Face could not be detected in last upload!": "¡No se pudo encontrar una cara en el último video o imagen!",
  "Select Camera:": "Elegir cámara:",
  "All mappings cleared!": "¡Todos los mapeos fueron borrados!",
  "Mappings successfully submitted!": "¡Mapeos enviados con éxito!",
  "Source x Target Mapper is already open.": "El mapeador de fuente x destino ya está abierto."
}
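Locale files like the one above map the English UI strings (the keys) to their translations. A minimal sketch of how such a mapping is typically consumed, falling back to the English key itself when a translation is missing (the `make_translator` helper is hypothetical, not the project's actual loader):

```python
import json

def make_translator(locale_json: str):
    """Build a lookup that returns the English key unchanged when no translation exists."""
    table = json.loads(locale_json)
    return lambda key: table.get(key, key)

# Tiny excerpt of a locale file, inlined for illustration
es = make_translator('{"Preview": "Vista previa", "Start": "Iniciar"}')
```

Falling back to the key keeps the UI usable when a locale file lags behind newly added strings.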
@@ -0,0 +1,45 @@
{
  "Source x Target Mapper": "ប្រភព x បន្ថែម Mapper",
  "select a source image": "ជ្រើសរើសប្រភពរូបភាព",
  "Preview": "បង្ហាញ",
  "select a target image or video": "ជ្រើសរើសគោលដៅរូបភាពឬវីដេអូ",
  "save image output file": "រក្សាទុកលទ្ធផលឯកសាររូបភាព",
  "save video output file": "រក្សាទុកលទ្ធផលឯកសារវីដេអូ",
  "select a target image": "ជ្រើសរើសគោលដៅរូបភាព",
  "source": "ប្រភព",
  "Select a target": "ជ្រើសរើសគោលដៅ",
  "Select a face": "ជ្រើសរើសមុខ",
  "Keep audio": "រម្លងសម្លេង",
  "Face Enhancer": "ឧបករណ៍ពង្រឹងមុខ",
  "Many faces": "ទម្រង់មុខច្រើន",
  "Show FPS": "បង្ហាញ FPS",
  "Keep fps": "រម្លង fps",
  "Keep frames": "រម្លងទម្រង់",
  "Fix Blueish Cam": "ជួសជុល Cam Blueish",
  "Mouth Mask": "របាំងមាត់",
  "Show Mouth Mask Box": "បង្ហាញប្រអប់របាំងមាត់",
  "Start": "ចាប់ផ្ដើម",
  "Live": "ផ្សាយផ្ទាល់",
  "Destroy": "លុប",
  "Map faces": "ផែនទីមុខ",
  "Processing...": "កំពុងដំណើរការ...",
  "Processing succeed!": "ការដំណើរការទទួលបានជោគជ័យ!",
  "Processing ignored!": "ការដំណើរការមិនទទួលបានជោគជ័យ!",
  "Failed to start camera": "បរាជ័យដើម្បីចាប់ផ្ដើមបើកកាមេរ៉ា",
  "Please complete pop-up or close it.": "សូមបញ្ចប់ផ្ទាំងផុស ឬបិទវា.",
  "Getting unique faces": "ការចាប់ផ្ដើមទម្រង់មុខប្លែក",
  "Please select a source image first": "សូមជ្រើសរើសប្រភពរូបភាពដំបូង",
  "No faces found in target": "រកអត់ឃើញមុខនៅក្នុងគោលដៅ",
  "Add": "បន្ថែម",
  "Clear": "សម្អាត",
  "Submit": "បញ្ចូន",
  "Select source image": "ជ្រើសរើសប្រភពរូបភាព",
  "Select target image": "ជ្រើសរើសគោលដៅរូបភាព",
  "Please provide mapping!": "សូមផ្ដល់នៅផែនទី",
  "At least 1 source with target is required!": "ត្រូវការប្រភពយ៉ាងហោចណាស់ ១ ដែលមានគោលដៅ!",
  "Face could not be detected in last upload!": "មុខមិនអាចភ្ជាប់នៅក្នុងការបង្ហេាះចុងក្រោយ!",
  "Select Camera:": "ជ្រើសរើសកាមេរ៉ា",
  "All mappings cleared!": "ផែនទីទាំងអស់ត្រូវបានសម្អាត!",
  "Mappings successfully submitted!": "ផែនទីត្រូវបានបញ្ជូនជោគជ័យ!",
  "Source x Target Mapper is already open.": "ប្រភព x Target Mapper បានបើករួចហើយ។"
}
@@ -0,0 +1,46 @@
{
  "Source x Target Mapper": "Mapeador de Origem x Destino",
  "select an source image": "Escolha uma imagem de origem",
  "Preview": "Prévia",
  "select an target image or video": "Escolha uma imagem ou vídeo de destino",
  "save image output file": "Salvar imagem final",
  "save video output file": "Salvar vídeo final",
  "select an target image": "Escolha uma imagem de destino",
  "source": "Origem",
  "Select a target": "Escolha o destino",
  "Select a face": "Escolha um rosto",
  "Keep audio": "Manter o áudio original",
  "Face Enhancer": "Melhorar rosto",
  "Many faces": "Vários rostos",
  "Show FPS": "Mostrar FPS",
  "Keep fps": "Manter FPS",
  "Keep frames": "Manter frames",
  "Fix Blueish Cam": "Corrigir tom azulado da câmera",
  "Mouth Mask": "Máscara da boca",
  "Show Mouth Mask Box": "Mostrar área da máscara da boca",
  "Start": "Começar",
  "Live": "Ao vivo",
  "Destroy": "Destruir",
  "Map faces": "Mapear rostos",
  "Processing...": "Processando...",
  "Processing succeed!": "Tudo certo!",
  "Processing ignored!": "Processamento ignorado!",
  "Failed to start camera": "Não foi possível iniciar a câmera",
  "Please complete pop-up or close it.": "Finalize ou feche o pop-up",
  "Getting unique faces": "Buscando rostos diferentes",
  "Please select a source image first": "Selecione primeiro uma imagem de origem",
  "No faces found in target": "Nenhum rosto encontrado na imagem de destino",
  "Add": "Adicionar",
  "Clear": "Limpar",
  "Submit": "Enviar",
  "Select source image": "Escolha a imagem de origem",
  "Select target image": "Escolha a imagem de destino",
  "Please provide mapping!": "Você precisa realizar o mapeamento!",
  "Atleast 1 source with target is required!": "É necessária pelo menos uma origem com um destino!",
  "At least 1 source with target is required!": "É necessária pelo menos uma origem com um destino!",
  "Face could not be detected in last upload!": "Não conseguimos detectar o rosto na última imagem!",
  "Select Camera:": "Escolher câmera:",
  "All mappings cleared!": "Todos os mapeamentos foram removidos!",
  "Mappings successfully submitted!": "Mapeamentos enviados com sucesso!",
  "Source x Target Mapper is already open.": "O Mapeador de Origem x Destino já está aberto."
}
@@ -0,0 +1,45 @@
{
  "Source x Target Mapper": "ตัวจับคู่ต้นทาง x ปลายทาง",
  "select a source image": "เลือกรูปภาพต้นฉบับ",
  "Preview": "ตัวอย่าง",
  "select a target image or video": "เลือกรูปภาพหรือวิดีโอเป้าหมาย",
  "save image output file": "บันทึกไฟล์รูปภาพ",
  "save video output file": "บันทึกไฟล์วิดีโอ",
  "select a target image": "เลือกรูปภาพเป้าหมาย",
  "source": "ต้นฉบับ",
  "Select a target": "เลือกเป้าหมาย",
  "Select a face": "เลือกใบหน้า",
  "Keep audio": "เก็บเสียง",
  "Face Enhancer": "ปรับปรุงใบหน้า",
  "Many faces": "หลายใบหน้า",
  "Show FPS": "แสดง FPS",
  "Keep fps": "คงค่า FPS",
  "Keep frames": "คงค่าเฟรม",
  "Fix Blueish Cam": "แก้ไขภาพอมฟ้าจากกล้อง",
  "Mouth Mask": "มาสก์ปาก",
  "Show Mouth Mask Box": "แสดงกรอบมาสก์ปาก",
  "Start": "เริ่ม",
  "Live": "สด",
  "Destroy": "หยุด",
  "Map faces": "จับคู่ใบหน้า",
  "Processing...": "กำลังประมวลผล...",
  "Processing succeed!": "ประมวลผลสำเร็จแล้ว!",
  "Processing ignored!": "การประมวลผลถูกละเว้น!",
  "Failed to start camera": "ไม่สามารถเริ่มกล้องได้",
  "Please complete pop-up or close it.": "โปรดดำเนินการในป๊อปอัปให้เสร็จสิ้น หรือปิด",
  "Getting unique faces": "กำลังค้นหาใบหน้าที่ไม่ซ้ำกัน",
  "Please select a source image first": "โปรดเลือกภาพต้นฉบับก่อน",
  "No faces found in target": "ไม่พบใบหน้าในภาพเป้าหมาย",
  "Add": "เพิ่ม",
  "Clear": "ล้าง",
  "Submit": "ส่ง",
  "Select source image": "เลือกภาพต้นฉบับ",
  "Select target image": "เลือกภาพเป้าหมาย",
  "Please provide mapping!": "โปรดระบุการจับคู่!",
  "At least 1 source with target is required!": "ต้องมีการจับคู่ต้นฉบับกับเป้าหมายอย่างน้อย 1 คู่!",
  "Face could not be detected in last upload!": "ไม่สามารถตรวจพบใบหน้าในไฟล์อัปโหลดล่าสุด!",
  "Select Camera:": "เลือกกล้อง:",
  "All mappings cleared!": "ล้างการจับคู่ทั้งหมดแล้ว!",
  "Mappings successfully submitted!": "ส่งการจับคู่สำเร็จแล้ว!",
  "Source x Target Mapper is already open.": "ตัวจับคู่ต้นทาง x ปลายทาง เปิดอยู่แล้ว"
}
@@ -41,3 +41,4 @@ show_mouth_mask_box = False
 mask_feather_ratio = 8
 mask_down_size = 0.50
 mask_size = 1
+enable_hair_swapping = True  # Default state for enabling/disabling hair swapping
@@ -4,6 +4,11 @@ from PIL import Image
 from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
 import cv2  # Imported for BGR to RGB conversion, though PIL can also do it.

+# Global variables for caching
+HAIR_SEGMENTER_PROCESSOR = None
+HAIR_SEGMENTER_MODEL = None
+MODEL_NAME = "isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing"
+
 def segment_hair(image_np: np.ndarray) -> np.ndarray:
     """
     Segments hair from an image.

@@ -14,15 +19,41 @@ def segment_hair(image_np: np.ndarray) -> np.ndarray:
     Returns:
         NumPy array representing the binary hair mask.
     """
-    processor = SegformerImageProcessor.from_pretrained("isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing")
-    model = SegformerForSemanticSegmentation.from_pretrained("isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing")
+    global HAIR_SEGMENTER_PROCESSOR, HAIR_SEGMENTER_MODEL
+
+    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
+        print(f"Loading hair segmentation model and processor ({MODEL_NAME}) for the first time...")
+        try:
+            HAIR_SEGMENTER_PROCESSOR = SegformerImageProcessor.from_pretrained(MODEL_NAME)
+            HAIR_SEGMENTER_MODEL = SegformerForSemanticSegmentation.from_pretrained(MODEL_NAME)
+            # Optional: move the model to GPU if available and if other models use the GPU
+            # if torch.cuda.is_available():
+            #     HAIR_SEGMENTER_MODEL = HAIR_SEGMENTER_MODEL.to('cuda')
+            #     print("Hair segmentation model moved to GPU.")
+            print("Hair segmentation model and processor loaded successfully.")
+        except Exception as e:
+            print(f"Failed to load hair segmentation model/processor: {e}")
+            # Return an empty mask compatible with the expected output shape (H, W)
+            return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)
+
+    # Ensure the processor and model are loaded before proceeding
+    if HAIR_SEGMENTER_PROCESSOR is None or HAIR_SEGMENTER_MODEL is None:
+        print("Error: Hair segmentation models are not available.")
+        return np.zeros((image_np.shape[0], image_np.shape[1]), dtype=np.uint8)

     # Convert BGR (OpenCV) to RGB (PIL)
     image_rgb = cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
     image_pil = Image.fromarray(image_rgb)

-    inputs = processor(images=image_pil, return_tensors="pt")
-    outputs = model(**inputs)
+    inputs = HAIR_SEGMENTER_PROCESSOR(images=image_pil, return_tensors="pt")
+
+    # Optional: move inputs to GPU if the model is on GPU
+    # if HAIR_SEGMENTER_MODEL.device.type == 'cuda':
+    #     inputs = inputs.to(HAIR_SEGMENTER_MODEL.device)
+
+    with torch.no_grad():  # Important for inference
+        outputs = HAIR_SEGMENTER_MODEL(**inputs)

     logits = outputs.logits  # Shape: batch_size, num_labels, height, width

     # Upsample logits to original image size

@@ -33,12 +64,10 @@ def segment_hair(image_np: np.ndarray) -> np.ndarray:
         align_corners=False
     )

-    segmentation_map = upsampled_logits.argmax(dim=1).squeeze().cpu().numpy()
+    segmentation_map = upsampled_logits.argmax(dim=1).squeeze().cpu().numpy().astype(np.uint8)

     # Label 2 is for hair in this model
-    hair_mask = np.where(segmentation_map == 2, 255, 0).astype(np.uint8)
-
-    return hair_mask
+    return np.where(segmentation_map == 2, 255, 0).astype(np.uint8)

 if __name__ == '__main__':
     # This is a conceptual test.
@@ -68,94 +68,133 @@ def get_face_swapper() -> Any:
    return FACE_SWAPPER


def _prepare_warped_source_material_and_mask(
    source_face_obj: Face,
    source_frame_full: Frame,
    matrix: np.ndarray,
    dsize: tuple
) -> tuple[Frame | None, Frame | None]:
    """
    Prepares the warped source material (full image) and a combined (face + hair) mask for blending.
    Returns (None, None) if the essential masks cannot be generated.
    """
    # Generate hair mask
    hair_only_mask_source_raw = segment_hair(source_frame_full)
    if hair_only_mask_source_raw.ndim == 3 and hair_only_mask_source_raw.shape[2] == 3:
        hair_only_mask_source_raw = cv2.cvtColor(hair_only_mask_source_raw, cv2.COLOR_BGR2GRAY)
    _, hair_only_mask_source_binary = cv2.threshold(hair_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)

    # Generate face mask
    face_only_mask_source_raw = create_face_mask(source_face_obj, source_frame_full)
    _, face_only_mask_source_binary = cv2.threshold(face_only_mask_source_raw, 127, 255, cv2.THRESH_BINARY)

    # Combine face and hair masks
    if face_only_mask_source_binary.shape != hair_only_mask_source_binary.shape:
        logging.warning("Resizing hair mask to match face mask for source during preparation.")
        hair_only_mask_source_binary = cv2.resize(
            hair_only_mask_source_binary,
            (face_only_mask_source_binary.shape[1], face_only_mask_source_binary.shape[0]),
            interpolation=cv2.INTER_NEAREST
        )

    actual_combined_source_mask = cv2.bitwise_or(face_only_mask_source_binary, hair_only_mask_source_binary)
    actual_combined_source_mask_blurred = cv2.GaussianBlur(actual_combined_source_mask, (5, 5), 3)

    # Warp the combined mask and the full source material
    warped_full_source_material = cv2.warpAffine(source_frame_full, matrix, dsize)
    warped_combined_mask_temp = cv2.warpAffine(actual_combined_source_mask_blurred, matrix, dsize)
    _, warped_combined_mask_binary_for_clone = cv2.threshold(warped_combined_mask_temp, 127, 255, cv2.THRESH_BINARY)

    return warped_full_source_material, warped_combined_mask_binary_for_clone


def _blend_material_onto_frame(
    base_frame: Frame,
    material_to_blend: Frame,
    mask_for_blending: Frame
) -> Frame:
    """
    Blends material onto a base frame using a mask.
    Uses seamlessClone if possible, otherwise falls back to simple masking.
    """
    x, y, w, h = cv2.boundingRect(mask_for_blending)
    output_frame = base_frame  # Start with the base; it will be modified by blending

    if w > 0 and h > 0:
        center = (x + w // 2, y + h // 2)

        if material_to_blend.shape == base_frame.shape and \
                material_to_blend.dtype == base_frame.dtype and \
                mask_for_blending.dtype == np.uint8:
            try:
                # swap_face passes in a copy of the swapped frame, so it is safe for the
                # fallback path below to write into base_frame in place.
                output_frame = cv2.seamlessClone(material_to_blend, base_frame, mask_for_blending, center, cv2.NORMAL_CLONE)
            except cv2.error as e:
                logging.warning(f"cv2.seamlessClone failed: {e}. Falling back to simple blending.")
                boolean_mask = mask_for_blending > 127
                output_frame[boolean_mask] = material_to_blend[boolean_mask]
        else:
            logging.warning("Mismatch in shape/type for seamlessClone. Falling back to simple blending.")
            boolean_mask = mask_for_blending > 127
            output_frame[boolean_mask] = material_to_blend[boolean_mask]
    else:
        logging.info("Warped mask for blending is empty. Skipping blending.")

    return output_frame


def swap_face(source_face_obj: Face, target_face: Face, source_frame_full: Frame, temp_frame: Frame) -> Frame:
    face_swapper = get_face_swapper()

    # Apply the base face swap
    swapped_frame = face_swapper.get(temp_frame, target_face, source_face_obj, paste_back=True)
    final_swapped_frame = swapped_frame  # Initialize with the base swap. A copy is made only if needed.

    if modules.globals.enable_hair_swapping:
        if not (source_face_obj.kps is not None and
                target_face.kps is not None and
                source_face_obj.kps.shape[0] >= 3 and
                target_face.kps.shape[0] >= 3):
            logging.warning(
                f"Skipping hair blending due to insufficient keypoints. "
                f"Source kps: {source_face_obj.kps.shape if source_face_obj.kps is not None else 'None'}, "
                f"Target kps: {target_face.kps.shape if target_face.kps is not None else 'None'}."
            )
        else:
            # Ensure kps are float32 for estimateAffinePartial2D; LMEDS for robustness
            source_kps_float = source_face_obj.kps.astype(np.float32)
            target_kps_float = target_face.kps.astype(np.float32)
            matrix, _ = cv2.estimateAffinePartial2D(source_kps_float, target_kps_float, method=cv2.LMEDS)

            if matrix is None:
                logging.warning("Failed to estimate affine transformation matrix for hair. Skipping hair blending.")
            else:
                dsize = (temp_frame.shape[1], temp_frame.shape[0])  # width, height

                warped_material, warped_mask = _prepare_warped_source_material_and_mask(
                    source_face_obj, source_frame_full, matrix, dsize
                )

                if warped_material is not None and warped_mask is not None:
                    # Make a copy only now that we are sure we will modify it for hair.
                    final_swapped_frame = swapped_frame.copy()

                    # Color-correct the warped source material against the swapped frame
                    color_corrected_material = apply_color_transfer(warped_material, final_swapped_frame)

                    final_swapped_frame = _blend_material_onto_frame(
                        final_swapped_frame,
                        color_corrected_material,
                        warped_mask
                    )

    # Mouth Mask Logic (operates on final_swapped_frame)
    if modules.globals.mouth_mask:
        # If final_swapped_frame wasn't copied for hair, copy it now before mouth-mask modification.
        if final_swapped_frame is swapped_frame:  # Check if it's still the same object
            final_swapped_frame = swapped_frame.copy()

        # Create a mask for the target face
        face_mask = create_face_mask(target_face, temp_frame)
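When `seamlessClone` cannot be used, the fallback simply copies source pixels into the destination wherever the mask is set. The same operation in a pure-Python sketch over flat pixel lists (an illustrative stand-in for the NumPy boolean indexing used above, not the project's code):

```python
def masked_blend(base, material, mask, threshold=127):
    """Copy material pixels into base wherever the mask value exceeds the threshold."""
    return [m if k > threshold else b
            for b, m, k in zip(base, material, mask)]
```

`seamlessClone` additionally matches gradients at the mask boundary (Poisson blending), which is why it is preferred and this hard copy is only the fallback.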
@@ -201,99 +240,91 @@ def process_frame(source_face_obj: Face, source_frame_full: Frame, temp_frame: F


# process_frame_v2 needs to accept source_frame_full as well

def _process_image_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for map_item in modules.globals.source_target_map:
                target_face = map_item["target"]["face"]
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces
        for map_item in modules.globals.source_target_map:
            if "source" in map_item:
                source_face_obj = map_item["source"]["face"]
                target_face = map_item["target"]["face"]
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    return temp_frame


def _process_video_target_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str) -> Frame:
    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for map_item in modules.globals.source_target_map:
                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
                for frame_data in target_frames_data:
                    for target_face in frame_data.get("faces", []):
                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces
        for map_item in modules.globals.source_target_map:
            if "source" in map_item:
                source_face_obj = map_item["source"]["face"]
                target_frames_data = [f for f in map_item.get("target_faces_in_frame", []) if f.get("location") == temp_frame_path]
                for frame_data in target_frames_data:
                    for target_face in frame_data.get("faces", []):
                        temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    return temp_frame


def _process_live_target_v2(source_frame_full: Frame, temp_frame: Frame) -> Frame:
    detected_faces = get_many_faces(temp_frame)
    if not detected_faces:
        return temp_frame

    if modules.globals.many_faces:
        source_face_obj = default_source_face()
        if source_face_obj:
            for target_face in detected_faces:
                temp_frame = swap_face(source_face_obj, target_face, source_frame_full, temp_frame)
    else:  # not many_faces (apply simple_map logic)
        if not modules.globals.simple_map or \
                not modules.globals.simple_map.get("target_embeddings") or \
                not modules.globals.simple_map.get("source_faces"):
            logging.warning("Simple map is not configured correctly. Skipping face swap.")
            return temp_frame

        target_embeddings = modules.globals.simple_map["target_embeddings"]
        source_faces_from_map = modules.globals.simple_map["source_faces"]

        if len(detected_faces) <= len(target_embeddings):
            for detected_face in detected_faces:
                closest_centroid_index, _ = find_closest_centroid(target_embeddings, detected_face.normed_embedding)
                if closest_centroid_index < len(source_faces_from_map):
                    source_face_obj_from_map = source_faces_from_map[closest_centroid_index]
                    temp_frame = swap_face(source_face_obj_from_map, detected_face, source_frame_full, temp_frame)
                else:
                    logging.warning(f"Centroid index {closest_centroid_index} out of bounds for source_faces_from_map.")
        else:  # More detected faces than target embeddings in simple_map
            detected_faces_embeddings = [face.normed_embedding for face in detected_faces]
            for i, target_embedding in enumerate(target_embeddings):
                if i < len(source_faces_from_map):
                    closest_detected_face_index, _ = find_closest_centroid(detected_faces_embeddings, target_embedding)
                    source_face_obj_from_map = source_faces_from_map[i]
                    target_face_to_swap = detected_faces[closest_detected_face_index]
                    temp_frame = swap_face(source_face_obj_from_map, target_face_to_swap, source_frame_full, temp_frame)
                    # Optionally, remove the swapped detected face to prevent re-swapping if one source maps to multiple targets.
                    # This depends on desired behavior. For now, simple independent mapping.
                else:
                    logging.warning(f"Index {i} out of bounds for source_faces_from_map in simple_map else case.")
return temp_frame
|
||||
|
||||
|
||||
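The simple_map branch above pairs each detected face with its nearest target embedding. A minimal sketch of that nearest-centroid lookup, assuming unit-length embeddings compared by cosine similarity; this `find_closest_centroid` is an illustrative stand-in, not the project's actual helper:

```python
import numpy as np

def find_closest_centroid(centroids, embedding):
    """Return (index, similarity) of the centroid closest to `embedding`
    by cosine similarity. Illustrative stand-in for the project's helper."""
    centroids = np.asarray(centroids, dtype=np.float32)
    embedding = np.asarray(embedding, dtype=np.float32)
    # Normalize rows so the dot product equals cosine similarity.
    centroids_n = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    embedding_n = embedding / np.linalg.norm(embedding)
    sims = centroids_n @ embedding_n
    best = int(np.argmax(sims))
    return best, float(sims[best])

# Example: two target embeddings, one detected face embedding
targets = [[1.0, 0.0], [0.0, 1.0]]
idx, sim = find_closest_centroid(targets, [0.9, 0.1])
```

With more detected faces than embeddings, the code above inverts the lookup (one search per target embedding), which is why a single source can end up mapped to several detected faces.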
```python
def process_frame_v2(source_frame_full: Frame, temp_frame: Frame, temp_frame_path: str = "") -> Frame:
    if is_image(modules.globals.target_path):
        return _process_image_target_v2(source_frame_full, temp_frame)
    elif is_video(modules.globals.target_path):
        return _process_video_target_v2(source_frame_full, temp_frame, temp_frame_path)
    else:  # This is the live cam / generic case
        return _process_live_target_v2(source_frame_full, temp_frame)


def process_frames(
```
@@ -353,30 +384,34 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:

```python
        logging.error(f"Failed to read target image from {target_path}")
        return

    # Read the original target frame once at the beginning
    original_target_frame = cv2.imread(target_path)
    if original_target_frame is None:
        logging.error(f"Failed to read original target image from {target_path}")
        return

    result = None  # Initialize result

    if not modules.globals.map_faces:
        source_face_obj = get_one_face(source_img)  # Use source_img here
        if not source_face_obj:
            logging.error(f"No face detected in source image {source_path}")
            return
        result = process_frame(source_face_obj, source_img, original_target_frame)
    else:  # map_faces is True
        if modules.globals.many_faces:
            update_status(
                "Many faces enabled. Using first source image. Progressing...", NAME
            )
        # process_frame_v2 takes the original target frame for processing.
        # target_path is passed as temp_frame_path for consistency with process_frame_v2's signature;
        # it is used for map lookups in the video context but less critical for single images.
        result = process_frame_v2(source_img, original_target_frame, target_path)

    if result is not None:
        cv2.imwrite(output_path, result)
    else:
        logging.error(f"Processing image {target_path} failed, result was None.")


def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
```
@@ -745,113 +780,3 @@ def apply_color_transfer(source, target):

```python
    source = (source - source_mean) * (target_std / source_std) + target_mean

    return cv2.cvtColor(np.clip(source, 0, 255).astype("uint8"), cv2.COLOR_LAB2BGR)


def create_face_and_hair_mask(source_face: Face, source_frame: Frame) -> np.ndarray:
    """
    Creates a combined mask for the face and hair from the source image.
    """
    # 1. Generate the basic face mask (adapted from create_face_mask)
    face_only_mask = np.zeros(source_frame.shape[:2], dtype=np.uint8)
    landmarks = source_face.landmark_2d_106
    if landmarks is not None:
        landmarks = landmarks.astype(np.int32)

        # Extract facial features (same logic as create_face_mask)
        right_side_face = landmarks[0:16]
        left_side_face = landmarks[17:32]
        # right_eye = landmarks[33:42]  # Not directly used for outline
        right_eye_brow = landmarks[43:51]
        # left_eye = landmarks[87:96]  # Not directly used for outline
        left_eye_brow = landmarks[97:105]

        # Calculate forehead extension (same logic as create_face_mask)
        right_eyebrow_top = np.min(right_eye_brow[:, 1])
        left_eyebrow_top = np.min(left_eye_brow[:, 1])
        eyebrow_top = min(right_eyebrow_top, left_eyebrow_top)

        face_top = np.min([right_side_face[0, 1], left_side_face[-1, 1]])
        # Ensure forehead_height is not negative if the eyebrows are above the topmost landmark of the face sides
        forehead_height = max(0, face_top - eyebrow_top)
        extended_forehead_height = int(forehead_height * 5.0)

        forehead_left = right_side_face[0].copy()
        forehead_right = left_side_face[-1].copy()

        # Ensure extended forehead points do not go into negative y values
        forehead_left[1] = max(0, forehead_left[1] - extended_forehead_height)
        forehead_right[1] = max(0, forehead_right[1] - extended_forehead_height)

        face_outline = np.vstack(
            [
                [forehead_left],
                right_side_face,
                left_side_face[::-1],
                [forehead_right],
            ]
        )

        # Calculate padding (same logic as create_face_mask)
        # Ensure face_outline has more than one point before calculating the norm
        if face_outline.shape[0] > 1:
            padding = int(
                np.linalg.norm(right_side_face[0] - left_side_face[-1]) * 0.05
            )
        else:
            padding = 5  # Default padding if not enough points

        hull = cv2.convexHull(face_outline)
        hull_padded = []
        center = np.mean(face_outline, axis=0).squeeze()  # Squeeze to handle a potential extra dim

        # Ensure center is a 1D array for subtraction
        if center.ndim > 1:
            center = np.mean(center, axis=0)

        for point_contour in hull:
            point = point_contour[0]  # cv2.convexHull returns points wrapped in an extra array
            direction = point - center
            norm_direction = np.linalg.norm(direction)
            if norm_direction == 0:  # Avoid division by zero if the point is the center
                unit_direction = np.array([0, 0])
            else:
                unit_direction = direction / norm_direction

            padded_point = point + unit_direction * padding
            hull_padded.append(padded_point)

        if hull_padded:  # Ensure hull_padded is not empty
            hull_padded = np.array(hull_padded, dtype=np.int32)
            cv2.fillConvexPoly(face_only_mask, hull_padded, 255)
        else:  # Fallback if hull_padded is empty (e.g. very few landmarks)
            cv2.fillConvexPoly(face_only_mask, hull, 255)  # Use the unpadded hull

    # An initial blur of face_only_mask is not applied before combining,
    # matching the behavior of the original create_face_mask.
    # face_only_mask = cv2.GaussianBlur(face_only_mask, (5, 5), 3)  # Original blur from create_face_mask

    # 2. Generate the hair mask
    # Ensure source_frame is contiguous, as some cv2 functions might require it.
    source_frame_contiguous = np.ascontiguousarray(source_frame, dtype=np.uint8)
    hair_mask_on_source = segment_hair(source_frame_contiguous)

    # 3. Combine the masks
    # Ensure masks are binary and of the same type for bitwise operations
    _, face_only_mask_binary = cv2.threshold(face_only_mask, 127, 255, cv2.THRESH_BINARY)
    _, hair_mask_on_source_binary = cv2.threshold(hair_mask_on_source, 127, 255, cv2.THRESH_BINARY)

    # Resize if the shapes do not match; this should not happen
    # if segment_hair preserves the input dimensions.
    if face_only_mask_binary.shape != hair_mask_on_source_binary.shape:
        hair_mask_on_source_binary = cv2.resize(
            hair_mask_on_source_binary,
            (face_only_mask_binary.shape[1], face_only_mask_binary.shape[0]),
            interpolation=cv2.INTER_NEAREST,
        )

    combined_mask = cv2.bitwise_or(face_only_mask_binary, hair_mask_on_source_binary)

    # 4. Apply Gaussian blur to the combined mask
    combined_mask = cv2.GaussianBlur(combined_mask, (5, 5), 3)

    return combined_mask
```
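The combination step above is a plain per-pixel union: any pixel that is face *or* hair ends up in the mask. With 0/255 `uint8` masks, `cv2.bitwise_or` gives the same result as `np.maximum`. A NumPy-only sketch with hypothetical toy masks (no OpenCV dependency):

```python
import numpy as np

# Toy 4x4 binary masks: a "face" square and an overlapping "hair" band
face = np.zeros((4, 4), dtype=np.uint8)
hair = np.zeros((4, 4), dtype=np.uint8)
face[1:3, 1:3] = 255   # central face region
hair[0:2, 0:4] = 255   # hair band across the top, overlapping the face

# Union of the two regions; equivalent to cv2.bitwise_or for 0/255 masks
combined = np.maximum(face, hair)
```

Thresholding both masks to strict 0/255 before the union (as the function does) matters because a soft-edged hair mask would otherwise leak intermediate values into the combined mask.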
216 modules/ui.py
@@ -105,6 +105,7 @@ def save_switch_states():

```python
        "show_fps": modules.globals.show_fps,
        "mouth_mask": modules.globals.mouth_mask,
        "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
        "enable_hair_swapping": modules.globals.enable_hair_swapping,
    }
    with open("switch_states.json", "w") as f:
        json.dump(switch_states, f)
```
@@ -129,6 +130,9 @@ def load_switch_states():

```python
        modules.globals.show_mouth_mask_box = switch_states.get(
            "show_mouth_mask_box", False
        )
        modules.globals.enable_hair_swapping = switch_states.get(
            "enable_hair_swapping", True  # Default to True if not found
        )
    except FileNotFoundError:
        # If the file doesn't exist, use default values
        pass
```
@@ -284,6 +288,20 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C

```python
    )
    show_fps_switch.place(relx=0.6, rely=0.75)

    # Hair Swapping Switch (placed below "Show FPS" in the right column)
    hair_swapping_value = ctk.BooleanVar(value=modules.globals.enable_hair_swapping)
    hair_swapping_switch = ctk.CTkSwitch(
        root,
        text=_("Swap Hair"),
        variable=hair_swapping_value,
        cursor="hand2",
        command=lambda: (
            setattr(modules.globals, "enable_hair_swapping", hair_swapping_value.get()),
            save_switch_states(),
        )
    )
    hair_swapping_switch.place(relx=0.6, rely=0.80)  # Adjusted rely from 0.75 to 0.80

    mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
    mouth_mask_switch = ctk.CTkSwitch(
        root,
```
@@ -306,24 +324,26 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C

```python
    )
    show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)

    # Adjusting placement of Start, Stop, Preview buttons due to the new switch
    start_button = ctk.CTkButton(
        root, text=_("Start"), cursor="hand2", command=lambda: analyze_target(start, root)
    )
    start_button.place(relx=0.15, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

    stop_button = ctk.CTkButton(
        root, text=_("Destroy"), cursor="hand2", command=lambda: destroy()
    )
    stop_button.place(relx=0.4, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

    preview_button = ctk.CTkButton(
        root, text=_("Preview"), cursor="hand2", command=lambda: toggle_preview()
    )
    preview_button.place(relx=0.65, rely=0.85, relwidth=0.2, relheight=0.05)  # rely from 0.80 to 0.85

    # --- Camera Selection ---
    # Adjusting placement of camera selection due to the new switch
    camera_label = ctk.CTkLabel(root, text=_("Select Camera:"))
    camera_label.place(relx=0.1, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91

    available_cameras = get_available_cameras()
    camera_indices, camera_names = available_cameras
```
@@ -342,7 +362,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C

```python
        root, variable=camera_variable, values=camera_names
    )

    camera_optionmenu.place(relx=0.35, rely=0.91, relwidth=0.25, relheight=0.05)  # rely from 0.86 to 0.91

    live_button = ctk.CTkButton(
        root,
```
@@ -362,16 +382,16 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C

```python
            else "disabled"
        ),
    )
    live_button.place(relx=0.65, rely=0.91, relwidth=0.2, relheight=0.05)  # rely from 0.86 to 0.91
    # --- End Camera Selection ---

    status_label = ctk.CTkLabel(root, text=None, justify="center")
    status_label.place(relx=0.1, rely=0.96, relwidth=0.8)  # rely from 0.9 to 0.96

    donate_label = ctk.CTkLabel(
        root, text="Deep Live Cam", justify="center", cursor="hand2"
    )
    donate_label.place(relx=0.1, rely=0.99, relwidth=0.8)  # rely from 0.95 to 0.99
    donate_label.configure(
        text_color=ctk.ThemeManager.theme.get("URL").get("text_color")
    )
```
@@ -880,7 +900,94 @@ def create_webcam_preview(camera_index: int):

```python
    PREVIEW.deiconify()

    frame_processors = get_frame_processors_modules(modules.globals.frame_processors)
    # source_image = None  # Replaced by source_face_obj_for_cam

    # --- Source Image Loading and Validation (moved before the loop) ---
    source_face_obj_for_cam = None
    source_frame_full_for_cam = None
    source_frame_full_for_cam_map_faces = None

    if not modules.globals.map_faces:
        if not modules.globals.source_path:
            update_status("Error: No source image selected for webcam mode.")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
        if not os.path.exists(modules.globals.source_path):
            update_status(f"Error: Source image not found at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_frame_full_for_cam = cv2.imread(modules.globals.source_path)
        if source_frame_full_for_cam is None:
            update_status(f"Error: Could not read source image at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_face_obj_for_cam = get_one_face(source_frame_full_for_cam)
        if source_face_obj_for_cam is None:
            update_status(f"Error: No face detected in source image {modules.globals.source_path}")
            # This error is less critical for stopping immediately: the loop below would run,
            # but processing for frames would effectively be skipped.
            # For consistency in error handling, make it persistent too.
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
    else:  # modules.globals.map_faces is True
        if not modules.globals.source_path:
            update_status("Error: No global source image selected (for hair/background in map_faces mode).")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return
        if not os.path.exists(modules.globals.source_path):
            update_status(f"Error: Source image (for hair/background) not found at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        source_frame_full_for_cam_map_faces = cv2.imread(modules.globals.source_path)
        if source_frame_full_for_cam_map_faces is None:
            update_status(f"Error: Could not read source image (for hair/background) at {modules.globals.source_path}")
            cap.release()
            PREVIEW.withdraw()
            while PREVIEW.state() != "withdrawn" and ROOT.winfo_exists():
                ROOT.update_idletasks()
                ROOT.update()
                time.sleep(0.05)
            return

        if not modules.globals.source_target_map and not modules.globals.simple_map:
            update_status("Warning: No face map defined for map_faces mode. Swapper may not work as expected.")
            # This is a warning, not a fatal error for the preview window itself; processing continues.
            # No persistent loop here, as it concerns functionality, not a critical load error.
    # --- End Source Image Loading ---

    prev_time = time.time()
    fps_update_interval = 0.5
    frame_count = 0
```
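The block above moves all source-image validation ahead of the frame loop: fail fast once instead of re-reading and re-validating the source on every webcam frame. A minimal sketch of the pattern, using a hypothetical `load_source` helper with injected reader/detector callables (not the UI code itself):

```python
def load_source(path, read_image, detect_face):
    """Validate and load the source exactly once, before any frame loop.
    Returns (face, frame) on success, (None, None) on any failure."""
    if not path:
        return None, None           # nothing selected
    frame = read_image(path)
    if frame is None:
        return None, None           # unreadable file
    face = detect_face(frame)
    if face is None:
        return None, None           # no face in the source
    return face, frame

# Usage: stub callables simulate a valid source image
face, frame = load_source("src.png", lambda p: "FRAME", lambda f: "FACE")
```

Each per-frame iteration then only needs a cheap `is not None` check, which is what the loop body relies on.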
@@ -907,80 +1014,29 @@

```python
        )

        if not modules.globals.map_faces:
            # Case 1: map_faces is False - source_face_obj_for_cam and source_frame_full_for_cam are pre-loaded
            if source_face_obj_for_cam and source_frame_full_for_cam is not None:  # Check if valid after pre-loading
                for frame_processor in frame_processors:
                    if frame_processor.NAME == "DLC.FACE-ENHANCER":
                        if modules.globals.fp_ui["face_enhancer"]:
                            # Assuming the face enhancer's process_frame doesn't need source_face or source_frame_full
                            temp_frame = frame_processor.process_frame(None, temp_frame)
                    else:
                        temp_frame = frame_processor.process_frame(source_face_obj_for_cam, source_frame_full_for_cam, temp_frame)
            # If the source image was invalid (e.g. no face), source_face_obj_for_cam may be None;
            # frame processors that need it are then skipped, effectively showing the raw webcam frame.
            # The error message is already persistent due to the pre-loop check.
        else:
            # Case 2: map_faces is True - source_frame_full_for_cam_map_faces is pre-loaded
            if source_frame_full_for_cam_map_faces is not None:  # Check if valid after pre-loading
                modules.globals.target_path = None  # Standard for live mode
                for frame_processor in frame_processors:
                    if frame_processor.NAME == "DLC.FACE-ENHANCER":
                        if modules.globals.fp_ui["face_enhancer"]:
                            # Pass source_frame_full_for_cam_map_faces for signature consistency;
                            # the enhancer can ignore it if not needed.
                            temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
                    else:
                        temp_frame = frame_processor.process_frame_v2(source_frame_full_for_cam_map_faces, temp_frame)
            # If source_frame_full_for_cam_map_faces was invalid, the error is persistent from the pre-loop check.

        # Calculate and display FPS
        current_time = time.time()
```
@@ -0,0 +1,20 @@

```bash
#!/usr/bin/env bash

VENV_DIR=".venv"

# Check if the virtual environment exists
if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first to create the environment and install dependencies."
    exit 1
fi

echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

echo "Starting the application with CPU execution provider..."
# Passes all arguments given to this script (e.g., --source, --target) through to run.py
python3 run.py --execution-provider cpu "$@"

# Deactivate after the script finishes (optional, as the shell context closes)
# deactivate
```
@@ -0,0 +1,13 @@

```bash
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with CoreML execution provider..."
python3 run.py --execution-provider coreml "$@"
```
@@ -0,0 +1,13 @@

```bash
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with CPU execution provider..."
python3 run.py --execution-provider cpu "$@"
```
@@ -0,0 +1,13 @@

```bash
#!/usr/bin/env bash

VENV_DIR=".venv"

if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' not found."
    echo "Please run ./setup_mac.sh first."
    exit 1
fi

source "$VENV_DIR/bin/activate"
echo "Starting the application with MPS execution provider (for Apple Silicon)..."
python3 run.py --execution-provider mps "$@"
```
@@ -0,0 +1,81 @@

```bash
#!/usr/bin/env bash

# Exit immediately if a command exits with a non-zero status.
set -e

echo "Starting macOS setup..."

# 1. Check for Python 3
echo "Checking for Python 3..."
if ! command -v python3 &> /dev/null
then
    echo "Python 3 could not be found. Please install Python 3."
    echo "You can often install it using Homebrew: brew install python"
    exit 1
fi

# 2. Check Python version (>= 3.9)
echo "Checking Python 3 version..."
# Run the check inside `if` so `set -e` does not abort the script
# before the helpful error message can be printed.
if ! python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 9) else 1)'; then
    echo "Python 3.9 or higher is required."
    echo "Your version is: $(python3 --version)"
    echo "Please upgrade your Python version. Consider using pyenv or Homebrew to manage Python versions."
    exit 1
fi
echo "Python 3.9+ found: $(python3 --version)"

# 3. Check for ffmpeg
echo "Checking for ffmpeg..."
if ! command -v ffmpeg &> /dev/null
then
    echo "WARNING: ffmpeg could not be found. This program requires ffmpeg for video processing."
    echo "You can install it using Homebrew: brew install ffmpeg"
    echo "Continuing with setup, but video processing might fail later."
else
    echo "ffmpeg found: $(ffmpeg -version | head -n 1)"
fi

# 4. Define the virtual environment directory
VENV_DIR=".venv"

# 5. Create the virtual environment
if [ -d "$VENV_DIR" ]; then
    echo "Virtual environment '$VENV_DIR' already exists. Skipping creation."
else
    echo "Creating virtual environment in '$VENV_DIR'..."
    python3 -m venv "$VENV_DIR"
fi

# 6. Activate the virtual environment (for this script's session)
echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

# 7. Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip

# 8. Install requirements
echo "Installing requirements from requirements.txt..."
if [ -f "requirements.txt" ]; then
    pip install -r requirements.txt
else
    echo "ERROR: requirements.txt not found. Cannot install dependencies."
    # Deactivate on error if desired, or leave active for the user to debug
    # deactivate
    exit 1
fi

echo ""
echo "Setup complete!"
echo ""
echo "To activate the virtual environment in your terminal, run:"
echo "  source $VENV_DIR/bin/activate"
echo ""
echo "After activating, you can run the application using:"
echo "  python3 run.py [arguments]"
echo "Or use one of the run_mac_*.sh scripts (e.g., ./run_mac_cpu.sh)."
echo ""

# Deactivate at the end of the script's execution (optional, as the script session ends)
# deactivate
```
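The version gate in the setup script relies on `sys.version_info` comparing elementwise against a tuple. A quick sketch of the comparison the one-liner performs, with a hypothetical `meets_minimum` helper for clarity:

```python
import sys

def meets_minimum(version_info, minimum=(3, 9)):
    """Return True when the (major, minor) prefix meets the minimum.
    Tuple comparison is elementwise, so (3, 10, 2) >= (3, 9) is True
    and (3, 8, 18) >= (3, 9) is False."""
    return tuple(version_info[:2]) >= minimum

# The script's check is equivalent to this on the current interpreter:
current_ok = meets_minimum(sys.version_info)
```

Comparing only the `(major, minor)` prefix sidesteps a classic pitfall: naive string comparison would rank `"3.10"` below `"3.9"`.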