This commit introduces shell scripts to automate the setup process and provide convenient ways to run the application on macOS.
New files added:
- setup_mac.sh: Checks for Python 3.9+ and ffmpeg, creates a virtual environment, installs pip dependencies from requirements.txt.
- run_mac.sh: Runs the application with the CPU execution provider by default.
- run_mac_cpu.sh: Explicitly runs with the CPU execution provider.
- run_mac_coreml.sh: Runs with the CoreML execution provider.
- run_mac_mps.sh: Runs with the MPS execution provider.
The README.md has also been updated with a new section detailing how to use these scripts for macOS users.
These scripts aim to simplify the initial setup and execution of the project on macOS, similar to the .bat files available for Windows.
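The prerequisite checks that `setup_mac.sh` performs (Python 3.9+ and ffmpeg on `PATH`) can be expressed in Python as well — a minimal sketch; the function name `check_prerequisites` is hypothetical:

```python
import shutil
import sys

def check_prerequisites() -> bool:
    """Mirror the checks setup_mac.sh performs: Python 3.9+ and ffmpeg on PATH."""
    ok = True
    if sys.version_info < (3, 9):
        print("Python 3.9 or newer is required.")
        ok = False
    if shutil.which("ffmpeg") is None:
        print("ffmpeg was not found on PATH (install it, e.g. via Homebrew).")
        ok = False
    return ok
```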
This commit introduces the capability to swap hair along with the face from a source image to a target image/video or live webcam feed.
Key changes include:
1. **Hair Segmentation:**
- Integrated the `isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing` model from Hugging Face using the `transformers` library.
- Added `modules/hair_segmenter.py` with a `segment_hair` function to produce a binary hair mask from an image.
- Updated `requirements.txt` with `transformers`.
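Assuming the segmentation model returns per-pixel class logits, the core of a `segment_hair`-style helper reduces to an argmax followed by a class-id comparison. A minimal numpy sketch — the `HAIR_CLASS_ID` value here is a placeholder; the real id comes from the model's label map:

```python
import numpy as np

HAIR_CLASS_ID = 2  # placeholder; look up the actual id in the model's id2label map

def logits_to_hair_mask(logits: np.ndarray) -> np.ndarray:
    """Convert per-pixel class logits of shape (C, H, W) into a binary
    hair mask of shape (H, W) with values 0/255."""
    class_map = np.argmax(logits, axis=0)
    return np.where(class_map == HAIR_CLASS_ID, 255, 0).astype(np.uint8)
```

In the real pipeline the logits would come from running the SegFormer checkpoint on the source image and upsampling to the input resolution before the argmax.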
2. **Combined Face-Hair Mask:**
- Implemented `create_face_and_hair_mask` in `modules/processors/frame/face_swapper.py` to generate a unified mask for both face (from landmarks) and segmented hair from the source image.
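The real `create_face_and_hair_mask` builds the face region from landmarks (e.g. a filled convex hull) and the hair region from the segmenter; once both exist as binary masks, merging them is a per-pixel union. A minimal numpy sketch, assuming both masks are uint8 0/255 arrays of the same shape:

```python
import numpy as np

def combine_face_and_hair_masks(face_mask: np.ndarray,
                                hair_mask: np.ndarray) -> np.ndarray:
    """Union of two binary uint8 masks (0/255) covering face and hair."""
    if face_mask.shape != hair_mask.shape:
        raise ValueError("masks must have the same shape")
    return np.maximum(face_mask, hair_mask)
```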
3. **Enhanced Swapping Logic:**
- Modified `swap_face` and related processing functions (`process_frame`, `process_frame_v2`, `process_frames`, `process_image`) to utilize the full source image (`source_frame_full`).
- The `swap_face` function now performs the standard face swap and then:
- Segments hair from the `source_frame_full`.
- Warps the hair and its mask to the target face's position using an affine transformation estimated from facial landmarks.
- Applies color correction (`apply_color_transfer`) to the warped hair.
   - Blends the hair onto the target frame, preferring `cv2.seamlessClone` for improved realism.
- Existing mouth mask logic is preserved and applied to the final composited frame.
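One common way to implement a color correction like `apply_color_transfer` is Reinhard-style per-channel mean/variance matching. The sketch below works directly in the input color space (implementations often convert to LAB first) and is only an assumption about how the step works, not the project's exact code:

```python
import numpy as np

def color_transfer_sketch(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift the per-channel mean/std of `source` (the warped hair) toward
    the statistics of `target` (the region it will be blended into)."""
    s = source.astype(np.float64)
    t = target.astype(np.float64)
    s_mean, s_std = s.mean(axis=(0, 1)), s.std(axis=(0, 1))
    t_mean, t_std = t.mean(axis=(0, 1)), t.std(axis=(0, 1))
    s_std = np.where(s_std == 0, 1.0, s_std)  # avoid division by zero
    out = (s - s_mean) / s_std * t_std + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```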
4. **Webcam Integration:**
- Updated the webcam processing loop in `modules/ui.py` (`create_webcam_preview`) to correctly load and pass the `source_frame_full` to the frame processors.
- This enables hair swapping in live webcam mode.
- Added error handling for source image loading in webcam mode.
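The error handling around source image loading can follow a simple pattern: validate once before entering the capture loop and bail out with a log message instead of crashing mid-stream. A hedged sketch — this only reads raw bytes; the real code would decode with `cv2.imread` and check the result for `None`:

```python
import os

def load_source_frame_bytes(path: str):
    """Read the source image file once before the webcam loop.
    Returns the raw bytes, or None (with a log line) on failure."""
    if not os.path.isfile(path):
        print(f"[webcam] source image not found: {path}")
        return None
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        print(f"[webcam] source image is empty: {path}")
        return None
    return data
```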
These changes aim to produce more realistic face swaps that include hair. Further testing and refinement of the blending parameters may be needed for optimal results across all scenarios.
Followed the `README` but ran into some errors running it locally. Made a few tweaks and got it working on my M3 Pro. Found this PR (Failing to run on Apple Silicon Mac M3) and thought improving the instructions might help others. Hope this helps!
great tool guys, thx a lot
- Add explicit checks for face detection results (source and target faces).
- Handle cases when face embeddings are not available, preventing AttributeError.
- Provide meaningful log messages for easier debugging.
Made changes for Apple Silicon.
Otherwise you get:

```
ERROR: Could not find a version that satisfies the requirement torch==2.5.1+cu118 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0)
ERROR: No matching distribution found for torch==2.5.1+cu118
```