I've made some enhancements to improve the face swap quality, color blending, and performance options in your code.
Here's a summary of the key changes:
1. **Upgraded Face Swapping Model:**
* I've updated the system to use a newer model (`inswapper_128.onnx`), which should provide a noticeable improvement in the base quality of the swapped faces.
* The model download logic in `modules/processors/frame/face_swapper.py` has been updated accordingly (a loading sketch follows this list).
2. **Improved Face Enhancement (GFPGAN):**
* I've changed the `upscale` parameter in `modules/processors/frame/face_enhancer.py` from `1` to `2`, which should give enhanced faces more detail and sharpness (a construction sketch follows this list).
3. **Statistical Color Correction:**
* I've integrated a new color correction method into `modules/processors/frame/face_swapper.py`. It uses statistical color transfer to better match skin tones and lighting between the swapped face and the target frame, significantly improving blending (a minimal sketch follows this list).
* This feature is controlled by a global setting.
4. **Optimized Mouth Masking Logic:**
* I've made several mouth-masking parameters in `modules/processors/frame/face_swapper.py` configurable, with new, more performant defaults. These changes should reduce CPU load when mouth masking is enabled (hypothetical defaults are sketched after this list).
5. **Performance Considerations & Future Work:**
* Model inference remains the most computationally intensive step; these upgrades prioritize quality over raw speed.
* The mouth-masking optimizations help to offset some of the CPU overhead added by the new color correction.
* I recommend formally declaring the new global variables in `modules/globals.py` and exposing them as command-line arguments (an argparse sketch follows this list).
* Developing a comprehensive test suite would be beneficial to ensure robustness and track quality/performance over time.
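
For reference, here is a minimal sketch of how the `inswapper_128.onnx` model can be fetched and loaded in `face_swapper.py`. The download URL, cache path, and provider list are placeholder assumptions rather than the repository's actual values; only `insightface.model_zoo.get_model` is a known insightface API.

```python
# Minimal sketch of the swapper model download/loading path.
# The URL, cache path, and provider list are illustrative assumptions.
import os
import urllib.request

import insightface

MODEL_URL = "https://example.com/models/inswapper_128.onnx"  # placeholder URL
MODEL_PATH = os.path.join("models", "inswapper_128.onnx")

def get_face_swapper():
    # Download the ONNX model once and cache it locally.
    if not os.path.exists(MODEL_PATH):
        os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    # Load the swapper through insightface's model zoo.
    return insightface.model_zoo.get_model(
        MODEL_PATH, providers=["CPUExecutionProvider"]
    )
```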
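The enhancer change is a single constructor argument. The sketch below shows GFPGAN built with `upscale=2`; the model path is an assumed local location, while `GFPGANer` and its `enhance()` call come from the `gfpgan` package.

```python
# Sketch of constructing GFPGAN with upscale=2 (previously 1).
from gfpgan import GFPGANer

enhancer = GFPGANer(
    model_path="models/GFPGANv1.4.pth",  # assumed local model location
    upscale=2,             # was 1; restores faces at twice the crop resolution
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,
)

def enhance_frame(frame):
    # paste_back=True returns the full frame with the restored face composited back in.
    _, _, restored = enhancer.enhance(frame, paste_back=True)
    return restored
```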
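The color correction follows the standard Reinhard-style statistical transfer: match the mean and standard deviation of the swapped region to the target region in LAB space. The sketch below illustrates the technique; the function name and exact details may differ from what is in the repository.

```python
# Reinhard-style statistical color transfer in LAB space.
# Function name and epsilon are illustrative; the repo's version may differ.
import cv2
import numpy as np

def apply_color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift the swapped face (`source`, BGR) toward the color statistics of `target` (BGR)."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))

    # Normalize each LAB channel of the source, then rescale to the target's statistics.
    out = (src - src_mean) * (tgt_std / (src_std + 1e-6)) + tgt_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```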
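For the mouth-masking changes, the exact parameter names are not listed above; the values below are purely hypothetical and only illustrate the kind of tunables and lighter defaults involved.

```python
# Hypothetical mouth-mask tunables (names and defaults are illustrative only).
MOUTH_MASK_BLUR_KERNEL = 11     # smaller Gaussian kernel -> cheaper feathering per frame
MOUTH_MASK_PADDING = 0.10       # fraction of face height padded around the mouth box
MOUTH_MASK_UPDATE_INTERVAL = 2  # recompute the mask every N frames to cut CPU load
```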
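Finally, one possible way to wire the new settings into `modules/globals.py` and the CLI. The flag names and attribute names here are illustrative, not the project's established options.

```python
# Hypothetical CLI wiring for the new settings; flag and attribute names are illustrative.
import argparse

import modules.globals  # the project's shared settings module

def add_swap_args(parser: argparse.ArgumentParser) -> None:
    parser.add_argument("--color-correction", action="store_true",
                        help="enable statistical color transfer after swapping")
    parser.add_argument("--mouth-mask-blur", type=int, default=11,
                        help="Gaussian kernel size used to feather the mouth mask")

def apply_swap_args(args: argparse.Namespace) -> None:
    # Copy parsed values onto the globals module read by the frame processors.
    modules.globals.color_correction = args.color_correction
    modules.globals.mouth_mask_blur = args.mouth_mask_blur
```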
These changes collectively address your request for improved face swap quality and provide options for optimizing performance.