Compare commits — 79 commits: e4af521592 ... d8a5cdbc19

d8a5cdbc19, 6219da4b1b, 22e1110ec4, 82d5d34912, 60e82ea200, 8be7368949, 5003c04386, a50ea98bc2, 6a9bf2acfb, 395cecf11d, ebf4e95c3a, 5974ba2a68, 75c53ac7aa, 8aeb406ea2, 8b3bd734cf, b0aac8bd04, 9dc3c3e9c2, 21989d4a49, b97185d2bf, 81da9a23ca, 007867a6f6, 7ec9d61608, eeff1a87fa, bc1149cd80, 11c10b354f, 71aae3fe07, b995eca033, b17e52dea2, 3a858847e3, 77c19d1073, 7472dfb694, 41c6916273, ed7a21687c, 5ce991651d, 432984b3b6, 47c8f7acc0, 606137c58f, 76b94ac034, 84ca1dc2f2, 681c20dbbd, c240f6e31c, ba9d58e04e, 4bb979faf0, eae69c4b47, f7823906d1, a1d9b73742, 5f5fe8890a, a9e8f27360, de4f765878, c72582506d, 7fb6b54c0b, d6236a0eed, 6171141505, 08adb53b8f, 9e5446582e, b9c7c0db6f, cab8b9afcb, 4d8ba6396a, e4761e4d66, a840986159, 4874282642, 71c33437fc, a39b2e8d81, a7e775f918, 5919995fa1, 8746c9bd36, 6a9ac5b70a, 916c2f82d8, 80f6ea9e65, 9e24281a94, 82b527487a, abde84ea57, c599bb3e34, 39db53abd6, 29c9c119d3, fad626e84c, 5ef255c3c3, 6f6f93a4ad, c75f941716
@@ -1,38 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

***[Remove this] The issue will be closed without notice and be considered spam if the template is not followed.***

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Error Message**

**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
`<The error message in terminal>`

**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Version [e.g. 22]
- GPU
- CPU

**Additional context**
Add any other context about the problem here.

**Confirmation (Mandatory)**
- [ ] I have followed the template
- [ ] This is not a query about how to increase performance
- [ ] I have checked the issues page, and this is not a duplicate

@@ -24,3 +24,4 @@ models/GFPGANv1.4.pth
models/DMDNet.pth
faceswap/
.vscode/
switch_states.json

@@ -1 +1,38 @@
Please always push on the experimental branch to ensure we don't mess with the main branch. All the tests will be done on the experimental branch and pushed to the main branch after a few days of testing.
# Collaboration Guidelines and Codebase Quality Standards

To ensure smooth collaboration and maintain the high quality of our codebase, please adhere to the following guidelines:

## Branching Strategy

* **`premain`**:
  * Always push your changes to the `premain` branch initially.
  * This safeguards the `main` branch from unintentional disruptions.
  * All tests will be performed on the `premain` branch.
  * Changes will only be merged into `main` after several hours or days of rigorous testing.
* **`experimental`**:
  * For large or potentially disruptive changes, use the `experimental` branch.
  * This allows for thorough discussion and review before considering a merge into `main`. A minimal workflow sketch follows.
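In practice the day-to-day flow looks roughly like this (a sketch; the branch names are the ones defined above, and the commands are standard git — adapt to your own remote setup):

```bash
# start from the staging branch, never from main
git fetch origin
git checkout premain
git pull origin premain

# do your work, then push back to premain
# (or to experimental for large/disruptive changes)
git add -A
git commit -m "describe the change"
git push origin premain
```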
## Pre-Pull Request Checklist

Before creating a Pull Request (PR), ensure you have completed the following tests:

### Functionality

* **Realtime Faceswap**:
  * Test with face enhancer **enabled** and **disabled**.
* **Map Faces**:
  * Test with both options (**enabled** and **disabled**).
* **Camera Listing**:
  * Verify that all cameras are listed accurately.

### Stability

* **Realtime FPS**:
  * Confirm that there is no drop in real-time frames per second (FPS).
* **Boot Time**:
  * Changes should not negatively impact the boot time of either the application or the real-time faceswap feature.
* **GPU Overloading**:
  * Test for a minimum of 15 minutes to guarantee no GPU overloading, which could lead to crashes; see the monitoring sketch below.
* **App Performance**:
  * The application should remain responsive and not exhibit any lag.
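One low-effort way to watch for GPU overload during the 15-minute soak test (a sketch; assumes an Nvidia GPU with `nvidia-smi` on the PATH):

```bash
# log utilization, memory, and temperature every 5 seconds;
# stop with Ctrl+C after ~15 minutes and inspect the CSV for spikes
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,temperature.gpu \
           --format=csv -l 5 | tee gpu_soak_test.csv
```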

README.md (246 changed lines)

@@ -4,11 +4,16 @@
Real-time face swap and video deepfake with a single click and only a single image.
</p>

<p align="center">
<a href="https://trendshift.io/repositories/11395" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11395" alt="hacksider%2FDeep-Live-Cam | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>

<p align="center">
<img src="media/demo.gif" alt="Demo GIF">
<img src="media/avgpcperformancedemo.gif" alt="Performance Demo GIF">
</p>

## Disclaimer

This software is intended as a productive contribution to the AI-generated media industry. It aims to assist artists with tasks like animating custom characters or using them as models for clothing, etc.

@@ -22,11 +27,15 @@ Users are expected to use this software responsibly and legally. If using a real

[](https://hacksider.gumroad.com/l/vccdmm)

[Download latest pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm) - No Manual Installation/Downloading required.
[Download latest pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm) - No manual installation/downloading required, plus early access to new features for testing.

## Installation (Manual)
**Please be aware that the installation needs technical skills and is NOT for beginners; consider downloading the prebuilt version. Please do NOT open platform- or installation-related issues on GitHub before discussing them on the Discord server.**
### Basic Installation (CPU)
**Please be aware that the installation needs technical skills and is not for beginners; consider downloading the prebuilt version.**

<details>
<summary>Click to see the process</summary>

### Installation

This is more likely to work on your computer but will be slower as it utilizes the CPU.

@@ -68,14 +77,11 @@ brew install python-tk@3.10
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).

### GPU Acceleration (Optional)

<details>
<summary>Click to see the details</summary>
### GPU Acceleration

**CUDA Execution Provider (Nvidia)**

1. Install [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)
1. Install [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) or [CUDA Toolkit 12.1.1](https://developer.nvidia.com/cuda-12-1-1-download-archive)
2. Install dependencies:
```bash
pip uninstall onnxruntime onnxruntime-gpu

@@ -155,45 +161,34 @@ python run.py --execution-provider openvino
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.

## Features - Everything is realtime

## Features
### Mouth Mask

### Resizable Preview Window
**Retain your original mouth using Mouth Mask**

Dynamically improve performance using the `--live-resizable` parameter.

### Face Mapping

Track and change faces on the fly.
**Use different faces on multiple subjects**

**Source Video:**
### Your Movie, Your Face

**Enable Face Mapping:**

**Map the Faces:**

**See the Magic!**
**Watch movies with any face in realtime**

**Watch movies in realtime:**

It's as simple as opening a movie on the screen, and selecting OBS as your camera!
## Benchmarks

**Nearly 0% detection!**

## Command Line Arguments

## Command Line Arguments (Unmaintained)

```
options:

@@ -207,6 +202,7 @@ options:
  --keep-frames                keep temporary frames
  --many-faces                 process every face
  --map-faces                  map source target faces
  --mouth-mask                 mask the mouth region
  --nsfw-filter                filter the NSFW image or video
  --video-encoder {libx264,libx265,libvpx-vp9} adjust output video encoder
  --video-quality [0-51]       adjust output video quality

@@ -221,170 +217,21 @@ options:
Looking for a CLI mode? Using the -s/--source argument will run the program in CLI mode.

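A typical headless invocation might look like this (a sketch; `-s/--source` is documented above, while the target/output flag names follow the usual roop-style CLI and should be verified against `python run.py -h`):

```bash
# hypothetical example: swap face.jpg onto input.mp4 without opening the UI
python run.py -s face.jpg -t input.mp4 -o output.mp4 \
    --execution-provider cuda --mouth-mask --video-encoder libx264
```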
## Webcam Mode on WSL2 Ubuntu (Optional)
## Press
**We are always open to criticism and ready to improve; that's why we didn't cherry-pick anything.**

<details>
<summary>Click to see the details</summary>

If you want to use WSL2 on Windows 11, you will notice that Ubuntu WSL2 doesn't come with USB webcam support in the kernel. You need to do two things: compile the kernel with the right modules integrated, and forward your USB webcam from Windows to Ubuntu with the usbipd app. Here are the detailed steps:

This tutorial will guide you through the process of setting up WSL2 Ubuntu with USB webcam support, rebuilding the kernel, and preparing the environment for the Deep-Live-Cam project.

**1. Install WSL2 Ubuntu**

Install WSL2 Ubuntu from the Microsoft Store or using PowerShell:
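For example (a sketch; this is the standard command on current Windows 11 builds, run from an elevated PowerShell):

```powershell
wsl --install -d Ubuntu
```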
**2. Enable USB Support in WSL2**

1. Install the USB/IP tool for Windows:
[https://learn.microsoft.com/en-us/windows/wsl/connect-usb](https://learn.microsoft.com/en-us/windows/wsl/connect-usb)

2. In Windows PowerShell (as Administrator), connect your webcam to WSL:

```powershell
usbipd list
usbipd bind --busid x-x # Replace x-x with your webcam's bus ID
usbipd attach --wsl --busid x-x # Replace x-x with your webcam's bus ID
```
You need to redo the above every time you reboot WSL or reconnect your webcam/USB device.

**3. Rebuild WSL2 Ubuntu Kernel with USB and Webcam Modules**

Follow these steps to rebuild the kernel:

1. Start with this guide: [https://github.com/PINTO0309/wsl2_linux_kernel_usbcam_enable_conf](https://github.com/PINTO0309/wsl2_linux_kernel_usbcam_enable_conf)

2. When you reach the `sudo wget github.com/...PINTO0309` step, which won't work for newer kernel versions, follow this video instead, or alternatively follow the video tutorial from the beginning:
[https://www.youtube.com/watch?v=t_YnACEPmrM](https://www.youtube.com/watch?v=t_YnACEPmrM)

Additional info: [https://askubuntu.com/questions/1413377/camera-not-working-in-cheese-in-wsl2](https://askubuntu.com/questions/1413377/camera-not-working-in-cheese-in-wsl2)

3. After rebuilding, restart WSL with the new kernel.

**4. Set Up Deep-Live-Cam Project**
Within Ubuntu:
1. Clone the repository:

```bash
git clone https://github.com/hacksider/Deep-Live-Cam
```

2. Follow the installation instructions in the repository, including CUDA Toolkit 11.8; make 100% sure it's not CUDA Toolkit 12.x.

**5. Verify and Load Kernel Modules**

1. Check if USB and webcam modules are built into the kernel:

```bash
zcat /proc/config.gz | grep -i "CONFIG_USB_VIDEO_CLASS"
```

2. If modules are loadable (m), not built-in (y), check if the file exists:

```bash
ls /lib/modules/$(uname -r)/kernel/drivers/media/usb/uvc/
```

3. Load the module and check for errors (optional if built-in):

```bash
sudo modprobe uvcvideo
dmesg | tail
```

4. Verify video devices:

```bash
sudo ls -al /dev/video*
```

**6. Set Up Permissions**

1. Add user to video group and set permissions:

```bash
sudo usermod -a -G video $USER
sudo chgrp video /dev/video0 /dev/video1
sudo chmod 660 /dev/video0 /dev/video1
```

2. Create a udev rule for permanent permissions:

```bash
sudo nano /etc/udev/rules.d/81-webcam.rules
```

Add this content:

```
KERNEL=="video[0-9]*", GROUP="video", MODE="0660"
```

3. Reload udev rules:

```bash
sudo udevadm control --reload-rules && sudo udevadm trigger
```

4. Log out and log back into your WSL session.

5. Start Deep-Live-Cam with `python run.py --execution-provider cuda --max-memory 8`, where 8 can be changed to the amount of VRAM (in GB) your GPU has, minus 1-2 GB. If you have an RTX 3080 with 10 GB, I suggest using 8. Leave some for Windows.

**Final Notes**

- Steps 5 and 6 may be optional if the modules are built into the kernel and permissions are already set correctly.
- Always ensure you're using compatible versions of CUDA, ONNX, and other dependencies.
- If issues persist, consider checking the Deep-Live-Cam project's specific requirements and troubleshooting steps.

By following these steps, you should have a WSL2 Ubuntu environment with USB webcam support ready for the Deep-Live-Cam project. If you encounter any issues, refer back to the specific error messages and troubleshooting steps provided.

**Troubleshooting CUDA Issues**

If you encounter this error:

```
[ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcufft.so.10: cannot open shared object file: No such file or directory
```

Follow these steps:

1. Install CUDA Toolkit 11.8 (ONNX 1.16.3 requires CUDA 11.x, not 12.x):
[https://developer.nvidia.com/cuda-11-8-0-download-archive](https://developer.nvidia.com/cuda-11-8-0-download-archive)
select: Linux, x86_64, WSL-Ubuntu, 2.0, deb (local)
2. Check CUDA version:

```bash
/usr/local/cuda/bin/nvcc --version
```

3. If the wrong version is installed, remove it completely:
[https://askubuntu.com/questions/530043/removing-nvidia-cuda-toolkit-and-installing-new-one](https://askubuntu.com/questions/530043/removing-nvidia-cuda-toolkit-and-installing-new-one)

4. Install CUDA Toolkit 11.8 again [https://developer.nvidia.com/cuda-11-8-0-download-archive](https://developer.nvidia.com/cuda-11-8-0-download-archive), select: Linux, x86_64, WSL-Ubuntu, 2.0, deb (local)

```bash
sudo apt-get -y install cuda-toolkit-11-8
```
</details>

## Future Updates & Roadmap

For the latest experimental builds and features, see the [experimental branch](https://github.com/hacksider/Deep-Live-Cam/tree/experimental).

**TODO:**

- [ ] Develop a version for web app/service
- [ ] Speed up model loading
- [ ] Speed up real-time face swapping
- [x] Support multiple faces
- [x] UI/UX enhancements for desktop app

This is an open-source project developed in our free time. Updates may be delayed.

**Tips and Links:**
- [How to make the most of Deep-Live-Cam](https://hacksider.gumroad.com/p/how-to-make-the-most-on-deep-live-cam)
- Face enhancer is good, but still very slow for any live streaming purpose.
- [*"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"*](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - Ars Technica
- [*"Thanks Deep Live Cam, shapeshifters are among us now"*](https://dataconomy.com/2024/08/15/what-is-deep-live-cam-github-deepfake/) - Dataconomy
- [*"This free AI tool lets you become anyone during video-calls"*](https://www.newsbytesapp.com/news/science/deep-live-cam-ai-impersonation-tool-goes-viral/story) - NewsBytes
- [*"OK, this viral AI live stream software is truly terrifying"*](https://www.creativebloq.com/ai/ok-this-viral-ai-live-stream-software-is-truly-terrifying) - Creative Bloq
- [*"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"*](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - PetaPixel
- [*"Deep-Live-Cam Uses AI to Transform Your Face in Real-Time, Celebrities Included"*](https://www.techeblog.com/deep-live-cam-ai-transform-face/) - TechEBlog
- [*"An AI tool that "makes you look like anyone" during a video call is going viral online"*](https://telegrafi.com/en/a-tool-that-makes-you-look-like-anyone-during-a-video-call-is-going-viral-on-the-Internet/) - Telegrafi
- [*"This Deepfake Tool Turning Images Into Livestreams is Topping the GitHub Charts"*](https://decrypt.co/244565/this-deepfake-tool-turning-images-into-livestreams-is-topping-the-github-charts) - Emerge
- [*"New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces"*](https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/) - Digital Music News
- [*"This real-time webcam deepfake tool raises alarms about the future of identity theft"*](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography
- [*"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"*](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - SomeOrdinaryGamers
- [*"Alright look look look, now look chat, we can do any face we want to look like chat"*](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - IShowSpeed

## Credits

@@ -395,13 +242,16 @@ This is an open-source project developed in our free time. Updates may be delayed.
- [GosuDRM](https://github.com/GosuDRM) : for open version of roop
- [pereiraroland26](https://github.com/pereiraroland26) : Multiple faces support
- [vic4key](https://github.com/vic4key) : For supporting/contributing on this project
- [KRSHH](https://github.com/KRSHH) : For updating the UI
- [KRSHH](https://github.com/KRSHH) : For his contributions
- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
- Foot Note: [This is originally roop-cam, see the full history of the code here.](https://github.com/hacksider/roop-cam) Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
- Foot Note: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
- All the wonderful users who helped making this project go viral by starring the repo ❤️

[](https://github.com/hacksider/Deep-Live-Cam/stargazers)

## Contributions

## Star History
## Stars to the Moon 🚀

<a href="https://star-history.com/#hacksider/deep-live-cam&Date">
<picture>

Binary/media changes: one image added (2.8 MiB); media/demo.mp4 changed; images removed (76 KiB, 104 KiB, 4.0 MiB, 8.6 MiB, 794 KiB, 4.3 MiB); images added (5.3 MiB, 13 MiB); media/movie.gif changed (1.6 MiB → 14 MiB).

@@ -1 +1,4 @@
just put the models in this folder
just put the models in this folder -

https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth

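A quick way to fetch both models from the shell (a sketch; assumes `wget` is installed and your current directory is this `models/` folder):

```bash
# download the face-swapper and face-enhancer weights listed above
wget -O inswapper_128_fp16.onnx "https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true"
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
```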
@@ -41,6 +41,7 @@ def parse_args() -> None:
    program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False)
    program.add_argument('--nsfw-filter', help='filter the NSFW image or video', dest='nsfw_filter', action='store_true', default=False)
    program.add_argument('--map-faces', help='map source target faces', dest='map_faces', action='store_true', default=False)
    program.add_argument('--mouth-mask', help='mask the mouth region', dest='mouth_mask', action='store_true', default=False)
    program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9'])
    program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]')
    program.add_argument('--live-mirror', help='mirror the live camera display, as in a front-facing camera frame', dest='live_mirror', action='store_true', default=False)

@@ -67,6 +68,7 @@ def parse_args() -> None:
    modules.globals.keep_audio = args.keep_audio
    modules.globals.keep_frames = args.keep_frames
    modules.globals.many_faces = args.many_faces
    modules.globals.mouth_mask = args.mouth_mask
    modules.globals.nsfw_filter = args.nsfw_filter
    modules.globals.map_faces = args.map_faces
    modules.globals.video_encoder = args.video_encoder

@@ -26,7 +26,7 @@ nsfw_filter = False
video_encoder = None
video_quality = None
live_mirror = False
live_resizable = False
live_resizable = True
max_memory = None
execution_providers: List[str] = []
execution_threads = None

@@ -36,3 +36,8 @@ fp_ui: Dict[str, bool] = {"face_enhancer": False}
camera_input_combobox = None
webcam_preview_running = False
show_fps = False
mouth_mask = False
show_mouth_mask_box = False
mask_feather_ratio = 8
mask_down_size = 0.50
mask_size = 1

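These `mask_*` values are the tuning knobs consumed by the mouth-mask code later in this diff; one way to experiment is to override them before a run (a sketch; the effect descriptions follow from `create_lower_mouth_mask` and the feathering code shown below):

```python
import modules.globals

# hypothetical tuning session
modules.globals.mouth_mask = True
modules.globals.mask_feather_ratio = 12  # feather = box_size // ratio, so a larger ratio gives a harder edge
modules.globals.mask_down_size = 0.75    # expansion_factor = 1 + mask_down_size (polygon grows outward)
modules.globals.mask_size = 2            # top-lip extension = mask_size * 0.5
```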
|||
|
|
@ -1,3 +1,3 @@
|
|||
name = 'Deep Live Cam'
|
||||
version = '1.6.0'
|
||||
edition = 'Portable'
|
||||
name = 'Deep-Live-Cam'
|
||||
version = '1.7.5'
|
||||
edition = 'GitHub Edition'
|
||||
|
|
|
|||
|
|
@@ -9,9 +9,10 @@ import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face
from modules.typing import Frame, Face
import platform
import torch
from modules.utilities import (
    conditional_download,
    resolve_relative_path,
    is_image,
    is_video,
)

@@ -21,9 +22,14 @@ THREAD_SEMAPHORE = threading.Semaphore()
THREAD_LOCK = threading.Lock()
NAME = "DLC.FACE-ENHANCER"

abs_dir = os.path.dirname(os.path.abspath(__file__))
models_dir = os.path.join(
    os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
)


def pre_check() -> bool:
    download_directory_path = resolve_relative_path("..\models")
    download_directory_path = models_dir
    conditional_download(
        download_directory_path,
        [

@@ -47,12 +53,18 @@ def get_face_enhancer() -> Any:

    with THREAD_LOCK:
        if FACE_ENHANCER is None:
            if os.name == "nt":
                model_path = resolve_relative_path("..\models\GFPGANv1.4.pth")
            # todo: set models path https://github.com/TencentARC/GFPGAN/issues/399
            model_path = os.path.join(models_dir, "GFPGANv1.4.pth")

            match platform.system():
                case "Darwin":  # Mac OS
                    if torch.backends.mps.is_available():
                        mps_device = torch.device("mps")
                        FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device)  # type: ignore[attr-defined]
                    else:
                        model_path = resolve_relative_path("../models/GFPGANv1.4.pth")
                        FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]
                case _:  # Other OS
                    FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]

    return FACE_ENHANCER

@@ -2,35 +2,54 @@ from typing import Any, List
import cv2
import insightface
import threading

import numpy as np
import modules.globals
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face
from modules.typing import Face, Frame
from modules.utilities import conditional_download, resolve_relative_path, is_image, is_video
from modules.utilities import (
    conditional_download,
    is_image,
    is_video,
)
from modules.cluster_analysis import find_closest_centroid
import os

FACE_SWAPPER = None
THREAD_LOCK = threading.Lock()
NAME = 'DLC.FACE-SWAPPER'
NAME = "DLC.FACE-SWAPPER"

abs_dir = os.path.dirname(os.path.abspath(__file__))
models_dir = os.path.join(
    os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
)


def pre_check() -> bool:
    download_directory_path = resolve_relative_path('../models')
    conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx'])
    download_directory_path = abs_dir
    conditional_download(
        download_directory_path,
        [
            "https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx"
        ],
    )
    return True


def pre_start() -> bool:
    if not modules.globals.map_faces and not is_image(modules.globals.source_path):
        update_status('Select an image for source path.', NAME)
        update_status("Select an image for source path.", NAME)
        return False
    elif not modules.globals.map_faces and not get_one_face(cv2.imread(modules.globals.source_path)):
        update_status('No face in source path detected.', NAME)
    elif not modules.globals.map_faces and not get_one_face(
        cv2.imread(modules.globals.source_path)
    ):
        update_status("No face in source path detected.", NAME)
        return False
    if not is_image(modules.globals.target_path) and not is_video(modules.globals.target_path):
        update_status('Select an image or video for target path.', NAME)
    if not is_image(modules.globals.target_path) and not is_video(
        modules.globals.target_path
    ):
        update_status("Select an image or video for target path.", NAME)
        return False
    return True

@@ -40,17 +59,45 @@ def get_face_swapper() -> Any:

    with THREAD_LOCK:
        if FACE_SWAPPER is None:
            model_path = resolve_relative_path('../models/inswapper_128_fp16.onnx')
            FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
            model_path = os.path.join(models_dir, "inswapper_128_fp16.onnx")
            FACE_SWAPPER = insightface.model_zoo.get_model(
                model_path, providers=modules.globals.execution_providers
            )
    return FACE_SWAPPER


def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
    face_swapper = get_face_swapper()

    # Apply the face swap
    swapped_frame = face_swapper.get(
        temp_frame, target_face, source_face, paste_back=True
    )

    if modules.globals.mouth_mask:
        # Create a mask for the target face
        face_mask = create_face_mask(target_face, temp_frame)

        # Create the mouth mask
        mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon = (
            create_lower_mouth_mask(target_face, temp_frame)
        )

        # Apply the mouth area
        swapped_frame = apply_mouth_area(
            swapped_frame, mouth_cutout, mouth_box, face_mask, lower_lip_polygon
        )

        if modules.globals.show_mouth_mask_box:
            mouth_mask_data = (mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon)
            swapped_frame = draw_mouth_mask_visualization(
                swapped_frame, target_face, mouth_mask_data
            )

    return swapped_frame


def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
    # Ensure the frame is in RGB format if color correction is enabled
    if modules.globals.color_correction:
        temp_frame = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)

@@ -71,35 +118,44 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
        if modules.globals.many_faces:
            source_face = default_source_face()
            for map in modules.globals.souce_target_map:
                target_face = map['target']['face']
                target_face = map["target"]["face"]
                temp_frame = swap_face(source_face, target_face, temp_frame)

        elif not modules.globals.many_faces:
            for map in modules.globals.souce_target_map:
                if "source" in map:
                    source_face = map['source']['face']
                    target_face = map['target']['face']
                    source_face = map["source"]["face"]
                    target_face = map["target"]["face"]
                    temp_frame = swap_face(source_face, target_face, temp_frame)

    elif is_video(modules.globals.target_path):
        if modules.globals.many_faces:
            source_face = default_source_face()
            for map in modules.globals.souce_target_map:
                target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
                target_frame = [
                    f
                    for f in map["target_faces_in_frame"]
                    if f["location"] == temp_frame_path
                ]

                for frame in target_frame:
                    for target_face in frame['faces']:
                    for target_face in frame["faces"]:
                        temp_frame = swap_face(source_face, target_face, temp_frame)

        elif not modules.globals.many_faces:
            for map in modules.globals.souce_target_map:
                if "source" in map:
                    target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
                    source_face = map['source']['face']
                    target_frame = [
                        f
                        for f in map["target_faces_in_frame"]
                        if f["location"] == temp_frame_path
                    ]
                    source_face = map["source"]["face"]

                    for frame in target_frame:
                        for target_face in frame['faces']:
                        for target_face in frame["faces"]:
                            temp_frame = swap_face(source_face, target_face, temp_frame)

    else:
        detected_faces = get_many_faces(temp_frame)
        if modules.globals.many_faces:

@@ -110,25 +166,46 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:

        elif not modules.globals.many_faces:
            if detected_faces:
                if len(detected_faces) <= len(modules.globals.simple_map['target_embeddings']):
                if len(detected_faces) <= len(
                    modules.globals.simple_map["target_embeddings"]
                ):
                    for detected_face in detected_faces:
                        closest_centroid_index, _ = find_closest_centroid(modules.globals.simple_map['target_embeddings'], detected_face.normed_embedding)
                        closest_centroid_index, _ = find_closest_centroid(
                            modules.globals.simple_map["target_embeddings"],
                            detected_face.normed_embedding,
                        )

                        temp_frame = swap_face(modules.globals.simple_map['source_faces'][closest_centroid_index], detected_face, temp_frame)
                        temp_frame = swap_face(
                            modules.globals.simple_map["source_faces"][
                                closest_centroid_index
                            ],
                            detected_face,
                            temp_frame,
                        )
                else:
                    detected_faces_centroids = []
                    for face in detected_faces:
                        detected_faces_centroids.append(face.normed_embedding)
                    i = 0
                    for target_embedding in modules.globals.simple_map['target_embeddings']:
                        closest_centroid_index, _ = find_closest_centroid(detected_faces_centroids, target_embedding)
                    for target_embedding in modules.globals.simple_map[
                        "target_embeddings"
                    ]:
                        closest_centroid_index, _ = find_closest_centroid(
                            detected_faces_centroids, target_embedding
                        )

                        temp_frame = swap_face(modules.globals.simple_map['source_faces'][i], detected_faces[closest_centroid_index], temp_frame)
                        temp_frame = swap_face(
                            modules.globals.simple_map["source_faces"][i],
                            detected_faces[closest_centroid_index],
                            temp_frame,
                        )
                        i += 1
    return temp_frame


def process_frames(source_path: str, temp_frame_paths: List[str], progress: Any = None) -> None:
def process_frames(
    source_path: str, temp_frame_paths: List[str], progress: Any = None
) -> None:
    if not modules.globals.map_faces:
        source_face = get_one_face(cv2.imread(source_path))
        for temp_frame_path in temp_frame_paths:

@@ -162,7 +239,9 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:
        cv2.imwrite(output_path, result)
    else:
        if modules.globals.many_faces:
            update_status('Many faces enabled. Using first source image. Progressing...', NAME)
            update_status(
                "Many faces enabled. Using first source image. Progressing...", NAME
            )
        target_frame = cv2.imread(output_path)
        result = process_frame_v2(target_frame)
        cv2.imwrite(output_path, result)

@@ -170,5 +249,367 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:

def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
    if modules.globals.map_faces and modules.globals.many_faces:
        update_status('Many faces enabled. Using first source image. Progressing...', NAME)
    modules.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
        update_status(
            "Many faces enabled. Using first source image. Progressing...", NAME
        )
    modules.processors.frame.core.process_video(
        source_path, temp_frame_paths, process_frames
    )


def create_lower_mouth_mask(
    face: Face, frame: Frame
) -> (np.ndarray, np.ndarray, tuple, np.ndarray):
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    mouth_cutout = None
    landmarks = face.landmark_2d_106
    if landmarks is not None:
        # index:             0   1   2   3   4   5   6   7   8   9  10  11  12  13 14 15 16 17 18 19  20
        lower_lip_order = [65, 66, 62, 70, 69, 18, 19, 20, 21, 22, 23, 24,  0,  8, 7, 6, 5, 4, 3, 2, 65]
        lower_lip_landmarks = landmarks[lower_lip_order].astype(
            np.float32
        )  # Use float for precise calculations
        # Calculate the center of the landmarks
        center = np.mean(lower_lip_landmarks, axis=0)

        # Expand the landmarks outward
        expansion_factor = (
            1 + modules.globals.mask_down_size
        )  # Adjust this for more or less expansion
        expanded_landmarks = (lower_lip_landmarks - center) * expansion_factor + center

        # Extend the top lip part
        toplip_indices = [20, 0, 1, 2, 3, 4, 5]  # positions in lower_lip_order covering the top-lip landmarks
        toplip_extension = (
            modules.globals.mask_size * 0.5
        )  # Adjust this factor to control the extension
        for idx in toplip_indices:
            direction = expanded_landmarks[idx] - center
            direction = direction / np.linalg.norm(direction)
            expanded_landmarks[idx] += direction * toplip_extension

        # Extend the bottom part (chin area)
        chin_indices = [11, 12, 13, 14, 15, 16]  # positions in lower_lip_order covering the chin landmarks
        chin_extension = 2 * 0.2  # Adjust this factor to control the extension
        for idx in chin_indices:
            expanded_landmarks[idx][1] += (
                expanded_landmarks[idx][1] - center[1]
            ) * chin_extension

        # Convert back to integer coordinates
        expanded_landmarks = expanded_landmarks.astype(np.int32)

        # Calculate bounding box for the expanded lower mouth
        min_x, min_y = np.min(expanded_landmarks, axis=0)
        max_x, max_y = np.max(expanded_landmarks, axis=0)

        # Add some padding to the bounding box
        padding = int((max_x - min_x) * 0.1)  # 10% padding
        min_x = max(0, min_x - padding)
        min_y = max(0, min_y - padding)
        max_x = min(frame.shape[1], max_x + padding)
        max_y = min(frame.shape[0], max_y + padding)

        # Ensure the bounding box dimensions are valid
        if max_x <= min_x or max_y <= min_y:
            if (max_x - min_x) <= 1:
                max_x = min_x + 1
            if (max_y - min_y) <= 1:
                max_y = min_y + 1

        # Create the mask
        mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
        cv2.fillPoly(mask_roi, [expanded_landmarks - [min_x, min_y]], 255)

        # Apply Gaussian blur to soften the mask edges
        mask_roi = cv2.GaussianBlur(mask_roi, (15, 15), 5)

        # Place the mask ROI in the full-sized mask
        mask[min_y:max_y, min_x:max_x] = mask_roi

        # Extract the masked area from the frame
        mouth_cutout = frame[min_y:max_y, min_x:max_x].copy()

        # Return the expanded lower lip polygon in original frame coordinates
        lower_lip_polygon = expanded_landmarks

    return mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon

def draw_mouth_mask_visualization(
    frame: Frame, face: Face, mouth_mask_data: tuple
) -> Frame:
    landmarks = face.landmark_2d_106
    if landmarks is not None and mouth_mask_data is not None:
        mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon = (
            mouth_mask_data
        )

        vis_frame = frame.copy()

        # Ensure coordinates are within frame bounds
        height, width = vis_frame.shape[:2]
        min_x, min_y = max(0, min_x), max(0, min_y)
        max_x, max_y = min(width, max_x), min(height, max_y)

        # Adjust mask to match the region size
        mask_region = mask[0 : max_y - min_y, 0 : max_x - min_x]

        # Remove the color mask overlay
        # color_mask = cv2.applyColorMap((mask_region * 255).astype(np.uint8), cv2.COLORMAP_JET)

        # Ensure shapes match before blending
        vis_region = vis_frame[min_y:max_y, min_x:max_x]
        # Remove blending with color_mask
        # if vis_region.shape[:2] == color_mask.shape[:2]:
        #     blended = cv2.addWeighted(vis_region, 0.7, color_mask, 0.3, 0)
        #     vis_frame[min_y:max_y, min_x:max_x] = blended

        # Draw the lower lip polygon
        cv2.polylines(vis_frame, [lower_lip_polygon], True, (0, 255, 0), 2)

        # Remove the red box
        # cv2.rectangle(vis_frame, (min_x, min_y), (max_x, max_y), (0, 0, 255), 2)

        # Visualize the feathered mask
        feather_amount = max(
            1,
            min(
                30,
                (max_x - min_x) // modules.globals.mask_feather_ratio,
                (max_y - min_y) // modules.globals.mask_feather_ratio,
            ),
        )
        # Ensure kernel size is odd
        kernel_size = 2 * feather_amount + 1
        feathered_mask = cv2.GaussianBlur(
            mask_region.astype(float), (kernel_size, kernel_size), 0
        )
        feathered_mask = (feathered_mask / feathered_mask.max() * 255).astype(np.uint8)
        # Remove the feathered mask color overlay
        # color_feathered_mask = cv2.applyColorMap(feathered_mask, cv2.COLORMAP_VIRIDIS)

        # Ensure shapes match before blending feathered mask
        # if vis_region.shape == color_feathered_mask.shape:
        #     blended_feathered = cv2.addWeighted(vis_region, 0.7, color_feathered_mask, 0.3, 0)
        #     vis_frame[min_y:max_y, min_x:max_x] = blended_feathered

        # Add labels
        cv2.putText(
            vis_frame,
            "Lower Mouth Mask",
            (min_x, min_y - 10),
            cv2.FONT_HERSHEY_SIMPLEX,
            0.5,
            (255, 255, 255),
            1,
        )
        cv2.putText(
            vis_frame,
            "Feathered Mask",
            (min_x, max_y + 20),
            cv2.FONT_HERSHEY_SIMPLEX,
            0.5,
            (255, 255, 255),
            1,
        )

        return vis_frame
    return frame

def apply_mouth_area(
    frame: np.ndarray,
    mouth_cutout: np.ndarray,
    mouth_box: tuple,
    face_mask: np.ndarray,
    mouth_polygon: np.ndarray,
) -> np.ndarray:
    min_x, min_y, max_x, max_y = mouth_box
    box_width = max_x - min_x
    box_height = max_y - min_y

    if (
        mouth_cutout is None
        or box_width is None
        or box_height is None
        or face_mask is None
        or mouth_polygon is None
    ):
        return frame

    try:
        resized_mouth_cutout = cv2.resize(mouth_cutout, (box_width, box_height))
        roi = frame[min_y:max_y, min_x:max_x]

        if roi.shape != resized_mouth_cutout.shape:
            resized_mouth_cutout = cv2.resize(
                resized_mouth_cutout, (roi.shape[1], roi.shape[0])
            )

        color_corrected_mouth = apply_color_transfer(resized_mouth_cutout, roi)

        # Use the provided mouth polygon to create the mask
        polygon_mask = np.zeros(roi.shape[:2], dtype=np.uint8)
        adjusted_polygon = mouth_polygon - [min_x, min_y]
        cv2.fillPoly(polygon_mask, [adjusted_polygon], 255)

        # Apply feathering to the polygon mask
        feather_amount = min(
            30,
            box_width // modules.globals.mask_feather_ratio,
            box_height // modules.globals.mask_feather_ratio,
        )
        feathered_mask = cv2.GaussianBlur(
            polygon_mask.astype(float), (0, 0), feather_amount
        )
        feathered_mask = feathered_mask / feathered_mask.max()

        face_mask_roi = face_mask[min_y:max_y, min_x:max_x]
        combined_mask = feathered_mask * (face_mask_roi / 255.0)

        combined_mask = combined_mask[:, :, np.newaxis]
        blended = (
            color_corrected_mouth * combined_mask + roi * (1 - combined_mask)
        ).astype(np.uint8)

        # Apply face mask to blended result
        face_mask_3channel = (
            np.repeat(face_mask_roi[:, :, np.newaxis], 3, axis=2) / 255.0
        )
        final_blend = blended * face_mask_3channel + roi * (1 - face_mask_3channel)

        frame[min_y:max_y, min_x:max_x] = final_blend.astype(np.uint8)
    except Exception:
        # If any step fails, fall back to the unmodified frame
        pass

    return frame

def create_face_mask(face: Face, frame: Frame) -> np.ndarray:
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    landmarks = face.landmark_2d_106
    if landmarks is not None:
        # Convert landmarks to int32
        landmarks = landmarks.astype(np.int32)

        # Extract facial features
        right_side_face = landmarks[0:16]
        left_side_face = landmarks[17:32]
        right_eye = landmarks[33:42]
        right_eye_brow = landmarks[43:51]
        left_eye = landmarks[87:96]
        left_eye_brow = landmarks[97:105]

        # Calculate forehead extension
        right_eyebrow_top = np.min(right_eye_brow[:, 1])
        left_eyebrow_top = np.min(left_eye_brow[:, 1])
        eyebrow_top = min(right_eyebrow_top, left_eyebrow_top)

        face_top = np.min([right_side_face[0, 1], left_side_face[-1, 1]])
        forehead_height = face_top - eyebrow_top
        extended_forehead_height = int(forehead_height * 5.0)  # Extend upward by 5x the eyebrow-to-face-top distance

        # Create forehead points
        forehead_left = right_side_face[0].copy()
        forehead_right = left_side_face[-1].copy()
        forehead_left[1] -= extended_forehead_height
        forehead_right[1] -= extended_forehead_height

        # Combine all points to create the face outline
        face_outline = np.vstack(
            [
                [forehead_left],
                right_side_face,
                left_side_face[::-1],  # Reverse left side to create a continuous outline
                [forehead_right],
            ]
        )

        # Calculate padding
        padding = int(
            np.linalg.norm(right_side_face[0] - left_side_face[-1]) * 0.05
        )  # 5% of face width

        # Create a slightly larger convex hull for padding
        hull = cv2.convexHull(face_outline)
        hull_padded = []
        for point in hull:
            x, y = point[0]
            center = np.mean(face_outline, axis=0)
            direction = np.array([x, y]) - center
            direction = direction / np.linalg.norm(direction)
            padded_point = np.array([x, y]) + direction * padding
            hull_padded.append(padded_point)

        hull_padded = np.array(hull_padded, dtype=np.int32)

        # Fill the padded convex hull
        cv2.fillConvexPoly(mask, hull_padded, 255)

        # Smooth the mask edges
        mask = cv2.GaussianBlur(mask, (5, 5), 3)

    return mask

def apply_color_transfer(source, target):
    """
    Apply color transfer from target to source image
    """
    source = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype("float32")
    target = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype("float32")

    source_mean, source_std = cv2.meanStdDev(source)
    target_mean, target_std = cv2.meanStdDev(target)

    # Reshape mean and std to be broadcastable
    source_mean = source_mean.reshape(1, 1, 3)
    source_std = source_std.reshape(1, 1, 3)
    target_mean = target_mean.reshape(1, 1, 3)
    target_std = target_std.reshape(1, 1, 3)

    # Perform the color transfer
    source = (source - source_mean) * (target_std / source_std) + target_mean

    return cv2.cvtColor(np.clip(source, 0, 255).astype("uint8"), cv2.COLOR_LAB2BGR)

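A minimal standalone check of the function above (a sketch; the image paths are hypothetical, and both inputs must be BGR `uint8` arrays as used throughout this module):

```python
import cv2

mouth_patch = cv2.imread("mouth_patch.png")  # region to recolor (hypothetical file)
frame_roi = cv2.imread("frame_roi.png")      # region whose color statistics we match

recolored = apply_color_transfer(mouth_patch, frame_roi)

# after the transfer, the per-channel LAB mean of `recolored` should sit
# close to that of `frame_roi`
print(cv2.meanStdDev(cv2.cvtColor(recolored, cv2.COLOR_BGR2LAB))[0].ravel())
print(cv2.meanStdDev(cv2.cvtColor(frame_roi, cv2.COLOR_BGR2LAB))[0].ravel())
```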

modules/ui.py (165 changed lines)

@@ -7,7 +7,6 @@ from cv2_enumerate_cameras import enumerate_cameras  # Add this import
from PIL import Image, ImageOps
import time
import json

import modules.globals
import modules.metadata
from modules.face_analyser import (

@@ -26,6 +25,11 @@ from modules.utilities import (
    resolve_relative_path,
    has_image_extension,
)
from modules.video_capture import VideoCapturer
import platform

if platform.system() == "Windows":
    from pygrabber.dshow_graph import FilterGraph

ROOT = None
POPUP = None

@@ -95,6 +99,8 @@ def save_switch_states():
        "live_resizable": modules.globals.live_resizable,
        "fp_ui": modules.globals.fp_ui,
        "show_fps": modules.globals.show_fps,
        "mouth_mask": modules.globals.mouth_mask,
        "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
    }
    with open("switch_states.json", "w") as f:
        json.dump(switch_states, f)

@@ -115,6 +121,10 @@ def load_switch_states():
        modules.globals.live_resizable = switch_states.get("live_resizable", False)
        modules.globals.fp_ui = switch_states.get("fp_ui", {"face_enhancer": False})
        modules.globals.show_fps = switch_states.get("show_fps", False)
        modules.globals.mouth_mask = switch_states.get("mouth_mask", False)
        modules.globals.show_mouth_mask_box = switch_states.get(
            "show_mouth_mask_box", False
        )
    except FileNotFoundError:
        # If the file doesn't exist, use default values
        pass

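For reference, the persisted `switch_states.json` ends up looking roughly like this (a sketch showing only the keys visible in this hunk; the real file also contains the other switches written by `save_switch_states`, and the values depend on your toggles):

```json
{
  "live_resizable": true,
  "fp_ui": {"face_enhancer": false},
  "show_fps": false,
  "mouth_mask": true,
  "show_mouth_mask_box": false
}
```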
@@ -269,6 +279,28 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    )
    show_fps_switch.place(relx=0.6, rely=0.75)

    mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
    mouth_mask_switch = ctk.CTkSwitch(
        root,
        text="Mouth Mask",
        variable=mouth_mask_var,
        cursor="hand2",
        command=lambda: setattr(modules.globals, "mouth_mask", mouth_mask_var.get()),
    )
    mouth_mask_switch.place(relx=0.1, rely=0.55)

    show_mouth_mask_box_var = ctk.BooleanVar(value=modules.globals.show_mouth_mask_box)
    show_mouth_mask_box_switch = ctk.CTkSwitch(
        root,
        text="Show Mouth Mask Box",
        variable=show_mouth_mask_box_var,
        cursor="hand2",
        command=lambda: setattr(
            modules.globals, "show_mouth_mask_box", show_mouth_mask_box_var.get()
        ),
    )
    show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)

    start_button = ctk.CTkButton(
        root, text="Start", cursor="hand2", command=lambda: analyze_target(start, root)
    )

@@ -289,18 +321,22 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    camera_label.place(relx=0.1, rely=0.86, relwidth=0.2, relheight=0.05)

    available_cameras = get_available_cameras()
    # Convert camera indices to strings for CTkOptionMenu
    available_camera_indices, available_camera_strings = available_cameras
    camera_variable = ctk.StringVar(
        value=(
            available_camera_strings[0]
            if available_camera_strings
            else "No cameras found"
        )
    )
    camera_indices, camera_names = available_cameras

    if not camera_names or camera_names[0] == "No cameras found":
        camera_variable = ctk.StringVar(value="No cameras found")
        camera_optionmenu = ctk.CTkOptionMenu(
        root, variable=camera_variable, values=available_camera_strings
            root,
            variable=camera_variable,
            values=["No cameras found"],
            state="disabled",
        )
    else:
        camera_variable = ctk.StringVar(value=camera_names[0])
        camera_optionmenu = ctk.CTkOptionMenu(
            root, variable=camera_variable, values=camera_names
        )

    camera_optionmenu.place(relx=0.35, rely=0.86, relwidth=0.25, relheight=0.05)

    live_button = ctk.CTkButton(

@@ -309,9 +345,16 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
        cursor="hand2",
        command=lambda: webcam_preview(
            root,
            available_camera_indices[
                available_camera_strings.index(camera_variable.get())
            ],
            (
                camera_indices[camera_names.index(camera_variable.get())]
                if camera_names and camera_names[0] != "No cameras found"
                else None
            ),
        ),
        state=(
            "normal"
            if camera_names and camera_names[0] != "No cameras found"
            else "disabled"
        ),
    )
    live_button.place(relx=0.65, rely=0.86, relwidth=0.2, relheight=0.05)

@@ -328,7 +371,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
        text_color=ctk.ThemeManager.theme.get("URL").get("text_color")
    )
    donate_label.bind(
        "<Button>", lambda event: webbrowser.open("https://paypal.me/hacksider")
        "<Button>", lambda event: webbrowser.open("https://deeplivecam.net")
    )

    return root

@@ -719,7 +762,7 @@ def update_preview(frame_number: int = 0) -> None:
def webcam_preview(root: ctk.CTk, camera_index: int):
    if not modules.globals.map_faces:
        if modules.globals.source_path is None:
            # No image selected
            update_status("Please select a source image first")
            return
        create_webcam_preview(camera_index)
    else:

@@ -731,40 +774,94 @@ def webcam_preview(root: ctk.CTk, camera_index: int):

def get_available_cameras():
    """Returns a list of available camera names and indices."""
    if platform.system() == "Windows":
        try:
            graph = FilterGraph()
            devices = graph.get_input_devices()

            # Create list of indices and names
            camera_indices = list(range(len(devices)))
            camera_names = devices

            # If no cameras found through DirectShow, try OpenCV fallback
            if not camera_names:
                # Try to open camera with index -1 and 0
                test_indices = [-1, 0]
                working_cameras = []

                for idx in test_indices:
                    cap = cv2.VideoCapture(idx)
                    if cap.isOpened():
                        working_cameras.append(f"Camera {idx}")
                        cap.release()

                if working_cameras:
                    return test_indices[: len(working_cameras)], working_cameras

            # If still no cameras found, return empty lists
            if not camera_names:
                return [], ["No cameras found"]

            return camera_indices, camera_names

        except Exception as e:
            print(f"Error detecting cameras: {str(e)}")
            return [], ["No cameras found"]
    else:
        # Unix-like systems (Linux/Mac) camera detection
        camera_indices = []
        camera_names = []

        for camera in enumerate_cameras():
            cap = cv2.VideoCapture(camera.index)
        if platform.system() == "Darwin":  # macOS specific handling
            # Try to open the default FaceTime camera first
            cap = cv2.VideoCapture(0)
            if cap.isOpened():
                camera_indices.append(camera.index)
                camera_names.append(camera.name)
                camera_indices.append(0)
                camera_names.append("FaceTime Camera")
                cap.release()
                return (camera_indices, camera_names)

            # On macOS, additional cameras typically use indices 1 and 2
            for i in [1, 2]:
                cap = cv2.VideoCapture(i)
                if cap.isOpened():
                    camera_indices.append(i)
                    camera_names.append(f"Camera {i}")
                    cap.release()
        else:
            # Linux camera detection - test first 10 indices
            for i in range(10):
                cap = cv2.VideoCapture(i)
                if cap.isOpened():
                    camera_indices.append(i)
                    camera_names.append(f"Camera {i}")
                    cap.release()

        if not camera_names:
            return [], ["No cameras found"]

        return camera_indices, camera_names

def create_webcam_preview(camera_index: int):
    global preview_label, PREVIEW

    camera = cv2.VideoCapture(camera_index)
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, PREVIEW_DEFAULT_WIDTH)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, PREVIEW_DEFAULT_HEIGHT)
    camera.set(cv2.CAP_PROP_FPS, 60)
    cap = VideoCapturer(camera_index)
    if not cap.start(PREVIEW_DEFAULT_WIDTH, PREVIEW_DEFAULT_HEIGHT, 60):
        update_status("Failed to start camera")
        return

    preview_label.configure(width=PREVIEW_DEFAULT_WIDTH, height=PREVIEW_DEFAULT_HEIGHT)

    PREVIEW.deiconify()

    frame_processors = get_frame_processors_modules(modules.globals.frame_processors)

    source_image = None
    prev_time = time.time()
    fps_update_interval = 0.5  # Update FPS every 0.5 seconds
    fps_update_interval = 0.5
    frame_count = 0
    fps = 0

    while camera:
        ret, frame = camera.read()
    while True:
        ret, frame = cap.read()
        if not ret:
            break

@@ -778,6 +875,11 @@ def create_webcam_preview(camera_index: int):

                temp_frame, PREVIEW.winfo_width(), PREVIEW.winfo_height()
            )
        else:
            temp_frame = fit_image_to_size(
                temp_frame, PREVIEW.winfo_width(), PREVIEW.winfo_height()
            )

        if not modules.globals.map_faces:
            if source_image is None and modules.globals.source_path:
                source_image = get_one_face(cv2.imread(modules.globals.source_path))
@@ -790,7 +892,6 @@ def create_webcam_preview(camera_index: int):

                temp_frame = frame_processor.process_frame(source_image, temp_frame)
        else:
            modules.globals.target_path = None

            for frame_processor in frame_processors:
                if frame_processor.NAME == "DLC.FACE-ENHANCER":
                    if modules.globals.fp_ui["face_enhancer"]:
@@ -829,7 +930,7 @@ def create_webcam_preview(camera_index: int):

        if PREVIEW.state() == "withdrawn":
            break

    camera.release()
    cap.release()
    PREVIEW.withdraw()
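The prev_time / fps_update_interval / frame_count / fps variables introduced in create_webcam_preview above evidently implement a rolling FPS readout: count frames and, every half second, recompute fps as frames divided by elapsed time. A self-contained sketch of that pattern (the loop and sleep are stand-ins for the real capture and processing work):

    import time

    prev_time = time.time()
    fps_update_interval = 0.5  # seconds between FPS readout updates
    frame_count = 0
    fps = 0.0

    for _ in range(300):  # stand-in for the webcam read loop
        time.sleep(0.005)  # stand-in for per-frame processing
        frame_count += 1
        now = time.time()
        if now - prev_time >= fps_update_interval:
            fps = frame_count / (now - prev_time)
            frame_count = 0
            prev_time = now
            print(f"FPS: {fps:.1f}")
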
@@ -12,16 +12,23 @@ from tqdm import tqdm

import modules.globals

TEMP_FILE = 'temp.mp4'
TEMP_DIRECTORY = 'temp'
TEMP_FILE = "temp.mp4"
TEMP_DIRECTORY = "temp"

# monkey patch ssl for mac
if platform.system().lower() == 'darwin':
if platform.system().lower() == "darwin":
    ssl._create_default_https_context = ssl._create_unverified_context


def run_ffmpeg(args: List[str]) -> bool:
    commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', modules.globals.log_level]
    commands = [
        "ffmpeg",
        "-hide_banner",
        "-hwaccel",
        "auto",
        "-loglevel",
        modules.globals.log_level,
    ]
    commands.extend(args)
    try:
        subprocess.check_output(commands, stderr=subprocess.STDOUT)
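As the hunk shows, run_ffmpeg only prepends a fixed prefix (the binary, banner and log-level flags, and auto hardware acceleration) to whatever arguments the caller supplies, then shells out via subprocess. A hedged usage sketch (file names and filter are illustrative, not from the patch):

    # Transcode a clip to 720p through the shared ffmpeg prefix
    run_ffmpeg(["-i", "input.mp4", "-vf", "scale=-2:720", "output.mp4"])
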
@@ -32,8 +39,19 @@ def run_ffmpeg(args: List[str]) -> bool:


def detect_fps(target_path: str) -> float:
    command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
    output = subprocess.check_output(command).decode().strip().split('/')
    command = [
        "ffprobe",
        "-v",
        "error",
        "-select_streams",
        "v:0",
        "-show_entries",
        "stream=r_frame_rate",
        "-of",
        "default=noprint_wrappers=1:nokey=1",
        target_path,
    ]
    output = subprocess.check_output(command).decode().strip().split("/")
    try:
        numerator, denominator = map(int, output)
        return numerator / denominator
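Why the split on "/": ffprobe reports r_frame_rate as an exact fraction, e.g. "30000/1001" for NTSC material, so dividing the two integers recovers the true frame rate. A minimal sketch of the same parse (value hard-coded for illustration):

    numerator, denominator = map(int, "30000/1001".split("/"))
    fps = numerator / denominator  # ~29.97
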
@@ -44,25 +62,65 @@ def detect_fps(target_path: str) -> float:

def extract_frames(target_path: str) -> None:
    temp_directory_path = get_temp_directory_path(target_path)
    run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')])
    run_ffmpeg(
        [
            "-i",
            target_path,
            "-pix_fmt",
            "rgb24",
            os.path.join(temp_directory_path, "%04d.png"),
        ]
    )


def create_video(target_path: str, fps: float = 30.0) -> None:
    temp_output_path = get_temp_output_path(target_path)
    temp_directory_path = get_temp_directory_path(target_path)
    run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', modules.globals.video_encoder, '-crf', str(modules.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
    run_ffmpeg(
        [
            "-r",
            str(fps),
            "-i",
            os.path.join(temp_directory_path, "%04d.png"),
            "-c:v",
            modules.globals.video_encoder,
            "-crf",
            str(modules.globals.video_quality),
            "-pix_fmt",
            "yuv420p",
            "-vf",
            "colorspace=bt709:iall=bt601-6-625:fast=1",
            "-y",
            temp_output_path,
        ]
    )
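extract_frames and create_video bracket the frame-processing pipeline: the target video is exploded into numbered PNGs (%04d.png) in a temp directory, those PNGs are rewritten by the frame processors, and create_video reassembles them at the original frame rate. A hedged usage sketch (the path is illustrative):

    target = "/videos/target.mp4"  # illustrative path
    extract_frames(target)  # writes temp/<name>/0001.png, 0002.png, ...
    # ... the frame processors modify the PNGs here ...
    create_video(target, fps=detect_fps(target))
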
def restore_audio(target_path: str, output_path: str) -> None:
    temp_output_path = get_temp_output_path(target_path)
    done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
    done = run_ffmpeg(
        [
            "-i",
            temp_output_path,
            "-i",
            target_path,
            "-c:v",
            "copy",
            "-map",
            "0:v:0",
            "-map",
            "1:a:0",
            "-y",
            output_path,
        ]
    )
    if not done:
        move_temp(target_path, output_path)
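A note on the -map flags above: input 0 is the processed temp video and input 1 is the original target, so -map 0:v:0 keeps the swapped video stream while -map 1:a:0 remuxes the original audio onto it without re-encoding (-c:v copy). If ffmpeg fails, for example on a target with no audio stream, the code falls back to move_temp and ships the video-only result.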

def get_temp_frame_paths(target_path: str) -> List[str]:
    temp_directory_path = get_temp_directory_path(target_path)
    return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png')))
    return glob.glob((os.path.join(glob.escape(temp_directory_path), "*.png")))


def get_temp_directory_path(target_path: str) -> str:
@@ -81,7 +139,9 @@ def normalize_output_path(source_path: str, target_path: str, output_path: str)

    source_name, _ = os.path.splitext(os.path.basename(source_path))
    target_name, target_extension = os.path.splitext(os.path.basename(target_path))
    if os.path.isdir(output_path):
        return os.path.join(output_path, source_name + '-' + target_name + target_extension)
        return os.path.join(
            output_path, source_name + "-" + target_name + target_extension
        )
    return output_path
@@ -108,20 +168,20 @@ def clean_temp(target_path: str) -> None:


def has_image_extension(image_path: str) -> bool:
    return image_path.lower().endswith(('png', 'jpg', 'jpeg'))
    return image_path.lower().endswith(("png", "jpg", "jpeg"))


def is_image(image_path: str) -> bool:
    if image_path and os.path.isfile(image_path):
        mimetype, _ = mimetypes.guess_type(image_path)
        return bool(mimetype and mimetype.startswith('image/'))
        return bool(mimetype and mimetype.startswith("image/"))
    return False


def is_video(video_path: str) -> bool:
    if video_path and os.path.isfile(video_path):
        mimetype, _ = mimetypes.guess_type(video_path)
        return bool(mimetype and mimetype.startswith('video/'))
        return bool(mimetype and mimetype.startswith("video/"))
    return False
@@ -129,11 +189,19 @@ def conditional_download(download_directory_path: str, urls: List[str]) -> None:

    if not os.path.exists(download_directory_path):
        os.makedirs(download_directory_path)
    for url in urls:
        download_file_path = os.path.join(download_directory_path, os.path.basename(url))
        download_file_path = os.path.join(
            download_directory_path, os.path.basename(url)
        )
        if not os.path.exists(download_file_path):
            request = urllib.request.urlopen(url)  # type: ignore[attr-defined]
            total = int(request.headers.get('Content-Length', 0))
            with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
            total = int(request.headers.get("Content-Length", 0))
            with tqdm(
                total=total,
                desc="Downloading",
                unit="B",
                unit_scale=True,
                unit_divisor=1024,
            ) as progress:
                urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size))  # type: ignore[attr-defined]
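The reporthook wiring is the interesting part here: urlretrieve calls the hook as hook(count, block_size, total_size) after each chunk, and the lambda forwards block_size to tqdm's byte counter. A standalone sketch of the same pattern (URL and file name are placeholders):

    import urllib.request
    from tqdm import tqdm

    url = "https://example.com/model.onnx"  # placeholder URL
    with tqdm(desc="Downloading", unit="B", unit_scale=True, unit_divisor=1024) as progress:
        urllib.request.urlretrieve(
            url,
            "model.onnx",
            reporthook=lambda count, block_size, total_size: progress.update(block_size),
        )
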
@@ -0,0 +1,94 @@

import cv2
import numpy as np
from typing import Optional, Tuple, Callable
import platform
import threading

# Only import Windows-specific library if on Windows
if platform.system() == "Windows":
    from pygrabber.dshow_graph import FilterGraph


class VideoCapturer:
    def __init__(self, device_index: int):
        self.device_index = device_index
        self.frame_callback = None
        self._current_frame = None
        self._frame_ready = threading.Event()
        self.is_running = False
        self.cap = None

        # Initialize Windows-specific components if on Windows
        if platform.system() == "Windows":
            self.graph = FilterGraph()
            # Verify device exists
            devices = self.graph.get_input_devices()
            if self.device_index >= len(devices):
                raise ValueError(
                    f"Invalid device index {device_index}. Available devices: {len(devices)}"
                )

    def start(self, width: int = 960, height: int = 540, fps: int = 60) -> bool:
        """Initialize and start video capture"""
        try:
            if platform.system() == "Windows":
                # Windows-specific capture methods
                capture_methods = [
                    (self.device_index, cv2.CAP_DSHOW),  # Try DirectShow first
                    (self.device_index, cv2.CAP_ANY),  # Then try default backend
                    (-1, cv2.CAP_ANY),  # Try -1 as fallback
                    (0, cv2.CAP_ANY),  # Finally try 0 without specific backend
                ]

                for dev_id, backend in capture_methods:
                    try:
                        self.cap = cv2.VideoCapture(dev_id, backend)
                        if self.cap.isOpened():
                            break
                        self.cap.release()
                    except Exception:
                        continue
            else:
                # Unix-like systems (Linux/Mac) capture method
                self.cap = cv2.VideoCapture(self.device_index)

            if not self.cap or not self.cap.isOpened():
                raise RuntimeError("Failed to open camera")

            # Configure format
            self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
            self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
            self.cap.set(cv2.CAP_PROP_FPS, fps)

            self.is_running = True
            return True

        except Exception as e:
            print(f"Failed to start capture: {str(e)}")
            if self.cap:
                self.cap.release()
            return False

    def read(self) -> Tuple[bool, Optional[np.ndarray]]:
        """Read a frame from the camera"""
        if not self.is_running or self.cap is None:
            return False, None

        ret, frame = self.cap.read()
        if ret:
            self._current_frame = frame
            if self.frame_callback:
                self.frame_callback(frame)
            return True, frame
        return False, None

    def release(self) -> None:
        """Stop capture and release resources"""
        if self.is_running and self.cap is not None:
            self.cap.release()
            self.is_running = False
            self.cap = None

    def set_frame_callback(self, callback: Callable[[np.ndarray], None]) -> None:
        """Set callback for frame processing"""
        self.frame_callback = callback
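For orientation, here is a minimal sketch of how the new class is meant to be driven, mirroring the create_webcam_preview changes above (device index 0 and the loop body are illustrative):

    cap = VideoCapturer(0)
    if cap.start(960, 540, 60):
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            # hand the frame to the frame processors / preview label here
        cap.release()
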
[image diff: before size 13 KiB]
[image diff: before size 31 KiB]
@@ -1,7 +1,7 @@

--extra-index-url https://download.pytorch.org/whl/cu118

numpy>=1.23.5,<2
opencv-python==4.8.1.78
opencv-python==4.10.0.84
cv2_enumerate_cameras==1.1.15
onnx==1.16.0
insightface==0.7.3
@@ -21,4 +21,4 @@ protobuf==4.23.2

tqdm==4.66.4
gfpgan==1.3.8
tkinterdnd2==0.4.2
customtkinter==5.2.2
pygrabber==0.2
@@ -1 +1 @@

python run.py --execution-provider cuda --execution-threads 60 --max-memory 60
python run.py --execution-provider cuda
@@ -0,0 +1 @@

python run.py --execution-provider dml

@@ -1 +0,0 @@

python run.py --execution-provider dml
@@ -1,13 +0,0 @@

@echo off
:: Installing Microsoft Visual C++ Runtime - all versions 1.0.1 if it's not already installed
choco install vcredist-all
:: Installing CUDA if it's not already installed
choco install cuda
:: Installing ffmpeg if it's not already installed
choco install ffmpeg
:: Installing Python if it's not already installed
choco install python -y
:: Assuming successful installation, we ensure pip is upgraded
python -m ensurepip --upgrade
:: Use pip to install the packages listed in 'requirements.txt'
pip install -r requirements.txt
@@ -1,122 +0,0 @@

@echo off
setlocal EnableDelayedExpansion

:: 1. Setup your platform
echo Setting up your platform...

:: Python
where python >nul 2>&1
if %ERRORLEVEL% neq 0 (
    echo Python is not installed. Please install Python 3.10 or later.
    pause
    exit /b
)

:: Pip
where pip >nul 2>&1
if %ERRORLEVEL% neq 0 (
    echo Pip is not installed. Please install Pip.
    pause
    exit /b
)

:: Git
where git >nul 2>&1
if %ERRORLEVEL% neq 0 (
    echo Git is not installed. Installing Git...
    winget install --id Git.Git -e --source winget
)

:: FFMPEG
where ffmpeg >nul 2>&1
if %ERRORLEVEL% neq 0 (
    echo FFMPEG is not installed. Installing FFMPEG...
    winget install --id Gyan.FFmpeg -e --source winget
)

:: Visual Studio 2022 Runtimes
echo Installing Visual Studio 2022 Runtimes...
winget install --id Microsoft.VC++2015-2022Redist-x64 -e --source winget

:: 2. Clone Repository
if exist Deep-Live-Cam (
    echo Deep-Live-Cam directory already exists.
    set /p overwrite="Do you want to overwrite? (Y/N): "
    if /i "%overwrite%"=="Y" (
        rmdir /s /q Deep-Live-Cam
        git clone https://github.com/hacksider/Deep-Live-Cam.git
    ) else (
        echo Skipping clone, using existing directory.
    )
) else (
    git clone https://github.com/hacksider/Deep-Live-Cam.git
)
cd Deep-Live-Cam

:: 3. Download Models
echo Downloading models...
mkdir models
curl -L -o models/GFPGANv1.4.pth https://path.to.model/GFPGANv1.4.pth
curl -L -o models/inswapper_128_fp16.onnx https://path.to.model/inswapper_128_fp16.onnx

:: 4. Install dependencies
echo Creating a virtual environment...
python -m venv venv
call venv\Scripts\activate

echo Installing required Python packages...
pip install --upgrade pip
pip install -r requirements.txt

echo Setup complete. You can now run the application.

:: GPU Acceleration Options
echo.
echo Choose the GPU Acceleration Option if applicable:
echo 1. CUDA (Nvidia)
echo 2. CoreML (Apple Silicon)
echo 3. CoreML (Apple Legacy)
echo 4. DirectML (Windows)
echo 5. OpenVINO (Intel)
echo 6. None
set /p choice="Enter your choice (1-6): "

if "%choice%"=="1" (
    echo Installing CUDA dependencies...
    pip uninstall -y onnxruntime onnxruntime-gpu
    pip install onnxruntime-gpu==1.16.3
    set exec_provider="cuda"
) else if "%choice%"=="2" (
    echo Installing CoreML (Apple Silicon) dependencies...
    pip uninstall -y onnxruntime onnxruntime-silicon
    pip install onnxruntime-silicon==1.13.1
    set exec_provider="coreml"
) else if "%choice%"=="3" (
    echo Installing CoreML (Apple Legacy) dependencies...
    pip uninstall -y onnxruntime onnxruntime-coreml
    pip install onnxruntime-coreml==1.13.1
    set exec_provider="coreml"
) else if "%choice%"=="4" (
    echo Installing DirectML dependencies...
    pip uninstall -y onnxruntime onnxruntime-directml
    pip install onnxruntime-directml==1.15.1
    set exec_provider="directml"
) else if "%choice%"=="5" (
    echo Installing OpenVINO dependencies...
    pip uninstall -y onnxruntime onnxruntime-openvino
    pip install onnxruntime-openvino==1.15.0
    set exec_provider="openvino"
) else (
    echo Skipping GPU acceleration setup.
)

:: Run the application
if defined exec_provider (
    echo Running the application with %exec_provider% execution provider...
    python run.py --execution-provider %exec_provider%
) else (
    echo Running the application...
    python run.py
)

pause