Compare commits


49 Commits
1.7 ... main

Author SHA1 Message Date
Kenneth Estanislao 181144ce33
Update requirements.txt 2025-04-20 03:02:23 +08:00
Kenneth Estanislao 40e47a469c
Update requirements.txt 2025-04-19 03:41:00 +08:00
KRSHH 874abb4e59
v2 prebuilt 2025-04-17 09:34:10 +05:30
Kenneth Estanislao 18b259da70 Update requirements.txt
improves speed by 10 to 40%
2025-04-17 02:44:24 +08:00
Kenneth Estanislao 01900dcfb5 Revert "Update metadata.py"
This reverts commit 90d5c28542.
2025-04-17 02:39:05 +08:00
Kenneth Estanislao 07e30fe781 Revert "Update face_swapper.py"
This reverts commit 104d8cf4d6.
2025-04-17 02:03:34 +08:00
Kenneth Estanislao 3dda4f2179
Update requirements.txt 2025-04-14 17:45:07 +08:00
Kenneth Estanislao 71735e4f60 Update requirements.txt
update requirements.txt
2025-04-13 03:36:51 +08:00
Kenneth Estanislao 90d5c28542 Update metadata.py
- 40% faster than 1.8
- compatible with 50xx GPU
- onnxruntime 1.21
2025-04-13 03:34:10 +08:00
Kenneth Estanislao 104d8cf4d6 Update face_swapper.py
compatibility with inswapper 1.21
2025-04-13 01:13:40 +08:00
KRSHH ac3696b69d
remove prebuilt 2025-04-04 16:02:28 +05:30
Kenneth Estanislao 76fb209e6c
Update README.md 2025-03-29 03:28:22 +08:00
Kenneth Estanislao 2dcd552c4b
Update README.md 2025-03-29 03:23:49 +08:00
Kenneth Estanislao 66248a37b4
Merge pull request #990 from wpoPR/pr/improve-macos-installation-instructions
improve macOS Apple Silicon installation instructions
2025-03-24 18:26:28 +08:00
KRSHH aa9b7ed3b6 Add Tips and Tricks to README 2025-03-22 19:59:40 +05:30
Wesley Oliveira 51a4246050 adding uninstalling conflict python versions
follow sourcery-ai and add a note about uninstalling conflicting Python versions if users encounter issues.
2025-03-21 12:37:21 -03:00
Wesley Oliveira 3f1c072fac improve macOS Apple Silicon installation instructions
Followed the `README` but ran into some errors running it locally. Made a few tweaks and got it working on my M3 PRO. Found this PR (Failing to run on Apple Silicon Mac M3) and thought improving the instructions might help others. Hope this helps!

great tool guys, thx a lot
2025-03-20 16:47:01 -03:00
KRSHH f91f9203e7
Remove Mac Edition Temporarily 2025-03-19 03:00:32 +05:30
Kenneth Estanislao 80477676b4
Merge pull request #980 from aaddyy227/main
Fix face swapping crash due to None face embeddings
2025-03-16 00:03:39 +08:00
Adrian Zimbran c728994e6b fixed import and log message 2025-03-10 23:41:28 +02:00
Adrian Zimbran 65da3be2a4 Fix face swapping crash due to None face embeddings
- Add explicit checks for face detection results (source and target faces).
- Handle cases when face embeddings are not available, preventing AttributeError.
- Provide meaningful log messages for easier debugging in future scenarios.
2025-03-10 23:31:56 +02:00
Kenneth Estanislao 390b88216b
Update README.md 2025-02-14 17:33:33 +08:00
Kenneth Estanislao dabaa64695
Merge pull request #932 from harmeetsingh-work/patch-1
Update requirements.txt
2025-02-12 15:21:27 +08:00
Harmeet Singh 1fad1cd43a
Update requirements.txt
Made changes for apple silicon. 

Or getting
ERROR: Could not find a version that satisfies the requirement torch==2.5.1+cu118 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0)
ERROR: No matching distribution found for torch==2.5.1+cu118
2025-02-11 18:44:23 +05:30
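The `+cu118` suffix denotes CUDA-specific wheels that PyTorch publishes only for Linux and Windows, which is why the pinned requirement fails on Apple Silicon. A requirements.txt fragment using PEP 508 environment markers (the approach this repository's requirements.txt takes) selects the right wheel per platform:

```
torch==2.5.1+cu118; sys_platform != 'darwin'
torch==2.5.1; sys_platform == 'darwin'
```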
Kenneth Estanislao 2f67e2f159
Update requirements.txt 2025-02-09 14:17:49 +08:00
Kenneth Estanislao a3af249ea6
Update requirements.txt 2025-02-07 19:31:02 +08:00
Kenneth Estanislao 5bc3ada632
Update requirements.txt 2025-02-06 15:37:55 +08:00
KRSHH 650e89eb21
Reduced File Size 2025-02-06 10:40:32 +05:30
Kenneth Estanislao 4d2aea37b7
Update requirements.txt 2025-02-06 00:43:20 +08:00
Kenneth Estanislao 28c4b34db1
Merge pull request #911 from nimishgautam/main
Fix cv2 size errors on first run in ui.py
2025-02-05 12:51:39 +08:00
Kenneth Estanislao 49e8f78513
Merge pull request #913 from soulee-dev/main
fix: typo souce_target_map → source_target_map
2025-02-05 12:18:48 +08:00
Kenneth Estanislao d753f5d4b0
Merge pull request #917 from carpusherw/patch-1
Fix requirements.txt
2025-02-05 12:17:42 +08:00
KRSHH 4fb69476d8 Change img dimensions 2025-02-05 12:16:08 +08:00
carpusherw f3adfd194d Fix requirements.txt 2025-02-05 12:16:08 +08:00
Kenneth Estanislao e5f04cf917 Revert "Update requirements.txt"
This reverts commit d45dedc9a6.
2025-02-05 12:08:19 +08:00
Kenneth Estanislao 67394a3157 Revert "Update requirements.txt"
This reverts commit f82cebf86e.
2025-02-05 12:08:10 +08:00
carpusherw 186d155e1b
Fix requirements.txt 2025-02-05 09:17:11 +08:00
KRSHH 87081e78d0
Fixed typo 2025-02-04 21:20:54 +05:30
KRSHH f79373d4db
Updated Features Section 2025-02-04 21:08:36 +05:30
Soul Lee 513e413956 fix: typo souce_target_map → source_target_map 2025-02-03 20:33:44 +09:00
Kenneth Estanislao f82cebf86e
Update requirements.txt 2025-02-03 18:03:27 +08:00
Kenneth Estanislao d45dedc9a6
Update requirements.txt 2025-02-03 16:38:18 +08:00
Kenneth Estanislao 2d489b57ec
Update README.md 2025-02-03 13:13:56 +08:00
Nimish Gåtam ccc04983cf
Update ui.py
removed unnecessary code as per AI code review (which is a thing now because of course it is)
2025-02-01 12:38:37 +01:00
Nimish Gåtam 2506c5a261
Update ui.py
Some checks for first run when models are missing, so it doesn't error out with inv_scale_x > 0 in cv2
2025-02-01 11:52:49 +01:00
Kenneth Estanislao e862ff1456
Update requirements.txt
updated from CUDA 11.8 to CUDA 12.1
2025-02-01 12:21:55 +08:00
Kenneth Estanislao db594c0e7c
Update README.md 2025-01-29 14:02:07 +08:00
Kenneth Estanislao 6a5b75ec45
Update README.md 2025-01-29 14:00:41 +08:00
Kenneth Estanislao 79e1ce5093 Update requirements.txt
update pillow

In _imagingcms.c in Pillow before 10.3.0, a buffer overflow exists because strcpy is used instead of strncpy.
2025-01-28 14:22:05 +08:00
6 changed files with 147 additions and 64 deletions

README.md

@@ -14,22 +14,29 @@
## Disclaimer
###### This software is intended as a productive contribution to the AI-generated media industry. It aims to assist artists with tasks like animating custom characters or using them as models for clothing, etc.
This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.
###### We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.
We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.
###### Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
## Quick Start - Pre-built (Windows / Nvidia)
- Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.
<a href="https://hacksider.gumroad.com/l/vccdmm"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" />
- Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.
- Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.
- User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.
By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.
Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
## Exclusive v2.0 Quick Start - Pre-built (Windows / Nvidia)
<a href="https://deeplivecam.net/index.php/quickstart"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" />
##### This is the fastest build you can get if you have a discrete NVIDIA GPU.
## Quick Start - Pre-built (Mac / Silicon)
<a href="https://krshh.gumroad.com/l/Deep-Live-Cam-Mac"> <img src="https://github.com/user-attachments/assets/d5d913b5-a7de-4609-96b9-979a5749a703" width="285" height="77" />
###### These Pre-builts are perfect for non-technical users or those who dont have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually. It will be 60 days ahead of the open-source version.
## TLDR; Live Deepfake in just 3 Clicks
![easysteps](https://github.com/user-attachments/assets/af825228-852c-411b-b787-ffd9aac72fc6)
@@ -37,7 +44,7 @@
2. Select which camera to use
3. Press live!
## Features & Uses - Everything is real-time
## Features & Uses - Everything is in real-time
### Mouth Mask
@@ -73,7 +80,7 @@
### Memes
**Create Your most viral meme yet**
**Create Your Most Viral Meme Yet**
<p align="center">
<img src="media/meme.gif" alt="show" width="450">
@@ -81,6 +88,13 @@
<sub>Created using Many Faces feature in Deep-Live-Cam</sub>
</p>
### Omegle
**Surprise people on Omegle**
<p align="center">
<video src="https://github.com/user-attachments/assets/2e9b9b82-fa04-4b70-9f56-b1f68e7672d0" width="450" controls></video>
</p>
## Installation (Manual)
@@ -104,7 +118,8 @@ This is more likely to work on your computer but will be slower as it utilizes t
**2. Clone the Repository**
```bash
https://github.com/hacksider/Deep-Live-Cam.git
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
```
**3. Download the Models**
@@ -118,14 +133,44 @@ Place these files in the "**models**" folder.
We highly recommend using a `venv` to avoid issues.
For Windows:
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
**For macOS:** Install or upgrade the `python-tk` package:
**For macOS:**
Apple Silicon (M1/M2/M3) requires specific setup:
```bash
# Install Python 3.10 (specific version is important)
brew install python@3.10
# Install tkinter package (required for the GUI)
brew install python-tk@3.10
# Create and activate virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
**In case something goes wrong and you need to reinstall the virtual environment**
```bash
# Remove the existing virtual environment
rm -rf venv
# Recreate and activate the virtual environment
python -m venv venv
source venv/bin/activate
# Install the dependencies again
pip install -r requirements.txt
```
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
@@ -134,7 +179,7 @@ brew install python-tk@3.10
**CUDA Execution Provider (Nvidia)**
1. Install [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) or [CUDA Toolkit 12.1.1](https://developer.nvidia.com/cuda-12-1-1-download-archive)
1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Install dependencies:
```bash
@@ -150,19 +195,39 @@ python run.py --execution-provider cuda
**CoreML Execution Provider (Apple Silicon)**
1. Install dependencies:
Apple Silicon (M1/M2/M3) specific installation:
1. Make sure you've completed the macOS setup above using Python 3.10.
2. Install dependencies:
```bash
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
```
2. Usage:
3. Usage (important: specify Python 3.10):
```bash
python run.py --execution-provider coreml
python3.10 run.py --execution-provider coreml
```
**Important Notes for macOS:**
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:
```bash
# List all installed Python versions
brew list | grep python
# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13
# Keep only Python 3.10
brew cleanup
```
**CoreML Execution Provider (Apple Legacy)**
1. Install dependencies:
@@ -207,7 +272,6 @@ pip install onnxruntime-openvino==1.15.0
```bash
python run.py --execution-provider openvino
```
</details>
## Usage
@@ -228,6 +292,19 @@ python run.py --execution-provider openvino
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
## Tips and Tricks
Check out these helpful guides to get the most out of Deep-Live-Cam:
- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator
Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.
## Command Line Arguments (Unmaintained)
```
@@ -301,5 +378,3 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
</picture>
</a>

modules/face_analyser.py

@@ -39,13 +39,13 @@ def get_many_faces(frame: Frame) -> Any:
return None
def has_valid_map() -> bool:
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
if "source" in map and "target" in map:
return True
return False
def default_source_face() -> Any:
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
if "source" in map:
return map['source']['face']
return None
@@ -53,7 +53,7 @@ def default_source_face() -> Any:
def simplify_maps() -> Any:
centroids = []
faces = []
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
if "source" in map and "target" in map:
centroids.append(map['target']['face'].normed_embedding)
faces.append(map['source']['face'])
@@ -64,10 +64,10 @@ def simplify_maps() -> Any:
def add_blank_map() -> Any:
try:
max_id = -1
if len(modules.globals.souce_target_map) > 0:
max_id = max(modules.globals.souce_target_map, key=lambda x: x['id'])['id']
if len(modules.globals.source_target_map) > 0:
max_id = max(modules.globals.source_target_map, key=lambda x: x['id'])['id']
modules.globals.souce_target_map.append({
modules.globals.source_target_map.append({
'id' : max_id + 1
})
except ValueError:
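The id-allocation logic changed in this hunk can be isolated as a small runnable sketch; the real function operates on `modules.globals.source_target_map` and also handles `ValueError`:

```python
def add_blank_map(source_target_map: list) -> None:
    # New entries get an id one greater than the current maximum (0 for an empty list).
    max_id = -1
    if len(source_target_map) > 0:
        max_id = max(source_target_map, key=lambda m: m['id'])['id']
    source_target_map.append({'id': max_id + 1})
```

Calling it twice on an empty list yields entries with ids 0 and 1.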
@@ -75,14 +75,14 @@ def add_blank_map() -> Any:
def get_unique_faces_from_target_image() -> Any:
try:
modules.globals.souce_target_map = []
modules.globals.source_target_map = []
target_frame = cv2.imread(modules.globals.target_path)
many_faces = get_many_faces(target_frame)
i = 0
for face in many_faces:
x_min, y_min, x_max, y_max = face['bbox']
modules.globals.souce_target_map.append({
modules.globals.source_target_map.append({
'id' : i,
'target' : {
'cv2' : target_frame[int(y_min):int(y_max), int(x_min):int(x_max)],
@@ -96,7 +96,7 @@ def get_unique_faces_from_target_image() -> Any:
def get_unique_faces_from_target_video() -> Any:
try:
modules.globals.souce_target_map = []
modules.globals.source_target_map = []
frame_face_embeddings = []
face_embeddings = []
@@ -127,7 +127,7 @@ def get_unique_faces_from_target_video() -> Any:
face['target_centroid'] = closest_centroid_index
for i in range(len(centroids)):
modules.globals.souce_target_map.append({
modules.globals.source_target_map.append({
'id' : i
})
@@ -135,7 +135,7 @@ def get_unique_faces_from_target_video() -> Any:
for frame in tqdm(frame_face_embeddings, desc=f"Mapping frame embeddings to centroids-{i}"):
temp.append({'frame': frame['frame'], 'faces': [face for face in frame['faces'] if face['target_centroid'] == i], 'location': frame['location']})
modules.globals.souce_target_map[i]['target_faces_in_frame'] = temp
modules.globals.source_target_map[i]['target_faces_in_frame'] = temp
# dump_faces(centroids, frame_face_embeddings)
default_target_face()
@@ -144,7 +144,7 @@ def get_unique_faces_from_target_video() -> Any:
def default_target_face():
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
best_face = None
best_frame = None
for frame in map['target_faces_in_frame']:

modules/globals.py

@@ -9,7 +9,7 @@ file_types = [
("Video", ("*.mp4", "*.mkv")),
]
souce_target_map = []
source_target_map = []
simple_map = {}
source_path = None

modules/processors/frame/face_swapper.py

@@ -4,6 +4,7 @@ import insightface
import threading
import numpy as np
import modules.globals
import logging
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face
@@ -105,24 +106,30 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
many_faces = get_many_faces(temp_frame)
if many_faces:
for target_face in many_faces:
temp_frame = swap_face(source_face, target_face, temp_frame)
if source_face and target_face:
temp_frame = swap_face(source_face, target_face, temp_frame)
else:
print("Face detection failed for target/source.")
else:
target_face = get_one_face(temp_frame)
if target_face:
if target_face and source_face:
temp_frame = swap_face(source_face, target_face, temp_frame)
else:
logging.error("Face detection failed for target or source.")
return temp_frame
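The guard introduced in this hunk can be distilled to a standalone sketch; `swap_face` here is a trivial stand-in for the real insightface-based swap:

```python
import logging

def swap_face(source_face, target_face, frame):
    # Stand-in for the real swap; just marks the frame as processed.
    return frame + ["swapped"]

def process_frame(source_face, target_face, frame):
    # Only attempt a swap when both faces were detected; a None face
    # (missing embedding) would otherwise raise AttributeError downstream.
    if source_face and target_face:
        return swap_face(source_face, target_face, frame)
    logging.error("Face detection failed for target or source.")
    return frame
```

With either face `None`, the frame passes through unchanged and an error is logged instead of crashing.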
def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
if is_image(modules.globals.target_path):
if modules.globals.many_faces:
source_face = default_source_face()
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
target_face = map["target"]["face"]
temp_frame = swap_face(source_face, target_face, temp_frame)
elif not modules.globals.many_faces:
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
if "source" in map:
source_face = map["source"]["face"]
target_face = map["target"]["face"]
@@ -131,7 +138,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
elif is_video(modules.globals.target_path):
if modules.globals.many_faces:
source_face = default_source_face()
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
target_frame = [
f
for f in map["target_faces_in_frame"]
@@ -143,7 +150,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
temp_frame = swap_face(source_face, target_face, temp_frame)
elif not modules.globals.many_faces:
for map in modules.globals.souce_target_map:
for map in modules.globals.source_target_map:
if "source" in map:
target_frame = [
f

modules/ui.py

@@ -397,7 +397,7 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
return
if modules.globals.map_faces:
modules.globals.souce_target_map = []
modules.globals.source_target_map = []
if is_image(modules.globals.target_path):
update_status("Getting unique faces")
@@ -406,8 +406,8 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
update_status("Getting unique faces")
get_unique_faces_from_target_video()
if len(modules.globals.souce_target_map) > 0:
create_source_target_popup(start, root, modules.globals.souce_target_map)
if len(modules.globals.source_target_map) > 0:
create_source_target_popup(start, root, modules.globals.source_target_map)
else:
update_status("No faces found in target")
else:
@@ -696,17 +696,21 @@ def check_and_ignore_nsfw(target, destroy: Callable = None) -> bool:
def fit_image_to_size(image, width: int, height: int):
if width is None and height is None:
if width is None or height is None or width <= 0 or height <= 0:
return image
h, w, _ = image.shape
ratio_h = 0.0
ratio_w = 0.0
if width > height:
ratio_h = height / h
else:
ratio_w = width / w
ratio = max(ratio_w, ratio_h)
new_size = (int(ratio * w), int(ratio * h))
ratio_w = width / w
ratio_h = height / h
# Use the smaller ratio to ensure the image fits within the given dimensions
ratio = min(ratio_w, ratio_h)
# Compute new dimensions, ensuring they're at least 1 pixel
new_width = max(1, int(ratio * w))
new_height = max(1, int(ratio * h))
new_size = (new_width, new_height)
return cv2.resize(image, dsize=new_size)
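The corrected logic above amounts to scaling by the smaller of the two axis ratios; a minimal pure-Python version of just that computation (`fit_size` is a hypothetical helper, omitting the cv2 resize and the None checks):

```python
def fit_size(w: int, h: int, max_w: int, max_h: int) -> tuple:
    # Scale (w, h) to fit inside (max_w, max_h) while preserving aspect ratio.
    if max_w <= 0 or max_h <= 0:
        return w, h  # invalid bounds: leave the size unchanged
    ratio = min(max_w / w, max_h / h)
    # Guarantee at least 1 pixel on each axis.
    return max(1, int(ratio * w)), max(1, int(ratio * h))
```

For example, fitting a 400x200 image into a 150x100 box gives (150, 75); the old max-ratio code scaled by the height ratio alone and produced an oversized (200, 100).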
@@ -787,9 +791,9 @@ def webcam_preview(root: ctk.CTk, camera_index: int):
return
create_webcam_preview(camera_index)
else:
modules.globals.souce_target_map = []
modules.globals.source_target_map = []
create_source_target_popup_for_webcam(
root, modules.globals.souce_target_map, camera_index
root, modules.globals.source_target_map, camera_index
)
@@ -1199,4 +1203,4 @@ def update_webcam_target(
target_label_dict_live[button_num] = target_image
else:
update_pop_live_status("Face could not be detected in last upload!")
return map
return map

requirements.txt

@@ -1,6 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cu118
numpy>=1.23.5,<2
typing-extensions>=4.8.0
opencv-python==4.10.0.84
cv2_enumerate_cameras==1.1.15
onnx==1.16.0
@@ -8,17 +9,13 @@ insightface==0.7.3
psutil==5.9.8
tk==0.1.0
customtkinter==5.2.2
pillow==9.5.0
torch==2.0.1+cu118; sys_platform != 'darwin'
torch==2.0.1; sys_platform == 'darwin'
torchvision==0.15.2+cu118; sys_platform != 'darwin'
torchvision==0.15.2; sys_platform == 'darwin'
pillow==11.1.0
torch==2.5.1+cu118; sys_platform != 'darwin'
torch==2.5.1; sys_platform == 'darwin'
torchvision==0.20.1; sys_platform != 'darwin'
torchvision==0.20.1; sys_platform == 'darwin'
onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
tensorflow==2.12.1; sys_platform != 'darwin'
onnxruntime-gpu==1.17; sys_platform != 'darwin'
tensorflow; sys_platform != 'darwin'
opennsfw2==0.10.2
protobuf==4.23.2
tqdm==4.66.4
gfpgan==1.3.8
tkinterdnd2==0.4.2
pygrabber==0.2