Compare commits

...

19 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Adam | 19f8b0eaf6 | Merge 44c5a41ccd into d5a3fb0c47 | 2025-05-13 03:55:14 +02:00 |
| Kenneth Estanislao | d5a3fb0c47 | Merge pull request #1268 from jiacheng-0/main (Update `__init__.py`) | 2025-05-13 00:57:09 +08:00 |
| Teo Jia Cheng | 9690070399 | Update `__init__.py` | 2025-05-13 00:14:49 +08:00 |
| Kenneth Estanislao | f3e83b985c | Merge pull request #1210 from KunjShah01/main (Update `__init__.py`) | 2025-05-12 15:14:58 +08:00 |
| Kenneth Estanislao | e3e3638b79 | Merge pull request #1232 from gboeer/patch-1 (Add german localization and fix minor typos) | 2025-05-12 15:14:32 +08:00 |
| Gordon Böer | 75122da389 | Create german localization | 2025-05-07 13:30:22 +02:00 |
| Gordon Böer | 7063bba4b3 | fix typos in zh.json | 2025-05-07 13:24:54 +02:00 |
| Gordon Böer | bdbd7dcfbc | fix typos in ui.py | 2025-05-07 13:23:31 +02:00 |
| cloudflips32 | 44c5a41ccd | quick start link adjusted per creators quick start link | 2025-05-05 15:28:13 -04:00 |
| cloudflips32 | c78d1ac180 | Merge branch 'main' of https://github.com/cloudflips32/Deep-Live-Cam into premain | 2025-05-05 15:23:19 -04:00 |
| cloudflips32 | 1a177f3a95 | small readme edit, close `<a>` tag | 2025-05-05 15:20:17 -04:00 |
| Adam | 96042facc0 | Merge branch 'main' into main | 2025-05-05 10:18:18 -04:00 |
| KUNJ SHAH | a64940def7 | update | 2025-05-05 13:19:46 +00:00 |
| KUNJ SHAH | fe4a87e8f2 | update | 2025-05-05 13:19:29 +00:00 |
| KUNJ SHAH | 9ecd2dab83 | changes | 2025-05-05 13:10:00 +00:00 |
| KUNJ SHAH | c9f36eb350 | Update `__init__.py` | 2025-05-05 18:29:44 +05:30 |
| Adam | b3625b35f0 | Merge branch 'main' into main | 2025-05-04 14:57:04 -04:00 |
| Adam | 7ce112adb5 | Update README.md (App Design moved within README, resized for readability) | 2025-05-04 14:49:56 -04:00 |
| cloudflips32 | ee74eae727 | README edit, App Design Diagram | 2025-05-04 14:35:34 -04:00 |

6 changed files with 135 additions and 56 deletions

**README.md** (105 lines changed)

@@ -12,7 +12,7 @@
<img src="media/demo.gif" alt="Demo GIF" width="800">
</p>
## Disclaimer
This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.
@@ -30,16 +30,26 @@ By using this software, you agree to these terms and commit to using it in a man
Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
## App Design
<p align="center">
<img src="media/app-design.gif" alt="App Design" width="1050" />
</p>
## Exclusive v2.0 Quick Start - Pre-built (Windows)
<a href="https://deeplivecam.net/index.php/quickstart"> <img src="media/Download.png" width="285" height="77" />
<a href="https://deeplivecam.net/index.php/quickstart">
<img src="media/Download.png" width="285" height="77" />
</a>
##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU.
###### These pre-builds are perfect for non-technical users or those who don't have the time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually. The pre-built version will be 60 days ahead of the open-source version.
## TLDR; Live Deepfake in just 3 Clicks
![easysteps](https://github.com/user-attachments/assets/af825228-852c-411b-b787-ffd9aac72fc6)
1. Select a face
2. Select which camera to use
3. Press live!
@@ -109,11 +119,11 @@ This is more likely to work on your computer but will be slower as it utilizes the
**1. Set up Your Platform**
- Python (3.10 recommended)
- pip
- git
- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - `iex (irm ffmpeg.tc.ht)`
- [Visual Studio 2022 Runtimes (Windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
**2. Clone the Repository**
@@ -125,7 +135,7 @@ cd Deep-Live-Cam
**3. Download the Models**
1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth)
2. [inswapper_128_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)
Place these files in the "**models**" folder.
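
If you prefer to script this step, the following is a minimal sketch using only the Python standard library. It simply fetches the two model files listed above into the `models` folder; it is an illustration, not part of the project's official instructions:

```python
import os
import urllib.request

# Model files listed above, downloaded into the "models" folder
MODELS = {
    "GFPGANv1.4.pth": "https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth",
    "inswapper_128_fp16.onnx": "https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx",
}

os.makedirs("models", exist_ok=True)
for name, url in MODELS.items():
    destination = os.path.join("models", name)
    if not os.path.exists(destination):
        print(f"Downloading {name}...")
        urllib.request.urlretrieve(url, destination)
```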
@@ -133,14 +143,16 @@ Place these files in the "**models**" folder.
We highly recommend using a `venv` to avoid issues.
For Windows:
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
For Linux:
```bash
# Ensure you use the installed Python 3.10
python3 -m venv venv
@@ -220,18 +232,20 @@ python3.10 run.py --execution-provider coreml
```
**Important Notes for macOS:**
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:
```bash
# List all installed Python versions
brew list | grep python
# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13
# Keep only Python 3.10
brew cleanup
```
@@ -280,25 +294,26 @@ pip install onnxruntime-openvino==1.15.0
```bash
python run.py --execution-provider openvino
```
</details>
## Usage
**1. Image/Video Mode**
- Execute `python run.py`.
- Choose a source face image and a target image/video.
- Click "Start".
- The output will be saved in a directory named after the target video.
**2. Webcam Mode**
- Execute `python run.py`.
- Select a source face image.
- Click "Live".
- Wait for the preview to appear (10-30 seconds).
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
## Tips and Tricks
@@ -344,32 +359,32 @@ Looking for a CLI mode? Using the -s/--source argument will run the program in CLI mode.
**We are always open to criticism and are ready to improve; that's why we didn't cherry-pick anything.**
- [*"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"*](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - Ars Technica
- [*"Thanks Deep Live Cam, shapeshifters are among us now"*](https://dataconomy.com/2024/08/15/what-is-deep-live-cam-github-deepfake/) - Dataconomy
- [*"This free AI tool lets you become anyone during video-calls"*](https://www.newsbytesapp.com/news/science/deep-live-cam-ai-impersonation-tool-goes-viral/story) - NewsBytes
- [*"OK, this viral AI live stream software is truly terrifying"*](https://www.creativebloq.com/ai/ok-this-viral-ai-live-stream-software-is-truly-terrifying) - Creative Bloq
- [*"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"*](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - PetaPixel
- [*"Deep-Live-Cam Uses AI to Transform Your Face in Real-Time, Celebrities Included"*](https://www.techeblog.com/deep-live-cam-ai-transform-face/) - TechEBlog
- [*"An AI tool that "makes you look like anyone" during a video call is going viral online"*](https://telegrafi.com/en/a-tool-that-makes-you-look-like-anyone-during-a-video-call-is-going-viral-on-the-Internet/) - Telegrafi
- [*"This Deepfake Tool Turning Images Into Livestreams is Topping the GitHub Charts"*](https://decrypt.co/244565/this-deepfake-tool-turning-images-into-livestreams-is-topping-the-github-charts) - Emerge
- [*"New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces"*](https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/) - Digital Music News
- [*"This real-time webcam deepfake tool raises alarms about the future of identity theft"*](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography
- [*"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"*](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - SomeOrdinaryGamers
- [*"Alright look look look, now look chat, we can do any face we want to look like chat"*](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - IShowSpeed
- [_"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"_](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - Ars Technica
- [_"Thanks Deep Live Cam, shapeshifters are among us now"_](https://dataconomy.com/2024/08/15/what-is-deep-live-cam-github-deepfake/) - Dataconomy
- [_"This free AI tool lets you become anyone during video-calls"_](https://www.newsbytesapp.com/news/science/deep-live-cam-ai-impersonation-tool-goes-viral/story) - NewsBytes
- [_"OK, this viral AI live stream software is truly terrifying"_](https://www.creativebloq.com/ai/ok-this-viral-ai-live-stream-software-is-truly-terrifying) - Creative Bloq
- [_"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"_](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - PetaPixel
- [_"Deep-Live-Cam Uses AI to Transform Your Face in Real-Time, Celebrities Included"_](https://www.techeblog.com/deep-live-cam-ai-transform-face/) - TechEBlog
- [_"An AI tool that "makes you look like anyone" during a video call is going viral online"_](https://telegrafi.com/en/a-tool-that-makes-you-look-like-anyone-during-a-video-call-is-going-viral-on-the-Internet/) - Telegrafi
- [_"This Deepfake Tool Turning Images Into Livestreams is Topping the GitHub Charts"_](https://decrypt.co/244565/this-deepfake-tool-turning-images-into-livestreams-is-topping-the-github-charts) - Emerge
- [_"New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces"_](https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/) - Digital Music News
- [_"This real-time webcam deepfake tool raises alarms about the future of identity theft"_](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography
- [_"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"_](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - SomeOrdinaryGamers
- [_"Alright look look look, now look chat, we can do any face we want to look like chat"_](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - IShowSpeed
## Credits
- [ffmpeg](https://ffmpeg.org/): for making video-related operations easy
- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project which provided a well-made library and models. Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
- [havok2-htwo](https://github.com/havok2-htwo): for sharing the code for webcam
- [GosuDRM](https://github.com/GosuDRM): for the open version of roop
- [pereiraroland26](https://github.com/pereiraroland26): Multiple faces support
- [vic4key](https://github.com/vic4key): For supporting/contributing to this project
- [kier007](https://github.com/kier007): for improving the user experience
- [qitianai](https://github.com/qitianai): for multi-lingual support
- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
- Footnote: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
- All the wonderful users who helped make this project go viral by starring the repo ❤️
[![Stargazers](https://reporoster.com/stars/hacksider/Deep-Live-Cam)](https://github.com/hacksider/Deep-Live-Cam/stargazers)

**locales/de.json** (new file, 46 lines)

@@ -0,0 +1,46 @@
{
"Source x Target Mapper": "Quelle x Ziel Zuordnung",
"select a source image": "Wähle ein Quellbild",
"Preview": "Vorschau",
"select a target image or video": "Wähle ein Zielbild oder Video",
"save image output file": "Bildausgabedatei speichern",
"save video output file": "Videoausgabedatei speichern",
"select a target image": "Wähle ein Zielbild",
"source": "Quelle",
"Select a target": "Wähle ein Ziel",
"Select a face": "Wähle ein Gesicht",
"Keep audio": "Audio beibehalten",
"Face Enhancer": "Gesichtsverbesserung",
"Many faces": "Mehrere Gesichter",
"Show FPS": "FPS anzeigen",
"Keep fps": "FPS beibehalten",
"Keep frames": "Frames beibehalten",
"Fix Blueish Cam": "Bläuliche Kamera korrigieren",
"Mouth Mask": "Mundmaske",
"Show Mouth Mask Box": "Mundmaskenrahmen anzeigen",
"Start": "Starten",
"Live": "Live",
"Destroy": "Beenden",
"Map faces": "Gesichter zuordnen",
"Processing...": "Verarbeitung läuft...",
"Processing succeed!": "Verarbeitung erfolgreich!",
"Processing ignored!": "Verarbeitung ignoriert!",
"Failed to start camera": "Kamera konnte nicht gestartet werden",
"Please complete pop-up or close it.": "Bitte das Pop-up komplettieren oder schließen.",
"Getting unique faces": "Einzigartige Gesichter erfassen",
"Please select a source image first": "Bitte zuerst ein Quellbild auswählen",
"No faces found in target": "Keine Gesichter im Zielbild gefunden",
"Add": "Hinzufügen",
"Clear": "Löschen",
"Submit": "Absenden",
"Select source image": "Quellbild auswählen",
"Select target image": "Zielbild auswählen",
"Please provide mapping!": "Bitte eine Zuordnung angeben!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"Face could not be detected in last upload!": "Im letzten Upload konnte kein Gesicht erkannt werden!",
"Select Camera:": "Kamera auswählen:",
"All mappings cleared!": "Alle Zuordnungen gelöscht!",
"Mappings successfully submitted!": "Zuordnungen erfolgreich übermittelt!",
"Source x Target Mapper is already open.": "Quell-zu-Ziel-Zuordnung ist bereits geöffnet."
}
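
Because the UI appears to look these strings up by their exact English wording (the keys match the `_()` calls in ui.py further below), the key sets of the different locale files need to stay in sync with each other and with the code. A minimal consistency-check sketch, assuming the `locales/` layout shown here; it is illustrative and not part of the project:

```python
import json
from pathlib import Path

# Report keys that exist in one locale file but not the other, so a renamed
# UI string (e.g. "select an source image" -> "select a source image") is
# caught in every translation.
def compare_locale_keys(reference: str, other: str) -> None:
    ref_keys = set(json.loads(Path(reference).read_text(encoding="utf-8")))
    other_keys = set(json.loads(Path(other).read_text(encoding="utf-8")))
    print(f"missing in {other}: {sorted(ref_keys - other_keys)}")
    print(f"extra in {other}: {sorted(other_keys - ref_keys)}")

compare_locale_keys("locales/zh.json", "locales/de.json")
```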

**locales/zh.json**

@@ -1,11 +1,11 @@
{
"Source x Target Mapper": "Source x Target Mapper",
"select an source image": "选择一个源图像",
"select a source image": "选择一个源图像",
"Preview": "预览",
"select an target image or video": "选择一个目标图像或视频",
"select a target image or video": "选择一个目标图像或视频",
"save image output file": "保存图像输出文件",
"save video output file": "保存视频输出文件",
"select an target image": "选择一个目标图像",
"select a target image": "选择一个目标图像",
"source": "源",
"Select a target": "选择一个目标",
"Select a face": "选择一张脸",
@@ -36,11 +36,11 @@
"Select source image": "请选取源图像",
"Select target image": "请选取目标图像",
"Please provide mapping!": "请提供映射",
"Atleast 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"Face could not be detected in last upload!": "最近上传的图像中没有检测到人脸!",
"Select Camera:": "选择摄像头",
"All mappings cleared!": "所有映射均已清除!",
"Mappings successfully submitted!": "成功提交映射!",
"Source x Target Mapper is already open.": "源 x 目标映射器已打开。"
}

**media/app-design.gif** — new binary file (321 KiB), not shown

**New Python file: Unicode path image I/O helpers**

@@ -0,0 +1,18 @@
import os
import cv2
import numpy as np

# Utility function to support unicode characters in file paths for reading
def imread_unicode(path, flags=cv2.IMREAD_COLOR):
    return cv2.imdecode(np.fromfile(path, dtype=np.uint8), flags)

# Utility function to support unicode characters in file paths for writing
def imwrite_unicode(path, img, params=None):
    root, ext = os.path.splitext(path)
    if not ext:
        # Default to PNG when the path has no extension
        ext = ".png"
    result, encoded_img = cv2.imencode(ext, img, params if params is not None else [])
    if result:
        encoded_img.tofile(path)
        return True
    return False
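
These helpers route file access through NumPy's `fromfile`/`tofile`, which handles paths that `cv2.imread`/`cv2.imwrite` can stumble over when they contain non-ASCII characters. A small usage sketch (the paths are hypothetical, and it assumes the helpers above are importable from wherever this file lives in the project):

```python
# Hypothetical paths containing non-ASCII characters
frame = imread_unicode("C:/Users/Björn/Bilder/quelle.png")
if frame is not None:
    # ... run whatever processing is needed on the frame ...
    saved = imwrite_unicode("C:/Users/Björn/Bilder/ausgabe.png", frame)
    print("saved:", saved)
```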

**ui.py**

@@ -429,7 +429,7 @@ def create_source_target_popup(
POPUP.destroy()
select_output_path(start)
else:
update_pop_status("Atleast 1 source with target is required!")
update_pop_status("At least 1 source with target is required!")
scrollable_frame = ctk.CTkScrollableFrame(
POPUP, width=POPUP_SCROLL_WIDTH, height=POPUP_SCROLL_HEIGHT
@@ -489,7 +489,7 @@ def update_popup_source(
global source_label_dict
source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)
@@ -584,7 +584,7 @@ def select_source_path() -> None:
PREVIEW.withdraw()
source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)
@@ -627,7 +627,7 @@ def select_target_path() -> None:
PREVIEW.withdraw()
target_path = ctk.filedialog.askopenfilename(
title=_("select an target image or video"),
title=_("select a target image or video"),
initialdir=RECENT_DIRECTORY_TARGET,
filetypes=[img_ft, vid_ft],
)
@@ -1108,7 +1108,7 @@ def update_webcam_source(
global source_label_dict_live
source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)
@@ -1160,7 +1160,7 @@ def update_webcam_target(
global target_label_dict_live
target_path = ctk.filedialog.askopenfilename(
title=_("select an target image"),
title=_("select a target image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)
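
The `_()` calls above appear to use the literal English string as the lookup key into the locale JSON files, which is why the same wording fix has to land both here and in every translation file. A minimal sketch of that kind of dict-backed lookup, assuming a simplified loader rather than the project's actual i18n code:

```python
import json
from pathlib import Path

# Simplified gettext-style lookup: return the translation if the key exists,
# otherwise fall back to the English key itself.
class Translator:
    def __init__(self, locale_file: str):
        self.table = json.loads(Path(locale_file).read_text(encoding="utf-8"))

    def __call__(self, text: str) -> str:
        return self.table.get(text, text)

_ = Translator("locales/de.json")
print(_("select a source image"))   # -> "Wähle ein Quellbild"
print(_("select an source image"))  # no matching key -> falls back to the English text
```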