* ffmpeg platform-agnostic hardware-acceleration
* clear CUDA cache after swapping
On low VRAM with ffmpeg CUDA acceleration, clearing the cache prevents CUDA out-of-memory errors
* check torch gpu before clearing cache
* torch check nvidia only
* syntax error
* Adjust comment
* Normalize ARGS
* Remove path normalization
* Remove args overrides
* Run test on Linux and Windows
* Revert to Ubuntu test only as Windows hangs
* Simplified the way the preview maintains its aspect ratio, and maintained the aspect ratio of the miniatures
* Change face and target images from contain to fit
* Improve status output
* Massive utilities and core refactoring
* Fix sound
* Fix sound part2
* Fix more
* Move every UI related thing to ui.py
* Refactor UI
* Introduce render_video_preview()
* Add preview back part1
* Add preview back part2, Introduce --video-quality for CLI
* Get the preview working
* Couple of minor UI fixes
* Add video encoder via CLI
* Change default video quality, Integrate recent directories for UI
* Move temporary files to temp/{target-name}
* Fix fps detection
* Rename method
* Introduce suggest methods for args defaults, output mode and core/threads count via postfix
* Fix max_memory and output memory in progress bar too
* Turns out mac has a different memory unit
* Add typing to swapper
* Fix FileNotFoundError while deleting temp
* Updated requirements.txt for macs.
(cherry picked from commit
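The CUDA cache-clearing step described in the changelog above can be sketched as follows. This is a minimal illustration, not code from this repository: `clear_gpu_cache` is a hypothetical name, and the guard reflects the noted commits ("check torch gpu before clearing cache", "torch check nvidia only").

```python
def clear_gpu_cache() -> bool:
    """Release cached CUDA memory after a swap; return True if a cache was cleared."""
    try:
        import torch  # optional dependency; do nothing if torch is absent
    except ImportError:
        return False
    # only NVIDIA/CUDA devices expose a cache to clear, so check first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        return True
    return False
```

Calling this after each swapped frame keeps VRAM headroom free when ffmpeg's CUDA acceleration is competing for the same GPU memory.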
Take a video and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
You can watch some demos here. A StableDiffusion extension is also available here.
Disclaimer
This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using the character as a model for clothing etc.
The developers of this software are aware of its possible unethical applications and are committed to taking preventative measures against them. It has a built-in check which prevents the program from working on inappropriate media, including but not limited to nudity, graphic content, and sensitive material such as war footage. We will continue to develop this project in a positive direction while adhering to law and ethics. This project may be shut down or include watermarks on the output if requested by law.
Users of this software are expected to use it responsibly while abiding by local laws. If the face of a real person is being used, users are advised to get consent from the person concerned and to clearly mention that it is a deepfake when posting content online. The developers of this software will not be responsible for the actions of end-users.
How do I install it?
Issues regarding installation will be closed without ceremony from now on; we cannot handle the amount of requests.
There are two types of installations: basic and gpu-powered.
- Basic: It is more likely to work on your computer, but it will also be very slow. You can follow the instructions for the basic install here.
- GPU: If you have a good GPU and are ready to solve any software issues you may face, you can enable GPU acceleration, which is far faster. To do this, first follow the basic install instructions given above and then follow the GPU-specific instructions here.
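As a rough sketch only (the exact package names and versions come from the linked instructions, not from this README; the `onnxruntime-gpu` swap shown here is an assumption based on typical onnxruntime setups for NVIDIA GPUs):

```shell
# Hypothetical setup sketch — follow the linked instructions for your platform.
pip install -r requirements.txt   # basic install
pip uninstall -y onnxruntime      # then, for NVIDIA GPU support,
pip install onnxruntime-gpu       # swap in the CUDA-enabled runtime
```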
How do I use it?
Note: When you run this program for the first time, it will download some models ~300MB in size.
Executing python run.py command will launch this window:

Choose a face (an image with the desired face) and the target image/video (the image/video in which you want to replace the face) and click on Start. Open your file explorer and navigate to the directory you selected as output. You will find a directory named <video_title> where you can watch the frames being swapped in real time. Once the processing is done, it will create the output file. That's it.
Don't touch the FPS checkbox unless you know what you are doing.
Additional command line arguments are given below:
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH
select a source image
-t TARGET_PATH, --target TARGET_PATH
select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH
select output file or directory
--frame-processor {face_swapper,face_enhancer} [{face_swapper,face_enhancer} ...]
pipeline of frame processors
--keep-fps keep original fps
--keep-audio keep original audio
--keep-frames keep temporary frames
--many-faces process every face
--video-encoder {libx264,libx265,libvpx-vp9}
adjust output video encoder
--video-quality VIDEO_QUALITY
adjust output video quality
--max-memory MAX_MEMORY
maximum amount of RAM in GB
--execution-provider {cpu,...} [{cpu,...} ...]
execution provider
--execution-threads EXECUTION_THREADS
number of execution threads
-v, --version show program's version number and exit
Looking for a CLI mode? Using the -s/--source argument will run the program in CLI mode.
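The documented interface can be mirrored with a short argparse sketch. This is an illustration of the flags listed above, not roop's actual parser; the default values shown (e.g. video quality 18) are assumptions, and the elided execution-provider choices are left out.

```python
import argparse

# Sketch of the documented CLI; defaults here are illustrative assumptions.
parser = argparse.ArgumentParser()
parser.add_argument('-s', '--source', dest='source_path', help='select a source image')
parser.add_argument('-t', '--target', dest='target_path', help='select a target image or video')
parser.add_argument('-o', '--output', dest='output_path', help='select output file or directory')
parser.add_argument('--frame-processor', nargs='+', default=['face_swapper'],
                    choices=['face_swapper', 'face_enhancer'], help='pipeline of frame processors')
parser.add_argument('--keep-fps', action='store_true', help='keep original fps')
parser.add_argument('--keep-audio', action='store_true', help='keep original audio')
parser.add_argument('--keep-frames', action='store_true', help='keep temporary frames')
parser.add_argument('--many-faces', action='store_true', help='process every face')
parser.add_argument('--video-encoder', default='libx264',
                    choices=['libx264', 'libx265', 'libvpx-vp9'], help='adjust output video encoder')
parser.add_argument('--video-quality', type=int, default=18, help='adjust output video quality')
parser.add_argument('--max-memory', type=int, help='maximum amount of RAM in GB')
parser.add_argument('--execution-threads', type=int, default=1, help='number of execution threads')

# A headless invocation: passing --source switches the program to CLI mode.
args = parser.parse_args(['-s', 'face.jpg', '-t', 'clip.mp4', '-o', 'out.mp4', '--keep-fps'])
```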
Credits
- henryruhs: for being an irreplaceable contributor to the project
- ffmpeg: for making video related operations easy
- deepinsight: for their insightface project which provided a well-made library and models.
- and all developers behind libraries used in this project.
