# Anime Video Models

:white_check_mark: We add small models that are optimized for anime videos :-)

More comparisons can be found in `anime_comparisons.md`.
| Models | Scale | Description |
| ---------------------- | :---- | :----------------------------- |
| realesr-animevideov3 | X4 <sup>1</sup> | Anime video model with XS size |

Note:
<sup>1</sup> This model can also be used for X1, X2, X3.
The following are some demos (best viewed in full-screen mode).
## How to Use

### PyTorch Inference
```bash
# Download the model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P weights

# Single-GPU, single-process inference
CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2

# Single-GPU, multi-process inference (multi-processing can improve GPU utilization)
CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2

# Multi-GPU, multi-process inference
CUDA_VISIBLE_DEVICES=0,1,2,3 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2
```
Usage:

- `--num_process_per_gpu`: The total number of processes is `num_gpu * num_process_per_gpu`. The bottleneck of the program usually lies in the I/O, so the GPUs are often not fully utilized. Setting this parameter enables multi-processing to alleviate the issue, as long as the total does not exceed the CUDA memory.
- `--extract_frame_first`: If you encounter an ffmpeg error when using multi-processing, turn this option on.
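As a quick sanity check before launching, the total process count implied by these flags can be computed from the GPU list. A minimal sketch (the variable values here are illustrative examples, not required settings):

```shell
# Total processes = number of visible GPUs * processes per GPU.
CUDA_VISIBLE_DEVICES="0,1,2,3"   # example GPU list
NUM_PROCESS_PER_GPU=2            # example value for --num_process_per_gpu

# Count the comma-separated GPU ids.
num_gpu=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | wc -l)
total=$((num_gpu * NUM_PROCESS_PER_GPU))
echo "$total"   # 4 GPUs * 2 processes each = 8
```

Keep `total` small enough that all processes fit in CUDA memory at once.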
### NCNN Executable File

#### Step 1: Use ffmpeg to extract frames from the video

```bash
ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
```
- Remember to create the folder `tmp_frames` ahead of time.
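For example, the folder can be created up front, and the `frame%08d.png` output pattern previewed before running ffmpeg. A minimal sketch (`%08d` zero-pads the frame index to 8 digits so the frames sort in order):

```shell
mkdir -p tmp_frames   # create the output folder ahead of time

# Preview how ffmpeg's frame%08d.png pattern numbers the first frame:
first=$(printf 'frame%08d.png' 1)
echo "$first"   # frame00000001.png
```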
#### Step 2: Inference with the Real-ESRGAN executable file

Download the latest portable Windows / Linux / MacOS executable files for Intel/AMD/Nvidia GPUs.

Taking Windows as an example, run:

```bash
./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg
```
- Remember to create the folder `out_frames` ahead of time.
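After the executable finishes, it is worth checking that every extracted frame produced an enhanced counterpart. A minimal sketch (the folder names follow the commands above):

```shell
# Count input and output frames; the two should match after a successful run.
in_count=$(ls tmp_frames 2>/dev/null | wc -l)
out_count=$(ls out_frames 2>/dev/null | wc -l)

if [ "$in_count" -eq "$out_count" ]; then
    echo "OK: $out_count frames enhanced"
else
    echo "WARNING: $in_count input frames but $out_count output frames"
fi
```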
#### Step 3: Merge the enhanced frames back into a video

First obtain the fps of the input video:

```bash
ffmpeg -i onepiece_demo.mp4
```

Usage:

- `-i`: input video path
ffmpeg will print the video's stream information; find the `fps` value there.
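The fps can also be pulled out of ffmpeg's info output programmatically rather than read off the screen. A minimal sketch (the `info` line below is an illustrative sample of what `ffmpeg -i` typically prints, not captured from a real run):

```shell
# Sample of a video stream line as printed by `ffmpeg -i` (illustrative):
info="Stream #0:0(und): Video: h264 (High), yuv420p, 1280x720, 23.98 fps, 23.98 tbr"

# Grab the number that precedes "fps".
fps=$(echo "$info" | grep -o '[0-9.]* fps' | awk '{print $1}')
echo "$fps"   # 23.98
```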
Merge the frames:

```bash
ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
```

Usage:

- `-i`: input frame path
- `-c:v`: video encoder (usually we use libx264)
- `-r`: fps; remember to modify it to meet your needs
- `-pix_fmt`: pixel format of the video
If you also want to copy the audio from the input video, run:

```bash
ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4
```

Usage:

- `-i`: input path; here we use two input streams
- `-c:v`: video encoder (usually we use libx264)
- `-r`: fps; remember to modify it to meet your needs
- `-pix_fmt`: pixel format of the video
## More Demos

- Input video for One Piece
- Output video for One Piece

More comparisons can be found in `anime_comparisons.md`.