---
license: unknown
---
|
# Stable Video Diffusion Temporal ControlNet
|
|
|
## Overview |
|
Introducing the Stable Video Diffusion Temporal ControlNet! This tool pairs a ControlNet-style encoder with the SVD (Stable Video Diffusion) base model, giving your video diffusion projects precise temporal control over motion.
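
To illustrate the general idea, here is a toy, hedged sketch of the ControlNet-style conditioning pattern: a separate encoder processes the control signal, and its output is added to the base model's features through zero-initialized layers so training starts from the unmodified base model. This is purely illustrative and does not reproduce the actual layer layout of this model; every class, function, and parameter name below is invented for the example.

```python
import torch
import torch.nn as nn


def zero_module(module: nn.Module) -> nn.Module:
    """Zero-initialize a layer so the control branch contributes nothing at the start of training."""
    for p in module.parameters():
        nn.init.zeros_(p)
    return module


class TinyControlEncoder(nn.Module):
    """Toy ControlNet-style encoder (illustrative only, not this repo's architecture).

    Encodes a control signal (e.g. a driving-video frame) and injects it into a
    base-model feature map as a residual through a zero-initialized projection.
    """

    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.SiLU(),
        )
        # Zero conv: at initialization the residual is exactly zero, so the
        # combined model behaves like the plain base model.
        self.zero_conv = zero_module(nn.Conv2d(feat_channels, feat_channels, kernel_size=1))

    def forward(self, base_features: torch.Tensor, control_signal: torch.Tensor) -> torch.Tensor:
        return base_features + self.zero_conv(self.encoder(control_signal))


# Toy usage with random tensors shaped like flattened (batch * frames, C, H, W) features.
control = TinyControlEncoder(in_channels=3, feat_channels=64)
base_features = torch.randn(2, 64, 32, 32)
control_signal = torch.randn(2, 3, 32, 32)
out = control(base_features, control_signal)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```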
|
|
|
|
|
## Setup |
|
- **ControlNet model:** download the inference repo from https://github.com/CiaraStrawberry/sdv_controlnet

- **Installation:** run `pip install -r requirements.txt`

- **Execution:** run `run_inference.py` (a minimal base-SVD sanity check is sketched below)
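
If you want to confirm your environment can run SVD at all before wiring in the ControlNet, the snippet below is a minimal sketch using the plain Diffusers img2vid pipeline. Note this is the base SVD path only, not the ControlNet inference (which goes through `run_inference.py` in the linked repo); the checkpoint id and file names here are placeholders and may not match the checkpoint this ControlNet was trained against.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Base SVD img2vid pipeline from Diffusers (environment sanity check only).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

# Conditioning image; replace with your own frame.
image = load_image("conditioning_frame.png").resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```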
|
|
|
## Demo |
|
|
|
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63357214eb6132ca653020e7/RkjfJ8IKuZA-tYa-XS99y.mp4"></video> |
|
|
|
|
|
|
|
## Notes |
|
- **Focus on Central Object:** The system tends to extract motion features primarily from a central object and, occasionally, from the background. It's best to avoid overly complex motion or obscure objects. |
|
- **Simplicity in Motion:** Stick to motions that SVD can already handle well without the ControlNet. This makes it much more likely the ControlNet can apply the motion successfully.
|
|
|
## Acknowledgements |
|
- **Diffusers Team:** For the SVD implementation.

- **Pixeli99:** For providing a practical SVD training script: [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend)