CiaraRowles committed: Update README.md
Commit: 8c223b1
Parent(s): 4a1207e

README.md CHANGED
@@ -8,23 +8,30 @@ base_model: runwayml/stable-diffusion-v1-5
---

Introducing the Beta Version of TemporalNet

TemporalNet is a ControlNet model designed to enhance the temporal consistency of generated outputs.

TemporalNet 2 is an evolution of the concept: the generated outputs are guided by both the last frame *and* an optical flow map between the frames, improving generation consistency.
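
For intuition, here is a minimal sketch of how such an optical flow map between two consecutive frames can be computed with OpenCV's dense Farneback flow and packed into an image. The helper name and the hue/magnitude encoding are choices made for this example only, not necessarily what temporalvideo.py does internally:

```python
import cv2
import numpy as np

def make_flow_map(prev_frame_path, cur_frame_path):
    """Compute a dense optical-flow map between two consecutive frames.

    Illustrative only: the exact flow encoding TemporalNet 2 /
    temporalvideo.py expects may differ from this hue/magnitude image.
    """
    prev = cv2.cvtColor(cv2.imread(prev_frame_path), cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(cv2.imread(cur_frame_path), cv2.COLOR_BGR2GRAY)

    # Farneback dense optical flow: one (dx, dy) motion vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(
        prev, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )

    # Standard visualization: direction -> hue, magnitude -> brightness.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2          # OpenCV hue range is 0-179
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```

The resulting image is the kind of flow map that, together with the previous frame, guides the next generation.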

This required some modification of the original ControlNet code, so you'll have to do a few extra things. If you just want to run a Gradio example or look at the modified ControlNet code, it's here: https://github.com/CiaraStrawberry/TemporalNet. Just drop the model from this repository into that project's model folder and make sure the gradio_temporalnet.py script points at the model.
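
Once that path is set, running the script with Python (e.g. `python gradio_temporalnet.py`, assuming a standard Gradio entry point) should start a local demo whose URL is printed in the terminal.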

To use it with Stable Diffusion, you can either go through TemporalKit or access the base API directly via the temporalvideo.py script:

1) Move your ControlNet web UI install to this branch: https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API

2) Add the model "diff_control_sd15_temporalnet_fp16.safetensors" to your models folder in the ControlNet extension in Automatic1111's Web UI.

3) Check you have:

- A folder named "Input_Images" with the input frames
- A PNG file called "init.png" that is pre-stylized in your desired style
- The "temporalvideo.py" script

4) Customize the "temporalvideo.py" script according to your preferences, such as the image resolution, prompt, and ControlNet settings (a rough sketch of the underlying API call follows these steps).

5) Launch Automatic1111's Web UI with the --api flag enabled.

6) Execute the Python script.
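
As a rough sketch of what steps 4 to 6 involve under the hood, the snippet below shows the general shape of an img2img request to Automatic1111's API with a ControlNet unit attached. It assumes the web UI is running locally with the API enabled (step 5) and uses the public `/sdapi/v1/img2img` endpoint and the ControlNet extension's `alwayson_scripts` argument format; the field values, file names, and the way the previous frame (and flow map) are wired in are placeholders to adapt, not the exact contents of temporalvideo.py.

```python
import base64
import requests

API_URL = "http://127.0.0.1:7860"  # default local address of the A1111 web UI


def b64_image(path):
    """Read an image file and return it base64-encoded, as the API expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Hypothetical settings you would normally edit inside temporalvideo.py:
# resolution, prompt, denoising strength, and the ControlNet unit that feeds
# TemporalNet the previous stylized frame (for TemporalNet 2, the modified
# extension branch also handles the optical-flow map, and the exact fields it
# expects may differ from this generic example).
payload = {
    "init_images": [b64_image("Input_Images/frame_0002.png")],
    "prompt": "your stylized prompt here",
    "width": 512,
    "height": 512,
    "denoising_strength": 0.65,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    # "init.png" only for the very first frame; afterwards,
                    # the previously generated frame.
                    "input_image": b64_image("init.png"),
                    "model": "diff_control_sd15_temporalnet_fp16",  # name as shown in the UI dropdown
                    "weight": 1.0,
                },
            ]
        }
    },
}

response = requests.post(f"{API_URL}/sdapi/v1/img2img", json=payload)
response.raise_for_status()
images = response.json()["images"]  # base64-encoded output frames
```

In practice that means starting the web UI with the API enabled (for example `./webui.sh --api`, or adding `--api` to `COMMANDLINE_ARGS` in `webui-user.bat` on Windows) and then running `python temporalvideo.py`, which walks through the frames in "Input_Images", using each generated result to guide the next one.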

*Please note that the "init.png" image will not significantly influence the style of the output video. Its primary purpose is to prevent a drastic change in aesthetics during the first few frames.*