kyujinpy committed
Commit 222f45c • 1 Parent(s): 33de5e3

Upload README.md

Files changed (1):
1. README.md +57 -0
README.md CHANGED
@@ -1,3 +1,60 @@
  ---
  license: creativeml-openrail-m
+ base_model: kyujinpy/KO-anything-v4-5
+ training_prompt: A bear is playing guitar
+ tags:
+ - tune-a-video
+ - text-to-video
+ - diffusers
+ - korean
+ inference: false
  ---
+
+ # Tune-A-VideKO-anything
+ GitHub: [Kyujinpy/Tune-A-VideKO](https://github.com/KyujinHan/Tune-A-VideKO/tree/master)
+
+ ## Model Description
+ - Base model: [kyujinpy/KO-anything-v4-5](https://huggingface.co/kyujinpy/KO-anything-v4-5)
+ - Training prompt: A bear is playing guitar
+ ![sample-train](bear.gif)
+
+ ## Samples
+
+ ![sample-500](video1.gif)
+ Test prompt: 1์†Œ๋…€๋Š” ๊ธฐํƒ€๋ฅผ ์—ฐ์ฃผํ•˜๊ณ  ์žˆ๋‹ค, ํฐ ๋จธ๋ฆฌ, ์ค‘๊ฐ„ ๋จธ๋ฆฌ, ๊ณ ์–‘์ด ๊ท€, ๊ท€์—ฌ์šด, ์Šค์นดํ”„, ์žฌํ‚ท, ์•ผ์™ธ, ๊ฑฐ๋ฆฌ, ์†Œ๋…€ (1girl is playing the guitar, white hair, medium hair, cat ears, cute, scarf, jacket, outdoors, street, girl)
+
+ ![sample-500](video2.gif)
+ Test prompt: 1์†Œ๋…€๊ฐ€ ๊ธฐํƒ€ ์—ฐ์ฃผ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค, ๋ฐ”๋‹ค, ๋ˆˆ์„ ๊ฐ์Œ, ๊ธด ๋จธ๋ฆฌ, ์นด๋ฆฌ์Šค๋งˆ (1girl is playing the guitar, sea, eyes closed, long hair, charisma)
+
+ ![sample-500](video3.gif)
+ Test prompt: 1์†Œ๋…„, ๊ธฐํƒ€ ์—ฐ์ฃผ, ์ž˜์ƒ๊น€, ์•‰์•„์žˆ๋Š”, ๋นจ๊ฐ„์ƒ‰ ๊ธฐํƒ€, ํ•ด๋ณ€ (1boy, playing guitar, handsome, sitting, red guitar, beach)
+
+ ## Usage
+ Clone the GitHub repo:
+ ```bash
+ git clone https://github.com/showlab/Tune-A-Video.git
+ ```
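+
+ Then install the repo's dependencies. A minimal sketch, assuming the cloned Tune-A-Video repo ships a `requirements.txt` (check the repo if the file name or path differs):
+ ```bash
+ cd Tune-A-Video
+ # install the inference dependencies listed by the Tune-A-Video repo
+ pip install -r requirements.txt
+ ```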
+
+ Run the inference code:
+
+ ```python
+ from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
+ from tuneavideo.models.unet import UNet3DConditionModel
+ from tuneavideo.util import save_videos_grid
+ import torch
+
+ pretrained_model_path = "kyujinpy/KO-anything-v4-5"
+ unet_model_path = "kyujinpy/Tune-A-VideKO-anything"
+ unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', torch_dtype=torch.float16).to('cuda')
+ pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
+ pipe.enable_xformers_memory_efficient_attention()
+
+ prompt = "1์†Œ๋…€๋Š” ๊ธฐํƒ€๋ฅผ ์—ฐ์ฃผํ•˜๊ณ  ์žˆ๋‹ค, ํฐ ๋จธ๋ฆฌ, ์ค‘๊ฐ„ ๋จธ๋ฆฌ, ๊ณ ์–‘์ด ๊ท€, ๊ท€์—ฌ์šด, ์Šค์นดํ”„, ์žฌํ‚ท, ์•ผ์™ธ, ๊ฑฐ๋ฆฌ, ์†Œ๋…€"
+ video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=12.5).videos
+
+ save_videos_grid(video, f"./{prompt}.gif")
+ ```
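+
+ To regenerate several of the sample clips in one run, you can reuse the `pipe` and `save_videos_grid` from the snippet above and loop over the test prompts. A minimal sketch; the fixed seed and the `sample{i}.gif` filenames are illustrative choices, not part of the original card:
+
+ ```python
+ test_prompts = [
+     "1์†Œ๋…€๋Š” ๊ธฐํƒ€๋ฅผ ์—ฐ์ฃผํ•˜๊ณ  ์žˆ๋‹ค, ํฐ ๋จธ๋ฆฌ, ์ค‘๊ฐ„ ๋จธ๋ฆฌ, ๊ณ ์–‘์ด ๊ท€, ๊ท€์—ฌ์šด, ์Šค์นดํ”„, ์žฌํ‚ท, ์•ผ์™ธ, ๊ฑฐ๋ฆฌ, ์†Œ๋…€",
+     "1์†Œ๋…€๊ฐ€ ๊ธฐํƒ€ ์—ฐ์ฃผ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค, ๋ฐ”๋‹ค, ๋ˆˆ์„ ๊ฐ์Œ, ๊ธด ๋จธ๋ฆฌ, ์นด๋ฆฌ์Šค๋งˆ",
+     "1์†Œ๋…„, ๊ธฐํƒ€ ์—ฐ์ฃผ, ์ž˜์ƒ๊น€, ์•‰์•„์žˆ๋Š”, ๋นจ๊ฐ„์ƒ‰ ๊ธฐํƒ€, ํ•ด๋ณ€",
+ ]
+
+ for i, p in enumerate(test_prompts, start=1):
+     # fix the global RNG so repeated runs give the same clip (illustrative seed)
+     torch.manual_seed(42)
+     video = pipe(p, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=12.5).videos
+     save_videos_grid(video, f"./sample{i}.gif")
+ ```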
+
+ ## Related Papers
+ - [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
+ - [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models