Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ inference:

# ddpo-alignment

-This model was finetuned from [Stable Diffusion v1-…
+This model was finetuned from [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) using [DDPO](https://arxiv.org/abs/2305.13301) and a reward function that uses [LLaVA](https://llava-vl.github.io/) to measure prompt-image alignment. See [the project website](https://rl-diffusion.github.io/) for more details.

The model was finetuned for 200 iterations with a batch size of 256 samples per iteration. During finetuning, we used prompts of the form: "_a(n) \<animal\> \<activity\>_". We selected the animal and activity from the following lists, so try those for the best results. However, we also observed limited generalization to other prompts.
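For readers who want to try the checkpoint, here is a minimal sketch of sampling from it with the `diffusers` library. The repository id `kvablack/ddpo-alignment` is an assumption inferred from the model card title (substitute the actual Hub path), and the example prompt is illustrative, chosen to match the "_a(n) \<animal\> \<activity\>_" template described above.

```python
# Minimal sketch: load the finetuned pipeline and sample an image.
# Repo id below is assumed from the model card title; replace if needed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kvablack/ddpo-alignment",  # assumed Hub repo id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Prompts of the form "a(n) <animal> <activity>" match the finetuning
# distribution; this particular animal/activity pair is just an example.
image = pipe("a dolphin riding a bike").images[0]
image.save("dolphin.png")
```

Since the model was finetuned from Stable Diffusion v1-4 without architectural changes, the standard `StableDiffusionPipeline` loading path should apply unchanged.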