---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
inference: true
---
# Text-to-image finetuning - Trkkk/fine_tuned_model
This pipeline was fine-tuned from **CompVis/stable-diffusion-v1-4** on the **Trkkk/txt_zu_img** dataset. Below are some example images generated with the fine-tuned pipeline using the following prompt: "A busy urban street filled with cars stuck in traffic. Vehicles of various types, including sedans, SUVs, and buses, are lined up bumper to bumper. The road is crowded with vehicles, and drivers seem impatient. Streetlights, traffic signs, and nearby buildings add to the busy city atmosphere, while pedestrians wait on the sidewalks. The scene is set during daylight, with clear skies above, but the road is completely congested with no cars moving."
![val_imgs_grid](./val_imgs_grid.png)
## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch

# Load the fine-tuned pipeline in half precision
pipeline = DiffusionPipeline.from_pretrained("Trkkk/fine_tuned_model", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights are intended for GPU inference

prompt = "A busy urban street filled with cars stuck in traffic. Vehicles of various types, including sedans, SUVs, and buses, are lined up bumper to bumper. The road is crowded with vehicles, and drivers seem impatient. Streetlights, traffic signs, and nearby buildings add to the busy city atmosphere, while pedestrians wait on the sidewalks. The scene is set during daylight, with clear skies above, but the road is completely congested with no cars moving."
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
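If you want reproducible outputs, you can pass a seeded `torch.Generator` and set the standard inference parameters explicitly. The snippet below is a minimal sketch; the seed, step count, and guidance scale are illustrative values, not settings used during training:
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("Trkkk/fine_tuned_model", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# Fix the random seed so the same prompt always produces the same image
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipeline(
    "A busy urban street filled with cars stuck in traffic.",  # shortened prompt for illustration
    num_inference_steps=50,   # number of denoising steps
    guidance_scale=7.5,       # classifier-free guidance strength
    generator=generator,
).images[0]
image.save("seeded_image.png")
```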
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 256
* Mixed-precision: fp16
More information on the full set of CLI arguments and the training environment is available on the [`wandb` run page](https://wandb.ai/elounitarek921-leibniz-universit-t-hannover1064/text2image-fine-tune/runs/gfbx32e8).
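Because the model was fine-tuned at a 256x256 image resolution, you may want to generate at the training resolution rather than the pipeline's 512x512 default. This is a minimal sketch; whether 256x256 actually gives better results for your prompts is an assumption you should verify:
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("Trkkk/fine_tuned_model", torch_dtype=torch.float16).to("cuda")

# Request the resolution the model was fine-tuned at (256x256)
image = pipeline("A busy urban street filled with cars stuck in traffic.", height=256, width=256).images[0]
image.save("traffic_256.png")
```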
## Intended uses & limitations
#### How to use
See the [Pipeline usage](#pipeline-usage) section above for a minimal end-to-end example of loading the pipeline and generating an image.
#### Limitations and bias
This model inherits the limitations and biases of its base model, **CompVis/stable-diffusion-v1-4**, which was trained on large-scale web-scraped image-text data and can reproduce social and cultural biases present in that data. In addition, this checkpoint was fine-tuned for only one epoch at 256x256 resolution, so prompts that differ substantially from the fine-tuning data may produce lower-quality or inconsistent results.
## Training details
The pipeline was fine-tuned from **CompVis/stable-diffusion-v1-4** on the **Trkkk/txt_zu_img** text-to-image dataset using the hyperparameters listed under [Training info](#training-info) above. Training was tracked with Weights & Biases; see the linked run page for the full configuration and logs.
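To inspect the fine-tuning data yourself, it can be loaded with the `datasets` library. This is a minimal sketch that assumes the dataset is publicly accessible on the Hub and follows the image/caption column layout expected by the diffusers text-to-image training example:
```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
dataset = load_dataset("Trkkk/txt_zu_img", split="train")

print(dataset)            # column names and number of rows
print(dataset[0].keys())  # typically an image column and a text/caption column
```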