MaxReynolds committed
Commit: d2418d4
Parent(s): 25403b8

End of training

README.md CHANGED
@@ -14,7 +14,7 @@ inference: true
 
 # Text-to-image finetuning - MaxReynolds/MyPatternModel
 
-This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **MaxReynolds/MyPatternDataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: [',<r4nd0m-l4b3l>']:
+This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **MaxReynolds/MyPatternDataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['<r4nd0m-l4b3l>']:
 
 ![val_imgs_grid](./val_imgs_grid.png)
 
@@ -28,7 +28,7 @@ from diffusers import DiffusionPipeline
 import torch
 
 pipeline = DiffusionPipeline.from_pretrained("MaxReynolds/MyPatternModel", torch_dtype=torch.float16)
-prompt = ",<r4nd0m-l4b3l>"
+prompt = "<r4nd0m-l4b3l>"
 image = pipeline(prompt).images[0]
 image.save("my_image.png")
 ```
@@ -37,7 +37,7 @@ image.save("my_image.png")
 
 These are the key hyperparameters used during training:
 
-* Epochs: 32
+* Epochs: 22
 * Learning rate: 1e-05
 * Batch size: 1
 * Gradient accumulation steps: 4
@@ -45,4 +45,4 @@ These are the key hyperparameters used during training:
 * Mixed-precision: fp16
 
 
-More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/max-f-reynolds/text2image-fine-tune/runs/8v2yfdkf).
+More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/max-f-reynolds/text2image-fine-tune/runs/kybv4sem).
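The README's usage snippet loads the checkpoint in fp16 but stops at saving a single image. Below is a minimal sketch of a fuller inference call; it is an illustrative assumption rather than part of this commit, and only the repository id and the '<r4nd0m-l4b3l>' trigger prompt come from the card. The device, seed, step count, and guidance scale are placeholder choices.

```python
# Illustrative sketch (assumption, not part of this commit): run the updated
# checkpoint on GPU with a fixed seed so results are reproducible.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "MaxReynolds/MyPatternModel", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")  # fp16 weights are intended for GPU inference

generator = torch.Generator("cuda").manual_seed(0)  # fixed seed (placeholder)
image = pipeline(
    "<r4nd0m-l4b3l>",        # trigger prompt from the model card
    num_inference_steps=50,  # placeholder step count
    guidance_scale=7.5,      # placeholder guidance value
    generator=generator,
).images[0]
image.save("sample.png")
```

On the training side, the listed hyperparameters imply an effective batch size of 4 (batch size 1 with 4 gradient-accumulation steps), assuming a single GPU; the exact CLI arguments are on the linked `wandb` run page.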
feature_extractor/preprocessor_config.json CHANGED
@@ -24,5 +24,6 @@
   "rescale_factor": 0.00392156862745098,
   "size": {
     "shortest_edge": 224
-  }
+  },
+  "use_square_size": false
 }
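The only addition here is the `use_square_size` key, presumably serialized by the newer `transformers` version used in this run; the resize target itself is unchanged. A quick hedged check is sketched below (the `AutoImageProcessor` entry point and the `feature_extractor` subfolder argument are assumptions based on the standard Stable Diffusion repository layout):

```python
# Hedged sketch: load the updated feature extractor and confirm the resize target.
# AutoImageProcessor and the subfolder path are assumptions, not part of the commit.
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained(
    "MaxReynolds/MyPatternModel", subfolder="feature_extractor"
)
print(processor.size)  # expected: {'shortest_edge': 224}
```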
model_index.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "StableDiffusionPipeline",
-  "_diffusers_version": "0.22.0.dev0",
+  "_diffusers_version": "0.23.0.dev0",
   "_name_or_path": "CompVis/stable-diffusion-v1-4",
   "feature_extractor": [
     "transformers",
safety_checker/config.json CHANGED
@@ -15,7 +15,7 @@
     "num_attention_heads": 12
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.34.1",
+  "transformers_version": "4.35.0",
   "vision_config": {
     "dropout": 0.0,
     "hidden_size": 1024,
scheduler/scheduler_config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "PNDMScheduler",
-  "_diffusers_version": "0.22.0.dev0",
+  "_diffusers_version": "0.23.0.dev0",
   "beta_end": 0.012,
   "beta_schedule": "scaled_linear",
   "beta_start": 0.00085,
text_encoder/config.json CHANGED
@@ -20,6 +20,6 @@
   "pad_token_id": 1,
   "projection_dim": 512,
   "torch_dtype": "float16",
-  "transformers_version": "4.34.1",
+  "transformers_version": "4.35.0",
   "vocab_size": 49408
 }
unet/config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "UNet2DConditionModel",
-  "_diffusers_version": "0.22.0.dev0",
+  "_diffusers_version": "0.23.0.dev0",
   "_name_or_path": "CompVis/stable-diffusion-v1-4",
   "act_fn": "silu",
   "addition_embed_type": null,
unet/diffusion_pytorch_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:371c140ab76858085f1a81995f7e78a11e19deee8d93481373a8a6b7b98f4364
+oid sha256:fc3d75a7ba6037cd8999e81d7be6b02cc0955b0221f2e7189a29cb553b1d7439
 size 3438167536
vae/config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "AutoencoderKL",
-  "_diffusers_version": "0.22.0.dev0",
+  "_diffusers_version": "0.23.0.dev0",
   "_name_or_path": "CompVis/stable-diffusion-v1-4",
   "act_fn": "silu",
   "block_out_channels": [
val_imgs_grid.png CHANGED