vapari committed on
Commit
a2c64ab
1 Parent(s): 90fcbf2

End of training

Files changed (2)
  1. README.md +16 -12
  2. generation_config.json +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli/fi dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4543
+- Loss: 0.4489
 
 ## Model description
 
@@ -41,27 +41,31 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 32
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 250
-- training_steps: 2000
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-------:|:----:|:---------------:|
-| 0.5205 | 8.0972 | 500 | 0.4739 |
-| 0.498 | 16.1943 | 1000 | 0.4578 |
-| 0.4906 | 24.2915 | 1500 | 0.4567 |
-| 0.4874 | 32.3887 | 2000 | 0.4543 |
+| 0.5539 | 1.4577 | 500 | 0.4861 |
+| 0.5132 | 2.9155 | 1000 | 0.4683 |
+| 0.5069 | 4.3732 | 1500 | 0.4607 |
+| 0.4886 | 5.8309 | 2000 | 0.4555 |
+| 0.495 | 7.2886 | 2500 | 0.4539 |
+| 0.4846 | 8.7464 | 3000 | 0.4504 |
+| 0.4843 | 10.2041 | 3500 | 0.4495 |
+| 0.4932 | 11.6618 | 4000 | 0.4489 |
 
 
 ### Framework versions
 
-- Transformers 4.44.2
-- Pytorch 2.5.0+cu121
+- Transformers 4.46.2
+- Pytorch 2.5.1+cu121
 - Datasets 3.1.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.3
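As a sanity check on the updated hyperparameters, the numbers in the README are internally consistent: the effective batch size is `train_batch_size × gradient_accumulation_steps`, and dividing the total optimizer steps by the final logged epoch gives the approximate steps per epoch. A minimal sketch, using only values stated in the diff (the steps-per-epoch figure is derived here, not stated in the model card):

```python
# Hyperparameters taken from the updated README in this commit.
train_batch_size = 8
gradient_accumulation_steps = 2   # was 4 before this commit
training_steps = 4000             # was 2000 before this commit
final_epoch = 11.6618             # epoch logged at step 4000 in the results table

# Effective (total) train batch size under gradient accumulation.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 16  # matches the README's total_train_batch_size

# Derived: approximate optimizer steps per epoch over voxpopuli/fi.
steps_per_epoch = training_steps / final_epoch
print(round(steps_per_epoch))  # prints 343
```

This also explains the epoch column: halving gradient accumulation (4 → 2) doubles the optimizer steps taken per pass over the data, so 4000 steps now cover about 11.7 epochs instead of the 32.4 epochs that 2000 steps covered before.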
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.46.2"
 }