meg51 committed
Commit 5249667
1 Parent(s): ce40116

End of training

README.md ADDED
@@ -0,0 +1,89 @@
---
base_model: openai/whisper-large-v3
datasets:
- google/fleurs
language:
- hi
library_name: peft
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Large-v3 Hindi -megha sharma
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Google Fleurs
      type: google/fleurs
      config: hi_in
      split: None
      args: 'config: hi, split: test'
    metrics:
    - type: wer
      value: 18.4303006638032
      name: Wer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Large-v3 Hindi -megha sharma

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1607
- Wer: 18.4303

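Since the card's `library_name` is `peft`, this repository holds only adapter weights, and inference means loading them on top of the base checkpoint. A minimal sketch, assuming a hypothetical adapter repo id `meg51/whisper-large-v3-hindi` (the actual repo id is not stated in the card):

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then attach the PEFT adapter on top.
# NOTE: "meg51/whisper-large-v3-hindi" is a placeholder repo id, not confirmed by the card.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "meg51/whisper-large-v3-hindi")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

def transcribe(audio_array, sampling_rate=16000):
    """Transcribe a 1-D float waveform sampled at 16 kHz to Hindi text."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    features = inputs.input_features.to(model.device, dtype=torch.float16)
    with torch.no_grad():
        ids = model.generate(features, language="hi", task="transcribe")
    return processor.batch_decode(ids, skip_special_tokens=True)[0]
```

If the adapter is LoRA-based, `model.merge_and_unload()` would fold it into the base weights for standalone deployment.
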
## Model description

This is a PEFT adapter for Hindi automatic speech recognition, trained on top of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3). Only the adapter weights (`adapter_model.safetensors`) are stored in this repository; the base model is loaded separately at inference time, as sketched above.

## Intended uses & limitations

The model is intended for transcribing Hindi speech. It was trained and evaluated only on Google FLEURS, so performance on other domains, accents, and recording conditions has not been measured.

## Training and evaluation data

Training and evaluation used the Hindi (`hi_in`) configuration of the [Google FLEURS](https://huggingface.co/datasets/google/fleurs) dataset, with WER reported on the test split.

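For reference, the evaluation data can be pulled with the `datasets` library; a small sketch using the `hi_in` config named in the card's metadata:

```python
from datasets import Audio, load_dataset

# Hindi configuration of Google FLEURS; the card reports WER on the test split.
fleurs_test = load_dataset("google/fleurs", "hi_in", split="test")

# Whisper's feature extractor expects 16 kHz audio (FLEURS already ships at 16 kHz).
fleurs_test = fleurs_test.cast_column("audio", Audio(sampling_rate=16000))

example = fleurs_test[0]
print(example["transcription"])
print(example["audio"]["array"].shape, example["audio"]["sampling_rate"])
```
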
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 20000
- mixed_precision_training: Native AMP

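The list above maps naturally onto `Seq2SeqTrainingArguments`. A sketch consistent with those values, where the output directory and the 2000-step eval/save cadence (inferred from the results table) are assumptions rather than facts from the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-hindi",  # placeholder name, not from the card
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=20000,
    fp16=True,                            # "Native AMP" mixed precision
    eval_strategy="steps",                # assumption: evals every 2000 steps, per the results table
    eval_steps=2000,
    save_steps=2000,
    predict_with_generate=True,           # assumption: needed to compute WER during eval
)
```
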
### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.1781        | 6.7797  | 2000  | 0.1785          | 21.1734 |
| 0.1519        | 13.5593 | 4000  | 0.1621          | 19.2405 |
| 0.1286        | 20.3390 | 6000  | 0.1577          | 18.7427 |
| 0.1259        | 27.1186 | 8000  | 0.1564          | 18.2058 |
| 0.111         | 33.8983 | 10000 | 0.1568          | 17.9032 |
| 0.1067        | 40.6780 | 12000 | 0.1582          | 17.8153 |
| 0.1034        | 47.4576 | 14000 | 0.1591          | 18.8403 |
| 0.0995        | 54.2373 | 16000 | 0.1603          | 18.8598 |
| 0.0929        | 61.0169 | 18000 | 0.1607          | 18.4303 |

The evaluation results reported at the top of this card (Loss 0.1607, WER 18.4303) correspond to the step-18000 row.

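The WER values are presumably standard word error rates scaled to percent. A self-contained sketch with the `evaluate` library (the sentence pair is a made-up example, not from FLEURS):

```python
import evaluate

# WER = (substitutions + deletions + insertions) / reference word count.
wer_metric = evaluate.load("wer")

references = ["यह एक परीक्षण वाक्य है"]    # made-up reference sentence
predictions = ["यह एक परीक्षण वाक्य हैं"]  # one substituted word out of five

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}")  # -> WER: 20.00
```
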
### Framework versions

- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- PyTorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:da4851288bf7d480860bdfb7c522397d99fd910157a9549f3696bc970a74aea5
+oid sha256:967b057fcbff544d1103ee5cfd6d3fb36e1b72d6aa00fdfb900715f7fb460bc0
 size 62969640
runs/Sep06_02-51-40_speech-to-text-vm/events.out.tfevents.1725591101.speech-to-text-vm CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3bdff3452676a03901b4df330b83b949ecdf10197ffad7380c846986dd375a94
-size 75920
+oid sha256:b58cbdb5d7ff653fe773c49b2773e05baf05a0a410d9a794eed634247cdab034
+size 85494