indrajitharidas committed on
Commit cdd923d
1 Parent(s): 3b67f85

Model save

README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: facebook/wav2vec2-base
+ tags:
+ - generated_from_trainer
+ datasets:
+ - audiofolder
+ metrics:
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: my_awesome_mind_model
+   results:
+   - task:
+       name: Audio Classification
+       type: audio-classification
+     dataset:
+       name: audiofolder
+       type: audiofolder
+       config: default
+       split: train
+       args: default
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.5864661654135338
+     - name: Precision
+       type: precision
+       value: 0.42391304347826086
+     - name: Recall
+       type: recall
+       value: 0.9512195121951219
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # my_awesome_mind_model
+
+ This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9514
+ - F1: 0.5865
+ - Precision: 0.4239
+ - Recall: 0.9512
+
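The reported scores are mutually consistent: F1 is the harmonic mean of precision and recall, which can be checked directly from the full-precision values in the metadata above:

```python
# F1 is the harmonic mean of precision and recall; the values below
# are the full-precision scores from the model-index metadata.
precision = 0.42391304347826086
recall = 0.9512195121951219

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.5865, matching the reported F1
```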
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+
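A minimal sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`; the output directory is an assumption, not taken from the run, and the total train batch size of 128 is implied by the per-device batch size (32) times the gradient accumulation steps (4) on a single device:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# output_dir is an assumed name, not part of the reported configuration.
training_args = TrainingArguments(
    output_dir="my_awesome_mind_model",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective batch size: 32 * 4 = 128
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```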
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
+ |:-------------:|:------:|:----:|:---------------:|:------:|:---------:|:------:|
+ | No log | 0.5714 | 1 | 1.0873 | 0.0 | 0.0 | 0.0 |
+ | No log | 1.7143 | 3 | 1.0299 | 0.1111 | 0.2308 | 0.0732 |
+ | No log | 2.8571 | 5 | 0.9925 | 0.3736 | 0.34 | 0.4146 |
+ | No log | 4.0 | 7 | 0.9678 | 0.5397 | 0.4 | 0.8293 |
+ | No log | 4.5714 | 8 | 0.9594 | 0.5538 | 0.4045 | 0.8780 |
+ | 1.006 | 5.7143 | 10 | 0.9514 | 0.5865 | 0.4239 | 0.9512 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.1
+ - Datasets 3.0.0
+ - Tokenizers 0.19.1
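For completeness, a checkpoint like this can be loaded for inference with the standard `audio-classification` pipeline; the repository id below is an assumption based on the committer and model name, so substitute the actual hub id or local path:

```python
from transformers import pipeline

# Hypothetical repository id; replace with the actual model path or hub id.
classifier = pipeline(
    "audio-classification",
    model="indrajitharidas/my_awesome_mind_model",
)

# Classify a local audio file (the path is illustrative).
predictions = classifier("sample.wav")
```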
runs/Sep20_14-16-20_Indrajits-MBP/events.out.tfevents.1726866983.Indrajits-MBP.54539.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e9c18673b1de170baa652fb7c686c65895e918ca283aa0005bec7296db9574e1
- size 9057
+ oid sha256:223ee6bc5de0f0031ae14b758fddaec65e79fcf39f7d219924f74e7f8234eaf9
+ size 9817