Kurokabe committed on
Commit f86c802
1 Parent(s): 927d898

update model card README.md

Files changed (1)
  1. README.md +25 -15
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
  metrics:
  - name: Accuracy
    type: accuracy
- value: 0.88
+ value: 0.81
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.3004
- - Accuracy: 0.73
+ - Loss: 0.7392
+ - Accuracy: 0.81

  ## Model description

@@ -52,7 +52,7 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 5e-05
  - train_batch_size: 32
  - eval_batch_size: 32
  - seed: 42
@@ -61,22 +61,32 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 10
+ - num_epochs: 20

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 2.3007 | 0.97 | 7 | 2.2260 | 0.34 |
- | 2.2424 | 1.93 | 14 | 2.0328 | 0.39 |
- | 1.9803 | 2.9 | 21 | 1.8298 | 0.41 |
- | 1.8344 | 4.0 | 29 | 1.6637 | 0.52 |
- | 1.608 | 4.97 | 36 | 1.5523 | 0.58 |
- | 1.5644 | 5.93 | 43 | 1.4443 | 0.67 |
- | 1.4354 | 6.9 | 50 | 1.3870 | 0.7 |
- | 1.38 | 8.0 | 58 | 1.3434 | 0.69 |
- | 1.3521 | 8.97 | 65 | 1.3051 | 0.76 |
- | 1.3542 | 9.66 | 70 | 1.3004 | 0.73 |
+ | 1.3055 | 0.97 | 7 | 1.2863 | 0.73 |
+ | 1.2903 | 1.93 | 14 | 1.2504 | 0.7 |
+ | 1.2118 | 2.9 | 21 | 1.1450 | 0.77 |
+ | 1.1443 | 4.0 | 29 | 1.1224 | 0.74 |
+ | 1.006 | 4.97 | 36 | 1.0376 | 0.79 |
+ | 1.0174 | 5.93 | 43 | 0.9681 | 0.8 |
+ | 0.9155 | 6.9 | 50 | 0.9322 | 0.81 |
+ | 0.8781 | 8.0 | 58 | 0.9266 | 0.78 |
+ | 0.819 | 8.97 | 65 | 0.8473 | 0.79 |
+ | 0.7984 | 9.93 | 72 | 0.8225 | 0.77 |
+ | 0.7254 | 10.9 | 79 | 0.8096 | 0.81 |
+ | 0.6752 | 12.0 | 87 | 0.7801 | 0.81 |
+ | 0.6132 | 12.97 | 94 | 0.7687 | 0.8 |
+ | 0.615 | 13.93 | 101 | 0.7603 | 0.79 |
+ | 0.6162 | 14.9 | 108 | 0.7599 | 0.82 |
+ | 0.5678 | 16.0 | 116 | 0.7414 | 0.81 |
+ | 0.548 | 16.97 | 123 | 0.7423 | 0.81 |
+ | 0.5495 | 17.93 | 130 | 0.7378 | 0.81 |
+ | 0.5185 | 18.9 | 137 | 0.7396 | 0.81 |
+ | 0.5544 | 19.31 | 140 | 0.7392 | 0.81 |


  ### Framework versions
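
For context, the hyperparameters changed in this commit (a lower learning rate and twice as many epochs) correspond to the `transformers` Trainer configuration that generated this card, and account for the reported accuracy moving from 0.73 to 0.81. Below is a minimal sketch of that configuration; the `output_dir`, the per-epoch evaluation strategy, and anything elided from the diff (e.g. gradient accumulation) are assumptions, not part of the card.

```python
# Sketch of the updated training configuration. Values marked "from the card"
# come from the diff above; everything else is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-gtzan",   # hypothetical run name (assumption)
    learning_rate=5e-5,                # from the card (lowered from 1e-4)
    per_device_train_batch_size=32,    # from the card
    per_device_eval_batch_size=32,     # from the card
    seed=42,                           # from the card
    adam_beta1=0.9,                    # from the card (Adam betas)
    adam_beta2=0.999,                  # from the card
    adam_epsilon=1e-8,                 # from the card
    lr_scheduler_type="linear",        # from the card
    warmup_ratio=0.1,                  # from the card (lr_scheduler_warmup_ratio)
    num_train_epochs=20,               # from the card (raised from 10)
    evaluation_strategy="epoch",       # assumption: one eval per epoch, as the results table suggests
    metric_for_best_model="accuracy",  # assumption
)
```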