MacByner committed on
Commit c9cecf8
1 Parent(s): db86be0

End of training

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -28,11 +28,12 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>]()
 # distilhubert-finetuned-gtzan

 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7953
+- Loss: 1.0125
 - Accuracy: 0.73

 ## Model description
@@ -53,8 +54,8 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -66,14 +67,14 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.4423        | 1.0   | 450  | 1.2938          | 0.62     |
-| 1.2445        | 2.0   | 900  | 0.9108          | 0.7      |
-| 0.2709        | 3.0   | 1350 | 0.7953          | 0.73     |
+| 1.3647        | 1.0   | 899  | 1.2020          | 0.65     |
+| 1.0999        | 2.0   | 1798 | 1.1490          | 0.7      |
+| 0.1482        | 3.0   | 2697 | 1.0125          | 0.73     |


 ### Framework versions

-- Transformers 4.40.0
+- Transformers 4.41.0.dev0
 - Pytorch 2.2.1+cu121
-- Datasets 2.19.0
+- Datasets 2.17.1
 - Tokenizers 0.19.1
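
For reference, a minimal sketch of how the hyperparameters recorded in the updated card could map onto `transformers.TrainingArguments`. The output directory, label count, epoch count, and evaluation strategy are assumptions inferred from the card (GTZAN's 10 genres and the three-epoch results table); they are not stated in the diff itself.

```python
# Sketch only: mirrors the hyperparameters listed in the updated model card.
from transformers import (
    AutoFeatureExtractor,
    AutoModelForAudioClassification,
    TrainingArguments,
)

# Base checkpoint named in the card; num_labels=10 assumes GTZAN's ten genres.
model = AutoModelForAudioClassification.from_pretrained(
    "ntu-spml/distilhubert", num_labels=10
)
feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumed output/repo name
    learning_rate=5e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1 in the new card
    per_device_eval_batch_size=1,    # eval_batch_size: 1 in the new card
    seed=42,
    adam_beta1=0.9,                  # Adam betas/epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,              # inferred from the three-epoch results table
    evaluation_strategy="epoch",     # assumed; matches the per-epoch eval rows
)
```

The Adam betas and epsilon listed in the card match the `TrainingArguments` defaults, so they are spelled out here only to make the correspondence explicit.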