End of training
README.md CHANGED
@@ -28,11 +28,12 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>]()
 # distilhubert-finetuned-gtzan
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss:
+- Loss: 1.0125
 - Accuracy: 0.73
 
 ## Model description
@@ -53,8 +54,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -66,14 +67,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.
-| 1.
-| 0.
+| 1.3647 | 1.0 | 899 | 1.2020 | 0.65 |
+| 1.0999 | 2.0 | 1798 | 1.1490 | 0.7 |
+| 0.1482 | 3.0 | 2697 | 1.0125 | 0.73 |
 
 
 ### Framework versions
 
-- Transformers 4.
+- Transformers 4.41.0.dev0
 - Pytorch 2.2.1+cu121
-- Datasets 2.
+- Datasets 2.17.1
 - Tokenizers 0.19.1
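
For readers who want to reproduce the run, the hyperparameters in the second hunk map directly onto `transformers.TrainingArguments`. This is a minimal sketch, not the training script behind this commit: `output_dir`, the per-epoch evaluation/logging strategies, and `report_to` are assumptions inferred from the card and the W&B badge added in this change; the numeric values are copied from the diff.

```python
from transformers import TrainingArguments

# Sketch only: numeric values come from the hyperparameter list in the diff;
# output_dir and the strategy/reporting arguments are assumptions.
training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumed from the card title
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,               # Adam betas/epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,           # the results table ends at epoch 3.0
    evaluation_strategy="epoch",  # assumed: one results row per epoch
    logging_strategy="epoch",
    report_to="wandb",            # consistent with the W&B badge added above
)
```

With a train batch size of 1, the 899 steps per epoch in the results table imply 899 training examples, consistent with roughly a 90/10 train/test split of GTZAN's 1,000 clips.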
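Once the checkpoint is on the Hub, it can be loaded with the standard `audio-classification` pipeline. A minimal usage sketch, assuming a hypothetical repo id (`<user>/distilhubert-finetuned-gtzan` is a placeholder, not a confirmed path) and a local GTZAN-style WAV file:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="<user>/distilhubert-finetuned-gtzan",
)

# GTZAN clips are 30-second WAV files; any local audio path works here.
predictions = classifier("blues.00000.wav")
print(predictions)  # e.g. [{'label': 'blues', 'score': ...}, ...]
```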