NicolasDenier committed on
Commit 189ea89
1 Parent(s): 46559d8

update model card README.md

Files changed (1):
  1. README.md +15 -36
README.md CHANGED
@@ -1,28 +1,13 @@
 ---
 license: apache-2.0
-base_model: NicolasDenier/distilhubert-finetuned-gtzan
+base_model: ntu-spml/distilhubert
 tags:
 - generated_from_trainer
 datasets:
 - marsyas/gtzan
-metrics:
-- accuracy
 model-index:
 - name: distilhubert-finetuned-gtzan
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: GTZAN
-      type: marsyas/gtzan
-      config: all
-      split: train
-      args: all
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.85
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilhubert-finetuned-gtzan
 
-This model is a fine-tuned version of [NicolasDenier/distilhubert-finetuned-gtzan](https://huggingface.co/NicolasDenier/distilhubert-finetuned-gtzan) on the GTZAN dataset.
+This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6529
-- Accuracy: 0.85
+- eval_loss: 0.7834
+- eval_accuracy: 0.78
+- eval_runtime: 46.4733
+- eval_samples_per_second: 2.152
+- eval_steps_per_second: 0.538
+- epoch: 12.0
+- step: 1350
 
 ## Model description
 
@@ -53,30 +43,19 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 5
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.6294 | 1.0 | 224 | 0.6803 | 0.85 |
-| 0.4995 | 2.0 | 449 | 0.6409 | 0.87 |
-| 0.3727 | 3.0 | 674 | 0.5873 | 0.87 |
-| 0.1291 | 4.0 | 899 | 0.6303 | 0.86 |
-| 0.0569 | 4.98 | 1120 | 0.6529 | 0.85 |
-
+- num_epochs: 15
 
 ### Framework versions
 
 - Transformers 4.32.0.dev0
-- Pytorch 2.0.1+cu118
+- Pytorch 2.0.1+cu117
 - Datasets 2.13.1
 - Tokenizers 0.13.3
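Note that both configurations keep `total_train_batch_size` at 8: the old card used per-device batch size 2 with 4 gradient-accumulation steps, the new one uses 4 with 2. A minimal sketch of that relationship (the helper function name is illustrative, not part of any library):

```python
def effective_batch_size(per_device: int, accumulation_steps: int, num_devices: int = 1) -> int:
    """Total number of examples that contribute to a single optimizer step."""
    return per_device * accumulation_steps * num_devices

# Values as reported in the two versions of the model card.
old = effective_batch_size(per_device=2, accumulation_steps=4)  # previous config
new = effective_batch_size(per_device=4, accumulation_steps=2)  # updated config
assert old == new == 8  # matches total_train_batch_size: 8 in both versions
```

So the optimizer sees the same effective batch size before and after the change; what differs is memory pressure per forward pass (larger per-device batches) versus the number of accumulation passes per step.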