JvThunder committed on
Commit
c8dc207
1 Parent(s): e9a1552

End of training

Browse files
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
-license: bsd-3-clause
-base_model: MIT/ast-finetuned-audioset-10-10-0.4593
+license: apache-2.0
+base_model: ntu-spml/distilhubert
 tags:
 - generated_from_trainer
 datasets:
@@ -15,15 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilhubert-finetuned-gtzan
 
-This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
-It achieves the following results on the evaluation set:
-- eval_loss: 0.8117
-- eval_accuracy: 0.83
-- eval_runtime: 46.4112
-- eval_samples_per_second: 2.155
-- eval_steps_per_second: 0.539
-- epoch: 12.9956
-- step: 731
+This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 
 ## Model description
 
@@ -42,21 +34,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 4
-- eval_batch_size: 4
+- learning_rate: 5e-05
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 20
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Framework versions
 
 - Transformers 4.41.2
-- Pytorch 2.3.0+cu121
-- Datasets 2.20.0
+- Pytorch 2.1.2
+- Datasets 2.19.2
 - Tokenizers 0.19.1
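The hyperparameters above specify a `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1`: the learning rate ramps linearly from 0 up to the base rate over the first 10% of steps, then decays linearly back to 0. A minimal sketch of that shape (the function name and the 1000-step run are illustrative assumptions, not the Trainer's actual code):

```python
def linear_warmup_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Shape of Transformers' `linear` schedule: linear warmup to base_lr
    over warmup_ratio * total_steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp from 0 to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp from base_lr back down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Hypothetical 1000-step run: peak of 5e-5 at step 100, 2.5e-5 midway through decay.
peak = linear_warmup_lr(100, 1000)
mid = linear_warmup_lr(550, 1000)
```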
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8000d92dda6f735f67f0f4d0d035a3ab29555841a6c5dd49d365d7d40d20ee02
+oid sha256:ad842a6f93e4bc8cefe66c9f4af1b8350195481103166081bc92e03d99f10caf
 size 94771728
runs/Jun21_17-13-28_c2457e59e213/events.out.tfevents.1718990010.c2457e59e213.34.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e80f9c7de1bc540d806c935ede03aaf46e38fa059e4c01e1331cd4fdf5a63ca
+size 6040
runs/Jun21_17-28-01_c2457e59e213/events.out.tfevents.1718990886.c2457e59e213.34.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf6222de77012f031b215ee9d2a723480b93d010b5898d30a0caf84b890df455
+size 6042
runs/Jun21_17-28-01_c2457e59e213/events.out.tfevents.1718991170.c2457e59e213.34.2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cc2212be42ee00f175d6e85e3ae39c8b97ea4f9699e1f80388e8274f154bbe6
+size 6042
runs/Jun21_17-33-42_c2457e59e213/events.out.tfevents.1718991225.c2457e59e213.34.3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f524115e4b02fee74d2b225c18d9463c347bad358183d7d82ac3b6b74b58abcb
+size 6042
runs/Jun21_17-36-11_c2457e59e213/events.out.tfevents.1718991373.c2457e59e213.34.5 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3dfd7cd576d32859905046fb9b3e00e6fa6bd333e4e2922d469676cf99bc9245
+size 6042
runs/Jun21_17-36-39_c2457e59e213/events.out.tfevents.1718991399.c2457e59e213.34.6 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:669b2277f7e39f37f92d4c520ac8996205813e557a00d2985601c0bd50180451
+size 6042
runs/Jun21_17-37-34_c2457e59e213/events.out.tfevents.1718991455.c2457e59e213.34.7 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac279db8d17f7659de2e5304967219f8091871f9ac514d7e0b31267bd90ea79e
+size 6042
runs/Jun21_17-38-03_c2457e59e213/events.out.tfevents.1718991483.c2457e59e213.34.8 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b66c886d36b5a9da1c061baec773be154555945139de379f18aac104acae75f
+size 6042
runs/Jun21_17-38-17_c2457e59e213/events.out.tfevents.1718991498.c2457e59e213.34.9 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9141b826e94839c007bfc61efb4b1f05e98201c7c657ec17259a77f4c349ce64
+size 6042
runs/Jun21_17-38-57_c2457e59e213/events.out.tfevents.1718991538.c2457e59e213.34.10 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9251f876048665d89407dde6e1470b679a70381deba6edb9853f7bc476650013
+size 8513
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb8a6a7adeab2647479345153581ec0ea267fc5e7607ebf69a8506a26865db43
+oid sha256:f1c695468c791c651eec23bac5b38a9771ed7f9bf246719618db7707cfb95758
 size 5112