|
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: facebook_wav2vec2-base
  results: []
---
|
|
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You |
|
should probably proofread and complete it, then remove this comment. --> |
|
|
|
# facebook_wav2vec2-base |
|
|
|
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an audio-classification dataset that is not identified in the training metadata.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.4228 |
|
- Accuracy: 0.8974 |
|
|
|
## Model description |
|
|
|
This checkpoint fine-tunes [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), a self-supervised speech representation model, with a sequence-classification head for audio classification. The target task and label set were not recorded during training; more information is needed.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for audio classification of 16 kHz mono audio, matching the sampling rate wav2vec2-base was pretrained on. Because the training dataset and label set are undocumented, performance on out-of-domain audio is unknown and should be validated before use; more information is needed.
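
A minimal inference sketch is shown below. The repository id is a placeholder (substitute the actual Hub id or a local path to this checkpoint), and the returned label names depend on the undocumented training data.

```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub id or a local path to this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="your-username/facebook_wav2vec2-base",
)

# wav2vec2-base expects 16 kHz mono audio; the pipeline resamples input files as needed.
predictions = classifier("example.wav")
print(predictions)  # e.g. [{"label": "...", "score": 0.97}, ...]
```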
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
|
- learning_rate: 0.0003 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 2 |
|
- seed: 0 |
|
- gradient_accumulation_steps: 4 |
|
- total_train_batch_size: 64 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- num_epochs: 10.0 |
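
As a rough guide, the settings above map onto `transformers.TrainingArguments` as sketched below. The output directory, logging cadence, and evaluation cadence are assumptions (inferred from the 100-step intervals in the results table), not values recorded by the Trainer.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir, logging_steps and the
# evaluation cadence are assumptions, not recorded settings.
training_args = TrainingArguments(
    output_dir="facebook_wav2vec2-base",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,        # 16 * 4 = 64 effective train batch size
    seed=0,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",          # assumption: matches the 100-step eval cadence
    eval_steps=100,
    logging_steps=100,
)
```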
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.352         | 0.25  | 100  | 0.0266          | 0.9961   |
| 0.2689        | 0.5   | 200  | 0.2177          | 0.9808   |
| 1.2723        | 0.76  | 300  | 0.0354          | 0.9924   |
| 0.6624        | 1.01  | 400  | 0.4243          | 0.8974   |
| 0.5959        | 1.26  | 500  | 0.4805          | 0.8974   |
| 0.594         | 1.51  | 600  | 0.4401          | 0.8974   |
| 0.6017        | 1.76  | 700  | 0.4308          | 0.8974   |
| 0.5973        | 2.02  | 800  | 0.3904          | 0.8974   |
| 0.6096        | 2.27  | 900  | 0.4004          | 0.8974   |
| 0.5936        | 2.52  | 1000 | 0.4180          | 0.8974   |
| 0.5932        | 2.77  | 1100 | 0.4600          | 0.8974   |
| 0.5884        | 3.02  | 1200 | 0.4335          | 0.8974   |
| 0.5815        | 3.28  | 1300 | 0.3711          | 0.8974   |
| 0.5923        | 3.53  | 1400 | 0.4266          | 0.8974   |
| 0.6062        | 3.78  | 1500 | 0.4494          | 0.8974   |
| 0.6025        | 4.03  | 1600 | 0.4098          | 0.8974   |
| 0.5907        | 4.28  | 1700 | 0.3796          | 0.8974   |
| 0.5933        | 4.54  | 1800 | 0.4114          | 0.8974   |
| 0.5997        | 4.79  | 1900 | 0.4284          | 0.8974   |
| 0.6028        | 5.04  | 2000 | 0.4269          | 0.8974   |
| 0.5936        | 5.29  | 2100 | 0.4423          | 0.8974   |
| 0.5994        | 5.55  | 2200 | 0.4397          | 0.8974   |
| 0.5937        | 5.8   | 2300 | 0.4305          | 0.8974   |
| 0.5958        | 6.05  | 2400 | 0.4338          | 0.8974   |
| 0.5984        | 6.3   | 2500 | 0.3945          | 0.8974   |
| 0.5943        | 6.55  | 2600 | 0.3878          | 0.8974   |
| 0.5819        | 6.81  | 2700 | 0.4235          | 0.8974   |
| 0.594         | 7.06  | 2800 | 0.4160          | 0.8974   |
| 0.5883        | 7.31  | 2900 | 0.4076          | 0.8974   |
| 0.5877        | 7.56  | 3000 | 0.4213          | 0.8974   |
| 0.5939        | 7.81  | 3100 | 0.4089          | 0.8974   |
| 0.6025        | 8.07  | 3200 | 0.4385          | 0.8974   |
| 0.6016        | 8.32  | 3300 | 0.4373          | 0.8974   |
| 0.5815        | 8.57  | 3400 | 0.4191          | 0.8974   |
| 0.5915        | 8.82  | 3500 | 0.4216          | 0.8974   |
| 0.602         | 9.07  | 3600 | 0.4337          | 0.8974   |
| 0.5907        | 9.33  | 3700 | 0.4129          | 0.8974   |
| 0.603         | 9.58  | 3800 | 0.4216          | 0.8974   |
| 0.593         | 9.83  | 3900 | 0.4227          | 0.8974   |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.34.0.dev0 |
|
- PyTorch 2.0.0.post302
|
- Datasets 2.14.5 |
|
- Tokenizers 0.13.3 |
|
|