---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: wav2vec-base-CREMA-sentiment-analysis
    results: []
---

# wav2vec-base-CREMA-sentiment-analysis

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) for speech emotion classification on the CREMA-D dataset, as indicated by the model name. It achieves the following results on the evaluation set:

- Loss: 1.3834
- Accuracy: 0.5756
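
As a quick sanity check, the checkpoint can be loaded through the `audio-classification` pipeline. This is a minimal sketch, not part of the original card: it assumes the checkpoint ships a classification head and label mapping, and that the input is 16 kHz mono audio; the file path is a placeholder.

```python
# Minimal inference sketch (assumption: the checkpoint includes a
# classification head and id2label mapping, as produced by
# AutoModelForAudioClassification during fine-tuning).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Supreeta03/wav2vec-base-CREMA-sentiment-analysis-best1",
)

# "sample.wav" is a placeholder path; wav2vec2-base expects 16 kHz mono audio.
predictions = classifier("sample.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```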

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
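
The sketch below is a hedged reconstruction of that configuration, not the author's original training script: `output_dir` is a placeholder, the Adam settings are the library defaults, and the dataset, model head, and `Trainer` wiring are omitted.

```python
# Hedged reconstruction of the hyperparameters listed above; output_dir is a
# placeholder, and the model/dataset/Trainer setup is intentionally omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec-base-CREMA-sentiment-analysis",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective total train batch size: 32 * 4 = 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the default optimizer
    # settings, so they need no explicit arguments here.
)
```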

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7533 | 0.99 | 37 | 1.7438 | 0.3056 |
| 1.5942 | 1.99 | 74 | 1.5781 | 0.3409 |
| 1.4434 | 2.98 | 111 | 1.5167 | 0.3568 |
| 1.4327 | 4.0 | 149 | 1.4241 | 0.4156 |
| 1.3499 | 4.99 | 186 | 1.3148 | 0.4819 |
| 1.2126 | 5.99 | 223 | 1.2915 | 0.4887 |
| 1.1838 | 6.98 | 260 | 1.2672 | 0.4971 |
| 1.1141 | 8.0 | 298 | 1.2433 | 0.5046 |
| 1.1163 | 8.99 | 335 | 1.1968 | 0.5281 |
| 1.0099 | 9.99 | 372 | 1.1610 | 0.5516 |
| 0.9566 | 10.98 | 409 | 1.1547 | 0.5567 |
| 0.906 | 12.0 | 447 | 1.1565 | 0.5584 |
| 0.8275 | 12.99 | 484 | 1.1442 | 0.5718 |
| 0.7813 | 13.99 | 521 | 1.2570 | 0.5550 |
| 0.711 | 14.98 | 558 | 1.1654 | 0.5567 |
| 0.7146 | 16.0 | 596 | 1.4391 | 0.5323 |
| 0.6597 | 16.99 | 633 | 1.2309 | 0.5659 |
| 0.5579 | 17.99 | 670 | 1.2385 | 0.5760 |
| 0.5874 | 18.98 | 707 | 1.2609 | 0.5760 |
| 0.4905 | 20.0 | 745 | 1.3433 | 0.5777 |
| 0.5089 | 20.99 | 782 | 1.3727 | 0.5584 |
| 0.4414 | 21.99 | 819 | 1.3488 | 0.5676 |
| 0.3837 | 22.98 | 856 | 1.3572 | 0.5819 |
| 0.4419 | 24.0 | 894 | 1.5063 | 0.5651 |
| 0.387 | 24.99 | 931 | 1.4656 | 0.5659 |
| 0.4068 | 25.99 | 968 | 1.5354 | 0.5701 |
| 0.3496 | 26.98 | 1005 | 1.4607 | 0.5684 |
| 0.3579 | 28.0 | 1043 | 1.5049 | 0.5651 |
| 0.3135 | 28.99 | 1080 | 1.4441 | 0.5743 |
| 0.3612 | 29.99 | 1117 | 1.5329 | 0.5701 |
| 0.2599 | 30.98 | 1154 | 1.5920 | 0.5668 |
| 0.2517 | 32.0 | 1192 | 1.5633 | 0.5626 |
| 0.2439 | 32.99 | 1229 | 1.5979 | 0.5718 |
| 0.2891 | 33.99 | 1266 | 1.5590 | 0.5785 |
| 0.2564 | 34.98 | 1303 | 1.5978 | 0.5751 |
| 0.2132 | 36.0 | 1341 | 1.6400 | 0.5634 |
| 0.1882 | 36.99 | 1378 | 1.6309 | 0.5718 |
| 0.2027 | 37.99 | 1415 | 1.6376 | 0.5743 |
| 0.2555 | 38.98 | 1452 | 1.7064 | 0.5693 |
| 0.1872 | 40.0 | 1490 | 1.6575 | 0.5819 |
| 0.1891 | 40.99 | 1527 | 1.6606 | 0.5735 |
| 0.1795 | 41.99 | 1564 | 1.6507 | 0.5735 |
| 0.1931 | 42.98 | 1601 | 1.6627 | 0.5760 |
| 0.1574 | 44.0 | 1639 | 1.6944 | 0.5802 |
| 0.1842 | 44.99 | 1676 | 1.7082 | 0.5768 |
| 0.1859 | 45.99 | 1713 | 1.7004 | 0.5768 |
| 0.2088 | 46.98 | 1750 | 1.7002 | 0.5802 |
| 0.197 | 48.0 | 1788 | 1.6969 | 0.5751 |
| 0.1902 | 48.99 | 1825 | 1.6996 | 0.5743 |
| 0.1771 | 49.66 | 1850 | 1.6998 | 0.5743 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2