
robbert-v2-dutch-base-finetuned-emotion-valence

This model is a fine-tuned version of pdelobelle/robbert-v2-dutch-base; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.0317
  • RMSE: 0.1781
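
The checkpoint is published on the Hugging Face Hub as antalvdb/robbert-v2-dutch-base-finetuned-emotion-valence. The snippet below is a minimal inference sketch, not the authors' code: it assumes the checkpoint carries a single-output regression head whose raw logit is the predicted valence score, which is consistent with the RMSE metric above but should be verified against the actual label scale.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id as published on the Hub; the single-output regression head
# is an assumption based on the RMSE metric reported in this card.
model_id = "antalvdb/robbert-v2-dutch-base-finetuned-emotion-valence"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Wat een prachtige dag!"  # Dutch: "What a beautiful day!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# With a one-output regression head, the logit itself is the predicted valence.
valence = outputs.logits.squeeze().item()
print(f"Predicted valence: {valence:.3f}")
```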

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
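
These hyperparameters map onto a standard Hugging Face TrainingArguments configuration along the lines of the sketch below. This is a reconstruction for reference, not the original training script; output_dir is a placeholder and the evaluation strategy is inferred from the per-epoch results table.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="robbert-v2-dutch-base-finetuned-emotion-valence",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,        # Adam with betas=(0.9, 0.999) and epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",  # assumption: the table reports one evaluation per epoch
)
```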

Training results

Training Loss | Epoch | Step | Validation Loss | RMSE
0.0813 1.0 25 0.0510 0.2258
0.0445 2.0 50 0.0381 0.1952
0.0409 3.0 75 0.0466 0.2158
0.0308 4.0 100 0.0351 0.1874
0.0257 5.0 125 0.0393 0.1983
0.0231 6.0 150 0.0442 0.2103
0.0203 7.0 175 0.0447 0.2115
0.0191 8.0 200 0.0372 0.1929
0.0156 9.0 225 0.0425 0.2061
0.0154 10.0 250 0.0367 0.1917
0.0138 11.0 275 0.0365 0.1910
0.0128 12.0 300 0.0432 0.2078
0.0137 13.0 325 0.0329 0.1814
0.0118 14.0 350 0.0327 0.1809
0.0118 15.0 375 0.0378 0.1945
0.0109 16.0 400 0.0360 0.1897
0.0103 17.0 425 0.0325 0.1803
0.0096 18.0 450 0.0327 0.1809
0.0091 19.0 475 0.0430 0.2072
0.0081 20.0 500 0.0345 0.1856
0.0094 21.0 525 0.0365 0.1912
0.0084 22.0 550 0.0350 0.1870
0.0075 23.0 575 0.0324 0.1800
0.0069 24.0 600 0.0330 0.1816
0.0087 25.0 625 0.0347 0.1863
0.0079 26.0 650 0.0297 0.1722
0.0071 27.0 675 0.0311 0.1763
0.0076 28.0 700 0.0322 0.1795
0.0064 29.0 725 0.0338 0.1839
0.0067 30.0 750 0.0326 0.1806
0.0061 31.0 775 0.0327 0.1808
0.0064 32.0 800 0.0339 0.1842
0.0062 33.0 825 0.0300 0.1732
0.0062 34.0 850 0.0331 0.1819
0.0055 35.0 875 0.0318 0.1782
0.0059 36.0 900 0.0323 0.1797
0.0056 37.0 925 0.0311 0.1765
0.0055 38.0 950 0.0310 0.1762
0.0053 39.0 975 0.0325 0.1802
0.0056 40.0 1000 0.0310 0.1761
0.0054 41.0 1025 0.0323 0.1799
0.0057 42.0 1050 0.0351 0.1873
0.0053 43.0 1075 0.0347 0.1861
0.0054 44.0 1100 0.0330 0.1816
0.0059 45.0 1125 0.0313 0.1769
0.0053 46.0 1150 0.0312 0.1766
0.0051 47.0 1175 0.0325 0.1804
0.0057 48.0 1200 0.0304 0.1745
0.0048 49.0 1225 0.0317 0.1782
0.0050 50.0 1250 0.0317 0.1781
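
The validation loss reported above is the mean squared error of the regression head, and the RMSE column is its square root (for example, √0.0317 ≈ 0.178 in the final epoch). The compute_metrics sketch below would reproduce such values with the Trainer API under that assumption; the actual evaluation code used for this card is not published.

```python
import numpy as np

def compute_metrics(eval_pred):
    """RMSE for a single-output regression head (a sketch, not the
    original evaluation code)."""
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)       # (n, 1) -> (n,)
    mse = np.mean((predictions - labels) ** 2)  # matches the validation (MSE) loss
    return {"rmse": float(np.sqrt(mse))}
```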

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1