
Tagset

  • O
  • B-CITATION
  • I-CITATION
  • B-LAW
  • I-LAW

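Given the BIO tagset above, the snippet below is a minimal inference sketch using the Hugging Face transformers pipeline. The model identifier and the example sentence are placeholders (the sentence is an illustrative legal-style passage, not taken from the training data); aggregation_strategy="simple" merges B-/I- pieces into whole CITATION and LAW spans.

```python
from transformers import pipeline

# Placeholder: substitute this repository's model identifier.
model_id = "<this-model-id>"

# Token-classification pipeline; aggregation_strategy="simple" merges
# B-CITATION/I-CITATION and B-LAW/I-LAW pieces into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Illustrative example sentence (assumption, not from the training data).
text = "Gemäss Art. 8 ZGB trägt die beweisbelastete Partei die Folgen der Beweislosigkeit (vgl. BGE 130 III 321 E. 3.1)."

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```
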
Training

  • The model was trained with the following hyperparameters (see the configuration sketch after this list):
    • batch size: 64
    • learning rate: 1e-5
    • number of training epochs: 50 (early stopping ended training after 23 epochs)
    • early stopping patience: 5

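A hedged sketch of how the hyperparameters above could map onto Hugging Face TrainingArguments with an EarlyStoppingCallback; the model, datasets, output directory, and monitored metric are placeholders and are not taken from the original training setup.

```python
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

# Sketch only: `model`, `train_dataset`, and `eval_dataset` are assumed to be
# defined elsewhere (a token-classification model and tokenized NER datasets).
training_args = TrainingArguments(
    output_dir="outputs",                 # placeholder
    per_device_train_batch_size=64,       # listed batch size; per-device vs. effective is not stated
    learning_rate=1e-5,
    num_train_epochs=50,                  # early stopping ended training after 23 epochs
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,          # required for EarlyStoppingCallback
    metric_for_best_model="f1",           # assumption; the monitored metric is not stated
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
)
trainer.train()
```
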
Predict scores

| metric | score |
|---|---|
| de_predict/_CITATION_f1 | 97.93 |
| de_predict/_CITATION_precision | 98.53 |
| de_predict/_CITATION_recall | 97.34 |
| de_predict/_LAW_f1 | 92.08 |
| de_predict/_LAW_precision | 85.99 |
| de_predict/_LAW_recall | 99.1 |
| de_predict/_accuracy_normalized | 98.8 |
| de_predict/_macro-f1 | 95.04 |
| de_predict/_macro-precision | 98.22 |
| de_predict/_macro-recall | 92.32 |
| de_predict/_micro-f1 | 94.06 |
| de_predict/_micro-precision | 98.49 |
| de_predict/_micro-recall | 90.01 |
| de_predict/_steps_per_second | 54.9 |
| de_predict/_weighted-f1 | 93.97 |
| de_predict/_weighted-precision | 98.55 |
| de_predict/_weighted-recall | 90.01 |
| fr_predict/_CITATION_f1 | 95.55 |
| fr_predict/_CITATION_precision | 96.85 |
| fr_predict/_CITATION_recall | 94.28 |
| fr_predict/_LAW_f1 | 91.01 |
| fr_predict/_LAW_precision | 83.67 |
| fr_predict/_LAW_recall | 99.76 |
| fr_predict/_accuracy_normalized | 98.31 |
| fr_predict/_macro-f1 | 93.3 |
| fr_predict/_macro-precision | 97.02 |
| fr_predict/_macro-recall | 90.3 |
| fr_predict/_micro-f1 | 92.06 |
| fr_predict/_micro-precision | 98.42 |
| fr_predict/_micro-recall | 86.47 |
| fr_predict/_steps_per_second | 59.3 |
| fr_predict/_weighted-f1 | 91.99 |
| fr_predict/_weighted-precision | 98.62 |
| fr_predict/_weighted-recall | 86.47 |
| it_predict/_CITATION_f1 | 97.04 |
| it_predict/_CITATION_precision | 97.7 |
| it_predict/_CITATION_recall | 96.39 |
| it_predict/_LAW_f1 | 90.99 |
| it_predict/_LAW_precision | 84.23 |
| it_predict/_LAW_recall | 98.94 |
| it_predict/_accuracy_normalized | 98.92 |
| it_predict/_macro-f1 | 94.13 |
| it_predict/_macro-precision | 97.66 |
| it_predict/_macro-recall | 91.2 |
| it_predict/_micro-f1 | 93.11 |
| it_predict/_micro-precision | 98.03 |
| it_predict/_micro-recall | 88.67 |
| it_predict/_steps_per_second | 56.3 |
| it_predict/_weighted-f1 | 93 |
| it_predict/_weighted-precision | 98.13 |
| it_predict/_weighted-recall | 88.67 |
| predict/_CITATION_f1 | 97.36 |
| predict/_CITATION_precision | 98.11 |
| predict/_CITATION_recall | 96.62 |
| predict/_LAW_f1 | 91.68 |
| predict/_LAW_precision | 85.15 |
| predict/_LAW_recall | 99.3 |
| predict/_accuracy_normalized | 98.68 |
| predict/_macro-f1 | 94.56 |
| predict/_macro-precision | 97.96 |
| predict/_macro-recall | 91.7 |
| predict/_micro-f1 | 93.43 |
| predict/_micro-precision | 98.45 |
| predict/_micro-recall | 88.91 |
| predict/_steps_per_second | 55.7 |
| predict/_weighted-f1 | 93.34 |
| predict/_weighted-precision | 98.54 |
| predict/_weighted-recall | 88.91 |
| predict_samples | 28218 |