
Model Specification

  • Model: RoBERTa Tagalog Base (Jan Christian Blaise Cruz)
  • Training order of languages: randomized
  • Training Data:
    • Combined English, Serbian, Slovenian, and Naija corpora (top 4 languages)
  • Training Details:
    • Base configuration with a learning rate of 5e-5

Evaluation

  • Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (test set)
  • Evaluated in a zero-shot cross-lingual scenario, achieving 72.97% accuracy
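The reported figure is token-level tagging accuracy: the fraction of test tokens whose predicted POS tag matches the gold annotation. A minimal sketch of that metric (the tag sequences below are hypothetical, not drawn from the Ugnayan data):

```python
# Token-level accuracy, as commonly used for UD POS evaluation.
def pos_accuracy(gold, pred):
    """Fraction of tokens whose predicted tag matches the gold tag."""
    assert len(gold) == len(pred), "sequences must be aligned token-for-token"
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Toy example with hypothetical tag sequences (5 of 6 tokens correct):
gold = ["DET", "NOUN", "VERB", "ADP", "PROPN", "PUNCT"]
pred = ["DET", "NOUN", "VERB", "ADV", "PROPN", "PUNCT"]
print(round(pos_accuracy(gold, pred), 4))  # 0.8333
```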

POS Tags

  • ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
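The 14 tags above can be wired up as a label mapping for token classification. This is a sketch based on the card's alphabetical listing; the checkpoint's actual `config.json` id2label ordering is an assumption and may differ:

```python
# The 14 UPOS tags this model predicts, per the card's tag list.
# The numeric id order (alphabetical, as listed) is an assumption;
# the checkpoint's real id2label mapping may differ.
UPOS_TAGS = [
    "ADJ", "ADP", "ADV", "CCONJ", "DET", "INTJ", "NOUN",
    "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "VERB",
]
id2label = dict(enumerate(UPOS_TAGS))
label2id = {tag: i for i, tag in enumerate(UPOS_TAGS)}
print(len(UPOS_TAGS))  # 14
```

Note that this is a subset of the 17-tag Universal Dependencies UPOS inventory (AUX, SYM, and X do not appear in the card's list).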

Model ID: iceman2434/roberta-tagalog-base-ft-udpos213-top4langrandom