
DistilBERT model fine-tuned on 117,567 English-language tweets from a range of political institutions (EU_Commission, UN, OECD, IMF, ECB, Council of the European Union, UK government, Scottish government). Fine-tuning used a learning rate of 2e-5, a batch size of 16, a chunk size of 50 tokens, and up to 100 epochs with early stopping (patience = 3 epochs) and 3 warmup epochs. More details can be found at https://github.com/rbroc/eucomm-twitter. No downstream evaluation was performed, as fine-tuning served only to produce checkpoints for a contextualized topic model.
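
The authors' actual training code lives in the linked repository; as a rough illustration only, the sketch below reproduces the stated hyperparameters with the Hugging Face `Trainer` API. The masked-language-modeling objective, the toy dataset, the output directory name, and the use of `warmup_ratio` to approximate 3 warmup epochs are all assumptions, not details taken from the model card.

```python
# A minimal sketch of the fine-tuning setup described above, assuming an
# MLM objective. Replace the toy dataset with the actual tweet corpus.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Placeholder corpus standing in for the 117,567 tweets.
raw = Dataset.from_dict({"text": ["example tweet one", "example tweet two"]})

# Tweets are tokenized into fixed-length chunks of 50 tokens (chunk_size).
def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, max_length=50, padding="max_length"
    )

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="distilbert-eucomm-tweets",  # hypothetical name
    learning_rate=2e-5,                     # stated learning rate
    per_device_train_batch_size=16,         # 16-sample batches
    num_train_epochs=100,                   # upper bound; early stopping cuts this short
    warmup_ratio=0.03,                      # ~3 of 100 epochs spent warming up
    evaluation_strategy="epoch",            # epoch-level loss drives early stopping
    save_strategy="epoch",
    load_best_model_at_end=True,            # required by EarlyStoppingCallback
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # use a held-out split in practice
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```

Note that early stopping here monitors validation loss during fine-tuning, which is separate from the downstream evaluation the card says was not performed.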
