DistilRoBERTa fine-tuned for bias detection
This model is based on the distilroberta-base pretrained weights, with a classification head fine-tuned to classify text into two categories: neutral and biased.
Training data
The model was fine-tuned on wikirev-bias, a dataset extracted from English Wikipedia revisions. See https://github.com/rpryzant/neutralizing-bias for details on the underlying WNC (Wiki Neutrality Corpus) of wiki edits.
Inputs
Like its base model, this model accepts inputs up to a maximum length of 512 tokens; longer inputs should be truncated.
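A minimal usage sketch with the transformers pipeline API. The repository ID below is an assumption (the card does not state where the checkpoint is published), and the exact label strings returned may differ from the neutral/biased names used above:

```python
from transformers import pipeline

# Repository ID is an assumption; substitute the actual checkpoint location.
classifier = pipeline(
    "text-classification",
    model="valurank/distilroberta-bias",
    truncation=True,   # enforce the 512-token input limit of the base model
    max_length=512,
)

result = classifier("Opponents of the bill claim it would devastate the economy.")
print(result)  # a list with one {"label": ..., "score": ...} dict per input
```

Passing `truncation=True` with `max_length=512` ensures that texts longer than the model's context window are cut rather than raising an error.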