
roberta-large-condaqa-neg-tag-token-classifier

This model is a fine-tuned version of roberta-large on the CondaQA dataset (see "Training and evaluation data" below). It achieves the following results on the evaluation set:

  • Loss: 0.0268
  • Precision: 0.0
  • Recall: 0.0
  • F1: 0.0
  • Accuracy: 0.9899

Model description

Negation detector: a RoBERTa-large token classifier that detects negation cues in sentences. Tokens that belong to a negation cue receive the label "Y".
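
For example, the tagger can be loaded with the transformers pipeline API. This is a minimal sketch; the model path is a placeholder and should be replaced with this repository's full Hub id (including the namespace):

```python
from transformers import pipeline

# Placeholder path: use "<namespace>/roberta-large-condaqa-neg-tag-token-classifier"
negation_tagger = pipeline(
    "token-classification",
    model="roberta-large-condaqa-neg-tag-token-classifier",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)

result = negation_tagger("She did not attend the meeting, unlike her colleagues.")
for entity in result:
    # Words tagged as negation cues carry the "Y" label
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```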

Intended uses & limitations

Because the training dataset is small and its sentences are relatively long, detection quality on short sentences may be less satisfactory.

Training and evaluation data

The negation annotations and sentences come from CondaQA. The dataset is available on both GitHub and Hugging Face; GitHub: https://github.com/AbhilashaRavichander/CondaQA

Common negation cues in CondaQA: ['halt', 'inhospitable', 'unhappy', 'unserviceable', 'dislike', 'unaware', 'unfavorable', 'barely', 'unseen', 'unoccupied', 'unreliability', 'insulator', 'stop', 'indistinguishable', 'unrestricted', 'unfairly', 'unsupervised', 'unicameral', 'forbid', 'unforgettable', 'reject', 'uneducated', 'unlimited', 'illegal', 'uncertainty', 'nonhuman', 'unborn', 'unshaven', 'uncanny', 'incomplete', 'unsure', 'unconscious', 'atypical', 'indirectly', 'unloaded', 'disadvantage', 'contrary', 'infrequent', 'unofficial', 'few', 'untouched', 'refuse', 'inequitable', 'disproportionate', 'unexpected', 'displeased', 'unpaved', 'unwieldy', 'not at all', 'absent', 'unnoticed', 'unpleasant', 'unsafe', 'unsigned', 'not', 'inaccurate', 'cannot', 'involuntary', 'unequipped', 'illiterate', 'cease', 'disagreeable', 'prohibit', 'unable', 'unstable', 'uninhabited', 'unclean', 'useless', 'disapprove', 'insensitive', 'in the absence of', 'impractical', 'unorthodox', 'untreated', 'unsuccessful', 'unwitting', 'unfashionable', 'disagreement', 'unmyelinated', 'unfortunate', 'unknown', 'ineffective', 'a lack of', 'instead of', 'refused', 'illegitimate', 'little', 'unpaid', 'fail', 'unintentionally', 'unglazed', "didn't", 'unprocessed', 'inability', 'undeveloped', 'exclude', 'neither', 'except', 'unequivocal', 'unconventional', 'incorrectly', 'unconditional', 'prevent', 'dissimilar', 'uncommon', 'inorganic', 'unquestionable', 'uncoated', 'unassisted', 'unprecedented', 'nonviolent', 'unarmed', 'unpopular', 'inadequate', 'uncomfortable', 'unwilling', 'unaffected', 'unfaithful', 'nobody', 'loss', 'without', 'undamaged', 'nothing', 'could not', 'impossible to', 'unaccompanied', 'unlike', 'oppose', 'compromising', 'unmarried', 'rarely', 'unlighted', 'inexperienced', 'rather than', 'unrelated', 'untied', 'dishonest', 'insecure', 'uneven', 'harmless', 'avoid', 'with the exception of', 'no', 'undefeated', 'no longer', 'inadvertently', 'absence', 'lack', 'unconnected', 'unfinished', 'invalid', 'unnecessary', 'invisibility', 'unusual', 'none', 'incredulous', 'impossible', 'never', 'untrained', 'incorrect', 'immobility', 'unclear', 'impartial', 'unlucky', 'deny', 'uncertain', 'hardly', 'unsaturated', 'informal', 'irregular', 'dissatisfaction']
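
As an illustration of the labelling scheme (an assumption about the preprocessing, not the exact script used for this model), the sketch below marks words that match a negation cue with "Y" and all other words with "O", handling multi-word cues such as "not at all" with a simple longest-match pass:

```python
# Illustrative subset of the full cue list above (hypothetical preprocessing sketch).
NEGATION_CUES = {"not", "no", "never", "without", "unlike", "not at all"}

def label_negation_cues(words):
    labels = ["O"] * len(words)
    i = 0
    while i < len(words):
        matched = False
        # Try longer cue spans first (up to 4 words), then single words.
        for span in range(4, 0, -1):
            candidate = " ".join(w.lower() for w in words[i:i + span])
            if candidate in NEGATION_CUES:
                for j in range(i, i + span):
                    labels[j] = "Y"
                i += span
                matched = True
                break
        if not matched:
            i += 1
    return labels

words = "She was not at all happy , unlike her sister .".split()
print(list(zip(words, label_negation_cues(words))))
```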

Training procedure

Training used the standard Hugging Face Transformers token-classification training code; a minimal sketch of the setup is shown after the hyperparameter list below.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 256
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6.0
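
A minimal training sketch that mirrors these hyperparameters follows. The toy dataset, label alignment, and Trainer setup are assumptions standing in for the actual CondaQA preprocessing and the Hugging Face example script:

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

label_list = ["O", "Y"]

# Toy stand-in for the CondaQA-derived data: pre-split words with per-word labels.
raw = Dataset.from_dict({
    "tokens": [["She", "did", "not", "attend", "."], ["He", "was", "unaware", "."]],
    "labels": [[0, 0, 1, 0, 0], [0, 0, 1, 0]],
})

tokenizer = AutoTokenizer.from_pretrained("roberta-large", add_prefix_space=True)

def tokenize_and_align(example):
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    word_ids = enc.word_ids()
    # Label only the first sub-token of each word; ignore the rest (-100).
    enc["labels"] = [
        example["labels"][w] if w is not None and (i == 0 or word_ids[i - 1] != w) else -100
        for i, w in enumerate(word_ids)
    ]
    return enc

tokenized = raw.map(tokenize_and_align, remove_columns=raw.column_names)

model = AutoModelForTokenClassification.from_pretrained("roberta-large", num_labels=len(label_list))

args = TrainingArguments(
    output_dir="roberta-large-condaqa-neg-tag-token-classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=32,
    num_train_epochs=6.0,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder: use a real held-out split
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```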

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1  | Accuracy |
|---------------|-------|------|-----------------|-----------|--------|-----|----------|
| No log        | 1.0   | 4    | 0.1526          | 0.0       | 0.0    | 0.0 | 0.9588   |
| No log        | 2.0   | 8    | 0.0875          | 0.0       | 0.0    | 0.0 | 0.9588   |
| No log        | 3.0   | 12   | 0.0396          | 0.0       | 0.0    | 0.0 | 0.9877   |
| No log        | 4.0   | 16   | 0.0322          | 0.0       | 0.0    | 0.0 | 0.9899   |
| No log        | 5.0   | 20   | 0.0270          | 0.0       | 0.0    | 0.0 | 0.9906   |
| No log        | 6.0   | 24   | 0.0268          | 0.0       | 0.0    | 0.0 | 0.9899   |

Framework versions

  • Transformers 4.25.0.dev0
  • Pytorch 1.10.1
  • Datasets 2.6.1
  • Tokenizers 0.13.1