Why do these classification results look so polarized?
#8 · opened by const7
I randomly selected some sentences and fed them into this model. Most neutral sentences were classified as highly negative, which seems really odd.
from transformers import pipeline
classifier_sent = pipeline("sentiment-analysis")
# print the predicted label and score for one sentence
def print_sent(sentence):
    print(f"{sentence}: {classifier_sent(sentence)[0]}")
# positive
print_sent("I love you !")
# negative
print_sent("Why do you do this ?")
# official example. pos & neg
print_sent("We are very happy to show you the π€ Transformers library.")
print_sent("We hope you don't hate it.")
# neutral
print_sent("Where are you going?")
print_sent("What is the name of the repository ?")
print_sent("Pipeline has been included in the huggingface / transformers repository")
print_sent("Confounding effects of antipsychotic medication can be excluded")
# results
# I love you !: {'label': 'POSITIVE', 'score': 0.9998782873153687}
# Why do you do this ?: {'label': 'NEGATIVE', 'score': 0.9973565340042114}
# We are very happy to show you the 🤗 Transformers library.: {'label': 'POSITIVE', 'score': 0.9997795224189758}
# We hope you don't hate it.: {'label': 'NEGATIVE', 'score': 0.5308617353439331}
# Where are you going?: {'label': 'NEGATIVE', 'score': 0.9721148014068604}
# What is the name of the repository ?: {'label': 'NEGATIVE', 'score': 0.9987552165985107}
# Pipeline has been included in the huggingface / transformers repository: {'label': 'NEGATIVE', 'score': 0.9925073981285095}
# Confounding effects of antipsychotic medication can be excluded: {'label': 'NEGATIVE', 'score': 0.9893580675125122}
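
For anyone else hitting this, a minimal sketch to check which checkpoint the default pipeline loads and which labels it can output (the exact default model and the score-returning argument depend on your transformers version): the default sentiment-analysis pipeline is a binary SST-2 checkpoint with only POSITIVE and NEGATIVE labels, so even a neutral sentence is pushed to one side.

from transformers import pipeline
classifier_sent = pipeline("sentiment-analysis")
# which checkpoint was loaded and which labels it knows about;
# expect something like distilbert-base-uncased-finetuned-sst-2-english
# and {0: 'NEGATIVE', 1: 'POSITIVE'} -- there is no NEUTRAL class
print(classifier_sent.model.name_or_path)
print(classifier_sent.model.config.id2label)
# the two label scores always sum to 1, so a sentence the model is unsure
# about still lands mostly on one side
# (newer transformers versions use top_k=None instead of return_all_scores=True)
print(classifier_sent("Where are you going?", return_all_scores=True))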