|
--- |
|
language: |
|
- zh |
|
license: apache-2.0 |
|
|
|
tags: |
|
- bert |
|
- NLU |
|
- NLI |
|
|
|
inference: false |
|
|
|
--- |
|
# Erlangshen-Roberta-110M-Sentiment, a Chinese sentiment analysis model, part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
|
We collected 8 paraphrase datasets in the Chinese domain for fine-tuning, with a total of 227,347 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext).
|
|
|
## Usage |
|
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment')

text = '今天心情不好'  # "I'm in a bad mood today"

# Encode the text, run it through the model, and normalize the logits to probabilities
output = model(torch.tensor([tokenizer.encode(text)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
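Since the model returns raw logits, a common follow-up step is converting them to a predicted class. A minimal sketch using dummy logits (the two-class `[negative, positive]` layout is an assumption; check the model's `id2label` config for the actual mapping):

```python
import torch

# Hypothetical logits for a batch of two texts (assumed layout: [negative, positive])
logits = torch.tensor([[2.0, -1.0], [-0.5, 1.5]])

probs = torch.nn.functional.softmax(logits, dim=-1)  # normalize each row to probabilities
preds = probs.argmax(dim=-1)                         # index of the highest-probability class

print(preds.tolist())  # → [0, 1]
```

`argmax` over the softmax output gives the same prediction as `argmax` over the raw logits; softmax is only needed when you want calibrated-looking probability scores.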
|
## Scores on downstream Chinese tasks (the dev sets of BUSTM and AFQMC may overlap with the training set)
|
| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp | |
|
| :--------: | :-----: | :----: | :-----: | |
|
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 | |
|
| Erlangshen-Roberta-330M-Sentiment | 97.90 | 97.51 | 96.66 |
|
|
|
## Citation |
|
If you find this resource useful, please cite the following website in your paper.
|
``` |
|
@misc{Fengshenbang-LM, |
|
title={Fengshenbang-LM}, |
|
author={IDEA-CCNL}, |
|
year={2021}, |
|
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, |
|
} |
|
``` |