
Model Overview

This is the model presented in the paper "Detecting Text Formality: A Study of Text Classification Approaches".

The base model is mDistilBERT (base). It was fine-tuned on X-FORMAL, a multilingual formality classification corpus covering four languages: English (from GYAFC), French, Italian, and Brazilian Portuguese. In our experiments, this model achieved the best results among Transformer-based models on the cross-lingual formality classification knowledge transfer task. More details, code, and data can be found here.

Evaluation Results

Here, we report metrics for the best model from each category that participated in the comparison, to put the scores in context. We report accuracy for two setups: the multilingual model fine-tuned on each language separately, and fine-tuned on all languages together. For cross-lingual experiment results, please refer to the paper.

| Model | En | It | Po | Fr | All |
|---|---|---|---|---|---|
| bag-of-words | 79.1 | 71.3 | 70.6 | 72.5 | --- |
| CharBiLSTM | 87.0 | 79.1 | 75.9 | 81.3 | 82.7 |
| mDistilBERT-cased | 86.6 | 76.8 | 75.9 | 79.1 | 79.4 |
| mDeBERTa-base | 87.3 | 76.6 | 75.8 | 78.9 | 79.9 |

How to use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the tokenizer and classification head from the Hugging Face Hub
model_name = 's-nlp/mdistilbert-base-formality-ranker'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
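The model returns raw logits, which a softmax converts into class probabilities. A minimal, dependency-free sketch of that post-processing step (the example logits are hypothetical, and the mapping from positions to labels is an assumption — check `model.config.id2label` for the actual label order):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits, e.g. from
# model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
logits = [2.1, -1.3]
probs = softmax(logits)  # probabilities over the two formality classes
```

In practice you would take `argmax(probs)` and look the index up in `model.config.id2label` to get the predicted class name.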

Citation

@inproceedings{dementieva-etal-2023-detecting,
    title = "Detecting Text Formality: A Study of Text Classification Approaches",
    author = "Dementieva, Daryna  and
      Babakov, Nikolay  and
      Panchenko, Alexander",
    editor = "Mitkov, Ruslan  and
      Angelova, Galia",
    booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://aclanthology.org/2023.ranlp-1.31",
    pages = "274--284",
    abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}

Licensing Information

This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.

