Empirical Study of Mutual Reinforcement Effect and Application in Few-shot Text Classification Tasks via Prompt
Abstract
The Mutual Reinforcement Effect (MRE) describes the synergistic relationship between word-level and text-level classifications in text classification tasks: it posits that performance at both levels can be mutually enhanced. However, this mechanism has not been adequately demonstrated or explained in prior research. To address this gap, we employ empirical experiments to observe and substantiate the MRE theory. Our experiments on 21 MRE mix datasets reveal the presence of MRE in the model and its impact. Specifically, we conducted comparative experiments using fine-tuning, and their results corroborate the existence of MRE. Furthermore, we extended the application of MRE to prompt learning, using word-level information as a verbalizer to bolster the model's prediction of text-level classification labels. In our final experiment, the F1-score significantly surpassed the baseline on 18 of the 21 MRE mix datasets, further validating the notion that word-level information enhances the language model's comprehension of the text as a whole.
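The abstract's verbalizer idea can be sketched minimally: word-level labels found in a text vote for a text-level class. This is an illustrative toy, not the paper's implementation; the verbalizer mapping and the keyword-counting scorer below are assumptions (a real prompt-learning setup would instead score each label word with a masked-LM probability in a template such as "This text is about [MASK].").

```python
# Hypothetical verbalizer: word-level label words -> text-level class.
# Both the vocabulary and the classes here are invented for illustration.
VERBALIZER = {
    "stadium": "sports", "match": "sports", "team": "sports",
    "election": "politics", "senate": "politics", "minister": "politics",
}

def classify_with_verbalizer(text: str, default: str = "other") -> str:
    """Score each text-level class by counting its verbalizer words in the
    text; stand-in for a masked-LM scoring of label words in a prompt."""
    scores: dict[str, int] = {}
    for word, label in VERBALIZER.items():
        if word in text.lower():
            scores[label] = scores.get(label, 0) + 1
    return max(scores, key=scores.get) if scores else default
```

The point of the sketch is only the information flow MRE relies on: word-level signals feeding the text-level decision.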
Community
This paper discusses the Mutual Reinforcement Effect (MRE) in information extraction and conducts a series of experiments to demonstrate the MRE theory.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Manual Verbalizer Enrichment for Few-Shot Text Classification (2024)
- Investigating LLM Applications in E-Commerce (2024)
- SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics (2024)
- An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification (2024)
- Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions (2024)