# Introduction

MentaLLaMA-chat-7B is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for interpretable mental health analysis with instruction-following capability.
This model is fine-tuned from the Meta LLaMA2-chat-7B foundation model on the full IMHI instruction-tuning data, 75K high-quality natural language instructions, which boosts its performance on downstream tasks.
The model is expected to perform complex mental health analysis for various mental health conditions and to give a reliable explanation for each of its predictions.
We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples.
The results show that MentaLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.

In addition to MentaLLaMA-chat-7B, the MentaLLaMA project includes three other models: MentaLLaMA-chat-13B, MentalBART, and MentalT5.

- **MentaLLaMA-chat-13B**: This model is fine-tuned from the Meta LLaMA2-chat-13B foundation model on the full IMHI instruction-tuning data. The training data covers 10 mental health analysis tasks.

- **MentalBART**: This model is fine-tuned from the BART-large foundation model on the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability, but it is more lightweight and performs well at interpretable mental health analysis in a completion-based manner.

- **MentalT5**: This model is fine-tuned from the T5-large foundation model on the full IMHI-completion data. The training data covers 10 mental health analysis tasks. Like MentalBART, it doesn't have instruction-following ability, but it is more lightweight and performs well at interpretable mental health analysis in a completion-based manner; a completion-style query is sketched after this list.
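
The completion-based models are queried with a plain text prefix rather than an instruction. The sketch below illustrates this for MentalBART under stated assumptions: the hub ID `Tianlin668/MentalBART` and the prompt wording (loosely following the IMHI style) are illustrative assumptions, not details confirmed by this card.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed checkpoint ID -- check the MentaLLaMA project page for the released weights.
MODEL_ID = "Tianlin668/MentalBART"

tokenizer = BartTokenizer.from_pretrained(MODEL_ID)
model = BartForConditionalGeneration.from_pretrained(MODEL_ID)

# Completion-style query: a plain prefix for the model to complete,
# not an instruction for it to follow.
post = "I can't sleep and nothing feels worth doing anymore."
prompt = f"Consider this post: {post} Question: Does the poster suffer from depression?"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```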
## Usage
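
You can load MentaLLaMA-chat-7B with the Hugging Face `transformers` library. The snippet below is a minimal sketch, not the project's official example: the hub ID `klyang/MentaLLaMA-chat-7B` and the sample prompt are assumptions for illustration. It will use the GPU if it's available.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Assumed hub ID -- check the MentaLLaMA project page for the released checkpoint.
MODEL_ID = "klyang/MentaLLaMA-chat-7B"

tokenizer = LlamaTokenizer.from_pretrained(MODEL_ID)
model = LlamaForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)

# Use the GPU if it's available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# An IMHI-style query; the wording here is illustrative, not the official template.
prompt = (
    'Consider this post: "I have not slept in days and nothing feels worth doing." '
    "Question: Does the poster suffer from depression? Explain your reasoning."
)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Half precision (`torch.float16`) keeps the 7B weights around 14 GB; on CPU-only machines, drop the `torch_dtype` argument and expect slower generation.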

## License

MentaLLaMA-chat-7B is licensed under the MIT license. For more details, please see the MIT license file.

## Citation

If you use MentaLLaMA-chat-7B in your work, please cite our paper: