Commit 3ca1890 by klyang (parent: 7a88fec): Update README.md

Files changed (1): README.md (+13 -1)
@@ -8,7 +8,7 @@ tags:
 - medical
 ---
 
-# MentaLLaMA-chat-13B
+# Introduction
 
 MentaLLaMA-chat-13B is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for
 interpretable mental health analysis with instruction-following capability. This model is fine-tuned from the Meta LLaMA2-chat-13B foundation model on the full IMHI instruction-tuning data.
@@ -17,6 +17,18 @@ It is fine-tuned on the IMHI dataset with 75K high-quality natural language inst
 We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples. The results show that MentaLLaMA approaches state-of-the-art discriminative
 methods in correctness and generates high-quality explanations.
 
+# Ethical Consideration
+
+Although experiments on MentaLLaMA show promising performance on interpretable mental health analysis, we stress that
+all predicted results and generated explanations should only be used
+for non-clinical research, and help-seekers should get assistance
+from professional psychiatrists or clinical practitioners. In addition,
+recent studies have indicated that LLMs may introduce potential
+biases, such as gender gaps. Meanwhile, some incorrect predictions, inappropriate explanations, and over-generalization
+also illustrate the potential risks of current LLMs. Therefore, there
+are still many challenges in applying the model to real-world
+mental health monitoring systems.
+
 ## Other Models in MentaLLaMA
 
 In addition to MentaLLaMA-chat-13B, the MentaLLaMA project includes other models: MentaLLaMA-chat-7B, MentalBART, and MentalT5.
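Since the README describes a chat model fine-tuned from LLaMA2-chat-13B, a minimal usage sketch may help. This is not from the original model card: the Hub repo id `klyang/MentaLLaMA-chat-13B` is assumed from the commit author, and the `[INST] ... [/INST]` prompt wrapper is the standard LLaMA-2-chat format the model presumably inherits from its base.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the LLaMA-2-chat instruction format,
    which MentaLLaMA-chat-13B presumably inherits from its base model."""
    return f"[INST] {user_message} [/INST]"


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Sketch of running the model via Hugging Face transformers."""
    # Heavy dependency imported lazily so the prompt helper stays standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "klyang/MentaLLaMA-chat-13B"  # assumed Hub repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated completion.
    completion = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)
```

Per the Ethical Consideration section above, outputs should be used for non-clinical research only.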