---
datasets:
- heliosbrahma/mental_health_chatbot_dataset
- mpingale/mental-health-chat-dataset
library_name: peft
pipeline_tag: text-generation
tags:
- SFT
- PEFT
- Mental Health
- Psychotherapy
- Fine-tuning
- Text Generation
- Chatbot
---

# Model Card for Llama-3-8B-Therapy

<!-- Provide a quick summary of what the model is/does. -->
A LoRA fine-tuned version of Llama 3 8B Instruct, intended to serve as an outlet for your negative thoughts.

- **Developed by:** John4Blues (Alt account for 9Skies)
- **Finetuned from model:** https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
- **Demo:** https://huggingface.co/spaces/John4Blues/Therapy_Llama_3_8B



## Risks and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- **This model is by no means a replacement for a therapist or counselor; please seek professional help if you believe you need it.**

- **Responses from the model may not be factually accurate; double-check important information against other sources.**



## How to Get Started with the Model

Use the code below to get started with the model.

The LoRA/PEFT adapter has already been merged into the uploaded weights, so the model loads like a standard `transformers` model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "John4Blues/Llama-3-8B-Therapy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
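As a minimal usage sketch (assuming the merged model keeps the standard Llama 3 Instruct chat template; the system prompt and sampling settings here are illustrative, not the ones used in training):

```python
# Build a chat-formatted prompt and generate a reply.
messages = [
    {"role": "system", "content": "You are a supportive, empathetic listener."},
    {"role": "user", "content": "I've been feeling overwhelmed at work lately."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```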



## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

1. [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations)
2. [mpingale/mental-health-chat-dataset](https://huggingface.co/datasets/mpingale/mental-health-chat-dataset) (processed)
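For reference, the source conversations can be pulled with the `datasets` library. This is only a loading sketch; the split names and any filtering or merging are assumptions, with the exact preprocessing in the notebook linked under Training Procedure.

```python
from datasets import load_dataset

# Load the two source conversation datasets (split name assumed).
counseling = load_dataset("Amod/mental_health_counseling_conversations", split="train")
chats = load_dataset("mpingale/mental-health-chat-dataset", split="train")
print(counseling[0])
```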



### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The full training procedure is in this [Google Colab notebook](https://huggingface.co/John4Blues/Llama-3-8B-Therapy/blob/main/Therapy_LORA_Fined_Tuned_Llama3_8B.ipynb).
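For orientation, a typical `peft` LoRA setup for an SFT run like this looks like the sketch below. The rank, alpha, dropout, and target modules are illustrative assumptions, not values confirmed by the notebook.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical adapter settings -- the notebook linked above has the real ones.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```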



#### Training Hyperparameters

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->


- batch_size: 2
- gradient_accumulation_steps: 2
- epochs: 3
- learning_rate: 2e-4
- warmup_ratio: 0.03
- dtype: fp16
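
As a rough sketch, the hyperparameters above map onto `transformers.TrainingArguments` as follows (the `output_dir` is a placeholder, and any trainer wiring beyond this is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3-8b-therapy-lora",  # placeholder path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    learning_rate=2e-4,
    warmup_ratio=0.03,
    fp16=True,
)
```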