Casual-Autopsy committed
Commit dd2a05c
1 parent: e22bc5f

Update README.md

Files changed (1):
  1. README.md (+43, −27)

README.md CHANGED
@@ -1,24 +1,65 @@
 ---
 base_model:
+- TheSkullery/llama-3-cat-8b-instruct-v1
+- victunes/TherapyLlama-8B-v1
+- herisan/llama-3-8b_mental_health_counseling_conversations
+- Falah/lora_model_mental_health_llama3
+- Abdo36/MentalLORALLAMA3
+- zementalist/llama-3-8B-chat-psychotherapist
+- PrahmodhRaj/Llama-3_Psychiatrist_Chat
 - Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
 - Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
 tags:
 - merge
 - mergekit
 - lazymergekit
-- Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
-- Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
 ---
 
 # Psyche-3
 
 Psyche-3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+* [TheSkullery/llama-3-cat-8b-instruct-v1](https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1)
+* [victunes/TherapyLlama-8B-v1](https://huggingface.co/victunes/TherapyLlama-8B-v1)
+* [herisan/llama-3-8b_mental_health_counseling_conversations](https://huggingface.co/herisan/llama-3-8b_mental_health_counseling_conversations)
+* [Falah/lora_model_mental_health_llama3](https://huggingface.co/Falah/lora_model_mental_health_llama3)
+* [Abdo36/MentalLORALLAMA3](https://huggingface.co/Abdo36/MentalLORALLAMA3)
+* [zementalist/llama-3-8B-chat-psychotherapist](https://huggingface.co/zementalist/llama-3-8B-chat-psychotherapist)
+* [PrahmodhRaj/Llama-3_Psychiatrist_Chat](https://huggingface.co/PrahmodhRaj/Llama-3_Psychiatrist_Chat)
 * [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B)
 * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B)
 
 ## 🧩 Configuration
 
 ```yaml
+slices:
+- sources:
+  - model: TheSkullery/llama-3-cat-8b-instruct-v1
+    layer_range: [0, 32]
+  - model: victunes/TherapyLlama-8B-v1
+    layer_range: [0, 32]
+    parameters:
+      density: 0.55
+      weight: 0.35
+  - model: herisan/llama-3-8b_mental_health_counseling_conversations
+    layer_range: [0, 32]
+    parameters:
+      density: 0.55
+      weight: 0.35
+merge_method: ties
+base_model: TheSkullery/llama-3-cat-8b-instruct-v1
+parameters:
+  int8_mask: true
+dtype: bfloat16
+---
+models:
+- model: Casual-Autopsy/Psyche-1+Falah/lora_model_mental_health_llama3
+- model: Casual-Autopsy/Psyche-1+Abdo36/MentalLORALLAMA3
+- model: Casual-Autopsy/Psyche-1+zementalist/llama-3-8B-chat-psychotherapist
+- model: Casual-Autopsy/Psyche-1+PrahmodhRaj/Llama-3_Psychiatrist_Chat
+merge_method: model_stock
+base_model: Casual-Autopsy/Psyche-1
+dtype: bfloat16
+---
 models:
 - model: Casual-Autopsy/Psyche-2
 - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
@@ -30,29 +71,4 @@ models:
 merge_method: task_arithmetic
 base_model: Casual-Autopsy/Psyche-2
 dtype: bfloat16
-```
-
-## 💻 Usage
-
-```python
-!pip install -qU transformers accelerate
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "Casual-Autopsy/Psyche-3"
-messages = [{"role": "user", "content": "What is a large language model?"}]
-
-tokenizer = AutoTokenizer.from_pretrained(model)
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print(outputs[0]["generated_text"])
 ```
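
The first stage of the configuration uses mergekit's `ties` method with `density: 0.55` and `weight: 0.35` per model. As a rough illustration of the TIES idea (trim each task vector, elect a per-parameter sign, then average only agreeing values), here is a sketch over flat lists of floats. This is not mergekit's implementation; the `ties_merge` helper, its single global `weight`, and the toy inputs are all hypothetical.

```python
def ties_merge(base, models, density=0.55, weight=0.35):
    """Sketch of a TIES-style merge over flat parameter lists.

    density: fraction of each task vector kept after magnitude trimming.
    weight: scale applied to the merged task vector before adding to base.
    """
    # Task vectors: difference of each fine-tune from the base weights.
    deltas = [[m - b for m, b in zip(model, base)] for model in models]

    # Trim: zero out all but the top-`density` fraction by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, round(density * len(d)))
        cutoff = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= cutoff else 0.0 for x in d])

    merged = list(base)
    for j in range(len(base)):
        vals = [t[j] for t in trimmed]
        sign = 1.0 if sum(vals) >= 0 else -1.0     # elect the dominant sign
        agree = [v for v in vals if v * sign > 0]  # keep only agreeing values
        if agree:
            merged[j] += weight * sum(agree) / len(agree)
    return merged
```

With `density=1.0` nothing is trimmed and disagreeing values are simply dropped from the average, which is why TIES tends to preserve each model's strongest changes rather than washing them out.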
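
The final stage uses `task_arithmetic`, which adds weighted task vectors (fine-tune minus base) back onto the base model. A minimal sketch over flat parameter lists, again illustrative rather than mergekit's actual code; the `task_arithmetic` helper and its toy inputs are hypothetical:

```python
def task_arithmetic(base, models, weights):
    """Sketch: merged = base + sum_i w_i * (model_i - base)."""
    return [
        b + sum(w * (m[j] - b) for m, w in zip(models, weights))
        for j, b in enumerate(base)
    ]
```

Unlike TIES there is no trimming or sign election: conflicting updates from different models cancel directly in the weighted sum.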