SE6446 committed on
Commit
9c71d59
1 Parent(s): a035832

Update README.md

Files changed (1)
  1. README.md +53 -13
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
 license: mit
 base_model: microsoft/phi-2
 tags:
@@ -7,10 +8,23 @@ tags:
 model-index:
 - name: Phasmid-2_v2
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 <details><summary>See axolotl config</summary>
@@ -99,23 +113,49 @@ special_tokens:
 
 </details><br>
 
 # Phasmid-2_v2
 
-This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 2.2924
 
 ## Model description
-
-More information needed
-
 ## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -158,4 +198,4 @@ The following hyperparameters were used during training:
 - Transformers 4.37.0.dev0
 - Pytorch 2.0.1+cu118
 - Datasets 2.16.1
-- Tokenizers 0.15.0
 
 ---
+inference: false
 license: mit
 base_model: microsoft/phi-2
 tags:

 model-index:
 - name: Phasmid-2_v2
   results: []
+datasets:
+- PygmalionAI/PIPPA
+- HuggingFaceH4/no_robots
 ---
 
+
+```
+_ (`-. ('-. .-. ('-. .-') _ .-') _ .-') _
+( (OO )( OO ) / ( OO ).-. ( OO ).( '.( OO )_ ( ( OO) )
+_.` \,--. ,--. / . --. /(_)---\_),--. ,--.) ,-.-') \ .'_
+(__...--''| | | | | \-. \ / _ | | `.' | | |OO),`'--..._)
+| / | || .| |.-'-' | |\ :` `. | | | | \| | \ '
+| |_.' || | \| |_.' | '..`''.)| |'.'| | | |(_/| | ' |
+| .___.'| .-. | | .-. |.-._) \| | | | ,| |_.'| | / :
+| | | | | | | | | |\ /| | | |(_| | | '--' /
+`--' `--' `--' `--' `--' `-----' `--' `--' `--' `-------'
+```
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 <details><summary>See axolotl config</summary>
 
 
 </details><br>
 
+
 # Phasmid-2_v2
 
+This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on a mix of the no_robots and PIPPA datasets.
 It achieves the following results on the evaluation set:
 - Loss: 2.2924
 
 ## Model description
+Phasmid-2 has been trained on instructional data and can therefore follow instructions far better than phi-2. However, I have not tested the model extensively.
 
 ## Intended uses & limitations
+This model is little more than a side project and I shall treat it as such.
+Phasmid-2 (due to its size) can still suffer from problematic hallucinations and poor information. No effort was made to reduce potentially toxic responses, so you should fine-tune the model further if you require that.
+
+## Inference
+Phi doesn't like `device_map="auto"`, so you should specify the device explicitly, as in the snippets below (which assume `import torch` and `from transformers import AutoModelForCausalLM, AutoTokenizer`):
+
+1. FP16 / Flash-Attention / CUDA:
+```python
+model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True)
+```
+2. FP16 / CUDA:
+```python
+model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2", torch_dtype="auto", device_map="cuda", trust_remote_code=True)
+```
+3. FP32 / CUDA:
+```python
+model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True)
+```
+4. FP32 / CPU:
+```python
+model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True)
+```
+
+Then generate with the following snippet:
+```python
+tokenizer = AutoTokenizer.from_pretrained("SE6446/Phasmid-2", trust_remote_code=True)
+inputs = tokenizer('''SYSTEM: You are a helpful assistant. Please answer truthfully and write out your thinking step by step to be sure you get the right answer. If you make a mistake or encounter an error in your thinking, say so out loud and attempt to correct it. If you don't know or aren't sure about something, say so clearly. You will act as a professional logician, mathematician, and physicist. You will also act as the most appropriate type of expert to answer any particular question or solve the relevant problem; state which expert type you are, if so. Also think of any particular named expert that would be ideal to answer the relevant question or solve the relevant problem; name and act as them, if appropriate. (add your custom prompt like a character description in here)\n
+USER: {{userinput}}\n
+ASSISTANT: {{character name if applicable}}:''', return_tensors="pt", return_attention_mask=False)
+outputs = model.generate(**inputs, max_length=200)
+text = tokenizer.batch_decode(outputs)[0]
+print(text)
+```
+The model should generate its reply after "ASSISTANT:".
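Because the decoded output echoes the whole prompt before the generated reply, the reply can be separated by splitting on the "ASSISTANT:" marker. A minimal sketch (the helper name `extract_reply` is my own, not part of this repo):

```python
def extract_reply(text: str) -> str:
    """Return only the text after the last "ASSISTANT:" marker."""
    # rsplit with maxsplit=1 takes the portion after the final marker,
    # so earlier occurrences of "ASSISTANT:" in the prompt are ignored.
    return text.rsplit("ASSISTANT:", 1)[-1].strip()

# Example with a decoded string shaped like the prompt template above:
sample = "SYSTEM: ...\nUSER: Hello\nASSISTANT: Hi there! How can I help?"
print(extract_reply(sample))  # -> Hi there! How can I help?
```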
 
 ## Training procedure
 
 - Transformers 4.37.0.dev0
 - Pytorch 2.0.1+cu118
 - Datasets 2.16.1
+- Tokenizers 0.15.0