Update README.md #2
by rombodawg - opened
README.md CHANGED
@@ -156,6 +156,10 @@ This is one of the first models trained on the LosslessMegaCodeTrainingV2_1m_Evo
 
 - This model was made as a collaboration between me and andreaskoepf, who is an affiliate of Open Assistant.
 
+This model scores 0.29 on HumanEval+, the same as LLaMA-2 70B Chat; link below (in this benchmark the model is called andreaskoepf/llama2-13b-megacode2_min100).
+
+- https://tju01.github.io/FastEval-OpenAssistant/
+
 Prompt template:
 
 - chatml format is used: "<|im_start|>system\n{system message}<|im_end|>\n<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
@@ -169,6 +173,42 @@ multi-line:
 <|im_start|>assistant
 {Assistant answer}<|im_end|>
 ```
+GPT4All template:
+
+- System prompt
+```
+<|im_start|>system
+{system message}
+```
+- Prompt template
+```
+<|im_end|>
+<|im_start|>user
+%1<|im_end|>
+<|im_start|>assistant
+```
+
+Oobabooga Text-Generation-Webui template
+- user:
+```
+<|im_start|>user
+{User string}<|im_end|>
+```
+- bot:
+```
+<|im_start|>assistant
+{Bot string}<|im_end|>
+```
+- turn_template:
+```
+<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n
+```
+- context:
+```
+<|im_start|>system
+Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
+```
+
 Training data:
 
 - https://wandb.ai/open-assistant/epfl-mt-sft/runs/run34_megacode2_min100_13b
@@ -183,4 +223,4 @@ Link for the filtered dataset used to make this model is below:
 
 The original posting for this model was uploaded at the link below.
 
-- https://huggingface.co/andreaskoepf/llama2-13b-megacode2_min100
+- https://huggingface.co/andreaskoepf/llama2-13b-megacode2_min100
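The ChatML string described in the prompt-template section of the diff above can be assembled programmatically. The sketch below is a minimal Python illustration of that format; the `build_chatml_prompt` helper, its argument names, and the trailing open assistant block are assumptions made for illustration and are not taken from the model card or from any specific library.

```
from typing import Optional

# Minimal sketch of the ChatML format described in the model card above.
# The helper name, its arguments, and the trailing open assistant block are
# illustrative assumptions, not part of the card or of any particular library.


def build_chatml_prompt(system_message: str,
                        turns: list[tuple[str, Optional[str]]]) -> str:
    """Assemble a ChatML prompt from a system message and (user, assistant) turns.

    A turn whose assistant reply is None is left open so the model completes it.
    """
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>\n"]
    for user_msg, assistant_msg in turns:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>\n")
        if assistant_msg is None:
            # Open the assistant block and stop: the model generates the answer.
            parts.append("<|im_start|>assistant\n")
        else:
            parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>\n")
    return "".join(parts)


if __name__ == "__main__":
    prompt = build_chatml_prompt(
        "You are a helpful coding assistant.",
        [("Write a function that reverses a string.", None)],
    )
    print(prompt)
```

The GPT4All and Oobabooga templates in the diff express the same markers through each front end's own configuration fields; only the placeholder names (`%1`, `{User string}`, `<|user-message|>`) differ.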