Update README.md
# h2o-danube2 with ChatML template
This is a [BAdam fine-tuned](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") and [LoRA+](https://arxiv.org/abs/2402.12354 "LoRA+: Efficient Low Rank Adaptation of Large Models") danube2 base model. It uses the ChatML template and was trained on the [Airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) (CC BY 4.0) dataset from [jondurbin](https://huggingface.co/jondurbin).

## Quants

Thank you [mradermacher](https://huggingface.co/mradermacher)!

- [mradermacher/danube2-1.8b-airoboros-3.2-GGUF](https://huggingface.co/mradermacher/danube2-1.8b-airoboros-3.2-GGUF)

## Template

```jinja
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
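
As a minimal sketch, the template above can be rendered in plain Python; the `render_chatml` helper below is illustrative and not part of this repository.

```python
def render_chatml(instruction: str, response: str = "") -> str:
    """Render a single-turn ChatML exchange matching the template above.

    Illustrative helper only; leave `response` empty to build a
    generation prompt that stops at the assistant turn.
    """
    prompt = f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
    if response:
        prompt += f"{response}<|im_end|>\n"
    return prompt


print(render_chatml("What license is Airoboros-3.2 under?"))
```

Passing only an instruction yields a prompt ending after `<|im_start|>assistant\n`, which is where the model begins generating.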

## BAdam