---
license: apache-2.0
datasets:
- jondurbin/airoboros-3.2
language:
- en
library_name: transformers
base_model: h2oai/h2o-danube2-1.8b-base
---

# h2o-danube2 with ChatML template

This is the danube2 base model fine-tuned in two stages: first with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models"), then with [LoRA+](https://arxiv.org/abs/2402.12354 "LoRA+: Efficient Low Rank Adaptation of Large Models") applied to the 4-bit-quantized result (QLoRA+). It uses the ChatML template and was trained on the [Airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) (CC BY 4.0) dataset from [jondurbin](https://huggingface.co/jondurbin).

## Quants

Thank you [mradermacher](https://huggingface.co/mradermacher)!

- [mradermacher/danube2-1.8b-airoboros-3.2-GGUF](https://huggingface.co/mradermacher/danube2-1.8b-airoboros-3.2-GGUF)

## Template

```jinja
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
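
For a quick smoke test, the prompt can be assembled by hand from the template above. Below is a minimal inference sketch with standard `transformers`; the repo id is a placeholder, so substitute the actual model path.

```python
# Minimal inference sketch. The repo id below is a placeholder,
# not the real repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/danube2-1.8b-airoboros-3.2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>user\n"
    "Why is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```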

## BAdam

**System prompt:** You are a helpful assistant.

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 13
badam_mask_mode: scatter
seed: 314

### dataset
dataset: airoboros32
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: airoboros32-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 2
lr_scheduler_type: cosine
warmup_ratio: 0.01
pure_bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
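
BAdam performs full Adam updates on one parameter block at a time, switching blocks every `badam_switch_interval` steps; `ascending` mode walks the blocks in order starting from `badam_start_block`. A minimal sketch of that schedule (assuming blocks correspond to decoder layers; this illustrates the idea, not LLaMA-Factory's implementation):

```python
# Illustration of the BAdam block-switching schedule, not LLaMA-Factory's code.
def active_block(step: int, num_blocks: int,
                 switch_interval: int = 50, start_block: int = 13) -> int:
    """Index of the single block whose parameters receive Adam updates
    at this step, under `badam_switch_mode: ascending`."""
    return (start_block + step // switch_interval) % num_blocks

# E.g. with 24 blocks: steps 0-49 train block 13, steps 50-99 block 14,
# and so on, wrapping back to block 0 after the last block.
```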

### BAdam Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9124        | 0.2753 | 1000 | 0.9466          |
| 0.8072        | 0.5506 | 2000 | 0.9149          |
| 0.9017        | 0.8258 | 3000 | 0.8982          |
| 0.8883        | 1.1011 | 4000 | 0.8844          |
| 0.8405        | 1.3764 | 5000 | 0.8786          |
| 0.864         | 1.6517 | 6000 | 0.8754          |
| 0.7758        | 1.9270 | 7000 | 0.8752          |

## QLoRA+

**System prompt:** None

```yaml
### model
model_name_or_path: airoboros32-chatml-badam

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
loraplus_lr_ratio: 16.0
lora_rank: 8
lora_alpha: 16
use_unsloth: true
quantization_bit: 4
upcast_layernorm: true
seed: 314

### dataset
dataset: airoboros32
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: airoboros32-chatml-badam/loraplus
logging_steps: 1
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 0.0001
num_train_epochs: 2.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.02
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
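
`loraplus_lr_ratio: 16.0` is the LoRA+ ingredient: the adapter's B matrices train with a learning rate 16x that of the A matrices. A minimal sketch of the resulting optimizer parameter groups (assuming peft-style `lora_A`/`lora_B` parameter names; this mirrors the idea, not LLaMA-Factory's internals):

```python
# Sketch of LoRA+ optimizer groups, assuming peft-style parameter names.
import torch

def loraplus_param_groups(model: torch.nn.Module, lr: float = 1e-4,
                          ratio: float = 16.0) -> list[dict]:
    """Give LoRA B matrices a learning rate `ratio` times that of A."""
    a_params, b_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        (b_params if "lora_B" in name else a_params).append(param)
    return [
        {"params": a_params, "lr": lr},          # lora_A and other trainables
        {"params": b_params, "lr": lr * ratio},  # lora_B, trained 16x faster
    ]

# optimizer = torch.optim.AdamW(loraplus_param_groups(model))
```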

### QLoRA+ Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9691        | 0.2781 | 1000 | 0.8704          |
| 0.7387        | 0.5562 | 2000 | 0.8443          |
| 0.6769        | 0.8343 | 3000 | 0.8250          |
| 0.5156        | 1.1123 | 4000 | 0.8134          |
| 0.4142        | 1.3904 | 5000 | 0.8029          |
| 0.6328        | 1.6685 | 6000 | 0.7953          |
| 0.872         | 1.9466 | 7000 | 0.7927          |