aashish1904 committed
Commit b521500
1 Parent(s): 9313bee

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +184 -0
README.md ADDED
---

library_name: transformers
base_model: jeiku/Magic_8B
tags:
- generated_from_trainer
model-index:
- name: outputs/out
  results: []

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Fatgirl_8B-GGUF
This is a quantized version of [jeiku/Fatgirl_8B](https://huggingface.co/jeiku/Fatgirl_8B), created with llama.cpp.
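As a quick reference (not part of the original card), the sketch below loads one of the GGUF quants with llama-cpp-python, the Python bindings for llama.cpp. The quant filename is a placeholder; check this repository's file list for the variants actually published.

```python
# Minimal sketch (not from the original card): running a GGUF quant of this
# model with llama-cpp-python. The filename is a placeholder -- substitute one
# of the quants actually listed in this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Fatgirl_8B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # matches the sequence_len used during training
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# The underlying model was trained on ChatML-formatted conversations, so the
# chat-completion API (which applies the chat template stored in the GGUF)
# is the natural entry point.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF files should also work in other llama.cpp-based runtimes (llama-cli, and tools built on top of llama.cpp).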

# Original Model Card

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: jeiku/Magic_8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: ResplendentAI/bluemoon
    type: sharegpt
    conversation: chatml
  - path: openerotica/freedom-rp
    type: sharegpt
    conversation: chatml
  - path: MinervaAI/Aesir-Preview
    type: sharegpt
    conversation: chatml

chat_template: chatml

val_set_size: 0.01
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: New8B
wandb_entity:
wandb_watch:
wandb_name: New8B
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2

debug:
deepspeed:
fsdp:
fsdp_config:

special_tokens:
  pad_token: <pad>

```

</details><br>
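The config above trains on ChatML-formatted conversations (`conversation: chatml`, `chat_template: chatml`). For reference, the sketch below shows the turn layout that ChatML refers to; the conversation text itself is illustrative, not drawn from the training data.

```python
# Illustrative only: the ChatML layout that `chat_template: chatml` denotes.
# Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers, and a
# final, unclosed assistant header cues the model to continue from there.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a two-sentence scene set in a rainy city.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```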

# outputs/out

This model is a fine-tuned version of [jeiku/Magic_8B](https://huggingface.co/jeiku/Magic_8B) on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.3029
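Since the card declares `library_name: transformers`, the full-precision fine-tune can also be loaded directly with transformers. A minimal sketch follows; the sampling settings are illustrative, not recommendations from the original card.

```python
# Sketch (not from the original card): loading the full-precision fine-tune
# with transformers. Requires accelerate for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/Fatgirl_8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
# The tokenizer's chat template renders the ChatML turns shown above.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```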

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 32
- num_epochs: 2

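For clarity, the effective (total) train batch size follows directly from the per-device settings above:

```python
# How total_train_batch_size = 64 is obtained from the settings listed above.
micro_batch_size = 1              # per-device batch size (train_batch_size)
gradient_accumulation_steps = 32  # optimizer step every 32 micro-batches
num_devices = 2                   # multi-GPU data parallelism

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 64
print(total_train_batch_size)  # 64
```
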
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.447         | 0.0062 | 1    | 1.4349          |
| 1.3437        | 0.2530 | 41   | 1.3502          |
| 1.3734        | 0.5060 | 82   | 1.3237          |
| 1.3543        | 0.7590 | 123  | 1.3128          |
| 1.319         | 1.0102 | 164  | 1.3064          |
| 1.2886        | 1.2636 | 205  | 1.3042          |
| 1.2387        | 1.5169 | 246  | 1.3031          |
| 1.3746        | 1.7702 | 287  | 1.3029          |


### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1