kstecenko committed on
Commit 5136890
1 Parent(s): 96d5e4a

Upload model

Files changed (1): README.md (+36, −0)
README.md CHANGED
```diff
@@ -251,4 +251,40 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+- PEFT 0.6.0.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
+### Framework versions
+
+
+- PEFT 0.6.0.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
+### Framework versions
+
+
 - PEFT 0.6.0.dev0
```
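The diff records the `bitsandbytes` settings as a flat key/value list. As a rough sketch (not part of the commit itself), the same configuration could be reconstructed with the `BitsAndBytesConfig` class from `transformers`, assuming a recent `transformers` and `torch` are installed:

```python
# Sketch: rebuild the quantization config listed in the README.
# Values mirror the key/value list in the diff; this is illustrative,
# not the training code from the commit.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True (implies load_in_8bit: False)
    llm_int8_threshold=6.0,                 # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,             # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False, # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,         # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)
```

Such a config is typically passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained(...)` before attaching the PEFT adapter; the base model this adapter was trained against is not named in this diff.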