Upload model
README.md CHANGED
@@ -414,4 +414,22 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+- PEFT 0.6.2
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
 - PEFT 0.6.2
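The appended settings amount to plain 8-bit loading via bitsandbytes, with the 4-bit fields left at values that have no effect in 8-bit mode. Below is a minimal sketch of how the same configuration could be reproduced when loading the adapter, assuming the standard `transformers.BitsAndBytesConfig` and `peft.PeftModel` APIs; `AutoModelForCausalLM`, `"base-model-id"`, and `"adapter-repo-id"` are placeholders, since the diff does not name the base model or task.

```python
# Sketch: the quantization config from the model card expressed with
# transformers' BitsAndBytesConfig, then attaching the PEFT adapter.
# "base-model-id" and "adapter-repo-id" are placeholders, not from the diff.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    load_in_4bit=False,                      # load_in_4bit: False
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="fp4",               # bnb_4bit_quant_type: fp4 (unused with 8-bit loading)
    bnb_4bit_use_double_quant=False,         # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float32,    # bnb_4bit_compute_dtype: float32
)

# Load the quantized base model, then wrap it with the PEFT adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # placeholder: base model is not shown in the diff
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "adapter-repo-id")  # placeholder adapter repo
```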