PepBun committed
Commit
b7844b2
1 Parent(s): 1aad0ca

Upload model

Files changed (2)
  1. README.md +18 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -2412,4 +2412,22 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+- PEFT 0.6.2
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
 - PEFT 0.6.2
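For context, the quantization settings added to the README above correspond to a `transformers.BitsAndBytesConfig`. The following is a minimal sketch, assuming the standard `transformers` API; only the field values come from the diff, and variable names and usage are illustrative:

```python
from transformers import BitsAndBytesConfig

# Values copied from the quantization config listed in the README diff above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,            # model weights quantized to 8-bit at load time
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",    # 4-bit fields are inactive since load_in_4bit=False
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```

Such a config is typically passed to the base model via `from_pretrained(..., quantization_config=bnb_config)` when reproducing the 8-bit training setup.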
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5df5d67f7879424a2f8a4b7141b238150a3f15e8dd26128ad9a3ab1ad4b65cfb
+oid sha256:98c21ecc7700df9afde17fce3613172594418955e274acc3b5ca821499ed6c42
 size 21020682
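The updated `adapter_model.bin` is the PEFT adapter checkpoint tracked via Git LFS (only its content hash changed; the size is unchanged). A minimal loading sketch, assuming PEFT 0.6.2 as stated in the README; the base-model and adapter identifiers below are placeholders, not taken from this commit:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder identifiers -- replace with the actual base model and this adapter repo.
BASE_MODEL = "base-model-name"
ADAPTER_REPO = "path/to/this/adapter/repo"

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
# Attaches adapter_model.bin (the weights uploaded in this commit) on top of the base model.
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
```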