PepBun committed on
Commit e26847c
1 Parent(s): c1c249c

Upload model

Files changed (2)
  1. README.md +18 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -1764,4 +1764,22 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+- PEFT 0.6.2
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
 - PEFT 0.6.2
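The quantization settings listed in the README diff map one-to-one onto the fields of a `transformers` `BitsAndBytesConfig`. A minimal sketch of reconstructing that config in code (the values are taken from the README above; everything else, such as how the config is then passed to a model, is an assumption):

```python
# Sketch only: rebuilds the README's bitsandbytes config with
# transformers' BitsAndBytesConfig (assumes transformers >= 4.30).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                     # load_in_8bit: True
    load_in_4bit=False,                    # load_in_4bit: False
    llm_int8_threshold=6.0,                # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,            # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",             # unused here since 4-bit is off
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,  # bnb_4bit_compute_dtype: float32
)
```

Such a config would typically be passed as `quantization_config=bnb_config` when loading the base model, after which the uploaded adapter could be attached with `peft.PeftModel.from_pretrained` (PEFT 0.6.2, per the README).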
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d06c4f95d400e7aa427c837e9a6fe7ad11263520de9dd9d7c71726e59e60279
+oid sha256:89a4d2085f10ffa3a116c8c6f3f5cf1e098d0ca0a99f72bb20f8fc1901fc080d
 size 21020682
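The Git LFS pointer above stores only the blob's SHA-256 digest (`oid`) and byte size, so a downloaded `adapter_model.bin` can be verified against the pointer locally. A small sketch (the local file path is hypothetical):

```python
# Sketch: compute the sha256 digest Git LFS records as the pointer's oid,
# reading the file in chunks so large binaries don't load fully into memory.
import hashlib

def lfs_oid(path, chunk_size=1 << 20):
    """Return the hex sha256 of a file, matching a Git LFS pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical local path): for this commit's revision, the result
# should match the "+oid sha256:..." value in the pointer diff above.
# lfs_oid("adapter_model.bin")
```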