PepBun committed
Commit 921a5a3
1 Parent(s): b0ca176

Upload model

Files changed (2)
  1. README.md +18 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -1260,4 +1260,22 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+ - PEFT 0.6.2
+ ## Training procedure
+
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: True
+ - load_in_4bit: False
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
+ ### Framework versions
+
+
 - PEFT 0.6.2
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:e04cfa0b2b8a09f79e83ecbac2603710093930d25e9ee524b84dd24901166cba
+ oid sha256:98f7b4b2cff59f9fde7aa6039df404373dd3b554c868fc3145642b3e77002f0f
 size 21020682
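
For context on the `bitsandbytes` settings added to the README above, the sketch below shows one way such a config is typically passed when loading a base model in 8-bit and attaching a PEFT adapter. This is a minimal sketch, not taken from this commit: the base model id and adapter repo id are placeholders, since neither is named here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantization config mirroring the values recorded in the README diff
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Placeholder ids: the base model and adapter repo are not specified in this commit
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "PepBun/adapter-repo")
```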