PepBun committed
Commit 975f304
Parent: 450ca50

Upload model

Files changed (3):
  1. README.md +3 -897
  2. adapter_config.json +2 -2
  3. adapter_model.bin +3 -0
README.md CHANGED
@@ -203,908 +203,14 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 
  The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: True
- - load_in_4bit: False
+ - load_in_8bit: False
+ - load_in_4bit: True
  - llm_int8_threshold: 6.0
  - llm_int8_skip_modules: None
  - llm_int8_enable_fp32_cpu_offload: False
  - llm_int8_has_fp16_weight: False
  - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
- 
- ### Framework versions
- 
- 
- - PEFT 0.6.2
- ## Training procedure
- 
- 
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: True
- - load_in_4bit: False
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
- […this duplicated "### Framework versions / ## Training procedure" block repeats verbatim through old line 1107; 897 removed lines in total…]
+ - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: float32
 
  ### Framework versions
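
For context, the updated README describes a switch from 8-bit loading to 4-bit fp4 quantization with nested (double) quantization. A minimal sketch of that config using `transformers`' `BitsAndBytesConfig` (assumes `transformers` and `bitsandbytes` are installed; `"BASE_MODEL_ID"` is a placeholder, since this commit does not name the base model):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the values in the post-commit README: 4-bit fp4 quantization,
# double quantization enabled, float32 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

# "BASE_MODEL_ID" is hypothetical; the diff does not identify the base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID",
    quantization_config=bnb_config,
)
```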
adapter_config.json CHANGED
@@ -19,8 +19,8 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "k_proj",
- "q_proj"
+ "q_proj",
+ "k_proj"
  ],
  "task_type": null
  }
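
The `target_modules` change only swaps the order of `q_proj` and `k_proj`; PEFT matches modules by name, so the set of adapted modules is unchanged. A sketch of reading the config back from the Hub (the repo id is a placeholder, as this page does not show the full repository name):

```python
from peft import PeftConfig

# Hypothetical repo id for illustration only.
ADAPTER_ID = "PepBun/adapter-repo"

# Parses the repo's adapter_config.json and returns the matching config class.
config = PeftConfig.from_pretrained(ADAPTER_ID)
print(config.target_modules)  # e.g. ["q_proj", "k_proj"]
```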
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8b3a5ff305f5c15d29900598e64e3717a21f7193fe6d0d994c874a96fe49df7
+ size 21020682
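
The three added lines are a Git LFS pointer, not the weights themselves; the ~21 MB binary lives in LFS storage. One way to sanity-check a downloaded copy against the pointer's digest and size (a sketch; assumes the file is in the working directory):

```python
import hashlib

# sha256 and size taken from the LFS pointer above.
EXPECTED_SHA256 = "c8b3a5ff305f5c15d29900598e64e3717a21f7193fe6d0d994c874a96fe49df7"
EXPECTED_SIZE = 21020682

with open("adapter_model.bin", "rb") as f:
    data = f.read()

assert len(data) == EXPECTED_SIZE, f"size mismatch: {len(data)}"
assert hashlib.sha256(data).hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```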