---
library_name: transformers
tags:
  - peft
license: mit
datasets:
  - HuggingFaceH4/ultrachat_200k
language:
  - en
---

A LoRA adapter for kaitchup/Maixtchup-4x7b, briefly fine-tuned on UltraChat (HuggingFaceH4/ultrachat_200k).

To load the base model in 4-bit and attach this adapter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

model_name = "kaitchup/Maixtchup-4x7b"

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# 4-bit NF4 quantization with double quantization; compute in float16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Base model, quantized and dispatched across available devices.
# attn_implementation="flash_attention_2" requires the flash-attn package
# and a compatible GPU; drop the argument to fall back to the default attention.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    attn_implementation="flash_attention_2",
)
model.config.use_cache = True

# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, "kaitchup/Maixtchup-4x7b-QLoRA-SFT-UltraChat")
```
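
Once the adapter is attached, the model generates like any causal LM. A minimal sketch, continuing from the snippet above; the prompt and sampling settings are illustrative, and the plain-text prompt format is an assumption (the adapter's exact chat template is not documented here):

```python
# Assumes `model` and `tokenizer` from the loading snippet above.
prompt = "Tell me about gravity."  # illustrative prompt, not a trained template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,   # illustrative generation settings
        do_sample=True,
        temperature=0.7,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```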