---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0

I am **not** the author of this work; I am only reposting it. Quoting the anonymous author:

```
Well, here it is. Storytelling Qlora. Trained on base llama2 13B but works flawlessly on other 13Bs. Idk about other sizes. 25MB of nsfw books, 60MB of sfwish ones. No special formatting other than *** between chapters and ⁂ between books. Takes some text to get going but once you have some context filled, it feels way better for prose than raw llama or instruct models, imho. Do whatever you want with it, I can't be bothered to maintain a HF page. WTFPL.
```

The training data is just material from nai's archive.
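
Below is a minimal sketch of how an adapter like this could be loaded against the quantization config listed above. The base model id (`meta-llama/Llama-2-13b-hf`) and the adapter path are assumptions for illustration; the actual repo names are not given in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config from the Training procedure section.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",            # only takes effect if load_in_4bit=True
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model id is an assumption ("trained on base llama2 13B").
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach the LoRA adapter weights on top of the quantized base model.
# "path/to/this-adapter" is a placeholder, not a published repo id.
model = PeftModel.from_pretrained(base, "path/to/this-adapter")

# Per the author's note, the model needs some context to get going, so a
# story opening works better than a bare instruction. The training data used
# "***" between chapters and "⁂" between books as the only formatting.
prompt = "Chapter One\n\nThe rain had not stopped for three days when"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the config above sets `load_in_8bit: True` even though the author describes the adapter as a QLoRA; the 4-bit (`nf4`) settings are present in the config but inactive unless `load_in_4bit` is enabled.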