This model is a Llama2-13b-chat-hf model fine-tuned on a Japanese dataset using LoRA. It was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.
The training set of this model contains:
- A randomly sampled 5% of the llm-japanese-dataset by izumi-lab (see the sampling sketch below).
- The japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main.
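The 5% subsample could be reproduced with a sketch like the following, assuming the Hugging Face `datasets` library and the `izumi-lab/llm-japanese-dataset` repository; the shuffle seed is an illustrative assumption, not the one actually used in training.

```python
# Hedged sketch: randomly sample 5% of llm-japanese-dataset.
# The seed is an assumption for illustration; the actual seed used
# during training is not documented here.
from datasets import load_dataset

ds = load_dataset("izumi-lab/llm-japanese-dataset", split="train")
subset = ds.shuffle(seed=42).select(range(int(0.05 * len(ds))))
print(f"Sampled {len(subset)} of {len(ds)} examples")
```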
For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/; a minimal sketch is shown below.
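As a minimal sketch of alpaca-lora-style inference with this adapter: the adapter path below is a placeholder for this repository's id, and the 8-bit loading mirrors the quantization config listed under Training procedure.

```python
# Minimal inference sketch with transformers + peft.
# ADAPTER_PATH is a placeholder; point it at this adapter's repo or a local copy.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-13b-chat-hf"  # requires accepting Meta's license
ADAPTER_PATH = "path/to/this-lora-adapter"     # placeholder, not a real repo id

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,         # matches the 8-bit training config below
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTER_PATH)
model.eval()

# Alpaca-style prompt, as used by the alpaca-lora repo.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n日本の首都はどこですか。\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```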
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list for the equivalent `BitsAndBytesConfig`):
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
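For reproducibility, these values map directly onto the `BitsAndBytesConfig` class in transformers; a minimal sketch, assuming you want to reload the base model with the same quantization:

```python
# Sketch: the training-time quantization settings above, expressed as a
# transformers BitsAndBytesConfig for reloading the base model identically.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)
# Pass via: LlamaForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```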
### Framework versions
- PEFT 0.5.0.dev0
You must agree to Meta's license terms for Llama-2 before using this LoRA adapter.