---
license: apache-2.0
language:
- zh
- en
library_name: transformers
tags:
- baichuan
---

This is an SFT model trained with https://github.com/hiyouga/LLaMA-Efficient-Tuning. Thanks to the original author for their hard work. All work is based on https://huggingface.co/baichuan-inc/baichuan-7B. You can find the matching dataset on the GitHub page of the fine-tuning framework.

We ran 4 epochs of distributed training on an 8-card H100 machine, which took little time; however, the loss did not change much. In the future, we will update the dataset to see how the model performs in a vertical domain.

The inference code below comes from the original author and can be used directly.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "/data/baichuan-7b-sft")  # change to your own adapter path
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
# Prompt template used during fine-tuning. The role tags were stripped by the
# markdown renderer in the original card; verify them against the template
# configured in the fine-tuning framework.
inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```

Alternatively, you can launch a CLI demo with the script from https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --checkpoint_dir hiyouga/baichuan-7b-sft
```
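
If you would rather ship a single standalone checkpoint instead of loading the adapter with peft at runtime, the adapter weights can be merged into the base model. This is a minimal sketch, assuming the checkpoint is a LoRA adapter (the framework's default fine-tuning type); the output directory name is hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and apply the adapter (same paths as above).
base = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "/data/baichuan-7b-sft")

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# Save a plain transformers checkpoint (hypothetical output directory).
merged.save_pretrained("/data/baichuan-7b-sft-merged")
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
tokenizer.save_pretrained("/data/baichuan-7b-sft-merged")
```

The merged directory can then be loaded with `AutoModelForCausalLM.from_pretrained` alone, without peft installed.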