A bilingual instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B

- Instruction-following datasets used: alpaca, alpaca-zh, codealpaca
- Training framework: https://github.com/hiyouga/LLaMA-Factory

Please follow the [baichuan-7B License](https://huggingface.co/baichuan-inc/baichuan-7B/resolve/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) to use this model.
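Since this is a LoRA model, a brief illustrative sketch of the idea may help: LoRA keeps the frozen base weights W and learns a low-rank update ΔW = B·A, with B initialized to zero so training starts from the unmodified base model. The toy pure-Python sketch below is background only, with made-up dimensions; it is not code from this repository or from LLaMA-Factory.

```python
# Toy LoRA sketch (illustrative only, not this repository's code).
# Adapted weight: W' = W + B @ A, where B is (d_out x r), A is (r x d_in), r << d.
import random

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def lora_delta(d_out, d_in, r):
    """Build the low-rank update delta_W = B @ A at initialization."""
    B = [[0.0] * r for _ in range(d_out)]  # B starts at zero (standard LoRA init)
    A = [[random.gauss(0.0, 0.02) for _ in range(d_in)] for _ in range(r)]
    return matmul(B, A)

# Full (8 x 8) shape but rank <= 2; all zeros at init, so the adapted
# model initially behaves exactly like the frozen base model.
delta = lora_delta(8, 8, 2)
```

Only the small B and A matrices are trained, which is why a LoRA adapter for a 7B base model is cheap to store and train.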
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```

Alternatively, you can launch a CLI demo using the script in https://github.com/hiyouga/LLaMA-Factory:

```bash
python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-7b-sft
```
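The `streamer` argument above makes `generate` emit tokens as they are produced instead of returning only the finished sequence, which is what gives the CLI demo its typewriter-style output. The toy sketch below shows that control flow with a stubbed model; none of these names come from transformers or LLaMA-Factory.

```python
# Toy illustration of streamed generation: each new token is handed to a
# callback as soon as the decode loop produces it, rather than after the
# whole sequence is finished.
def generate_streamed(next_token_fn, prompt_tokens, max_new_tokens, on_token):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token_fn(tokens)
        if tok is None:  # stand-in for an end-of-sequence token
            break
        tokens.append(tok)
        on_token(tok)  # streamer callback: a real CLI would print/flush here
    return tokens

# Stub "model": emits a fixed reply, then stops.
reply = iter(["Hello", ",", " world", "!"])
streamed = []
out = generate_streamed(lambda toks: next(reply, None), ["<user>"], 256, streamed.append)
```

With the stub, `streamed` receives the four reply tokens one by one, and `out` is the prompt followed by the reply.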
---

You could reproduce our results with the following script using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --template default \
    --finetuning_type lora \
    --lora_rank 16 \
    --lora_target all \
    --output_dir baichuan_lora \
    --overwrite_cache \
    --per_device_train_batch_size 8 \