---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
Considering LLaMA's license constraints, this model is for research and learning only.
Please strictly respect LLaMA's usage policy. We are not allowed to publish the LLaMA weights, even finetuned ones, but there is no problem publishing the difference: a patch that can be applied to the original files.
The encryption is a simple XOR between files, which ensures that only people who have access to the original weights (from completely legal sources, of course) can transform them into the finetuned weights.
You can find the decryption code at https://github.com/LianjiaTech/BELLE/tree/main/models .
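For intuition, the recovery is conceptually just a byte-wise XOR of each released `.enc` patch against bytes taken from the original LLaMA checkpoint. The sketch below illustrates only that idea; the function name and the key handling are illustrative, so please use the official `decrypt.py` from the repository for the actual conversion.

```python
# Schematic illustration of the XOR idea only. The official decrypt.py in the
# BELLE repository handles how the key bytes are derived from the original
# LLaMA checkpoint and verifies the result; this is not that script.
def xor_bytes(patch: bytes, key: bytes) -> bytes:
    """XOR a released patch with key bytes of the same length."""
    return bytes(p ^ k for p, k in zip(patch, key))

# Conceptually: finetuned_bytes = xor_bytes(encrypted_patch_bytes, key_bytes_from_original_llama)
```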
# Model Card for Model ID
## Welcome
If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !
## Model description
We release our base model described in the paper
[Towards Better Instruction Following Language Models for Chinese](https://github.com/LianjiaTech/BELLE/blob/main/docs/Towards%20Better%20Instruction%20Following%20Language%20Models%20for%20Chinese.pdf).
We extend the original LLaMA vocabulary for more efficient tokenization of Chinese.
This model is derived through the following steps:
1. Train a tokenizer with a vocabulary of 50K tokens on 12M lines of Chinese text.
2. Merge the trained vocabulary with the original LLaMA vocabulary, resulting in a new vocabulary of 79,458 tokens.
3. Resize the word embeddings and further pretrain LLaMA on 3.4B Chinese words with all other parameters fixed (see the sketch after this list).
Tested on 5,000 lines of Chinese text, the extended tokenizer reduces the average number of tokens per line from 733 (original tokenizer) to 291.
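As a rough illustration of step 3, the sketch below resizes LLaMA's embedding matrices to the merged 79,458-token vocabulary and freezes everything except the word embeddings before further pretraining. It is a minimal sketch, not the authors' training script; the checkpoint and tokenizer paths are placeholders, and it assumes the Hugging Face `transformers` LLaMA classes.

```python
# Minimal sketch of step 3 (not the original training code). Paths are
# placeholders; the original LLaMA checkpoint must first be converted to the
# Hugging Face format.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/path/to_extended_tokenizer")   # merged 79,458-token vocab
model = LlamaForCausalLM.from_pretrained("/path/to_original_llama_7B_hf")   # HF-format LLaMA 7B

# Grow the input and output embedding matrices to the new vocabulary size.
model.resize_token_embeddings(len(tokenizer))

# Further pretraining updates only the word embeddings; all other LLaMA
# parameters stay fixed, as described above.
for name, param in model.named_parameters():
    param.requires_grad = name in ("model.embed_tokens.weight", "lm_head.weight")
```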
## Download, Convert & Check
1. After you `git clone` this model, check the MD5 checksums of the encrypted files:
```
md5sum ./*
228a21b7bf927f7ffd44c16c88256684 ./config.json.fb090219f6fed69687ab8f9c902f7802cff8060b08007ca0e5af177a8f9613d5.enc
f9b33d359f17a437f6c24b4de6f2272e ./generation_config.json.fd7ff399e5568cc21a0a8414f43df88ef7c424995b9b97a90563165d2cf79efd.enc
1c12c5bb95b1d191779ef160624a622a ./pytorch_model-00001-of-00002.bin.3b0666c50d7fd55d5116e788ec51aa96a34ba6816e86ffbee1dbe983bf511b4b.enc
1a67804dbdfd2168ef30ec077b73e90d ./pytorch_model-00002-of-00002.bin.763b336a89ef37327716d9c097835720662da656bdc27afde27daec9d0873284.enc
0d6db7f247a51589f3dd6d08dbfe64ce ./pytorch_model.bin.index.json.4f08b269e18619675bc3fd62f6efb3a8d59f9d54fa50f5625d0bba7adabaf90e.enc
34696bfce7b27548cfc2410e2b55762e ./special_tokens_map.json.96bdbb8504d9967606e5f661ccc7cbbac44a3661af863a7a58614670a0ccab33.enc
6014cf2235521f974c8d9fb69b6cf07e ./tokenizer_config.json.7078cc180b3d35e7ccd06b49ede4a7fef85f2572bda40c1fe2fc8f9ab25418d3.enc
56724a79091f3d1877cca65c6412d646 ./tokenizer.model.0b716a618c9e7c45648f91d997431eba3b0ff111b17ce7b777280ed771a49f95.enc
```
2. Decrypt the files using the scripts in https://github.com/LianjiaTech/BELLE/tree/main/models .
You can use the following Bash command.
Replace `/path/to_encrypted` with the path where you stored the encrypted files,
`/path/to_original_llama_7B` with the path to your original LLaMA 7B checkpoint,
and `/path/to_finetuned_model` with the path where you want to save the decrypted model.
```bash
mkdir /path/to_finetuned_model
for f in "/path/to_encrypted"/*; do
  if [ -f "$f" ]; then
    python3 decrypt.py "$f" "/path/to_original_llama_7B/consolidated.00.pth" "/path/to_finetuned_model/"
  fi
done
```
After executing the aforementioned command, you will obtain the following files.
```
./config.json
./generation_config.json
./pytorch_model-00001-of-00002.bin
./pytorch_model-00002-of-00002.bin
./pytorch_model.bin.index.json
./special_tokens_map.json
./tokenizer_config.json
./tokenizer.model
```
3. Check md5sum
You can verify the integrity of the decrypted files by computing their MD5 checksums to confirm they were fully recovered.
The expected checksums are:
```
md5sum ./*
df363050c4ded5c3136270cef715a7d1 ./config.json
2917a1cafb895cf57e746cfd7696bfe5 ./generation_config.json
a88865ce42f45c0c88cd4f7f8ecd75ea ./pytorch_model-00001-of-00002.bin
ce23ee57ecc73a78b0117e38a68f8d84 ./pytorch_model-00002-of-00002.bin
e5385004e4876ea6b93d6126e845a82f ./pytorch_model.bin.index.json
15f7a943faa91a794f38dd81a212cb01 ./special_tokens_map.json
08f6f621dba90b2a23c6f9f7af974621 ./tokenizer_config.json
6ffe559392973a92ea28032add2a8494 ./tokenizer.model
```
## Use model
This model is a pretrained language model and has not been instruction-tuned.
To obtain good instruction-following capabilities, please finetune it on your own instruction data.
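Once the decrypted files are in place, the model loads like any other Hugging Face LLaMA checkpoint. Below is a minimal loading and generation sketch; it assumes the decrypted files sit in `/path/to_finetuned_model` and that `transformers` and `accelerate` are installed (the prompt is only an example).

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "/path/to_finetuned_model"  # directory produced by the decryption step above

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

prompt = "中国的首都是"  # "The capital of China is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this checkpoint is not instruction-tuned, expect plain text continuation rather than instruction-following behavior.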
## Limitations
A few issues remain in the model trained on the current base model and data:
1. The model may produce factual errors when asked to follow instructions involving facts.
2. It may occasionally generate harmful responses, since it still struggles to identify potentially harmful instructions.
3. Its reasoning and coding abilities still need improvement.
Because the model still has these limitations, we require developers to use the open-sourced code, data, model, and any other artifacts generated by this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
## Citation
Please cite our paper and github when using our code, data or model.
```
@misc{ji2023better,
title={Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation},
author={Yunjie Ji and Yan Gong and Yong Deng and Yiping Peng and Qiang Niu and Baochang Ma and Xiangang Li},
year={2023},
eprint={2304.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{BELLE,
author = {Yunjie Ji and Yong Deng and Yan Gong and Yiping Peng and Qiang Niu and Baochang Ma and Xiangang Li},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```