feihu.hf committed
Commit dd4a03e • Parent(s): 010ed3b
update readme

README.md CHANGED
@@ -78,8 +78,9 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-4B-Chat-GPTQ`, `Qwen1.5-4B-Chat-AWQ`, and `Qwen1.5-4B-Chat-GGUF`.
 
 
-##
-
+## Tips
+
+* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
 
 
 ## Citation
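The new tip points readers at the hyper-parameters shipped in the repo's `generation_config.json`. A minimal sketch of where that file enters the standard `transformers` generation flow; the model ID matches this repo, but the prompt and `max_new_tokens` value are illustrative, not taken from the README:

```python
# Sketch, not the official README snippet: shows how the repo's
# generation_config.json hyper-parameters are picked up and applied.
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "Qwen/Qwen1.5-4B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# from_pretrained() already loads the repo's generation_config.json into
# model.generation_config, so generate() uses those hyper-parameters by
# default; loading it explicitly just makes the tip's advice inspectable.
gen_config = GenerationConfig.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(input_ids, generation_config=gen_config, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.batch_decode(
    generated_ids[:, input_ids.shape[-1]:], skip_special_tokens=True
)[0]
print(response)
```

The same pattern should apply unchanged to the quantized correspondents mentioned above (`Qwen1.5-4B-Chat-GPTQ`, `Qwen1.5-4B-Chat-AWQ`), swapping in the respective model ID.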