codefuse-admin committed 5cb486a (parent: f2533c0): Update README.md

README.md (changed):
```diff
@@ -20,7 +20,9 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c
 
 ## News and Updates
 
-🔥🔥🔥
+🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of CodeFuse-CodeLlama-34B. Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
+
+🔥🔥🔥 2023-09-11 CodeFuse-CodeLlama-34B has achieved 74.4% pass@1 (greedy decoding) on HumanEval, which is the SOTA result for open-sourced LLMs at present.
 
 <br>
```