codefuse-admin commited on
Commit
5cb486a
•
1 Parent(s): f2533c0

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -20,7 +20,9 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c
 
 ## News and Updates
 
-🔥🔥🔥 CodeFuse-CodeLlama34B-MFT has achieved 74.4% pass@1 on HumanEval, which is SOTA at present.
+🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of CodeFuse-CodeLlama-34B. Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
+
+🔥🔥🔥 2023-09-11 CodeFuse-CodeLlama-34B has achieved 74.4% pass@1 (greedy decoding) on HumanEval, which is the SOTA result for open-sourced LLMs at present.
 
 <br>
 
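Since the new entries point readers to the 4-bit quantized checkpoint, a minimal loading sketch is shown below for context. It is not part of this commit: the repo id is the one linked above, but loading it through the standard `transformers` `AutoTokenizer`/`AutoModelForCausalLM` API with `trust_remote_code=True`, the example prompt, and the greedy `generate` call are assumptions; the model card on Hugging Face has the authoritative instructions and dependency list.

```python
# Hypothetical sketch, not from this commit: assumes the 4-bit checkpoint loads
# through the standard transformers API; check the model card for the supported
# loading code and any extra dependencies (e.g. a GPTQ/quantization backend).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-CodeLlama-34B-4bits"  # repo linked in the news entry

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the quantized weights on available GPUs
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

prompt = "# Write a function that checks whether a number is prime.\ndef is_prime(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=False gives greedy decoding, matching the setting quoted for the HumanEval scores.
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```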