codefuse-admin committed
Commit a7aace4 • 1 Parent(s): 3b4b391

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ CodeFuse-QWen-14B is a 14B Code-LLM finetuned by QLoRA of multiple code tasks on
 
  🔥🔥 2023-09-27 CodeFuse-StarCoder-15B has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval, which is a 21% increase compared to StarCoder's 33.6%.
 
- 🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary). Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
+ 🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of [CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B). Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
 
  🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B) has achieved a 74.4% pass@1 (greedy decoding) on HumanEval, which is the SOTA result among open-sourced LLMs at present.