TheBloke committed
Commit 38e1297
1 Parent(s): b7b52ea

Upload README.md

Files changed (1)
  1. README.md +29 -0
README.md CHANGED
@@ -5,6 +5,13 @@ license: other
  model_creator: CodeFuse AI
  model_name: CodeFuse CodeLlama 34B
  model_type: llama
+ prompt_template: '<|role_start|>system<|role_end|>{system_message}
+
+   <|role_start|>human<|role_end|>{prompt}
+
+   <|role_start|>bot<|role_end|>
+
+   '
  quantized_by: TheBloke
  tasks:
  - code-generation
@@ -42,6 +49,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF)
  * [CodeFuse AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B)
@@ -274,6 +282,17 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c

  <br>

+ ## Code Community
+
+ **Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**)
+
+ + If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
+ + If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
+ + If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
  ## Performance

@@ -393,6 +412,16 @@ CodeFuse-CodeLlama34B-MFT is a model obtained by QLoRA fine-tuning of the base model CodeLlama-34b-Python

  <br>

+ ## Code Community
+ **Homepage**: 🏡 https://github.com/codefuse-ai (**Please support our projects with a Star🌟 + Fork🚀 + Watch👀**)
+
+ + If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
+ + If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
+ + If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
  ## Evaluation Performance (Code)

  | Model | HumanEval(pass@1) | Date |
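The `prompt_template` added to the YAML front matter in this commit is a single-quoted scalar whose blank lines fold to newlines, so it resolves to a three-line role-token prompt. A minimal sketch of filling it with plain string substitution (the `build_prompt` helper and the example messages are illustrative, not part of the commit):

```python
# Sketch of rendering the CodeFuse role-token prompt template added in
# this commit. Assumes the YAML scalar folds to the three-line string
# below; the helper name and example messages are placeholders.
PROMPT_TEMPLATE = (
    "<|role_start|>system<|role_end|>{system_message}\n"
    "<|role_start|>human<|role_end|>{prompt}\n"
    "<|role_start|>bot<|role_end|>\n"
)

def build_prompt(system_message: str, prompt: str) -> str:
    """Fill the two template slots and return the full prompt string."""
    return PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt)

full_prompt = build_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(full_prompt)
```

The generation prompt ends after the `bot` role token, so the model's completion is appended directly there.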