TheBloke committed
Commit 3686113
1 Parent(s): f86237d

Upload README.md

Files changed (1):
  1. README.md +29 -0
README.md CHANGED
@@ -5,6 +5,13 @@ license: other
 model_creator: CodeFuse AI
 model_name: CodeFuse CodeLlama 34B
 model_type: llama
+prompt_template: '<|role_start|>system<|role_end|>{system_message}
+
+  <|role_start|>human<|role_end|>{prompt}
+
+  <|role_start|>bot<|role_end|>
+
+  '
 quantized_by: TheBloke
 tasks:
 - code-generation
@@ -58,6 +65,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF)
 * [CodeFuse AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B)
@@ -304,6 +312,17 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c
 
 <br>
 
+## Code Community
+
+**Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**)
+
++ If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
++ If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
++ If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
 ## Performance
 
 
@@ -423,6 +442,16 @@ CodeFuse-CodeLlama34B-MFT is a model obtained by QLoRA fine-tuning of the base model CodeLlama-34b-Pytho
 
 <br>
 
+## Code Community
+**Homepage**: 🏡 https://github.com/codefuse-ai (**Please give our project your support with a Star🌟 + Fork🚀 + Watch👀**)
+
++ If you would like to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
++ If you would like to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
++ If you would like to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
 ## Evaluation Results (Code)
 
 | Model | HumanEval (pass@1) | Date |
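The `prompt_template` added in this commit can be rendered in code. Below is a minimal sketch of filling in the three role-tagged turns; the `build_prompt` helper and the assumption that turns are separated by bare newlines (inferred from the template's layout) are mine, not part of the commit.

```python
# Sketch of the CodeFuse-CodeLlama chat format from this commit's
# prompt_template metadata. Whitespace between turns is an assumption.
PROMPT_TEMPLATE = (
    "<|role_start|>system<|role_end|>{system_message}\n"
    "<|role_start|>human<|role_end|>{prompt}\n"
    "<|role_start|>bot<|role_end|>"
)

def build_prompt(prompt: str, system_message: str = "") -> str:
    """Fill the role-tagged template; the model's reply is generated
    as a continuation after the trailing bot tag."""
    return PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt)

if __name__ == "__main__":
    print(build_prompt("Write a quicksort function in Python."))
```

The resulting string is a plain prompt, so it can be passed unchanged to whichever client loads the GGUF, GPTQ, or AWQ files listed under "Repositories available".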