---
tags:
- code
- llama
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-9B/blob/main/LICENSE
---

<p align="center">
<img width="300px" alt="Yi-Coder" src="https://huggingface.co/TechxGenus/Yi-9B-Coder/resolve/main/Yi-Coder.jpg">
</p>

### Yi-Coder

We fine-tuned Yi-9B on an additional 0.5 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate training. The resulting model achieves **69.5 pass@1** on HumanEval-Python. It uses the Alpaca instruction format (without the system prompt).
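
Concretely, a formatted prompt follows the template shown in the usage examples below; the instruction text here is only an illustration:

```
### Instruction
Write a Python function that checks whether a number is prime.
### Response
```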

### Usage

Here are some examples of how to use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Alpaca-style prompt template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Yi-9B-Coder")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/Yi-9B-Coder",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
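
If you only want the completion, without the prompt echoed back, one option is to slice off the prompt tokens before decoding. This is a small convenience sketch reusing the `inputs` and `outputs` variables from the example above:

```python
# Decode only the newly generated tokens, dropping the echoed prompt.
new_tokens = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```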

With the `text-generation` pipeline:

```python
from transformers import pipeline
import torch

# Alpaca-style prompt template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

generator = pipeline(
    model="TechxGenus/Yi-9B-Coder",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
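
Note that `max_length` counts the prompt tokens as well. To bound only the completion and skip re-printing the prompt, the pipeline also accepts `max_new_tokens` and `return_full_text` (standard `transformers` text-generation pipeline arguments):

```python
# Variant: bound only the new tokens and return just the completion.
result = generator(prompt, max_new_tokens=2048, return_full_text=False)
print(result[0]["generated_text"])
```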

### Note

The model may occasionally make errors, produce misleading content, or struggle with tasks unrelated to coding. It has undergone very limited testing; additional safety testing should be performed before any real-world deployment.