bin123apple committed on
Commit
07948bb
1 Parent(s): 2fb721d

introduction

Files changed (1)
  1. README.md +38 -0
README.md CHANGED
@@ -1,3 +1,41 @@
  ---
  license: apache-2.0
  ---
+
+ We introduced a new model designed for the code generation task. Its test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024): 90.9% vs. 90.2%.
+
+ In addition, unlike previous open-source models, AutoCoder offers a new feature: it can **automatically install the required packages** and keep attempting to run the code until it deems there are no remaining issues, **whenever the user wishes to execute the code**.
+
+ See details on the [AutoCoder GitHub](https://github.com/bin123apple/AutoCoder).
+
+ A simple test script:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from datasets import load_dataset
+
+ model_path = ""  # path or Hugging Face ID of the AutoCoder checkpoint
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path,
+                                              device_map="auto")
+
+ # HumanEval+ benchmark; its prompts can be used as questions
+ HumanEval = load_dataset("evalplus/humanevalplus")
+
+ Input = ""  # input your question here
+
+ messages = [
+     {'role': 'user', 'content': Input}
+ ]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
+                                        return_tensors="pt").to(model.device)
+
+ # Greedy decoding: temperature/top_p are inert when do_sample=False
+ outputs = model.generate(inputs,
+                          max_new_tokens=1024,
+                          do_sample=False,
+                          temperature=0.0,
+                          top_p=1.0,
+                          num_return_sequences=1,
+                          eos_token_id=tokenizer.eos_token_id)
+
+ # Strip the prompt tokens and decode only the newly generated answer
+ answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
+ print(answer)
+ ```