mhhmm committed
Commit b54267f
1 Parent(s): dc7be24

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -86,9 +86,10 @@ The following hyperparameters were used during training:
 I'm using the MultiPL-E benchmark, the same one Code Llama used in their paper.
 
 
-| Dataset                       | Pass@k | Estimate | NumProblems | MinCompletions | MaxCompletions |
-|-------------------------------|--------|----------|-------------|----------------|----------------|
-| mhhmm/typescript-instruct-20k | 1      | 0.4241   | 159         | 13             | 20             |
+| Model                     | Pass@k | Estimate | Num problems |
+|---------------------------|--------|----------|--------------|
+| Code Llama - Instruct 13B | 1      | 0.390    | 159          |
+| Ours                      | 1      | 0.424    | 159          |
 
 How to reproduce my evaluation? Just follow the official MultiPL-E documentation (https://nuprl.github.io/MultiPL-E/tutorial.html) and change the model name to mine: `mhhmm/typescript-instruct-20k`.
 
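
The "Estimate" column above is a pass@k value, presumably computed with the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021), which is what MultiPL-E's pass_k script reports. The sketch below shows that estimator in Python; the per-problem `(n, c)` counts are made-up placeholders for illustration, not the actual completion counts behind the 0.424 figure (those come from MultiPL-E's executed completions).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n - c, k) / C(n, k), where n is the number of sampled
    completions for a problem and c the number that pass the tests."""
    if n - c < k:
        # Fewer than k failures: any k-sample contains a passing completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative (n, c) pairs only -- the real run has 159 TypeScript
# problems with up to 20 completions each, as in the table above.
results = [(20, 9), (20, 0), (13, 5)]
estimate = sum(pass_at_k(n, c, k=1) for n, c in results) / len(results)
print(f"pass@1 estimate: {estimate:.4f}")
```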