mhhmm committed on
Commit
0eb9b41
1 Parent(s): 044920b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -91,7 +91,7 @@ I'm using the MultiPL-E benchmark, the same as Code Llama used in their paper
 | Code Llama - Instruct 13B | 1 | 39.0% | 159 |
 | Our 13B | 1 | 42.4% | 159 |
 
-How to reproduce my evaluation? Just run the official MultiPL-E tutorial (https://nuprl.github.io/MultiPL-E/tutorial.html) and change the model name to mine: `mhhmm/typescript-instruct-20k`
+How to reproduce my evaluation? Just run the official MultiPL-E tutorial (https://nuprl.github.io/MultiPL-E/tutorial.html) and change the model name to mine: `mhhmm/typescript-instruct-20k-v2`
 
 This is the code that I ran on Google Colab (using an A100 40GB; yes, it requires that much GPU RAM)
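For context, the tutorial run this change points at could look roughly like the sketch below, with the new model name swapped in. The flags, script names, and evaluation image here are assumptions recalled from the MultiPL-E tutorial and may have changed; verify everything against the linked page before running.

```shell
# Sketch of the MultiPL-E tutorial flow (assumed CLI; check the tutorial page).
git clone https://github.com/nuprl/MultiPL-E
cd MultiPL-E

# Generate TypeScript completions with the fine-tuned model.
# --completion-limit 20 samples per problem is what the tutorial suggests;
# the model name is the one this commit switches to.
python3 automodel.py \
    --name mhhmm/typescript-instruct-20k-v2 \
    --root-dataset humaneval \
    --lang ts \
    --temperature 0.2 \
    --batch-size 20 \
    --completion-limit 20 \
    --output-dir-prefix tutorial

# Execute the generated completions in the sandboxed evaluation container
# (image name is an assumption; the tutorial names the exact image to pull).
docker run --rm --network none \
    -v "$(pwd)/tutorial:/tutorial:rw" \
    ghcr.io/nuprl/multipl-e-evaluation \
    --dir /tutorial --output-dir /tutorial

# Aggregate the results into the pass@1 number reported in the table above.
python3 pass_k.py ./tutorial/*
```

Generation needs the large GPU mentioned above; the evaluation container itself runs on CPU.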