Update README.md
README.md
CHANGED
@@ -85,6 +85,9 @@ The following hyperparameters were used during training:

I'm using the MultiPL-E benchmark, the same one the Code Llama paper uses.

+Dataset,Pass@k,Estimate,NumProblems,MinCompletions,MaxCompletions
+humaneval-ts-mhhmm_typescript_instruct_20k_v2-0.2-reworded,1,0.4240791832736455,159,13,20
+
How to reproduce my evaluation? Just follow the official MultiPL-E tutorial (https://nuprl.github.io/MultiPL-E/tutorial.html) and change the model name to mine: `mhhmm/typescript-instruct-20k`

This is the code that I ran on Google Colab (using an A100 40GB; yes, it requires that much GPU RAM)
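A note on the two added CSV lines: they look like the raw output of MultiPL-E's `pass_k.py`, where `Estimate` is the standard unbiased pass@k estimator (the one from the Codex/HumanEval paper) averaged over the 159 TypeScript problems, so the model scores roughly 0.42 pass@1 (the `0.2` in the dataset name is the sampling temperature). A minimal sketch of that calculation, with made-up per-problem counts — the real counts come from the executed completions for each problem:

```python
# Sketch of the pass@k estimator behind the "Estimate" column.
# The (completions, correct) pairs below are hypothetical; MultiPL-E
# derives them from the executed completions for each problem.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples passes),
    given n generated completions of which c pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

per_problem = [(20, 9), (20, 0), (13, 13)]  # hypothetical per-problem counts
estimate = sum(pass_at_k(n, c, k=1) for n, c in per_problem) / len(per_problem)
print(f"pass@1 estimate: {estimate:.4f}")
```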
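The Colab code referenced in the last line sits outside this hunk. Purely as a hedged starting point (not the author's actual notebook, and assuming the repo hosts full merged weights rather than a LoRA adapter), loading `mhhmm/typescript-instruct-20k` with Hugging Face transformers would look roughly like this:

```python
# Hedged sketch, NOT the author's Colab notebook: load the model and
# sample one TypeScript completion with Hugging Face transformers.
# Assumes the repo contains full merged weights (not just an adapter).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mhhmm/typescript-instruct-20k"  # model name from the README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single A100 40GB
    device_map="auto",
)

prompt = "// Write a TypeScript function that reverses a string.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```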