mhhmm committed
Commit 8186ddb
Parent: 4cfde11

Update README.md

Files changed (1): README.md (+109 -1)

---
license: llama2
library_name: peft
tags:
- typescript
- instruction-tuning
- code-generation
- lora
- peft
base_model: codellama/CodeLlama-13b-hf
model-index:
- name: lora-out
  results: []
datasets:
- mhhmm/typescript-instruct-20k
language:
- en
metrics:
- code_eval
pipeline_tag: text-generation
---

## Architecture

![The Architecture](https://github.com/LeVuMinhHuy/brocode/blob/master/.pics/about-the-model.png?raw=true)

## About

This model is a LoRA fine-tuned version of [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf), trained on the [mhhmm/typescript-instruct-20k](https://huggingface.co/datasets/mhhmm/typescript-instruct-20k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4268
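
If you just want to try the adapter, here is a minimal inference sketch using `transformers` and `peft`. The prompt is only an illustrative example, and the adapter id is the repo name used in the evaluation section below; loading the 13B base in fp16 needs roughly 26 GB of GPU memory.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-13b-hf"
adapter_id = "mhhmm/typescript-instruct-20k-v2"  # this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter on top of the frozen base model
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "// Write a TypeScript function that removes duplicates from a number array\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```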

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
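
The values above come straight from the training log. For anyone replicating the run with the plain Hugging Face `Trainer`, they map roughly to the `TrainingArguments` below; this is a sketch, not the original training config. The output directory is only assumed from the model-index name, and per-device batch size 8 on 2 GPUs is what gives the total batch size of 16.

```
from transformers import TrainingArguments

# Rough restatement of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="lora-out",            # assumed from the model-index name
    learning_rate=2e-4,
    per_device_train_batch_size=8,    # x 2 GPUs = total batch size 16
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=1,
)
```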

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7555 | 0.01 | 1 | 0.7062 |
| 0.7036 | 0.05 | 7 | 0.6673 |
| 0.5422 | 0.1 | 14 | 0.5152 |
| 0.5351 | 0.15 | 21 | 0.4866 |
| 0.495 | 0.2 | 28 | 0.4688 |
| 0.5651 | 0.25 | 35 | 0.4587 |
| 0.5146 | 0.3 | 42 | 0.4486 |
| 0.4955 | 0.35 | 49 | 0.4469 |
| 0.5117 | 0.4 | 56 | 0.4432 |
| 0.5245 | 0.45 | 63 | 0.4410 |
| 0.5003 | 0.5 | 70 | 0.4371 |
| 0.4502 | 0.55 | 77 | 0.4340 |
| 0.527 | 0.6 | 84 | 0.4315 |
| 0.48 | 0.65 | 91 | 0.4305 |
| 0.448 | 0.7 | 98 | 0.4289 |
| 0.5427 | 0.75 | 105 | 0.4289 |
| 0.4715 | 0.8 | 112 | 0.4279 |
| 0.5584 | 0.85 | 119 | 0.4276 |
| 0.4936 | 0.9 | 126 | 0.4267 |
| 0.4788 | 0.95 | 133 | 0.4268 |
| 0.476 | 1.0 | 140 | 0.4268 |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
- PEFT 0.6.0
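
To set up a close-enough environment, something like the following should work. This is a sketch: `4.36.0.dev0` was a development build of Transformers, so the nearest release is pinned here, and you still need a PyTorch build that matches your CUDA version (the run above used 2.0.1+cu118).

```
!pip install "transformers==4.36.0" "datasets==2.15.0" "tokenizers==0.15.0" "peft==0.6.0"
!pip install "torch==2.0.1"
```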

### Evaluation

I evaluate with the MultiPL-E benchmark, the same benchmark Code Llama uses in its paper.

| Model                     | Pass@k | Estimate | Num problems |
|---------------------------|--------|----------|--------------|
| Code Llama - Instruct 13B | 1      | 39.0%    | 159          |
| Our 13B                   | 1      | 42.4%    | 159          |
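
The "Estimate" column is the pass@k figure reported by MultiPL-E's `pass_k.py`. As far as I can tell from MultiPL-E's code, this is the standard unbiased estimator from the HumanEval/Codex evaluation; with the flags in the script below, n = 20 completions are sampled per problem (`--completion-limit 20`) and k = 1:

$$
\text{pass@}k = \mathop{\mathbb{E}}_{\text{problems}}\left[\,1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\,\right]
$$

where c is the number of the n sampled completions for a problem that pass all tests.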

How to reproduce my evaluation? Follow the official MultiPL-E tutorial (https://nuprl.github.io/MultiPL-E/tutorial.html) and swap in my model name: `mhhmm/typescript-instruct-20k-v2`.

This is the code I ran on Google Colab (using an A100 40GB; yes, it requires that much GPU RAM).

If you have an even stronger GPU, increase `--batch-size` or `--completion-limit`.

```
# Install dependencies and fetch the MultiPL-E harness
!pip install --upgrade pip
!pip install aiohttp numpy tqdm pytest datasets torch transformers sentencepiece
!git clone https://github.com/nuprl/MultiPL-E
%cd MultiPL-E

# Generate TypeScript completions for the HumanEval problems
!mkdir typescript
!python3 automodel.py --name mhhmm/typescript-instruct-20k-v2 --root-dataset humaneval --lang ts --temperature 0.2 --batch-size 10 --completion-limit 20 --output-dir-prefix typescript

# Execute the generated completions against their test suites
%cd evaluation/src
!python3 main.py --dir ../../typescript --output-dir ../../typescript --recursive

# Back at the repo root, compute pass@k over the execution results
%cd ../..
!python3 pass_k.py ./typescript/*
```