---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-v2
  results: []
---

# codet5-small-v2

This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set (an inference sketch follows the list):
- Loss: 0.0818
- ROUGE-1: 90.2445
- ROUGE-2: 87.8925
- ROUGE-L: 90.3346
- ROUGE-Lsum: 90.4247
- Gen Len: 13.7838 (average generated length, in tokens)
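
A minimal inference sketch. The checkpoint path `codet5-small-v2` and the example input are assumptions; substitute the actual Hub repo id or local directory and the task's real input format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed path: point this at the actual checkpoint directory or Hub repo id.
checkpoint = "codet5-small-v2"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Illustrative input; the real task format is not documented in this card.
source = "def add(a, b):\n    return a + b"
inputs = tokenizer(source, return_tensors="pt", truncation=True)

# Eval Gen Len averages ~14 tokens, so a small generation budget suffices.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```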

## Model description

This checkpoint fine-tunes [CodeT5-small](https://huggingface.co/Salesforce/codet5-small), Salesforce's code-aware encoder-decoder Transformer, for a sequence-to-sequence task that is not documented here. The ROUGE-based evaluation and the short average generation length (~14 tokens) are consistent with a short-text generation objective such as code summarization, though this is unconfirmed.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
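
These settings correspond roughly to the `Seq2SeqTrainingArguments` configuration sketched below. `output_dir`, the per-epoch evaluation strategy, and `predict_with_generate` are assumptions inferred from the results table and the ROUGE/Gen Len metrics:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="codet5-small-v2",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",     # Adam betas/epsilon above are the defaults
    num_train_epochs=10,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="epoch",    # assumed: the table reports one eval per epoch
    predict_with_generate=True,     # assumed: needed for ROUGE/Gen Len at eval time
)
```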

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 20   | 1.6233          | 60.4701 | 43.4599 | 57.215  | 56.9531   | 13.5135 |
| No log        | 2.0   | 40   | 0.5503          | 79.3375 | 69.4176 | 75.4377 | 75.2854   | 13.5676 |
| No log        | 3.0   | 60   | 0.2397          | 66.4876 | 55.0069 | 62.0571 | 61.9916   | 11.8378 |
| No log        | 4.0   | 80   | 0.1486          | 78.7452 | 74.9876 | 78.6615 | 78.6958   | 14.4054 |
| No log        | 5.0   | 100  | 0.1200          | 81.8039 | 78.3534 | 82.0292 | 82.0142   | 14.3243 |
| No log        | 6.0   | 120  | 0.1040          | 81.8039 | 78.3534 | 82.0292 | 82.0142   | 14.3243 |
| No log        | 7.0   | 140  | 0.0931          | 88.2604 | 84.8265 | 88.0824 | 88.3097   | 14.4324 |
| No log        | 8.0   | 160  | 0.0857          | 88.2604 | 84.8265 | 88.0824 | 88.3097   | 14.4324 |
| No log        | 9.0   | 180  | 0.0834          | 90.2445 | 87.8925 | 90.3346 | 90.4247   | 13.7838 |
| No log        | 10.0  | 200  | 0.0818          | 90.2445 | 87.8925 | 90.3346 | 90.4247   | 13.7838 |
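
Training loss reads "No log" because the run's 200 optimizer steps finished before the Trainer's default logging interval (500 steps) was reached. The ROUGE scores can be recomputed with the `evaluate` library; a minimal sketch, where `predictions` and `references` are placeholders for decoded model outputs and gold targets:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholders: decoded model outputs and gold reference targets.
predictions = ["returns the sum of two numbers"]
references = ["return the sum of two numbers"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Scale to 0-100 to match the table above.
print({name: round(value * 100, 4) for name, value in scores.items()})
```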


### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1