nlp-chula committed on
Commit
0e71845
1 Parent(s): 12d8054

End of training

Files changed (2)
  1. README.md +13 -14
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -16,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9164
-- Accuracy: 0.7808
+- Loss: 0.8123
+- Accuracy: 0.7762
 
 ## Model description
 
@@ -36,29 +36,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 3e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 6
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.3287        | 1.0   | 512  | 1.2494          | 0.5985   |
-| 0.8335        | 2.0   | 1024 | 0.8613          | 0.7289   |
-| 0.5842        | 3.0   | 1536 | 0.8177          | 0.7682   |
-| 0.4046        | 4.0   | 2048 | 0.8337          | 0.7682   |
-| 0.2979        | 5.0   | 2560 | 0.8745          | 0.7773   |
-| 0.2083        | 6.0   | 3072 | 0.9164          | 0.7808   |
+| 1.2935        | 1.0   | 512  | 0.9896          | 0.6811   |
+| 0.9114        | 2.0   | 1024 | 0.8804          | 0.7221   |
+| 0.64          | 3.0   | 1536 | 0.8094          | 0.7546   |
+| 0.4395        | 4.0   | 2048 | 0.8038          | 0.7705   |
+| 0.3559        | 5.0   | 2560 | 0.8123          | 0.7762   |
 
 
 ### Framework versions
 
-- Transformers 4.34.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.5
-- Tokenizers 0.14.0
+- Transformers 4.34.1
+- Pytorch 2.1.0+cu118
+- Datasets 2.14.6
+- Tokenizers 0.14.1
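As a minimal sketch of what the updated hyperparameters imply: with `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from its initial value to zero over the full run, and the results table shows 512 optimizer steps per epoch. The dict and helper below are illustrative, not objects from the `transformers` library itself.

```python
# Hyperparameters from the updated model card, as a plain dict (illustrative).
hparams = {
    "learning_rate": 3e-5,
    "train_batch_size": 16,
    "eval_batch_size": 16,
    "seed": 42,
    "num_epochs": 5,
}

STEPS_PER_EPOCH = 512  # from the results table: each epoch spans 512 steps
TOTAL_STEPS = STEPS_PER_EPOCH * hparams["num_epochs"]  # 2560, the final Step value


def linear_lr(step: int,
              base_lr: float = hparams["learning_rate"],
              total_steps: int = TOTAL_STEPS) -> float:
    """Linear decay from base_lr to 0 over total_steps (assumes zero warmup)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps
```

With these numbers the rate starts at 3e-05, is halved at the midpoint of epoch 3 (step 1280), and reaches 0 at step 2560, the end of training.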
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9b9d95ff0361ceb3c204d32ea5298da76f7dbdd827d7c69cca6f548902a66adc
+oid sha256:7b32a7773d2175c67e38a009e32a578ed4d80c49c60fc1a57112fa2243ad18a4
 size 421096302
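The weights file is stored under Git LFS, so what the repository actually versions is a small pointer file in the three-line "key value" format shown above; only the `oid` line changes between these commits, while the size stays identical. A minimal sketch of parsing that pointer format (the helper name is our own, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs spec v1 pointer file ("key value" per line) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256:7b32a777..."
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }


# The new pointer from this commit:
new_pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7b32a7773d2175c67e38a009e32a578ed4d80c49c60fc1a57112fa2243ad18a4
size 421096302
"""
info = parse_lfs_pointer(new_pointer)
```

Because the pointer records only a content hash and byte count, an unchanged `size` with a changed `oid` is exactly what retraining the same architecture produces: same parameter count, different weights.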