mor40 committed
Commit
0259942
1 Parent(s): 1d42ffe

Training complete

Files changed (1)
  1. README.md +12 -13
README.md CHANGED
@@ -2,8 +2,6 @@
 base_model: mor40/BulBERT-chitanka-model
 tags:
 - generated_from_trainer
-datasets:
-- bgglue
 model-index:
 - name: BulBERT-exams-5epochs
   results: []
@@ -14,10 +12,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # BulBERT-exams-5epochs
 
-This model is a fine-tuned version of [mor40/BulBERT-chitanka-model](https://huggingface.co/mor40/BulBERT-chitanka-model) on the bgglue dataset.
+This model is a fine-tuned version of [mor40/BulBERT-chitanka-model](https://huggingface.co/mor40/BulBERT-chitanka-model) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.9622
-- Acc: 0.2946
+- Loss: 0.7671
+- Acc: 0.7498
 
 ## Model description
 
@@ -44,21 +42,22 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.06
 - num_epochs: 5
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step  | Validation Loss | Acc    |
 |:-------------:|:-----:|:-----:|:---------------:|:------:|
-| 1.0207        | 1.0   | 2538  | 1.9309          | 0.3004 |
-| 0.9505        | 2.0   | 5076  | 2.7472          | 0.2750 |
-| 0.9552        | 3.0   | 7614  | 2.7331          | 0.2913 |
-| 0.9126        | 4.0   | 10152 | 2.8254          | 0.2798 |
-| 0.8957        | 5.0   | 12690 | 2.9622          | 0.2946 |
+| 0.7342        | 1.0   | 4309  | 0.7328          | 0.7498 |
+| 0.6805        | 2.0   | 8618  | 0.7723          | 0.7498 |
+| 0.6844        | 3.0   | 12927 | 0.7731          | 0.7498 |
+| 0.6748        | 4.0   | 17236 | 0.7608          | 0.7498 |
+| 0.6809        | 5.0   | 21545 | 0.7671          | 0.7498 |
 
 
 ### Framework versions
 
-- Transformers 4.35.0
+- Transformers 4.35.2
 - Pytorch 2.1.0+cu118
-- Datasets 2.14.6
-- Tokenizers 0.14.1
+- Datasets 2.15.0
+- Tokenizers 0.15.0
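
For orientation, the hyperparameters listed in the hunk above map directly onto `transformers.TrainingArguments` fields. This is a minimal sketch under that assumption, not the author's training script: the `output_dir` value is assumed, and values not shown in the card excerpt (learning rate, batch sizes, optimizer) are deliberately omitted.

```python
# Hedged sketch: TrainingArguments fields corresponding to the card's hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BulBERT-exams-5epochs",  # assumed output directory name
    lr_scheduler_type="linear",          # lr_scheduler_type: linear
    warmup_ratio=0.06,                   # lr_scheduler_warmup_ratio: 0.06
    num_train_epochs=5,                  # num_epochs: 5
    fp16=True,                           # mixed_precision_training: Native AMP
)
```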
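A minimal usage sketch for the resulting checkpoint, assuming it is published under the Hub id `mor40/BulBERT-exams-5epochs` (inferred from the card name; the commit itself does not state the repository id). The Bulgarian example sentence is hypothetical.

```python
# Hedged sketch: loading the fine-tuned checkpoint for sequence classification.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mor40/BulBERT-exams-5epochs",  # assumed Hub repository id
)

# Hypothetical Bulgarian input sentence for illustration only.
print(classifier("Примерно изречение за класификация."))
```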