lewtun (HF staff) committed
Commit e8078d0
1 Parent(s): a5a73b0

update model card README.md

Files changed (1):
  README.md (+16 -11)
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
  metrics:
  - accuracy
  model-index:
- - name: distilbert-base-uncased-finetuned-clinc
+ - name: distilbert-base-uncased-distilled-clinc
    results:
    - task:
        name: Text Classification
@@ -19,18 +19,18 @@ model-index:
      metrics:
      - name: Accuracy
        type: accuracy
-       value: 0.9174193548387096
+       value: 0.9432258064516129
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # distilbert-base-uncased-finetuned-clinc
+ # distilbert-base-uncased-distilled-clinc

  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7773
- - Accuracy: 0.9174
+ - Loss: 0.1770
+ - Accuracy: 0.9432

  ## Model description

@@ -55,17 +55,22 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 10

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 4.2923        | 1.0   | 318  | 3.2893          | 0.7423   |
- | 2.6307        | 2.0   | 636  | 1.8837          | 0.8281   |
- | 1.5483        | 3.0   | 954  | 1.1583          | 0.8968   |
- | 1.0153        | 4.0   | 1272 | 0.8618          | 0.9094   |
- | 0.7958        | 5.0   | 1590 | 0.7773          | 0.9174   |
+ | 1.5226        | 1.0   | 318  | 0.9867          | 0.7287   |
+ | 0.76          | 2.0   | 636  | 0.4736          | 0.8561   |
+ | 0.3972        | 3.0   | 954  | 0.2794          | 0.9126   |
+ | 0.2541        | 4.0   | 1272 | 0.2189          | 0.9294   |
+ | 0.2017        | 5.0   | 1590 | 0.1971          | 0.9361   |
+ | 0.1805        | 6.0   | 1908 | 0.1880          | 0.9406   |
+ | 0.1685        | 7.0   | 2226 | 0.1826          | 0.9413   |
+ | 0.1626        | 8.0   | 2544 | 0.1799          | 0.9426   |
+ | 0.1589        | 9.0   | 2862 | 0.1782          | 0.9429   |
+ | 0.1569        | 10.0  | 3180 | 0.1770          | 0.9432   |


  ### Framework versions
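
For context, the updated card describes an intent classifier fine-tuned on the clinc_oos dataset. A minimal inference sketch is shown below; the repository id is an assumption inferred from the model name in this card and should be replaced with the actual Hub path if it differs.

```python
from transformers import pipeline

# Assumed repo id, inferred from the model name in this card; substitute the
# real Hub path if the checkpoint lives elsewhere.
classifier = pipeline(
    "text-classification",
    model="lewtun/distilbert-base-uncased-distilled-clinc",
)

# clinc_oos contains short intent queries plus an out-of-scope class.
print(classifier("transfer $100 from my checking to my savings account"))
# -> [{'label': '<predicted intent>', 'score': <confidence>}]
```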