kaustavbhattacharjee committed
Commit
69b43e6
1 Parent(s): 9db039b

Update README.md

Files changed (1)
  1. README.md +23 -3
README.md CHANGED
@@ -8,7 +8,27 @@ metrics:
   - f1
 model-index:
 - name: finetuning-DistillBERT-amazon-polarity
-  results: []
+  results:
+  - task:
+      type: text-classification
+      name: Text Classification
+    dataset:
+      name: amazon_polarity
+      type: sentiment
+      args: default
+    metrics:
+    - type: accuracy
+      value: 0.9166666666666666
+      name: Accuracy
+    - type: loss
+      value: 0.1919892132282257
+      name: Loss
+    - type: f1
+      value: 0.9169435215946843
+      name: F1
+datasets:
+- amazon_polarity
+pipeline_tag: text-classification
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,7 +36,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # finetuning-DistillBERT-amazon-polarity
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on [Amazon Polarity](https://huggingface.co/datasets/amazon_polarity) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.1920
 - Accuracy: 0.9167
@@ -56,4 +76,4 @@ The following hyperparameters were used during training:
 - Transformers 4.38.1
 - Pytorch 2.1.0+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
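
The added `model-index` block registers the checkpoint as a text-classification model evaluated on amazon_polarity, and `pipeline_tag: text-classification` tells the Hub which inference task to attach to the card. As a minimal sketch (not part of this commit) of how such a card is typically consumed, the snippet below loads the checkpoint with the `transformers` pipeline API; the repo id is assumed from the committer and model name shown on this page and is not stated in the diff itself.

```python
# Minimal sketch: run the fine-tuned sentiment classifier via the transformers pipeline.
# The repo id below is an assumption inferred from the committer and model name.
from transformers import pipeline

model_id = "kaustavbhattacharjee/finetuning-DistillBERT-amazon-polarity"  # assumed repo id

# "text-classification" matches the pipeline_tag declared in the card metadata above.
classifier = pipeline("text-classification", model=model_id)

print(classifier("Great battery life and it arrived a day early."))
# Example output shape: [{'label': ..., 'score': ...}]; label names depend on the checkpoint's config.
```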