metrics:
- accuracy
- f1
pipeline_tag: text-classification
---

# distilbert-sentiment

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a subset of the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity). It achieves the following results on the evaluation set:

- Loss: 0.119
- Accuracy: 0.958
- F1 score: 0.957

## Model description

This sentiment classifier was trained on 180,000 training samples, 20,000 validation samples, and 20,000 test samples.

## Intended uses & limitations

```python
from transformers import pipeline

# Create the pipeline
sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon')

# Now you can use the pipeline to classify sentiment
result = sentiment_classifier("This product doesn't fit me at all.")
print(result)
# [{'label': 'negative', 'score': 0.9994848966598511}]
```
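
The pipeline returns a list of dicts with `label` and `score` keys, as in the output above. If a single signed score is more convenient downstream (for example, negative values for negative sentiment), a small helper can post-process that output. The function name `to_signed_score` and the sign convention are illustrative assumptions, not part of the model's API; the example runs on a hardcoded result, so no model download is needed:

```python
def to_signed_score(result):
    """Convert a pipeline-style output such as
    [{'label': 'negative', 'score': 0.999}] into one signed float:
    positive sentiment -> +score, negative sentiment -> -score."""
    top = result[0]  # the pipeline returns the top label first
    return top["score"] if top["label"] == "positive" else -top["score"]

# Hardcoded example matching the output shown above:
example = [{"label": "negative", "score": 0.9994848966598511}]
print(to_signed_score(example))  # -0.9994848966598511
```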
## Training and evaluation data

More information needed

## Training procedure
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1270
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 2
- weight_decay: 0.01

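As a sketch of what the schedule above implies, the linear warmup/decay can be written out in plain Python. The `total_steps` value is an assumption derived from the stated data sizes (180,000 training samples at batch size 32 for 2 epochs gives 11,250 optimizer steps), and this is a simplified model of a linear scheduler, not the actual training code:

```python
def linear_lr(step, base_lr=3e-05, warmup_steps=150, total_steps=11_250):
    """Linear warmup from 0 to base_lr over warmup_steps,
    then linear decay from base_lr to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_lr(0))       # 0.0 (start of warmup)
print(linear_lr(150))     # ~3e-05 (end of warmup)
print(linear_lr(11_250))  # 0.0 (end of training)
```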
### Training results

| Metric | Value |
| ------ | ----- |
| eval_loss | 0.119 |
| eval_accuracy | 0.958 |
| eval_f1_score | 0.957 |

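For reference, eval_accuracy and eval_f1_score can be computed from predictions with plain Python. This is a generic sketch: the card does not state which F1 averaging was used, so binary F1 on the positive class is assumed:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """Binary F1 on the class `positive` (an assumption; the card
    does not specify the averaging used during evaluation)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 1 = positive, 0 = negative
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0]
print(accuracy(y_true, y_pred))   # 0.8
print(f1_binary(y_true, y_pred))  # ~0.8
```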
### Framework versions

- Transformers 4.34.0
- PyTorch Lightning 2.0.9
- Tokenizers 0.13.3