fix task_ids
README.md CHANGED

@@ -20,9 +20,6 @@ task_ids:
 - natural-language-inference
 - semantic-similarity-scoring
 - sentiment-classification
-- text-classification-other-coreference-nli
-- text-classification-other-paraphrase-identification
-- text-classification-other-qa-nli
 - text-scoring
 paperswithcode_id: glue
 pretty_name: GLUE (General Language Understanding Evaluation benchmark)
@@ -148,6 +145,10 @@ configs:
 - sst2
 - stsb
 - wnli
+tags:
+- qa-nli
+- coreference-nli
+- paraphrase-identification
 ---
 
 # Dataset Card for GLUE
@@ -609,4 +610,4 @@ the correct citation for each contained dataset.
 
 ### Contributions
 
-Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
+Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
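The metadata change in this commit follows a mechanical pattern: each deprecated `text-classification-other-<name>` entry is dropped from `task_ids`, and the bare `<name>` reappears under a new free-form `tags` list. A minimal sketch of that migration in plain Python (the variable names are illustrative, and the commit happens to list its `tags` in a different order than this produces):

```python
PREFIX = "text-classification-other-"

# task_ids as they appear before this commit
old_task_ids = [
    "natural-language-inference",
    "semantic-similarity-scoring",
    "sentiment-classification",
    "text-classification-other-coreference-nli",
    "text-classification-other-paraphrase-identification",
    "text-classification-other-qa-nli",
    "text-scoring",
]

# Keep only the registered task_ids; move the free-form ones to tags,
# stripping the deprecated prefix.
task_ids = [t for t in old_task_ids if not t.startswith(PREFIX)]
tags = [t[len(PREFIX):] for t in old_task_ids if t.startswith(PREFIX)]

print(task_ids)
# ['natural-language-inference', 'semantic-similarity-scoring',
#  'sentiment-classification', 'text-scoring']
print(tags)
# ['coreference-nli', 'paraphrase-identification', 'qa-nli']
```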