diff --git a/README.md b/README.md
deleted file mode 100644
index 97814dce145c52eebbe242b13ff85c961bbef819..0000000000000000000000000000000000000000
--- a/README.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-annotations_creators:
-- other
-language_creators:
-- other
-language:
-- sv
-- da
-- nb
-license:
-- cc-by-4.0
-multilinguality:
-- translation
-size_categories:
-- unknown
-source_datasets:
-- extended|glue
-- extended|super_glue
-task_categories:
-- text-classification
-task_ids:
-- natural-language-inference
-- semantic-similarity-classification
-- sentiment-classification
-- text-scoring
-pretty_name: overlim
-tags:
-- qa-nli
-- paraphrase-identification
----
-
-# Dataset Card for OverLim
-
-## Dataset Description
-
-- **Homepage:**
-- **Repository:**
-- **Paper:**
-- **Leaderboard:**
-- **Point of Contact:**
-
-### Dataset Summary
-
-The _OverLim_ dataset contains some of the GLUE and SuperGLUE tasks automatically
-translated to Swedish, Danish, and Norwegian (bokmål), using the OpusMT models
-for MarianMT.
-
-The translation quality was not manually checked, so the translations may be
-faulty. Results on these datasets should therefore be interpreted carefully.
-
-If you want an easy script to train and evaluate your models, have a look
-[here](https://github.com/kb-labb/overlim_eval).
-
-### Supported Tasks and Leaderboards
-
-The data contains the following tasks from GLUE and SuperGLUE:
-
-- GLUE
-  - `mnli`
-  - `mrpc`
-  - `qnli`
-  - `qqp`
-  - `rte`
-  - `sst`
-  - `stsb`
-  - `wnli`
-- SuperGLUE
-  - `boolq`
-  - `cb`
-  - `copa`
-  - `rte`
-
-### Languages
-
-- Swedish
-- Danish
-- Norwegian (bokmål)
-
-Each task is available in every language as its own configuration, named
-`{task}_{language code}` (for example `boolq_sv`).
-
-## Dataset Structure
-
-### Data Instances
-
-Every task has its own set of features, but all share an `idx` and a `label`.
-
-- GLUE
-  - `mnli`
-    - `premise`, `hypothesis`
-  - `mrpc`
-    - `text_a`, `text_b`
-  - `qnli`
-    - `premise`, `hypothesis`
-  - `qqp`
-    - `text_a`, `text_b`
-  - `sst`
-    - `text`
-  - `stsb`
-    - `text_a`, `text_b`
-  - `wnli`
-    - `premise`, `hypothesis`
-- SuperGLUE
-  - `boolq`
-    - `question`, `passage`
-  - `cb`
-    - `premise`, `hypothesis`
-  - `copa`
-    - `premise`, `choice1`, `choice2`, `question`
-  - `rte`
-    - `premise`, `hypothesis`
-
-### Data Splits
-
-In order to have a test split, we repurpose the original validation split as
-the test split, and divide the original training split into new training and
-validation splits with an 80-20 distribution.
-
-## Dataset Creation
-
-For more information about the individual tasks, see https://gluebenchmark.com
-and https://super.gluebenchmark.com.
-
-### Curation Rationale
-
-Training non-English models is easy, but there is a lack of evaluation datasets
-with which to compare their actual performance.
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
-
-Thanks to [@kb-labb](https://github.com/kb-labb) for adding this dataset.
diff --git a/boolq_da/overlim-test.parquet b/boolq_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..40337252f8dbb588c424a7781f2fe694d68e87c0 --- /dev/null +++ b/boolq_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca498113ebda02c917380ce8783e515cd8aa8e1653f14ca5a1a3ec4283333eea +size 1347532 diff --git a/boolq_da/overlim-train.parquet b/boolq_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..ed6a28c6e02b9bcaa55ab29c6eee59ee85ace77a --- /dev/null +++ b/boolq_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a895474f8ce8ad7fe15735129aad8a41b1c36c99a81f84653d9185ae81e17632 +size 2652565 diff --git a/boolq_da/overlim-validation.parquet b/boolq_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..21bbc49c44108074f3811c95b77fa31bbf6d48e8 --- /dev/null +++ b/boolq_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3e1d0f4e88e37b76ba560c972ff99be5a63020990c39c6cb4aac29799b7bd59 +size 1295166 diff --git a/boolq_nb/overlim-test.parquet b/boolq_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..3bc6d0868804cef622d627fb4ad047091cf764fa --- /dev/null +++ b/boolq_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:676e7aa81ef8c0a1d50ae31b84260bcb34003bb12e5a44ff18518a2971565c62 +size 1305967 diff --git a/boolq_nb/overlim-train.parquet b/boolq_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..bc43277cc920b78b52a40beb6177fd7c98bc41ab --- /dev/null +++ b/boolq_nb/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c8c93d8176e484244ae37b55b918637641769b6ba39f599e95da4c093eb0fe6 +size 2563847 diff --git a/boolq_nb/overlim-validation.parquet b/boolq_nb/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..3235910a3b741107b2a6d290b2c9659f1bfd40b2 --- /dev/null +++ b/boolq_nb/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f78141c58642c62ec02c50bc56b65c569bc88ac3db816f24ba27db5fd28aace +size 1256779 diff --git a/boolq_sv/overlim-test.parquet b/boolq_sv/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..05f4b4451d80c2c6ef7d93ebf9ed89b31669aef7 --- /dev/null +++ b/boolq_sv/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:016e1d68738a8e40e475c1d80e4525fea0c148966f60a9896f5fa3666bc90f7c +size 1355934 diff --git a/boolq_sv/overlim-train.parquet b/boolq_sv/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..96c9f6b1a72bd03c57082f156b17c2edcc8ea2eb ---
/dev/null +++ b/boolq_sv/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66bf38abf1e0102e27258560a857d93aed0122059df6a0b45dba6c5dd39faddc +size 2662361 diff --git a/boolq_sv/overlim-validation.parquet b/boolq_sv/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..506e1736fff76d1890dca31e1611b7b48400912b --- /dev/null +++ b/boolq_sv/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15207c10b30eef8e6f97713e8b0789165d57008b08e6c336fae81f4b100dafef +size 1303531 diff --git a/cb_da/overlim-test.parquet b/cb_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..2fe651958594b84db6f7d1900f10b02866ab08ee --- /dev/null +++ b/cb_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e56520d3c03c580be74f00c03174e39c6b220422ea9d1241b788683899a8ba6d +size 19257 diff --git a/cb_da/overlim-train.parquet b/cb_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..fbe6851bb89df1bc71aca2b0fdababb4fa9dc15e --- /dev/null +++ b/cb_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84100934b9005b52bbbcf5965104c40370a0339329b741dde04ad8ace175c2e2 +size 47578 diff --git a/cb_da/overlim-validation.parquet b/cb_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..96ae5cd0bea8d39711903aa36205d7db827ef5f2 --- /dev/null +++ b/cb_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ceb5c40c22647adc3f162e319dc84d99fce6a3e91c9aae3ed7e709e9ae96332 +size 15572 diff --git a/cb_nb/overlim-test.parquet b/cb_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..e51999c1be5636e0b07687c25cbdaec5d53f34ec --- /dev/null +++ b/cb_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8ccb8c201c7d70b33d3d545dd4d32c7dfae762347a25fb5288d9b5b90db5d72 +size 18746 diff --git a/cb_nb/overlim-train.parquet b/cb_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..144964dba1c240ed577861ec7d8475017691a849 --- /dev/null +++ b/cb_nb/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2134776c1c414eac6fc1c560b4556d4cfe607a183ad8d89dbcfdcbac68878d39 +size 44701 diff --git a/cb_nb/overlim-validation.parquet b/cb_nb/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..97c046a417051022ee76a7aa623d9b6cb7f2c593 --- /dev/null +++ b/cb_nb/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1c7cfd4bb4ee1ab769ae856e2a0d262b9ee93e15c8aa78482b3c8a98c9a2f38 +size 16084 diff --git a/cb_sv/overlim-test.parquet b/cb_sv/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..b7e9b76f1201cd1940f80833cf4e3ad9d9b0e855 --- /dev/null +++ b/cb_sv/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f8dbb81f82e52bd60392184570dd76e87deed7f0c0afb5aef93c204cbc0ccbb +size 19284 diff --git a/cb_sv/overlim-train.parquet b/cb_sv/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..2fa5a4130ce49acfefcc69bed5e6e2643e45ea50 --- /dev/null +++ b/cb_sv/overlim-train.parquet @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:ed3f9568ed16bd1d5f8b0c6c88a54049b46403cca46eef1ac549d80eff80f3e5 +size 47998 diff --git a/cb_sv/overlim-validation.parquet b/cb_sv/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..38e7e1f6fa4e547ad653d8075c61447c8946ae72 --- /dev/null +++ b/cb_sv/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4c3cc1f3ca03df564c3336aa7fee6ef0c3ec5c761e044e7d454f3fa5fed169f +size 17230 diff --git a/copa_da/overlim-test.parquet b/copa_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..80edeeb1b316ba3d101333a06ce78bce2a3b69fb --- /dev/null +++ b/copa_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bf3a8d23cd4b11be6376a3c42c990b1a30db6707162372a9bc37f32b77e75a2 +size 12184 diff --git a/copa_da/overlim-train.parquet b/copa_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..63225709225de41ddd3d6df068444868feb994a6 --- /dev/null +++ b/copa_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b15ef538416933d1dd9d73f6d002a06be4b2210413373872e514d0fce6db2061 +size 28964 diff --git a/copa_da/overlim-validation.parquet b/copa_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..6f6db87faa58abe6ea9e9f68cdbb8ed47a036dc3 --- /dev/null +++ b/copa_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a35d8e3322d28b2f769b7108af1900e9a982a1f0989f8f254f8fc9ebb35e8f7 +size 10280 diff --git a/copa_nb/overlim-test.parquet b/copa_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..6326593ebfb2008ecf4c7c7086f32e969e427731 --- /dev/null +++ b/copa_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a3aad5de6e59ff3b70df4d7a275b6ec2c9800575efeb24586f89556de2fc00f +size 12043 diff --git a/copa_nb/overlim-train.parquet b/copa_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..653680c59432cc73899f8f325594e3f212d97de9 --- /dev/null +++ b/copa_nb/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82f095f9792229dc2b1f26750b38ee3823b1a7333ce0ffc964d7d568051d2daa +size 28553 diff --git a/copa_nb/overlim-validation.parquet b/copa_nb/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..4b77e7c1ab25a7f4c9ea438bfc548d9ea20e824a --- /dev/null +++ b/copa_nb/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9bb1366472435ba6bda0e4e44fb02f2353bdc540658cd5badd48337b6712f99 +size 10052 diff --git a/copa_sv/overlim-test.parquet b/copa_sv/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..c6876eb6116d34da97997cb7a308ad0e88774645 --- /dev/null +++ b/copa_sv/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fab35b19872eb8f45d80b76d61f8829d04fa250e9d4cd6b38ba23447d3ca4fb +size 12206 diff --git a/copa_sv/overlim-train.parquet b/copa_sv/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..c2e7536992f5811f9e7a9fc8d01d0911ade981b9 --- /dev/null +++ b/copa_sv/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:265db68b7c56271304f8de55b559446d73880d3b67f29daa2e56526b7d3d84c0 +size 28783 diff --git a/copa_sv/overlim-validation.parquet b/copa_sv/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..df9d6514b18940d4105325e08dd38fd94a42b7f3 --- /dev/null +++ b/copa_sv/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66d6cb613138f81d358c7e8c4975bc5704d1c013431b72b58e534daf7576bedc +size 10317 diff --git a/data/da/boolq.tar.gz b/data/da/boolq.tar.gz deleted file mode 100644 index 95156b35e3ce2507abeaabc02cedaafcb5f4f082..0000000000000000000000000000000000000000 --- a/data/da/boolq.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:42f1615b4e1580845e599aed7349b835f8772a0472dfa729f896f12ce0574e55 -size 3350831 diff --git a/data/da/cb.tar.gz b/data/da/cb.tar.gz deleted file mode 100644 index fad13a77073b35e990f9527992716018000fd408..0000000000000000000000000000000000000000 --- a/data/da/cb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ff530ff490bbf68db950064add852ccc439a16544ec22780e1818bd33b365b1a -size 40604 diff --git a/data/da/copa.tar.gz b/data/da/copa.tar.gz deleted file mode 100644 index e05b34f199d5afc65f819f2939a0d16e5f994d18..0000000000000000000000000000000000000000 --- a/data/da/copa.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:088fb530f36b87768cd02181ab5229bf5d1e39894e54a11afea6f41998e3d0c6 -size 22828 diff --git a/data/da/mnli.tar.gz b/data/da/mnli.tar.gz deleted file mode 100644 index 50d74fff275e4b33e9f9d845b6d9701bb88487af..0000000000000000000000000000000000000000 --- a/data/da/mnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f96fd8d1b027c56f04ae1b21eb53415b39ad6ee0f97b2e2225d82794d3be350d -size 30837170 diff --git a/data/da/mrpc.tar.gz b/data/da/mrpc.tar.gz deleted file mode 100644 index de68196d731d75b1b800a0b9767501a16a8ff040..0000000000000000000000000000000000000000 --- a/data/da/mrpc.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d7b0520414351345f24660e26f2ae96011251908ac68533f876b16e14f904868 -size 374973 diff --git a/data/da/qnli.tar.gz b/data/da/qnli.tar.gz deleted file mode 100644 index 448ef364bd41afcd77b3eb01124661b9fc207cad..0000000000000000000000000000000000000000 --- a/data/da/qnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:260153f22ba714106852d1545906e77e7ce490e1220a2ac286730959e37b9278 -size 11157410 diff --git a/data/da/qqp.tar.gz b/data/da/qqp.tar.gz deleted file mode 100644 index 2ddb51bc5cc9048b0cd1ccbcbbb5ede3a198d999..0000000000000000000000000000000000000000 --- a/data/da/qqp.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:dd6169eab55cdd5d921328207a5a7facd2789ecbad3dd247c6b33d981b849319 -size 21500446 diff --git a/data/da/rte.tar.gz b/data/da/rte.tar.gz deleted file mode 100644 index 20eabe43f5dc980e8f3d7916ce83cf4e246b7c77..0000000000000000000000000000000000000000 --- a/data/da/rte.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e6018ee04334b57ca50fd13d4cc73c9cece96a332d368f464badbc9c374dc01e -size 392939 diff --git a/data/da/sst.tar.gz b/data/da/sst.tar.gz deleted file mode 100644 index fe53e88be25dd7126c55fcf0138202cabcd4625b..0000000000000000000000000000000000000000 --- a/data/da/sst.tar.gz +++ 
/dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5fd539bff626886cf16d31223fa055b5abc3806684606e6e65a7978024fe96de -size 1929166 diff --git a/data/da/stsb.tar.gz b/data/da/stsb.tar.gz deleted file mode 100644 index bffd47e0204642ad37078b76355ca06d470f0c18..0000000000000000000000000000000000000000 --- a/data/da/stsb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c80b4d35b817d427f3bb2174039c44881957ad2b020e60dff6968e717fc8acdb -size 368211 diff --git a/data/da/wnli.tar.gz b/data/da/wnli.tar.gz deleted file mode 100644 index 73148de341d3386b4d3686b262c3d1009abacf9b..0000000000000000000000000000000000000000 --- a/data/da/wnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a97d74d9d2da304a18da4366534ae6c1de3f5fd4cc9a388504813568374af1ea -size 29413 diff --git a/data/nb/boolq.tar.gz b/data/nb/boolq.tar.gz deleted file mode 100644 index d599efad6900270bdff6821eb2acad93227d5c65..0000000000000000000000000000000000000000 --- a/data/nb/boolq.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0038ecb49187122ecd3e4607ca3a2db2afae27fd8293279ec805d809d5567de8 -size 3254901 diff --git a/data/nb/cb.tar.gz b/data/nb/cb.tar.gz deleted file mode 100644 index c30dc3831d398de1dbfa90b89aa0d843b416bb1c..0000000000000000000000000000000000000000 --- a/data/nb/cb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7d4a11bbd79bdfafad0b7a4c0f5d4f8cb4c1d944f6bac8dfe6782bff7725a5da -size 39777 diff --git a/data/nb/copa.tar.gz b/data/nb/copa.tar.gz deleted file mode 100644 index 9709a5afee98e9fac409c69f06389e38c89a2012..0000000000000000000000000000000000000000 --- a/data/nb/copa.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:743dc02ade29e7cb4cb77f6716fcfd949635114f889ff3daec6f09bb6592f541 -size 22695 diff --git a/data/nb/mnli.tar.gz b/data/nb/mnli.tar.gz deleted file mode 100644 index 1bd49b7c029ef5eaaa10d4d55142c10e0f956a36..0000000000000000000000000000000000000000 --- a/data/nb/mnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7bdcd02d1cb2c5fd3996011032aa1a4eef96a4dae6b4d812f96ebd0a5fcd1349 -size 29771448 diff --git a/data/nb/mrpc.tar.gz b/data/nb/mrpc.tar.gz deleted file mode 100644 index 17619b1ddd7378956e531ba25f55351f261510c5..0000000000000000000000000000000000000000 --- a/data/nb/mrpc.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3af04ec9abc76e422562751c44edbf6bea4341ed64b231e11d9b05c414d81a15 -size 368694 diff --git a/data/nb/qnli.tar.gz b/data/nb/qnli.tar.gz deleted file mode 100644 index b03e8827eb6ac7fe61291ca72b9d17285fb9c589..0000000000000000000000000000000000000000 --- a/data/nb/qnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:604c6ccc69b081c3ff24072d97918b547e58f4b3c4b744072c1b750068930088 -size 10724704 diff --git a/data/nb/qqp.tar.gz b/data/nb/qqp.tar.gz deleted file mode 100644 index 9ae42bc3bed5746619bff69e64577aabac39ea81..0000000000000000000000000000000000000000 --- a/data/nb/qqp.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f2b0000dfb7f68a277b5b961fe40d78ede324d4fd455a868919e5795cfb41d11 -size 21097603 diff --git a/data/nb/rte.tar.gz b/data/nb/rte.tar.gz deleted file mode 100644 index 
49f03e473cff835e77450f5f5032219b51a34642..0000000000000000000000000000000000000000 --- a/data/nb/rte.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c764579d7ac464fae4b295894a9e36c188707283ae02b8e0dfdcf9e86caf84ab -size 379837 diff --git a/data/nb/sst.tar.gz b/data/nb/sst.tar.gz deleted file mode 100644 index 768b7415e454b5106d6ec519ed1fae2dd2f16010..0000000000000000000000000000000000000000 --- a/data/nb/sst.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4f932332d705d01675fe565cf356ce4a22eaa2ebf4f4ebf68c3471fd43548d9c -size 1905948 diff --git a/data/nb/stsb.tar.gz b/data/nb/stsb.tar.gz deleted file mode 100644 index 4d2e7779178e993f82e61bac637c0abf516ab3be..0000000000000000000000000000000000000000 --- a/data/nb/stsb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ddb82ab7453e90010bd27fe470a887a02e96305907994bfce39230d9379b00b8 -size 368825 diff --git a/data/nb/wnli.tar.gz b/data/nb/wnli.tar.gz deleted file mode 100644 index 315237b4cca3e50fef6523808382c49e3cc39959..0000000000000000000000000000000000000000 --- a/data/nb/wnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:07f94f2f11b502ebc188f05ecb83a4f89823d5bb4d0ae4b5b10c3052ecb8fdf9 -size 29677 diff --git a/data/sv/boolq.tar.gz b/data/sv/boolq.tar.gz deleted file mode 100644 index 3355ba33b54d3b3d245847709d8d0765e9ffafa9..0000000000000000000000000000000000000000 --- a/data/sv/boolq.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:159555cd9b2c93ed20480ee3bd2b7e3cb016b7929b81e121fbd0f2d1a030b074 -size 3368913 diff --git a/data/sv/cb.tar.gz b/data/sv/cb.tar.gz deleted file mode 100644 index a2ac29c1b3811bcdf9b23bc17e24591e65371f96..0000000000000000000000000000000000000000 --- a/data/sv/cb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:85587d9ab0ca5cadbb1f1ebad65b648a79c65baff84989334444f503b451752e -size 41036 diff --git a/data/sv/copa.tar.gz b/data/sv/copa.tar.gz deleted file mode 100644 index e58818531b6a60f517efcb1903351de7d7f3ee01..0000000000000000000000000000000000000000 --- a/data/sv/copa.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ddea054e155a16c8857ed078073943058b4fe2ca4c59155d52afafc878ce5722 -size 22790 diff --git a/data/sv/mnli.tar.gz b/data/sv/mnli.tar.gz deleted file mode 100644 index 0b5b23a90f80365dbf75b670d39d20e7c1fd636f..0000000000000000000000000000000000000000 --- a/data/sv/mnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b5f40c836e15fdd2b2aaad3ea8792a7a8b1cb4ac6fb5baacc6cf13933cdc7319 -size 30839949 diff --git a/data/sv/mrpc.tar.gz b/data/sv/mrpc.tar.gz deleted file mode 100644 index b3b0e9cd279b3d70c20d0efb506b59e2203aab43..0000000000000000000000000000000000000000 --- a/data/sv/mrpc.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:13900c98246c7389bf8eda4d54e82042a6c1aed095ad26b60582919b7e8380a4 -size 381946 diff --git a/data/sv/qnli.tar.gz b/data/sv/qnli.tar.gz deleted file mode 100644 index 13ef0701808340c76be152c877415f1ce28fe034..0000000000000000000000000000000000000000 --- a/data/sv/qnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0c9205a5f0f8c2119c3e1d67e5f621201a089a39bd9335d64b51a86a87f90ad3 -size 11218767 diff --git a/data/sv/qqp.tar.gz b/data/sv/qqp.tar.gz 
deleted file mode 100644 index 75241b941ffc6e511a8086eec8c8d5333488a49c..0000000000000000000000000000000000000000 --- a/data/sv/qqp.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c54b8b76973671f5ef1ad8d87a7f2a292851c69b36a8bdbb2c93eaa05607cefb -size 21998351 diff --git a/data/sv/rte.tar.gz b/data/sv/rte.tar.gz deleted file mode 100644 index 29ccbdeb9fd37ce90a6b9c3c78bf3ef7e2e328a4..0000000000000000000000000000000000000000 --- a/data/sv/rte.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e77ce27a918be490ddf3f39dd7b25ae9dca202e82bd037b01fca0dc5ddedf8d0 -size 395529 diff --git a/data/sv/sst.tar.gz b/data/sv/sst.tar.gz deleted file mode 100644 index cf4290d7b7d4f2342b01552fd4d80c5dd44935de..0000000000000000000000000000000000000000 --- a/data/sv/sst.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:86ea38b24a963e42a07ee88e822efa262c674580d3e8f368a4b4179761ca0d81 -size 1984447 diff --git a/data/sv/stsb.tar.gz b/data/sv/stsb.tar.gz deleted file mode 100644 index ea5bbcf41d80454f15e1036f0efa8600630af44e..0000000000000000000000000000000000000000 --- a/data/sv/stsb.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c8d0df38abf3089fc2a83310bd536b69dbf09375f3df90fbb513b02b549f75e0 -size 371403 diff --git a/data/sv/wnli.tar.gz b/data/sv/wnli.tar.gz deleted file mode 100644 index 77d21dc2c3455fc74cc689b2d012d145a0b6f0f1..0000000000000000000000000000000000000000 --- a/data/sv/wnli.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2e4b019e3b2f614a1ef12a0b243d3d1b08cf2d9bf79676998ff259447a3461de -size 30427 diff --git a/dataset_infos.json b/dataset_infos.json deleted file mode 100644 index 3ace081bb37fe7282a933bb6323339a978780305..0000000000000000000000000000000000000000 --- a/dataset_infos.json +++ /dev/null @@ -1 +0,0 @@ -{"boolq_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nBoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short\npassage and a yes/no question about the passage. The questions are provided anonymously and\nunsolicited by users of the Google search engine, and afterwards paired with a paragraph from a\nWikipedia article containing the answer. Following the original work, we evaluate with accuracy.", "citation": "@inproceedings{clark2019boolq,\n title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},\n author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei, and Kwiatkowski, Tom and Collins, Michael, and Toutanova, Kristina},\n booktitle={NAACL},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "passage": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "boolq_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 4211792, "num_examples": 6285, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 2057950, "num_examples": 3142, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 2150348, "num_examples": 3270, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/boolq.tar.gz": {"num_bytes": 3368913, "checksum": "159555cd9b2c93ed20480ee3bd2b7e3cb016b7929b81e121fbd0f2d1a030b074"}}, "download_size": 3368913, "post_processing_size": null, "dataset_size": 8420090, "size_in_bytes": 11789003}, "cb_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least\none sentence contains an embedded clause. Each of these embedded clauses is annotated with the\ndegree to which we expect that the person who wrote the text is committed to the truth of the clause.\nThe resulting task framed as three-class textual entailment on examples that are drawn from the Wall\nStreet Journal, fiction from the British National Corpus, and Switchboard. Each example consists\nof a premise containing an embedded clause and the corresponding hypothesis is the extraction of\nthat clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is\nimbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for\nmulti-class F1 we compute the unweighted average of the F1 per class.", "citation": "@article{de marneff_simons_tonhauser_2019,\n title={The CommitmentBank: Investigating projection in naturally occurring discourse},\n journal={proceedings of Sinn und Bedeutung 23},\n author={De Marneff, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "cb_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 72247, "num_examples": 201, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 18479, "num_examples": 49, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 22901, "num_examples": 56, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/cb.tar.gz": {"num_bytes": 41036, "checksum": "85587d9ab0ca5cadbb1f1ebad65b648a79c65baff84989334444f503b451752e"}}, "download_size": 41036, "post_processing_size": null, "dataset_size": 113627, "size_in_bytes": 154663}, "copa_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal\nreasoning task in which a system is given a premise sentence and two possible alternatives. The\nsystem must choose the alternative which has the more plausible causal relationship with the premise.\nThe method used for the construction of the alternatives ensures that the task requires causal reasoning\nto solve. Examples either deal with alternative possible causes or alternative possible effects of the\npremise sentence, accompanied by a simple question disambiguating between the two instance\ntypes for the model. All examples are handcrafted and focus on topics from online blogs and a\nphotography-related encyclopedia. Following the recommendation of the authors, we evaluate using\naccuracy.", "citation": "@inproceedings{roemmele2011choice,\n title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},\n author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},\n booktitle={2011 AAAI Spring Symposium Series},\n year={2011}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "choice1": {"dtype": "string", "id": null, "_type": "Value"}, "choice2": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "copa_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 38614, "num_examples": 321, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 9447, "num_examples": 79, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 12258, "num_examples": 100, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/copa.tar.gz": {"num_bytes": 22790, "checksum": "ddea054e155a16c8857ed078073943058b4fe2ca4c59155d52afafc878ce5722"}}, "download_size": 22790, "post_processing_size": null, "dataset_size": 60319, "size_in_bytes": 83109}, "rte_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions\non textual entailment, the problem of predicting whether a given premise sentence entails a given\nhypothesis sentence (also known as natural language inference, NLI). RTE was previously included\nin GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan\net al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli\net al., 2009). All datasets are combined and converted to two-class classification: entailment and\nnot_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning\nthe most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to\n85% accuracy (Liu et al., 2019c) at the time of writing. 
Given the eight point gap with respect to\nhuman performance, however, the task is not yet solved by machines, and we expect the remaining\ngap to be difficult to close.", "citation": "@inproceedings{dagan2005pascal,\n title={The PASCAL recognising textual entailment challenge},\n author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},\n booktitle={Machine Learning Challenges Workshop},\n pages={177--190},\n year={2005},\n organization={Springer}\n}\n@inproceedings{bar2006second,\n title={The second pascal recognising textual entailment challenge},\n author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},\n booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment},\n volume={6},\n number={1},\n pages={6--4},\n year={2006},\n organization={Venice}\n}\n@inproceedings{giampiccolo2007third,\n title={The third pascal recognizing textual entailment challenge},\n author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},\n booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},\n pages={1--9},\n year={2007},\n organization={Association for Computational Linguistics}\n}\n@inproceedings{bentivogli2009fifth,\n title={The Fifth PASCAL Recognizing Textual Entailment Challenge.},\n author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo},\n booktitle={TAC},\n year={2009}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "rte_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 798331, "num_examples": 2214, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 91197, "num_examples": 276, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 95768, "num_examples": 277, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/rte.tar.gz": {"num_bytes": 395529, "checksum": "e77ce27a918be490ddf3f39dd7b25ae9dca202e82bd037b01fca0dc5ddedf8d0"}}, "download_size": 395529, "post_processing_size": null, "dataset_size": 985296, "size_in_bytes": 1380825}, "qqp_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the\ncommunity question-answering website Quora. 
The task is to determine whether a\npair of questions are semantically equivalent.", "citation": "@online{WinNT,\nauthor = {Iyer, Shankar and Dandekar, Nikhil and Csernai, Kornel},\ntitle = {First Quora Dataset Release: Question Pairs},\nyear = {2017},\nurl = {https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs},\nurldate = {2019-04-03}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qqp_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 46419002, "num_examples": 323419, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 5806503, "num_examples": 40427, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 5795360, "num_examples": 40430, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/qqp.tar.gz": {"num_bytes": 21998351, "checksum": "c54b8b76973671f5ef1ad8d87a7f2a292851c69b36a8bdbb2c93eaa05607cefb"}}, "download_size": 21998351, "post_processing_size": null, "dataset_size": 58020865, "size_in_bytes": 80019216}, "qnli_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Question Answering Dataset is a question-answering\ndataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn\nfrom Wikipedia) contains the answer to the corresponding question (written by an annotator). We\nconvert the task into sentence pair classification by forming a pair between each question and each\nsentence in the corresponding context, and filtering out pairs with low lexical overlap between the\nquestion and the context sentence. The task is to determine whether the context sentence contains\nthe answer to the question. 
This modified version of the original task removes the requirement that\nthe model select the exact answer, but also removes the simplifying assumptions that the answer\nis always present in the input and that lexical overlap is a reliable cue.", "citation": "@article{rajpurkar2016squad,\n title={Squad: 100,000+ questions for machine comprehension of text},\n author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},\n journal={arXiv preprint arXiv:1606.05250},\n year={2016}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qnli_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 25377994, "num_examples": 99506, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1343820, "num_examples": 5237, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1422786, "num_examples": 5463, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/qnli.tar.gz": {"num_bytes": 11218767, "checksum": "0c9205a5f0f8c2119c3e1d67e5f621201a089a39bd9335d64b51a86a87f90ad3"}}, "download_size": 11218767, "post_processing_size": null, "dataset_size": 28144600, "size_in_bytes": 39363367}, "stsb_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of\nsentence pairs drawn from news headlines, video and image captions, and natural\nlanguage inference data. Each pair is human-annotated with a similarity score\nfrom 1 to 5.", "citation": "@article{cer2017semeval,\n title={Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation},\n author={Cer, Daniel and Diab, Mona and Agirre, Eneko and Lopez-Gazpio, Inigo and Specia, Lucia},\n journal={arXiv preprint arXiv:1708.00055},\n year={2017}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "stsb_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 640930, "num_examples": 4312, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 217613, "num_examples": 1437, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 241574, "num_examples": 1500, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/stsb.tar.gz": {"num_bytes": 371403, "checksum": "c8d0df38abf3089fc2a83310bd536b69dbf09375f3df90fbb513b02b549f75e0"}}, "download_size": 371403, "post_processing_size": null, "dataset_size": 1100117, "size_in_bytes": 1471520}, "mnli_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced\ncollection of sentence pairs with textual entailment annotations. Given a premise sentence\nand a hypothesis sentence, the task is to predict whether the premise entails the hypothesis\n(entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are\ngathered from ten different sources, including transcribed speech, fiction, and government reports.\nWe use the standard test set, for which we obtained private labels from the authors, and evaluate\non both the matched (in-domain) and mismatched (cross-domain) section. We also use and recommend\nthe SNLI corpus as 550k examples of auxiliary training data.", "citation": " @InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"http://aclweb.org/anthology/N18-1101\"\n }\n @article{bowman2015large,\n title={A large annotated corpus for learning natural language inference},\n author={Bowman, Samuel R and Angeli, Gabor and Potts, Christopher and Manning, Christopher D},\n journal={arXiv preprint arXiv:1508.05326},\n year={2015}\n }\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mnli_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 76720827, "num_examples": 383124, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1906810, "num_examples": 9578, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1941882, "num_examples": 9815, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/mnli.tar.gz": {"num_bytes": 30839949, "checksum": "b5f40c836e15fdd2b2aaad3ea8792a7a8b1cb4ac6fb5baacc6cf13933cdc7319"}}, "download_size": 30839949, "post_processing_size": null, "dataset_size": 80569519, "size_in_bytes": 111409468}, "mrpc_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of\nsentence pairs automatically extracted from online news sources, with human annotations\nfor whether the sentences in the pair are semantically equivalent.", "citation": "@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mrpc_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 853395, "num_examples": 3261, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 105267, "num_examples": 407, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 108361, "num_examples": 408, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/mrpc.tar.gz": {"num_bytes": 381946, "checksum": "13900c98246c7389bf8eda4d54e82042a6c1aed095ad26b60582919b7e8380a4"}}, "download_size": 381946, "post_processing_size": null, "dataset_size": 1067023, "size_in_bytes": 1448969}, "wnli_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task\nin which a system must read a sentence with a pronoun and select the referent of that pronoun from\na list of choices. The examples are manually constructed to foil simple statistical methods: Each\none is contingent on contextual information provided by a single word or phrase in the sentence.\nTo convert the problem into sentence pair classification, we construct sentence pairs by replacing\nthe ambiguous pronoun with each possible referent. The task is to predict if the sentence with the\npronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of\nnew examples derived from fiction books that was shared privately by the authors of the original\ncorpus. While the included training set is balanced between two classes, the test set is imbalanced\nbetween them (65% not entailment). Also, due to a data quirk, the development set is adversarial:\nhypotheses are sometimes shared between training and development examples, so if a model memorizes the\ntraining examples, they will predict the wrong label on corresponding development set\nexample. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence\nbetween a model's score on this task and its score on the unconverted original task. 
We\ncall converted dataset WNLI (Winograd NLI).", "citation": "@inproceedings{levesque2012winograd,\n title={The winograd schema challenge},\n author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},\n booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},\n year={2012}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "wnli_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 94959, "num_examples": 565, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 11665, "num_examples": 70, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 12020, "num_examples": 71, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/wnli.tar.gz": {"num_bytes": 30427, "checksum": "2e4b019e3b2f614a1ef12a0b243d3d1b08cf2d9bf79676998ff259447a3461de"}}, "download_size": 30427, "post_processing_size": null, "dataset_size": 118644, "size_in_bytes": 149071}, "sst_sv": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and\nhuman annotations of their sentiment. The task is to predict the sentiment of a\ngiven sentence. We use the two-way (positive/negative) class split, and use only\nsentence-level labels.", "citation": "@inproceedings{socher2013recursive,\n title={Recursive deep models for semantic compositionality over a sentiment treebank},\n author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},\n booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},\n pages={1631--1642},\n year={2013}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "sst_sv", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 4526277, "num_examples": 66486, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 58238, "num_examples": 863, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 105918, "num_examples": 872, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/sv/sst.tar.gz": {"num_bytes": 1984447, "checksum": "86ea38b24a963e42a07ee88e822efa262c674580d3e8f368a4b4179761ca0d81"}}, "download_size": 1984447, "post_processing_size": null, "dataset_size": 4690433, "size_in_bytes": 6674880}, "boolq_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nBoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short\npassage and a yes/no question about the passage. The questions are provided anonymously and\nunsolicited by users of the Google search engine, and afterwards paired with a paragraph from a\nWikipedia article containing the answer. Following the original work, we evaluate with accuracy.", "citation": "@inproceedings{clark2019boolq,\n title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},\n author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},\n booktitle={NAACL},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "passage": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "boolq_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 3966994, "num_examples": 6285, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1938942, "num_examples": 3142, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 2024171, "num_examples": 3270, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/boolq.tar.gz": {"num_bytes": 3254901, "checksum": "0038ecb49187122ecd3e4607ca3a2db2afae27fd8293279ec805d809d5567de8"}}, "download_size": 3254901, "post_processing_size": null, "dataset_size": 7930107, "size_in_bytes": 11185008}, "cb_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least\none sentence contains an embedded clause. Each of these embedded clauses is annotated with the\ndegree to which we expect that the person who wrote the text is committed to the truth of the clause.\nThe resulting task is framed as three-class textual entailment on examples that are drawn from the Wall\nStreet Journal, fiction from the British National Corpus, and Switchboard. Each example consists\nof a premise containing an embedded clause and the corresponding hypothesis is the extraction of\nthat clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is\nimbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for\nmulti-class F1 we compute the unweighted average of the F1 per class.", "citation": "@article{demarneffe_simons_tonhauser_2019,\n title={The CommitmentBank: Investigating projection in naturally occurring discourse},\n journal={Proceedings of Sinn und Bedeutung 23},\n author={De Marneffe, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "cb_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 68229, "num_examples": 201, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 17578, "num_examples": 49, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 21560, "num_examples": 56, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/cb.tar.gz": {"num_bytes": 39777, "checksum": "7d4a11bbd79bdfafad0b7a4c0f5d4f8cb4c1d944f6bac8dfe6782bff7725a5da"}}, "download_size": 39777, "post_processing_size": null, "dataset_size": 107367, "size_in_bytes": 147144}, "copa_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal\nreasoning task in which a system is given a premise sentence and two possible alternatives. The\nsystem must choose the alternative which has the more plausible causal relationship with the premise.\nThe method used for the construction of the alternatives ensures that the task requires causal reasoning\nto solve. Examples either deal with alternative possible causes or alternative possible effects of the\npremise sentence, accompanied by a simple question disambiguating between the two instance\ntypes for the model. All examples are handcrafted and focus on topics from online blogs and a\nphotography-related encyclopedia. Following the recommendation of the authors, we evaluate using\naccuracy.", "citation": "@inproceedings{roemmele2011choice,\n title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},\n author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},\n booktitle={2011 AAAI Spring Symposium Series},\n year={2011}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "choice1": {"dtype": "string", "id": null, "_type": "Value"}, "choice2": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "copa_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 37790, "num_examples": 321, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 9173, "num_examples": 79, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 12044, "num_examples": 100, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/copa.tar.gz": {"num_bytes": 22695, "checksum": "743dc02ade29e7cb4cb77f6716fcfd949635114f889ff3daec6f09bb6592f541"}}, "download_size": 22695, "post_processing_size": null, "dataset_size": 59007, "size_in_bytes": 81702}, "rte_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions\non textual entailment, the problem of predicting whether a given premise sentence entails a given\nhypothesis sentence (also known as natural language inference, NLI). RTE was previously included\nin GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan\net al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli\net al., 2009). All datasets are combined and converted to two-class classification: entailment and\nnot_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning\nthe most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to\n85% accuracy (Liu et al., 2019c) at the time of writing. 
Given the eight point gap with respect to\nhuman performance, however, the task is not yet solved by machines, and we expect the remaining\ngap to be difficult to close.", "citation": "@inproceedings{dagan2005pascal,\n title={The PASCAL recognising textual entailment challenge},\n author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},\n booktitle={Machine Learning Challenges Workshop},\n pages={177--190},\n year={2005},\n organization={Springer}\n}\n@inproceedings{bar2006second,\n title={The second pascal recognising textual entailment challenge},\n author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},\n booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment},\n volume={6},\n number={1},\n pages={6--4},\n year={2006},\n organization={Venice}\n}\n@inproceedings{giampiccolo2007third,\n title={The third pascal recognizing textual entailment challenge},\n author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},\n booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},\n pages={1--9},\n year={2007},\n organization={Association for Computational Linguistics}\n}\n@inproceedings{bentivogli2009fifth,\n title={The Fifth PASCAL Recognizing Textual Entailment Challenge.},\n author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo},\n booktitle={TAC},\n year={2009}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "rte_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 745583, "num_examples": 2214, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 85478, "num_examples": 276, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 89644, "num_examples": 277, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/rte.tar.gz": {"num_bytes": 379837, "checksum": "c764579d7ac464fae4b295894a9e36c188707283ae02b8e0dfdcf9e86caf84ab"}}, "download_size": 379837, "post_processing_size": null, "dataset_size": 920705, "size_in_bytes": 1300542}, "qqp_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the\ncommunity question-answering website Quora. 
The task is to determine whether a\npair of questions is semantically equivalent.", "citation": "@online{iyer2017quora,\nauthor = {Iyer, Shankar and Dandekar, Nikhil and Csernai, Kornel},\ntitle = {First Quora Dataset Release: Question Pairs},\nyear = {2017},\nurl = {https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs},\nurldate = {2019-04-03}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qqp_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 44922311, "num_examples": 323419, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 5616459, "num_examples": 40427, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 5608850, "num_examples": 40430, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/qqp.tar.gz": {"num_bytes": 21097603, "checksum": "f2b0000dfb7f68a277b5b961fe40d78ede324d4fd455a868919e5795cfb41d11"}}, "download_size": 21097603, "post_processing_size": null, "dataset_size": 56147620, "size_in_bytes": 77245223}, "qnli_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Question Answering Dataset is a question-answering\ndataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn\nfrom Wikipedia) contains the answer to the corresponding question (written by an annotator). We\nconvert the task into sentence pair classification by forming a pair between each question and each\nsentence in the corresponding context, and filtering out pairs with low lexical overlap between the\nquestion and the context sentence. The task is to determine whether the context sentence contains\nthe answer to the question. 
This modified version of the original task removes the requirement that\nthe model select the exact answer, but also removes the simplifying assumptions that the answer\nis always present in the input and that lexical overlap is a reliable cue.", "citation": "@article{rajpurkar2016squad,\n title={Squad: 100,000+ questions for machine comprehension of text},\n author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},\n journal={arXiv preprint arXiv:1606.05250},\n year={2016}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qnli_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 24131580, "num_examples": 99506, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1280654, "num_examples": 5237, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1353617, "num_examples": 5463, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/qnli.tar.gz": {"num_bytes": 10724704, "checksum": "604c6ccc69b081c3ff24072d97918b547e58f4b3c4b744072c1b750068930088"}}, "download_size": 10724704, "post_processing_size": null, "dataset_size": 26765851, "size_in_bytes": 37490555}, "stsb_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of\nsentence pairs drawn from news headlines, video and image captions, and natural\nlanguage inference data. Each pair is human-annotated with a similarity score\nfrom 1 to 5.", "citation": "@article{cer2017semeval,\n title={Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation},\n author={Cer, Daniel and Diab, Mona and Agirre, Eneko and Lopez-Gazpio, Inigo and Specia, Lucia},\n journal={arXiv preprint arXiv:1708.00055},\n year={2017}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "stsb_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 604413, "num_examples": 4312, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 204206, "num_examples": 1437, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 229571, "num_examples": 1500, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/stsb.tar.gz": {"num_bytes": 368825, "checksum": "ddb82ab7453e90010bd27fe470a887a02e96305907994bfce39230d9379b00b8"}}, "download_size": 368825, "post_processing_size": null, "dataset_size": 1038190, "size_in_bytes": 1407015}, "mnli_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced\ncollection of sentence pairs with textual entailment annotations. Given a premise sentence\nand a hypothesis sentence, the task is to predict whether the premise entails the hypothesis\n(entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are\ngathered from ten different sources, including transcribed speech, fiction, and government reports.\nWe use the standard test set, for which we obtained private labels from the authors, and evaluate\non both the matched (in-domain) and mismatched (cross-domain) section. We also use and recommend\nthe SNLI corpus as 550k examples of auxiliary training data.", "citation": " @InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"http://aclweb.org/anthology/N18-1101\"\n }\n @article{bowman2015large,\n title={A large annotated corpus for learning natural language inference},\n author={Bowman, Samuel R and Angeli, Gabor and Potts, Christopher and Manning, Christopher D},\n journal={arXiv preprint arXiv:1508.05326},\n year={2015}\n }\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mnli_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 72938596, "num_examples": 383124, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1820060, "num_examples": 9578, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1842160, "num_examples": 9815, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/mnli.tar.gz": {"num_bytes": 29771448, "checksum": "7bdcd02d1cb2c5fd3996011032aa1a4eef96a4dae6b4d812f96ebd0a5fcd1349"}}, "download_size": 29771448, "post_processing_size": null, "dataset_size": 76600816, "size_in_bytes": 106372264}, "mrpc_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of\nsentence pairs automatically extracted from online news sources, with human annotations\nfor whether the sentences in the pair are semantically equivalent.", "citation": "@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mrpc_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 784715, "num_examples": 3261, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 97040, "num_examples": 407, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 98674, "num_examples": 408, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/mrpc.tar.gz": {"num_bytes": 368694, "checksum": "3af04ec9abc76e422562751c44edbf6bea4341ed64b231e11d9b05c414d81a15"}}, "download_size": 368694, "post_processing_size": null, "dataset_size": 980429, "size_in_bytes": 1349123}, "wnli_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task\nin which a system must read a sentence with a pronoun and select the referent of that pronoun from\na list of choices. The examples are manually constructed to foil simple statistical methods: Each\none is contingent on contextual information provided by a single word or phrase in the sentence.\nTo convert the problem into sentence pair classification, we construct sentence pairs by replacing\nthe ambiguous pronoun with each possible referent. The task is to predict if the sentence with the\npronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of\nnew examples derived from fiction books that was shared privately by the authors of the original\ncorpus. While the included training set is balanced between two classes, the test set is imbalanced\nbetween them (65% not entailment). Also, due to a data quirk, the development set is adversarial:\nhypotheses are sometimes shared between training and development examples, so if a model memorizes the\ntraining examples, it will predict the wrong label on the corresponding development set\nexample. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence\nbetween a model's score on this task and its score on the unconverted original task. 
We\ncall the converted dataset WNLI (Winograd NLI).", "citation": "@inproceedings{levesque2012winograd,\n title={The winograd schema challenge},\n author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},\n booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},\n year={2012}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "wnli_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 92154, "num_examples": 565, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 11340, "num_examples": 70, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 11646, "num_examples": 71, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/wnli.tar.gz": {"num_bytes": 29677, "checksum": "07f94f2f11b502ebc188f05ecb83a4f89823d5bb4d0ae4b5b10c3052ecb8fdf9"}}, "download_size": 29677, "post_processing_size": null, "dataset_size": 115140, "size_in_bytes": 144817}, "sst_nb": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and\nhuman annotations of their sentiment. The task is to predict the sentiment of a\ngiven sentence. We use the two-way (positive/negative) class split, and use only\nsentence-level labels.", "citation": "@inproceedings{socher2013recursive,\n title={Recursive deep models for semantic compositionality over a sentiment treebank},\n author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},\n booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},\n pages={1631--1642},\n year={2013}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "sst_nb", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 4328256, "num_examples": 66486, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 56231, "num_examples": 863, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 99345, "num_examples": 872, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/nb/sst.tar.gz": {"num_bytes": 1905948, "checksum": "4f932332d705d01675fe565cf356ce4a22eaa2ebf4f4ebf68c3471fd43548d9c"}}, "download_size": 1905948, "post_processing_size": null, "dataset_size": 4483832, "size_in_bytes": 6389780}, "boolq_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nBoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short\npassage and a yes/no question about the passage. The questions are provided anonymously and\nunsolicited by users of the Google search engine, and afterwards paired with a paragraph from a\nWikipedia article containing the answer. Following the original work, we evaluate with accuracy.", "citation": "@inproceedings{clark2019boolq,\n title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},\n author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},\n booktitle={NAACL},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "passage": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "boolq_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 4147484, "num_examples": 6285, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 2024936, "num_examples": 3142, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 2114420, "num_examples": 3270, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/boolq.tar.gz": {"num_bytes": 3350831, "checksum": "42f1615b4e1580845e599aed7349b835f8772a0472dfa729f896f12ce0574e55"}}, "download_size": 3350831, "post_processing_size": null, "dataset_size": 8286840, "size_in_bytes": 11637671}, "cb_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least\none sentence contains an embedded clause. Each of these embedded clauses is annotated with the\ndegree to which we expect that the person who wrote the text is committed to the truth of the clause.\nThe resulting task is framed as three-class textual entailment on examples that are drawn from the Wall\nStreet Journal, fiction from the British National Corpus, and Switchboard. Each example consists\nof a premise containing an embedded clause and the corresponding hypothesis is the extraction of\nthat clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is\nimbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for\nmulti-class F1 we compute the unweighted average of the F1 per class.", "citation": "@article{demarneffe_simons_tonhauser_2019,\n title={The CommitmentBank: Investigating projection in naturally occurring discourse},\n journal={Proceedings of Sinn und Bedeutung 23},\n author={De Marneffe, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},\n year={2019}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "cb_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 70886, "num_examples": 201, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 17924, "num_examples": 49, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 22180, "num_examples": 56, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/cb.tar.gz": {"num_bytes": 40604, "checksum": "ff530ff490bbf68db950064add852ccc439a16544ec22780e1818bd33b365b1a"}}, "download_size": 40604, "post_processing_size": null, "dataset_size": 110990, "size_in_bytes": 151594}, "copa_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal\nreasoning task in which a system is given a premise sentence and two possible alternatives. The\nsystem must choose the alternative which has the more plausible causal relationship with the premise.\nThe method used for the construction of the alternatives ensures that the task requires causal reasoning\nto solve. Examples either deal with alternative possible causes or alternative possible effects of the\npremise sentence, accompanied by a simple question disambiguating between the two instance\ntypes for the model. All examples are handcrafted and focus on topics from online blogs and a\nphotography-related encyclopedia. Following the recommendation of the authors, we evaluate using\naccuracy.", "citation": "@inproceedings{roemmele2011choice,\n title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},\n author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},\n booktitle={2011 AAAI Spring Symposium Series},\n year={2011}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "choice1": {"dtype": "string", "id": null, "_type": "Value"}, "choice2": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "copa_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 38625, "num_examples": 321, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 9386, "num_examples": 79, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 12189, "num_examples": 100, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/copa.tar.gz": {"num_bytes": 22828, "checksum": "088fb530f36b87768cd02181ab5229bf5d1e39894e54a11afea6f41998e3d0c6"}}, "download_size": 22828, "post_processing_size": null, "dataset_size": 60200, "size_in_bytes": 83028}, "rte_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions\non textual entailment, the problem of predicting whether a given premise sentence entails a given\nhypothesis sentence (also known as natural language inference, NLI). RTE was previously included\nin GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan\net al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli\net al., 2009). All datasets are combined and converted to two-class classification: entailment and\nnot_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning\nthe most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to\n85% accuracy (Liu et al., 2019c) at the time of writing. 
Given the eight point gap with respect to\nhuman performance, however, the task is not yet solved by machines, and we expect the remaining\ngap to be difficult to close.", "citation": "@inproceedings{dagan2005pascal,\n title={The PASCAL recognising textual entailment challenge},\n author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},\n booktitle={Machine Learning Challenges Workshop},\n pages={177--190},\n year={2005},\n organization={Springer}\n}\n@inproceedings{bar2006second,\n title={The second pascal recognising textual entailment challenge},\n author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},\n booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment},\n volume={6},\n number={1},\n pages={6--4},\n year={2006},\n organization={Venice}\n}\n@inproceedings{giampiccolo2007third,\n title={The third pascal recognizing textual entailment challenge},\n author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},\n booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},\n pages={1--9},\n year={2007},\n organization={Association for Computational Linguistics}\n}\n@inproceedings{bentivogli2009fifth,\n title={The Fifth PASCAL Recognizing Textual Entailment Challenge.},\n author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo},\n booktitle={TAC},\n year={2009}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "rte_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 789786, "num_examples": 2214, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 90099, "num_examples": 276, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 94218, "num_examples": 277, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/rte.tar.gz": {"num_bytes": 392939, "checksum": "e6018ee04334b57ca50fd13d4cc73c9cece96a332d368f464badbc9c374dc01e"}}, "download_size": 392939, "post_processing_size": null, "dataset_size": 974103, "size_in_bytes": 1367042}, "qqp_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the\ncommunity question-answering website Quora. 
The task is to determine whether a\npair of questions is semantically equivalent.", "citation": "@online{iyer2017quora,\nauthor = {Iyer, Shankar and Dandekar, Nikhil and Csernai, Kornel},\ntitle = {First Quora Dataset Release: Question Pairs},\nyear = {2017},\nurl = {https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs},\nurldate = {2019-04-03}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qqp_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 46213247, "num_examples": 323419, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 5779399, "num_examples": 40427, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 5770158, "num_examples": 40430, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/qqp.tar.gz": {"num_bytes": 21500446, "checksum": "dd6169eab55cdd5d921328207a5a7facd2789ecbad3dd247c6b33d981b849319"}}, "download_size": 21500446, "post_processing_size": null, "dataset_size": 57762804, "size_in_bytes": 79263250}, "qnli_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Question Answering Dataset is a question-answering\ndataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn\nfrom Wikipedia) contains the answer to the corresponding question (written by an annotator). We\nconvert the task into sentence pair classification by forming a pair between each question and each\nsentence in the corresponding context, and filtering out pairs with low lexical overlap between the\nquestion and the context sentence. The task is to determine whether the context sentence contains\nthe answer to the question. 
This modified version of the original task removes the requirement that\nthe model select the exact answer, but also removes the simplifying assumptions that the answer\nis always present in the input and that lexical overlap is a reliable cue.", "citation": "@article{rajpurkar2016squad,\n title={Squad: 100,000+ questions for machine comprehension of text},\n author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},\n journal={arXiv preprint arXiv:1606.05250},\n year={2016}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "qnli_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 25199883, "num_examples": 99506, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1335353, "num_examples": 5237, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1412721, "num_examples": 5463, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/qnli.tar.gz": {"num_bytes": 11157410, "checksum": "260153f22ba714106852d1545906e77e7ce490e1220a2ac286730959e37b9278"}}, "download_size": 11157410, "post_processing_size": null, "dataset_size": 27947957, "size_in_bytes": 39105367}, "stsb_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of\nsentence pairs drawn from news headlines, video and image captions, and natural\nlanguage inference data. Each pair is human-annotated with a similarity score\nfrom 1 to 5.", "citation": "@article{cer2017semeval,\n title={Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation},\n author={Cer, Daniel and Diab, Mona and Agirre, Eneko and Lopez-Gazpio, Inigo and Specia, Lucia},\n journal={arXiv preprint arXiv:1708.00055},\n year={2017}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "stsb_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 633017, "num_examples": 4312, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 214058, "num_examples": 1437, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 238054, "num_examples": 1500, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/stsb.tar.gz": {"num_bytes": 368211, "checksum": "c80b4d35b817d427f3bb2174039c44881957ad2b020e60dff6968e717fc8acdb"}}, "download_size": 368211, "post_processing_size": null, "dataset_size": 1085129, "size_in_bytes": 1453340}, "mnli_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced\ncollection of sentence pairs with textual entailment annotations. Given a premise sentence\nand a hypothesis sentence, the task is to predict whether the premise entails the hypothesis\n(entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are\ngathered from ten different sources, including transcribed speech, fiction, and government reports.\nWe use the standard test set, for which we obtained private labels from the authors, and evaluate\non both the matched (in-domain) and mismatched (cross-domain) section. We also use and recommend\nthe SNLI corpus as 550k examples of auxiliary training data.", "citation": " @InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"http://aclweb.org/anthology/N18-1101\"\n }\n @article{bowman2015large,\n title={A large annotated corpus for learning natural language inference},\n author={Bowman, Samuel R and Angeli, Gabor and Potts, Christopher and Manning, Christopher D},\n journal={arXiv preprint arXiv:1508.05326},\n year={2015}\n }\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mnli_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 76027939, "num_examples": 383124, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 1890232, "num_examples": 9578, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 1919687, "num_examples": 9815, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/mnli.tar.gz": {"num_bytes": 30837170, "checksum": "f96fd8d1b027c56f04ae1b21eb53415b39ad6ee0f97b2e2225d82794d3be350d"}}, "download_size": 30837170, "post_processing_size": null, "dataset_size": 79837858, "size_in_bytes": 110675028}, "mrpc_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of\nsentence pairs automatically extracted from online news sources, with human annotations\nfor whether the sentences in the pair are semantically equivalent.", "citation": "@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. 
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text_a": {"dtype": "string", "id": null, "_type": "Value"}, "text_b": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "mrpc_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 845906, "num_examples": 3261, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 104536, "num_examples": 407, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 106918, "num_examples": 408, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/mrpc.tar.gz": {"num_bytes": 374973, "checksum": "d7b0520414351345f24660e26f2ae96011251908ac68533f876b16e14f904868"}}, "download_size": 374973, "post_processing_size": null, "dataset_size": 1057360, "size_in_bytes": 1432333}, "wnli_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task\nin which a system must read a sentence with a pronoun and select the referent of that pronoun from\na list of choices. The examples are manually constructed to foil simple statistical methods: Each\none is contingent on contextual information provided by a single word or phrase in the sentence.\nTo convert the problem into sentence pair classification, we construct sentence pairs by replacing\nthe ambiguous pronoun with each possible referent. The task is to predict if the sentence with the\npronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of\nnew examples derived from fiction books that was shared privately by the authors of the original\ncorpus. While the included training set is balanced between two classes, the test set is imbalanced\nbetween them (65% not entailment). Also, due to a data quirk, the development set is adversarial:\nhypotheses are sometimes shared between training and development examples, so if a model memorizes the\ntraining examples, it will predict the wrong label on the corresponding development set\nexample. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence\nbetween a model's score on this task and its score on the unconverted original task.
We\ncall the converted dataset WNLI (Winograd NLI).", "citation": "@inproceedings{levesque2012winograd,\n title={The winograd schema challenge},\n author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},\n booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},\n year={2012}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "wnli_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 92825, "num_examples": 565, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 11391, "num_examples": 70, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 11753, "num_examples": 71, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/wnli.tar.gz": {"num_bytes": 29413, "checksum": "a97d74d9d2da304a18da4366534ae6c1de3f5fd4cc9a388504813568374af1ea"}}, "download_size": 29413, "post_processing_size": null, "dataset_size": 115969, "size_in_bytes": 145382}, "sst_da": {"description": "GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and\nhuman annotations of their sentiment. The task is to predict the sentiment of a\ngiven sentence. We use the two-way (positive/negative) class split, and use only\nsentence-level labels.", "citation": "@inproceedings{socher2013recursive,\n title={Recursive deep models for semantic compositionality over a sentiment treebank},\n author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},\n booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},\n pages={1631--1642},\n year={2013}\n}\n@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},\n journal={arXiv preprint arXiv:1905.00537},\n year={2019}\n}\n\nNote that each SuperGLUE dataset has its own citation.
Please see the source to\nget the correct citation for each contained dataset.\n", "homepage": "", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "over_lim", "config_name": "sst_da", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 4460298, "num_examples": 66486, "dataset_name": "over_lim"}, "validation": {"name": "validation", "num_bytes": 57489, "num_examples": 863, "dataset_name": "over_lim"}, "test": {"name": "test", "num_bytes": 104627, "num_examples": 872, "dataset_name": "over_lim"}}, "download_checksums": {"https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/da/sst.tar.gz": {"num_bytes": 1929166, "checksum": "5fd539bff626886cf16d31223fa055b5abc3806684606e6e65a7978024fe96de"}}, "download_size": 1929166, "post_processing_size": null, "dataset_size": 4622414, "size_in_bytes": 6551580}} \ No newline at end of file diff --git a/dummy/boolq_da/1.0.2/dummy_data.zip b/dummy/boolq_da/1.0.2/dummy_data.zip deleted file mode 100644 index 457321c0c2330ec31d3100a2fbd7e6a10f0b4bef..0000000000000000000000000000000000000000 --- a/dummy/boolq_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d4e7e3730ccb6f5e4899c0c21851e63f5a6956d581144e4bc2b065544533314e -size 6235 diff --git a/dummy/boolq_da/1.0.2/dummy_data.zip.lock b/dummy/boolq_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/boolq_nb/1.0.2/dummy_data.zip b/dummy/boolq_nb/1.0.2/dummy_data.zip deleted file mode 100644 index 080d715715f3e126a48104d3a5f4a7617e800b2b..0000000000000000000000000000000000000000 --- a/dummy/boolq_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a2f26bb40abcc35d997311068ad6bbb728d2285f0ffc51991206a56e40a69b0c -size 6056 diff --git a/dummy/boolq_nb/1.0.2/dummy_data.zip.lock b/dummy/boolq_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/boolq_sv/1.0.2/dummy_data.zip b/dummy/boolq_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 33ee11b0488ed7513446559d067682ab90f0e862..0000000000000000000000000000000000000000 --- a/dummy/boolq_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:952e640a91fbc84d2782092750d41f9aa5cc6484a7df720630b8003f842ec555 -size 6344 diff --git a/dummy/boolq_sv/1.0.2/dummy_data.zip.lock b/dummy/boolq_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/cb_da/1.0.2/dummy_data.zip b/dummy/cb_da/1.0.2/dummy_data.zip deleted file mode 100644 index c5b228bb8b9853638678cbfd260d503cef5e0363..0000000000000000000000000000000000000000 --- a/dummy/cb_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:6a023dadba46e79b7915fc51625eb0f41017e132f7e6023222e1d694be544d0e -size 3017 diff --git a/dummy/cb_da/1.0.2/dummy_data.zip.lock b/dummy/cb_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index 
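The `dataset_infos.json` payload deleted above is a single minified JSON line keyed by config name (`<task>_<language>`, e.g. `stsb_da`); each entry records the features, split sizes, and download checksums for one config. A minimal sketch of inspecting such a file before its removal — the filename and key names come from the metadata above, the rest is illustrative:

```python
import json

# Sketch: report the recorded feature names and split sizes for one config.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

stsb_da = infos["stsb_da"]
print(sorted(stsb_da["features"]))      # ['idx', 'label', 'text_a', 'text_b']
for split, meta in stsb_da["splits"].items():
    print(split, meta["num_examples"])  # train 4312, validation 1437, test 1500
```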
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/cb_nb/1.0.2/dummy_data.zip b/dummy/cb_nb/1.0.2/dummy_data.zip deleted file mode 100644 index b8bccf8cffd5e4237245f29418e5bb590d162dbb..0000000000000000000000000000000000000000 --- a/dummy/cb_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f9c5f03eab09c1909ba49c65adcc331fbaab9855cba9a63001d58a96ca0c3073 -size 3046 diff --git a/dummy/cb_nb/1.0.2/dummy_data.zip.lock b/dummy/cb_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/cb_sv/1.0.2/dummy_data.zip b/dummy/cb_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 1decf7b0b6f130eb6bcb881dab81623b2f46855c..0000000000000000000000000000000000000000 --- a/dummy/cb_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5e267f0ee7bd1f77f56aa7bce121a7020209a54f6128842e3ad2f3b59452bf72 -size 3058 diff --git a/dummy/cb_sv/1.0.2/dummy_data.zip.lock b/dummy/cb_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/copa_da/1.0.2/dummy_data.zip b/dummy/copa_da/1.0.2/dummy_data.zip deleted file mode 100644 index cce67e51f02338baff794767a158ae948dcce1eb..0000000000000000000000000000000000000000 --- a/dummy/copa_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:524bd7890460a24693ed42a076cfe650d5ab401edc965ad4fb5b1fa742e2fb2a -size 1974 diff --git a/dummy/copa_da/1.0.2/dummy_data.zip.lock b/dummy/copa_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/copa_nb/1.0.2/dummy_data.zip b/dummy/copa_nb/1.0.2/dummy_data.zip deleted file mode 100644 index 4d59a7bf857cef0d7ed47a632c7229f3969d247d..0000000000000000000000000000000000000000 --- a/dummy/copa_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:63e233be888e6f6a3ec672fb915864d828b5049bc26b9716a3a58899646a7fdf -size 1952 diff --git a/dummy/copa_nb/1.0.2/dummy_data.zip.lock b/dummy/copa_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/copa_sv/1.0.2/dummy_data.zip b/dummy/copa_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 6e660f66784ba01c529cf85788e02122e4b22302..0000000000000000000000000000000000000000 --- a/dummy/copa_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b675ad7b34891fd66ab31d77853aaeaee55b26ec0b6ee689988d2f49266135c0 -size 1972 diff --git a/dummy/copa_sv/1.0.2/dummy_data.zip.lock b/dummy/copa_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mnli_da/1.0.2/dummy_data.zip b/dummy/mnli_da/1.0.2/dummy_data.zip deleted file mode 100644 index cba8a21202657f66f7d744872fc5c92fa383d7c6..0000000000000000000000000000000000000000 --- a/dummy/mnli_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9ca2b12457660866e8c6affe9c08b0d095ab97a57d359965b41f7741aabac565 -size 2672 diff --git 
a/dummy/mnli_da/1.0.2/dummy_data.zip.lock b/dummy/mnli_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mnli_nb/1.0.2/dummy_data.zip b/dummy/mnli_nb/1.0.2/dummy_data.zip deleted file mode 100644 index c55b7cbbfe40d307b5b7ee7dc01c85f3da12bf66..0000000000000000000000000000000000000000 --- a/dummy/mnli_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a5cac89ca6d16e6eda8ccf6b988502889fb68f9a5356ab61e0ef442993b61557 -size 2611 diff --git a/dummy/mnli_nb/1.0.2/dummy_data.zip.lock b/dummy/mnli_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mnli_sv/1.0.2/dummy_data.zip b/dummy/mnli_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 43f271859d3990da9fede46866dbdf8aafba5183..0000000000000000000000000000000000000000 --- a/dummy/mnli_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:67a44909d358e532a323b824685d409410a8794923d54f45055ae49c7c08df46 -size 2620 diff --git a/dummy/mnli_sv/1.0.2/dummy_data.zip.lock b/dummy/mnli_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mrpc_da/1.0.2/dummy_data.zip b/dummy/mrpc_da/1.0.2/dummy_data.zip deleted file mode 100644 index 9833d31e7a3e014e4b8d394eef4ca71fea3ca13f..0000000000000000000000000000000000000000 --- a/dummy/mrpc_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9e13a0bb25e4d7079b112857fdeecfafe2474120b8c6e2e1b8760824b9bb70d3 -size 2548 diff --git a/dummy/mrpc_da/1.0.2/dummy_data.zip.lock b/dummy/mrpc_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mrpc_nb/1.0.2/dummy_data.zip b/dummy/mrpc_nb/1.0.2/dummy_data.zip deleted file mode 100644 index 69823a84c8154349aee40063c06652614f3ac1bb..0000000000000000000000000000000000000000 --- a/dummy/mrpc_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cb871ad8aad0530868328f1d48f7c4581923f17d9ec0926813b63962c4e8330f -size 2565 diff --git a/dummy/mrpc_nb/1.0.2/dummy_data.zip.lock b/dummy/mrpc_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/mrpc_sv/1.0.2/dummy_data.zip b/dummy/mrpc_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 86eac1a148882348b08d28c83db2abbfe46a8815..0000000000000000000000000000000000000000 --- a/dummy/mrpc_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c5d2a4eae6605b75bec4a89bed03f5676d247bb4b2468165f761ad2a47b27822 -size 2616 diff --git a/dummy/mrpc_sv/1.0.2/dummy_data.zip.lock b/dummy/mrpc_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qnli_da/1.0.2/dummy_data.zip b/dummy/qnli_da/1.0.2/dummy_data.zip deleted file mode 100644 index 9c97d0ea5e6cbf9950bce908a9e0fca482e44321..0000000000000000000000000000000000000000 --- a/dummy/qnli_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version 
https://git-lfs.github.com/spec/v1 -oid sha256:2b46f8416efaac609ed2322a6d7ce4728860143a23e6c88fceda82770b440e66 -size 3088 diff --git a/dummy/qnli_da/1.0.2/dummy_data.zip.lock b/dummy/qnli_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qnli_nb/1.0.2/dummy_data.zip b/dummy/qnli_nb/1.0.2/dummy_data.zip deleted file mode 100644 index c27423f32367148eec822169c4ae8f4266a45279..0000000000000000000000000000000000000000 --- a/dummy/qnli_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4d06185531fe4a01ec50e8577f3df12facd57afa3adb8944e84a6c809d9ecb86 -size 3011 diff --git a/dummy/qnli_nb/1.0.2/dummy_data.zip.lock b/dummy/qnli_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qnli_sv/1.0.2/dummy_data.zip b/dummy/qnli_sv/1.0.2/dummy_data.zip deleted file mode 100644 index f0412a3f5673a80b5fe78f6700754d3fdb8a26e5..0000000000000000000000000000000000000000 --- a/dummy/qnli_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c5cb2f2efc37268171cbe8b709007c637fdbca24869043400fa14967e7b1c3ae -size 3079 diff --git a/dummy/qnli_sv/1.0.2/dummy_data.zip.lock b/dummy/qnli_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qqp_da/1.0.2/dummy_data.zip b/dummy/qqp_da/1.0.2/dummy_data.zip deleted file mode 100644 index d4393a4d0a7476653ab7e0e9b51b43ab3293065f..0000000000000000000000000000000000000000 --- a/dummy/qqp_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f47db1d317eab3faa3c2e8352cc3ffd11a62b084e167028efbb58b1900dadab7 -size 1916 diff --git a/dummy/qqp_da/1.0.2/dummy_data.zip.lock b/dummy/qqp_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qqp_nb/1.0.2/dummy_data.zip b/dummy/qqp_nb/1.0.2/dummy_data.zip deleted file mode 100644 index f219c1f8e0136b8f10415ec16f84fb82afd744a7..0000000000000000000000000000000000000000 --- a/dummy/qqp_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b27f67c93e6e6f910b2f1d07f55889acb2d0ebd6a62b6368a9370a06a381e03f -size 1884 diff --git a/dummy/qqp_nb/1.0.2/dummy_data.zip.lock b/dummy/qqp_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/qqp_sv/1.0.2/dummy_data.zip b/dummy/qqp_sv/1.0.2/dummy_data.zip deleted file mode 100644 index c2e08f0bba5c9768cc7d8821ccebce7f7aee6d58..0000000000000000000000000000000000000000 --- a/dummy/qqp_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3049a7e01c1700ddae479a8ee5d2e39f02ad56242182035a107d9610ad3103ce -size 1912 diff --git a/dummy/qqp_sv/1.0.2/dummy_data.zip.lock b/dummy/qqp_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/rte_da/1.0.2/dummy_data.zip b/dummy/rte_da/1.0.2/dummy_data.zip deleted file mode 100644 index 
eac7559fd5467e7953a7551064d65fe8030a28de..0000000000000000000000000000000000000000 --- a/dummy/rte_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f9694b74d75fce19638fe184d28c6ca2d1a4c93f6f0cc360655cb0ef7afdc03d -size 3919 diff --git a/dummy/rte_da/1.0.2/dummy_data.zip.lock b/dummy/rte_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/rte_nb/1.0.2/dummy_data.zip b/dummy/rte_nb/1.0.2/dummy_data.zip deleted file mode 100644 index f0ea2063ba3b01cf4a2f38c6b12038e5fb703949..0000000000000000000000000000000000000000 --- a/dummy/rte_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:40f5ff851dc17203adf86cce6b7746843161923b62808aec333ae270ceb59fae -size 3847 diff --git a/dummy/rte_nb/1.0.2/dummy_data.zip.lock b/dummy/rte_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/rte_sv/1.0.2/dummy_data.zip b/dummy/rte_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 563d093e0f0e0854100edf2099f0e49373aa31cd..0000000000000000000000000000000000000000 --- a/dummy/rte_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b9a8ced140dcb18d18181561eac8fed1134e0baae060325c6c7ae98f81fc978b -size 3991 diff --git a/dummy/rte_sv/1.0.2/dummy_data.zip.lock b/dummy/rte_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/sst_da/1.0.2/dummy_data.zip b/dummy/sst_da/1.0.2/dummy_data.zip deleted file mode 100644 index 75d5d8f20dcb1a804e2970b8f61fd9e5463276b1..0000000000000000000000000000000000000000 --- a/dummy/sst_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:931cbfe15af789418146061db4199b9bc54a3fd34a3425d052dafe340c583e66 -size 1566 diff --git a/dummy/sst_da/1.0.2/dummy_data.zip.lock b/dummy/sst_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/sst_nb/1.0.2/dummy_data.zip b/dummy/sst_nb/1.0.2/dummy_data.zip deleted file mode 100644 index d748f2cd5ed785fb7f70305827f947a0282a8dc2..0000000000000000000000000000000000000000 --- a/dummy/sst_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3b75ee742b4ebab3ee8b0ab20cc64db4d68201e5803845932d4a6e7397a5758a -size 1540 diff --git a/dummy/sst_nb/1.0.2/dummy_data.zip.lock b/dummy/sst_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/sst_sv/1.0.2/dummy_data.zip b/dummy/sst_sv/1.0.2/dummy_data.zip deleted file mode 100644 index f8cc9247a001c1e8f33a03e41fbf7045c95123db..0000000000000000000000000000000000000000 --- a/dummy/sst_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8cbbd80ecec6b6da772f23894ddb3c5bf8155896b70550529b02b236130fc26c -size 1581 diff --git a/dummy/sst_sv/1.0.2/dummy_data.zip.lock b/dummy/sst_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/dummy/stsb_da/1.0.2/dummy_data.zip b/dummy/stsb_da/1.0.2/dummy_data.zip deleted file mode 100644 index 84806ff94f378a47aa26f6002c3aba2e97ac656d..0000000000000000000000000000000000000000 --- a/dummy/stsb_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:683dabcfcdd4259f07658998baf04bf142868b883bc923fa82fa946513b7120c -size 1514 diff --git a/dummy/stsb_da/1.0.2/dummy_data.zip.lock b/dummy/stsb_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/stsb_nb/1.0.2/dummy_data.zip b/dummy/stsb_nb/1.0.2/dummy_data.zip deleted file mode 100644 index 419643eb02e53a9720663bfdfb985587048a3f01..0000000000000000000000000000000000000000 --- a/dummy/stsb_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:966dbc317d67e2909e9dc99d42cb796c3e619e272a4981bcd077b4a55ea9a3a5 -size 1536 diff --git a/dummy/stsb_nb/1.0.2/dummy_data.zip.lock b/dummy/stsb_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/stsb_sv/1.0.2/dummy_data.zip b/dummy/stsb_sv/1.0.2/dummy_data.zip deleted file mode 100644 index c6673981c01915142ce21dc336b410d74e4c01a4..0000000000000000000000000000000000000000 --- a/dummy/stsb_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e475ba2494163d798519ac8e4607bf5f3494d239e7902314678dcc777dd6154c -size 1519 diff --git a/dummy/stsb_sv/1.0.2/dummy_data.zip.lock b/dummy/stsb_sv/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/wnli_da/1.0.2/dummy_data.zip b/dummy/wnli_da/1.0.2/dummy_data.zip deleted file mode 100644 index a3243e962cc021b14d229a349aed8f383b434115..0000000000000000000000000000000000000000 --- a/dummy/wnli_da/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:661fe2ab6a2587102fabb092b3449a25e67d433f83849cfc96eebb6fdb785ac7 -size 2088 diff --git a/dummy/wnli_da/1.0.2/dummy_data.zip.lock b/dummy/wnli_da/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/wnli_nb/1.0.2/dummy_data.zip b/dummy/wnli_nb/1.0.2/dummy_data.zip deleted file mode 100644 index 7c77b824d16eeeb78b435b471322cdb5671cd76b..0000000000000000000000000000000000000000 --- a/dummy/wnli_nb/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:defc66bb887db3d3d2c7ba33929ce2882e6d16cc919d7c3f56b82c40bf5a4e72 -size 2074 diff --git a/dummy/wnli_nb/1.0.2/dummy_data.zip.lock b/dummy/wnli_nb/1.0.2/dummy_data.zip.lock deleted file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/dummy/wnli_sv/1.0.2/dummy_data.zip b/dummy/wnli_sv/1.0.2/dummy_data.zip deleted file mode 100644 index 385f2d2cbb468c7bebba21a148dc18e226597507..0000000000000000000000000000000000000000 --- a/dummy/wnli_sv/1.0.2/dummy_data.zip +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5ce0945bb29d0bec554f47664f673196309c878dd29fb50d9dc14df9c2aaa5b5 -size 2097 diff --git a/dummy/wnli_sv/1.0.2/dummy_data.zip.lock b/dummy/wnli_sv/1.0.2/dummy_data.zip.lock deleted file 
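The `overlim-*.parquet` entries added throughout this commit are Git LFS pointers: only the three-line stub (`version`, `oid`, `size`) is stored in git, and the actual parquet payload is fetched by LFS. A hedged sketch of reading one shard directly, assuming the pointer has been materialized (e.g. via `git lfs pull`) and that pandas with a parquet engine is installed:

```python
import pandas as pd

# Sketch: read one materialized LFS shard; mnli_* configs store
# premise/hypothesis/label/idx per the dataset_infos.json entries above.
df = pd.read_parquet("mnli_da/overlim-train.parquet")
print(sorted(df.columns))  # ['hypothesis', 'idx', 'label', 'premise']
print(len(df))             # 383124 training examples per the metadata
```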
mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mnli_da/overlim-test.parquet b/mnli_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..f8ffb00c77575783fc5e369c47269a72df8a134e --- /dev/null +++ b/mnli_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d405f55a9b3508e03f70cf56a9a27bd13967fea3ed2e9065eed5c7b28818fd3 +size 1221251 diff --git a/mnli_da/overlim-train.parquet b/mnli_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..4dbb754efeb8f279dde399ed45d5802446f566b2 --- /dev/null +++ b/mnli_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b7633b392cedac09c60dbaead221e6078100083a01dcea47309bc2de434fa9b +size 51311436 diff --git a/mnli_da/overlim-validation.parquet b/mnli_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..29d40fee8775dab56aa21649af8a296a4cdf12b0 --- /dev/null +++ b/mnli_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06475fa0c68ff0f47e4c9b3c8c9916b13f9e3e29b2c0ad9a598cf5be7d2e220a +size 1280731 diff --git a/mnli_nb/overlim-test.parquet b/mnli_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..89630e27d74a7841a7d652c3a7f7be6df3a289cf --- /dev/null +++ b/mnli_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:016c45326fc1927609e618e65d456efdd213ae5507fd15ac22bb56627daa7f4f +size 1166247 diff --git a/mnli_nb/overlim-train.parquet b/mnli_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..f4a18bcb7baba5242cea26865e6b0a52006b67a5 --- /dev/null +++ b/mnli_nb/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0651b5e8c6958724daeb1d417c8b3ef412c601b09ca5475c0d0a4fb800a7e283 +size 48852976 diff --git a/mnli_nb/overlim-validation.parquet b/mnli_nb/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..3d0f1e8281dc24bd6337a2a5bae62c837a9eac6c --- /dev/null +++ b/mnli_nb/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b27d89429620bf570ab94166528b9590038929ec97e4279b7ed2369cd0ff9fb +size 1218061 diff --git a/mnli_sv/overlim-test.parquet b/mnli_sv/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..168e22b3c7d2532a20b74ffa8e86acd329d428da --- /dev/null +++ b/mnli_sv/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a4e988f6bc62d01f027434af476c408b2c38ddf1016696e86cb26ec8323628e +size 1222086 diff --git a/mnli_sv/overlim-train.parquet b/mnli_sv/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..56b3358af8054c2f3ef5e76b2446fcc459b3f45d --- /dev/null +++ b/mnli_sv/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e765d2936c223f334ad2a88ad7a1f330b237f013a4fb5002361f8b87b803a25f +size 51234031 diff --git a/mnli_sv/overlim-validation.parquet b/mnli_sv/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..1acf2ab81c6ff5f0a87578d6a0f74190e6b087af --- /dev/null +++ b/mnli_sv/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:f9d693ba7d3532d174034f452656ece1fb04ea53a7e8c807a3295c8368c0bc9c +size 1275411 diff --git a/mrpc_da/overlim-test.parquet b/mrpc_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..83ac76b4d49577f8994f2fc75c2e8934bc6961e4 --- /dev/null +++ b/mrpc_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2a67d0fd947266b529fe2d11fa78bf9ab1594db8e8ee7e4f3a7dbc9680f0a45 +size 78452 diff --git a/mrpc_da/overlim-train.parquet b/mrpc_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..7f96470a6b7352c26432e70e0f67fa71419fb186 --- /dev/null +++ b/mrpc_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d485b8264c00f8913437413ad0959837a4e664e074db48a8d33216ad71bcec7d +size 597946 diff --git a/mrpc_da/overlim-validation.parquet b/mrpc_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..bf9432d5525dedbd12a4ffc051dfb3529ff794da --- /dev/null +++ b/mrpc_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0cd4c7f906af1342b8967ac42fb5bf10fd4395fdb3aa0e4a3632f781da97628 +size 76446 diff --git a/mrpc_nb/overlim-test.parquet b/mrpc_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..b915f9ec04b3ceebbfa7bb5a81f9c91f4df17eac --- /dev/null +++ b/mrpc_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffb090db8451e589fe16686e1a34cb351978d7727a72af8527ac784199937bd1 +size 73521 diff --git a/mrpc_nb/overlim-train.parquet b/mrpc_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..84636e81d693acacfde6311a6c48e64cec116318 --- /dev/null +++ b/mrpc_nb/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63d29ccaf87b82544ebbb212813b9feb2b58106003ee6c427979903ac222edb4 +size 563196 diff --git a/mrpc_nb/overlim-validation.parquet b/mrpc_nb/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..462f6e1d3a490da121324dcd62bb0a18d7980036 --- /dev/null +++ b/mrpc_nb/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0053ea7e3a9cb116159f8c1b8efce91c94e811fe177f832f1e7e060b787cecb5 +size 71835 diff --git a/mrpc_sv/overlim-test.parquet b/mrpc_sv/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..629ab2d66996ab650acaba78e5bbb2923f64ef8c --- /dev/null +++ b/mrpc_sv/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a38fa25a1a4a4362450a919cc0a9cb81d6848e8060e31af9f009f937de12704a +size 78087 diff --git a/mrpc_sv/overlim-train.parquet b/mrpc_sv/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..e87b9165a9cedb9c423b36153805d29bf123a3e0 --- /dev/null +++ b/mrpc_sv/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a80febdd0cd3911711510745f769558a9601c6a4260e33c0164934213dc1804f +size 601527 diff --git a/mrpc_sv/overlim-validation.parquet b/mrpc_sv/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..83f9cff718a8eac4e216bfe9d48a12f420762d22 --- /dev/null +++ b/mrpc_sv/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:b1cea2d4240b9d04216fc65a6cd73c23c7d2a1d64008413e855888eb88001a61 +size 76509 diff --git a/overlim.py b/overlim.py deleted file mode 100644 index fb97788d67e6c28b01bffe4221e6ab1fe8b3483b..0000000000000000000000000000000000000000 --- a/overlim.py +++ /dev/null @@ -1,514 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Lint as: python3 -"""The OverLim benchmark.""" - -import json -import os - -import datasets - -_CITATION = """\ -""" - -# You can copy an official description -_DESCRIPTION = """\ -""" - -_HOMEPAGE = "" - -_LICENSE = "" - -_GLUE_CITATION = """\ -@inproceedings{wang2019glue, - title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding}, - author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.}, - note={In the Proceedings of ICLR.}, - year={2019} -} -""" - -_GLUE_DESCRIPTION = """\ -GLUE, the General Language Understanding Evaluation benchmark -(https://gluebenchmark.com/) is a collection of resources for training, -evaluating, and analyzing natural language understanding systems. - -""" -_SST_DESCRIPTION = """\ -The Stanford Sentiment Treebank consists of sentences from movie reviews and -human annotations of their sentiment. The task is to predict the sentiment of a -given sentence. We use the two-way (positive/negative) class split, and use only -sentence-level labels.""" -_SST_CITATION = """\ -@inproceedings{socher2013recursive, - title={Recursive deep models for semantic compositionality over a sentiment treebank}, - author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher}, - booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing}, - pages={1631--1642}, - year={2013} -}""" -_MRPC_DESCRIPTION = """\ -The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of -sentence pairs automatically extracted from online news sources, with human annotations -for whether the sentences in the pair are semantically equivalent.""" -_MRPC_CITATION = """\ -@inproceedings{dolan2005automatically, - title={Automatically constructing a corpus of sentential paraphrases}, - author={Dolan, William B and Brockett, Chris}, - booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)}, - year={2005} -}""" -_QQP_DESCRIPTION = """\ -The Quora Question Pairs dataset is a collection of question pairs from the -community question-answering website Quora.
The task is to determine whether a -pair of questions are semantically equivalent.""" -_QQP_CITATION = """\ -@online{WinNT, -author = {Iyer, Shankar and Dandekar, Nikhil and Csernai, Kornel}, -title = {First Quora Dataset Release: Question Pairs}, -year = {2017}, -url = {https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs}, -urldate = {2019-04-03} -}""" -_STSB_DESCRIPTION = """\ -The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of -sentence pairs drawn from news headlines, video and image captions, and natural -language inference data. Each pair is human-annotated with a similarity score -from 1 to 5.""" -_STSB_CITATION = """\ -@article{cer2017semeval, - title={Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation}, - author={Cer, Daniel and Diab, Mona and Agirre, Eneko and Lopez-Gazpio, Inigo and Specia, Lucia}, - journal={arXiv preprint arXiv:1708.00055}, - year={2017} -}""" -_MNLI_DESCRIPTION = """\ -The Multi-Genre Natural Language Inference Corpus is a crowdsourced -collection of sentence pairs with textual entailment annotations. Given a premise sentence -and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis -(entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are -gathered from ten different sources, including transcribed speech, fiction, and government reports. -We use the standard test set, for which we obtained private labels from the authors, and evaluate -on both the matched (in-domain) and mismatched (cross-domain) section. We also use and recommend -the SNLI corpus as 550k examples of auxiliary training data.""" -_MNLI_CITATION = """\ - @InProceedings{N18-1101, - author = "Williams, Adina - and Nangia, Nikita - and Bowman, Samuel", - title = "A Broad-Coverage Challenge Corpus for - Sentence Understanding through Inference", - booktitle = "Proceedings of the 2018 Conference of - the North American Chapter of the - Association for Computational Linguistics: - Human Language Technologies, Volume 1 (Long - Papers)", - year = "2018", - publisher = "Association for Computational Linguistics", - pages = "1112--1122", - location = "New Orleans, Louisiana", - url = "http://aclweb.org/anthology/N18-1101" - } - @article{bowman2015large, - title={A large annotated corpus for learning natural language inference}, - author={Bowman, Samuel R and Angeli, Gabor and Potts, Christopher and Manning, Christopher D}, - journal={arXiv preprint arXiv:1508.05326}, - year={2015} - }""" -_QNLI_DESCRIPTION = """\ -The Stanford Question Answering Dataset is a question-answering -dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn -from Wikipedia) contains the answer to the corresponding question (written by an annotator). We -convert the task into sentence pair classification by forming a pair between each question and each -sentence in the corresponding context, and filtering out pairs with low lexical overlap between the -question and the context sentence. The task is to determine whether the context sentence contains -the answer to the question. 
This modified version of the original task removes the requirement that -the model select the exact answer, but also removes the simplifying assumptions that the answer -is always present in the input and that lexical overlap is a reliable cue.""" -_QNLI_CITATION = """\ -@article{rajpurkar2016squad, - title={Squad: 100,000+ questions for machine comprehension of text}, - author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy}, - journal={arXiv preprint arXiv:1606.05250}, - year={2016} -}""" -_WNLI_DESCRIPTION = """\ -The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task -in which a system must read a sentence with a pronoun and select the referent of that pronoun from -a list of choices. The examples are manually constructed to foil simple statistical methods: Each -one is contingent on contextual information provided by a single word or phrase in the sentence. -To convert the problem into sentence pair classification, we construct sentence pairs by replacing -the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the -pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of -new examples derived from fiction books that was shared privately by the authors of the original -corpus. While the included training set is balanced between two classes, the test set is imbalanced -between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: -hypotheses are sometimes shared between training and development examples, so if a model memorizes the -training examples, it will predict the wrong label on the corresponding development set -example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence -between a model's score on this task and its score on the unconverted original task. We -call the converted dataset WNLI (Winograd NLI).""" -_WNLI_CITATION = """\ -@inproceedings{levesque2012winograd, - title={The winograd schema challenge}, - author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora}, - booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning}, - year={2012} -}""" - -_SUPER_GLUE_CITATION = """\ -@article{wang2019superglue, - title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems}, - author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R}, - journal={arXiv preprint arXiv:1905.00537}, - year={2019} -} - -Note that each SuperGLUE dataset has its own citation. Please see the source to -get the correct citation for each contained dataset. -""" - -_SUPER_GLUE_DESCRIPTION = """\ -SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after -GLUE with a new set of more difficult language understanding tasks, improved -resources, and a new public leaderboard. - -""" - -_BOOLQ_DESCRIPTION = """\ -BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short -passage and a yes/no question about the passage. The questions are provided anonymously and -unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a -Wikipedia article containing the answer.
Following the original work, we evaluate with accuracy.""" - -_CB_DESCRIPTION = """\ -The CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least -one sentence contains an embedded clause. Each of these embedded clauses is annotated with the -degree to which we expect that the person who wrote the text is committed to the truth of the clause. -The resulting task is framed as three-class textual entailment on examples that are drawn from the Wall -Street Journal, fiction from the British National Corpus, and Switchboard. Each example consists -of a premise containing an embedded clause and the corresponding hypothesis is the extraction of -that clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is -imbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for -multi-class F1 we compute the unweighted average of the F1 per class.""" - -_COPA_DESCRIPTION = """\ -The Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal -reasoning task in which a system is given a premise sentence and two possible alternatives. The -system must choose the alternative which has the more plausible causal relationship with the premise. -The method used for the construction of the alternatives ensures that the task requires causal reasoning -to solve. Examples either deal with alternative possible causes or alternative possible effects of the -premise sentence, accompanied by a simple question disambiguating between the two instance -types for the model. All examples are handcrafted and focus on topics from online blogs and a -photography-related encyclopedia. Following the recommendation of the authors, we evaluate using -accuracy.""" - -_RTE_DESCRIPTION = """\ -The Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions -on textual entailment, the problem of predicting whether a given premise sentence entails a given -hypothesis sentence (also known as natural language inference, NLI). RTE was previously included -in GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan -et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli -et al., 2009). All datasets are combined and converted to two-class classification: entailment and -not_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning -the most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to -85% accuracy (Liu et al., 2019c) at the time of writing.
Given the eight point gap with respect to -human performance, however, the task is not yet solved by machines, and we expect the remaining -gap to be difficult to close.""" - -_BOOLQ_CITATION = """\ -@inproceedings{clark2019boolq, - title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions}, - author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina}, - booktitle={NAACL}, - year={2019} -}""" - -_CB_CITATION = """\ -@article{de_marneffe_simons_tonhauser_2019, - title={The CommitmentBank: Investigating projection in naturally occurring discourse}, - journal={Proceedings of Sinn und Bedeutung 23}, - author={De Marneffe, Marie-Catherine and Simons, Mandy and Tonhauser, Judith}, - year={2019} -}""" - -_COPA_CITATION = """\ -@inproceedings{roemmele2011choice, - title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning}, - author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S}, - booktitle={2011 AAAI Spring Symposium Series}, - year={2011} -}""" - -_RTE_CITATION = """\ -@inproceedings{dagan2005pascal, - title={The PASCAL recognising textual entailment challenge}, - author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo}, - booktitle={Machine Learning Challenges Workshop}, - pages={177--190}, - year={2005}, - organization={Springer} -} -@inproceedings{bar2006second, - title={The second pascal recognising textual entailment challenge}, - author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan}, - booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment}, - volume={6}, - number={1}, - pages={6--4}, - year={2006}, - organization={Venice} -} -@inproceedings{giampiccolo2007third, - title={The third pascal recognizing textual entailment challenge}, - author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill}, - booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing}, - pages={1--9}, - year={2007}, - organization={Association for Computational Linguistics} -} -@inproceedings{bentivogli2009fifth, - title={The Fifth PASCAL Recognizing Textual Entailment Challenge.}, - author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo}, - booktitle={TAC}, - year={2009} -}""" - -# TODO: Add link to the official dataset URLs here -# The HuggingFace Datasets library doesn't host the datasets but only points to the original files. -# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method) -_URL = "https://huggingface.co/datasets/KBLab/overlim/resolve/main/data/" -_TASKS = { - "boolq": "boolq.tar.gz", - "cb": "cb.tar.gz", - "copa": "copa.tar.gz", - "mnli": "mnli.tar.gz", - "mrpc": "mrpc.tar.gz", - "qnli": "qnli.tar.gz", - "qqp": "qqp.tar.gz", - "rte": "rte.tar.gz", - "sst": "sst.tar.gz", - "stsb": "stsb.tar.gz", - "wnli": "wnli.tar.gz" -} -_LANGUAGES = {"sv", "da", "nb"} - - -class OverLimConfig(datasets.BuilderConfig): - """BuilderConfig for OverLim.""" - def __init__(self, name, description, features, citation, language, label_classes=None, **kwargs): - """BuilderConfig for OverLim.
- """ - self.full_name = name + "_" + language - super(OverLimConfig, - self).__init__(name=self.full_name, version=datasets.Version("1.0.2"), **kwargs) - self.features = features + ["label"] - self.label_classes = label_classes - self.citation = citation - self.description = description - self.task_name = name - self.language = language - self.data_url = _TASKS[name] - - - -class OverLim(datasets.GeneratorBasedBuilder): - """OverLim""" - - BUILDER_CONFIGS = [[ - OverLimConfig( - name="boolq", - description=_BOOLQ_DESCRIPTION, - features=["question", "passage"], - label_classes=["False", "True"], - citation=_BOOLQ_CITATION, - language=lang, - ), - OverLimConfig( - name="cb", - description=_CB_DESCRIPTION, - features=["premise", "hypothesis"], - label_classes=["entailment", "contradiction", "neutral"], - citation=_CB_CITATION, - language=lang, - ), - OverLimConfig( - name="copa", - description=_COPA_DESCRIPTION, - label_classes=["choice1", "choice2"], - # Note that question will only be the X in the statement "What's - # the X for this?". - features=["premise", "choice1", "choice2", "question"], - citation=_COPA_CITATION, - language=lang, - ), - OverLimConfig( - name="rte", - description=_RTE_DESCRIPTION, - features=["premise", "hypothesis"], - label_classes=["entailment", "not_entailment"], - citation=_RTE_CITATION, - language=lang, - ), - OverLimConfig( - name="qqp", - description=_QQP_DESCRIPTION, - features=["text_a", "text_b"], - label_classes=["not_duplicate", "duplicate"], - citation=_QQP_CITATION, - language=lang, - ), - OverLimConfig( - name="qnli", - description=_QNLI_DESCRIPTION, - features=["premise", "hypothesis"], - label_classes=["entailment", "not_entailment"], - citation=_QNLI_CITATION, - language=lang, - ), - OverLimConfig( - name="stsb", - description=_STSB_DESCRIPTION, - features=["text_a", "text_b"], - citation=_STSB_CITATION, - language=lang, - ), - OverLimConfig( - name="mnli", - description=_MNLI_DESCRIPTION, - features=["premise", "hypothesis"], - label_classes=["entailment", "neutral", "contradiction"], - citation=_MNLI_CITATION, - language=lang, - ), - OverLimConfig( - name="mrpc", - description=_MRPC_DESCRIPTION, - features=["text_a", "text_b"], - label_classes=["not_equivalent", "equivalent"], - citation=_MRPC_CITATION, - language=lang, - ), - OverLimConfig( - name="wnli", - description=_WNLI_DESCRIPTION, - features=["premise", "hypothesis"], - label_classes=["not_entailment", "entailment"], - citation=_WNLI_CITATION, - language=lang, - ), - OverLimConfig( - name="sst", - description=_SST_DESCRIPTION, - features=["text"], - label_classes=["negative", "positive"], - citation=_SST_CITATION, - language=lang, - ) - - ] for lang in _LANGUAGES] - BUILDER_CONFIGS = [element for inner in BUILDER_CONFIGS for element in inner] - - def _info(self): - features = {feature: datasets.Value("string") for feature in self.config.features if feature != "label"} - if self.config.label_classes: - #if self.config.task_name in ["cb", "mnli", "qnli", "rte"]: - # features["label"] = datasets.Value("string") - #else: - features["label"] = datasets.features.ClassLabel(names=self.config.label_classes) - else: - features["label"] = datasets.Value("float32") - features["idx"] = datasets.Value("int32") - - return datasets.DatasetInfo( - description=_GLUE_DESCRIPTION + self.config.description, - features=datasets.Features(features), - homepage=_HOMEPAGE, - citation=self.config.citation + "\n" + _SUPER_GLUE_CITATION, - ) - - def _split_generators(self, dl_manager): - dl_dir = 
dl_manager.download_and_extract(os.path.join(_URL, self.config.language, self.config.data_url)) - # dl_dir = dl_manager.iter_archive(os.path.join(_URL, self.config.language, self.config.data_url)) - dl_dir = os.path.join(dl_dir, self.config.task_name) - return [ - datasets.SplitGenerator( - name=datasets.Split.TRAIN, - gen_kwargs={ - "data_file": os.path.join(dl_dir, "train.jsonl"), - }, - ), - datasets.SplitGenerator( - name=datasets.Split.VALIDATION, - gen_kwargs={ - "data_file": os.path.join(dl_dir, "val.jsonl"), - }, - ), - datasets.SplitGenerator( - name=datasets.Split.TEST, - gen_kwargs={ - "data_file": os.path.join(dl_dir, "test.jsonl"), - }, - ), - ] - - def _generate_examples(self, data_file): - with open(data_file, encoding="utf-8") as f: - for line in f: - row = json.loads(line) - example = {feature: row[feature] for feature in self.config.features} - example["idx"] = row["idx"] - - # Compare against the task name: self.config.name carries the - # language suffix (e.g. "copa_sv"), so it never equals "copa". - if self.config.task_name == "copa": - example["label"] = "choice2" if row["label"] else "choice1" - else: - example["label"] = _cast_label(row["label"]) - yield example["idx"], example - - -def _cast_label(label): - """Casts the label to float, then int, falling back to the raw value.""" - try: - label = float(label) - return label - except ValueError: - pass - try: - label = int(label) - return label - except ValueError: - pass - # try: - # label = int(bool(label)) - # return label - # except ValueError: - # pass - return label - - - # if isinstance(label, str): - # return label - # elif isinstance(label, bool): - # return "True" if label else "False" - # # return label - # elif isinstance(label, int): - # assert label in (0, 1) - # return label - # elif isinstance(label, float): - # return label - # # return str(label) - # else: - # raise ValueError("Invalid label format.") \ No newline at end of file diff --git a/qnli_da/overlim-test.parquet b/qnli_da/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..389580d8f8264fc47fd3816ab95cbde60bedd58e --- /dev/null +++ b/qnli_da/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee60391ba3504df52a1e9fd163221ab5bfdb034e6a1ed0eca79c3d6fb93c5c57 +size 886368 diff --git a/qnli_da/overlim-train.parquet b/qnli_da/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..ced2cfb0d5f42e97dd1de59b443531c0cb689d0d --- /dev/null +++ b/qnli_da/overlim-train.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8c2d9372628bcb37b3fe705ae04f8c19a7f0a3b9de55c3d1cf44948c364f7a8 +size 16871621 diff --git a/qnli_da/overlim-validation.parquet b/qnli_da/overlim-validation.parquet new file mode 100644 index 0000000000000000000000000000000000000000..8e21e6ac18f089c4020a9737adcd58a46fbeb79f --- /dev/null +++ b/qnli_da/overlim-validation.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06280b80c2019f2b9b14bfe14c4141ec2c5142f14df5431ca891219efe7c6fb5 +size 898394 diff --git a/qnli_nb/overlim-test.parquet b/qnli_nb/overlim-test.parquet new file mode 100644 index 0000000000000000000000000000000000000000..56ab101284be09d7bb28534b9ce6e2353d485e35 --- /dev/null +++ b/qnli_nb/overlim-test.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5e4fc45a20356334f12b028820123c2f76c9202c58124333754ba4bbe6b1253 +size 845977 diff --git a/qnli_nb/overlim-train.parquet b/qnli_nb/overlim-train.parquet new file mode 100644 index 0000000000000000000000000000000000000000..5fd87a3acec6aa5d9f498330f7b85c121069d770 ---
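With `overlim.py` removed, consumers load the parquet shards instead of running the script above; the config names still compose as `<task>_<language>`, exactly as `OverLimConfig` built them. A sketch of the equivalent end-user call, assuming a recent `datasets` release that resolves this repository's parquet layout:

```python
from datasets import load_dataset

# Sketch: the parquet-backed configs keep the <task>_<language> names
# produced by the deleted loading script (11 tasks x {sv, da, nb}).
ds = load_dataset("KBLab/overlim", "sst_sv")
print(ds)              # DatasetDict with train / validation / test splits
print(ds["train"][0])  # e.g. {'text': ..., 'label': ..., 'idx': ...}
```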
The remaining hunks add Git LFS pointer files for the pre-built parquet shards. Each task/language config gets its own directory containing `overlim-train.parquet`, `overlim-validation.parquet`, and `overlim-test.parquet`; the file sizes recorded in the pointers (in bytes) are:

| Config | train | validation | test |
| --- | ---: | ---: | ---: |
| qnli_da | 16871621 | 898394 | 886368 |
| qnli_nb | 16067401 | 853825 | 845977 |
| qnli_sv | 16949467 | 898241 | 889330 |
| qqp_da | 30633051 | 3832775 | 3827857 |
| qqp_nb | 29639358 | 3708041 | 3705670 |
| qqp_sv | 30838191 | 3861114 | 3853901 |
| rte_da | 538189 | 67928 | 69395 |
| rte_nb | 505254 | 62750 | 66717 |
| rte_sv | 543277 | 68041 | 70503 |
| sst_da | 3137742 | 41637 | 73563 |
| sst_nb | 3075361 | 41538 | 70257 |
| sst_sv | 3159037 | 41984 | 73902 |
| stsb_da | 390136 | 138853 | 152362 |
| stsb_nb | 379569 | 132378 | 147816 |
| stsb_sv | 394802 | 139260 | 153476 |
| wnli_da | 37216 | 10663 | 11077 |
| wnli_nb | 37721 | 10618 | 10852 |
| wnli_sv | 37367 | 10689 | 10921 |
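Since the loading script is being replaced by these pre-built shards, the data can also be read without the builder. A sketch using pandas, with the file layout taken from the paths above; the column order is an expectation, not something the diff guarantees:

```python
import pandas as pd

# Layout added in this commit: <task>_<lang>/overlim-<split>.parquet
df = pd.read_parquet("wnli_sv/overlim-train.parquet")

print(df.columns.tolist())  # expected: ["premise", "hypothesis", "label", "idx"]
print(len(df))
```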