GaoangLau committed

Commit 33e4ac6
Parent(s): 09b8bc4

feat: add stsbenchmark raw

Browse files
- stsbenchmark/LICENSE.txt +136 -0
- stsbenchmark/correlation.pl +119 -0
- stsbenchmark/readme.txt +174 -0
- stsbenchmark/sts-dev.csv +0 -0
- stsbenchmark/sts-test.csv +0 -0
- stsbenchmark/sts-train.csv +0 -0
stsbenchmark/LICENSE.txt
ADDED
@@ -0,0 +1,136 @@
Notes on datasets and licenses
------------------------------

If using this data in your research, please cite the following paper
and the URL of the STS website: http://ixa2.si.ehu.eus/stswiki:

Eneko Agirre, Daniel Cer, Mona Diab, Iñigo Lopez-Gazpio, Lucia
Specia. SemEval-2017 Task 1: Semantic Textual Similarity
Multilingual and Crosslingual Focused Evaluation. Proceedings of
SemEval 2017.

The scores are released under a "Creative Commons Attribution - Share
Alike 4.0 International License"
http://creativecommons.org/licenses/by-sa/4.0/

The text of each dataset has a license of its own, as follows:

- MSR-Paraphrase, Microsoft Research Paraphrase Corpus. In order to use
  MSRpar, researchers need to agree to the license terms from
  Microsoft Research:
  http://research.microsoft.com/en-us/downloads/607d14d9-20cd-47e3-85bc-a2f65cd28042/

- headlines: Mined from several news sources by European Media Monitor
  (Best et al. 2005) using the RSS feed. European Media Monitor (EMM)
  Real Time News Clusters are the top news stories for the last 4
  hours, updated every ten minutes. The article clustering is fully
  automatic. The selection and placement of stories are determined
  automatically by a computer program. This site is a joint project of
  DG-JRC and DG-COMM. The information on this site is subject to a
  disclaimer (see http://europa.eu/geninfo/legal_notices_en.htm).
  Please acknowledge EMM when (re)using this material.
  http://emm.newsbrief.eu/rss?type=rtn&language=en&duplicates=false

- deft-news: A subset of news article data in the DEFT project.

- MSR-Video, Microsoft Research Video Description Corpus. In order to
  use MSRvid, researchers need to agree to the license terms from
  Microsoft Research:
  http://research.microsoft.com/en-us/downloads/38cf15fd-b8df-477e-a4e4-a4680caa75af/

- image: The Image Descriptions data set is a subset of the PASCAL
  VOC-2008 data set (Rashtchian et al., 2010). The PASCAL VOC-2008
  data set consists of 1,000 images and has been used by a number of
  image description systems. The image captions of the data set are
  released under a Creative Commons Attribution-ShareAlike license;
  the descriptions themselves are free.

- track5.en-en: This text is a subset of the Stanford Natural Language
  Inference (SNLI) corpus, by The Stanford NLP Group, licensed under a
  Creative Commons Attribution-ShareAlike 4.0 International License.
  Based on a work at http://shannon.cs.illinois.edu/DenotationGraph/.
  https://creativecommons.org/licenses/by-sa/4.0/

- answer-answer: user content from Stack Exchange. Check the license
  below in ======ANSWER-ANSWER======

- answers-forums: user content from Stack Exchange. Check the license
  below in ======ANSWERS-FORUMS======


======ANSWER-ANSWER======

Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
http://creativecommons.org/licenses/by-sa/3.0/

Attribution Requirements:

"* Visually display or otherwise indicate the source of the content
as coming from the Stack Exchange Network. This requirement is
satisfied with a discreet text blurb, or some other unobtrusive but
clear visual indication.

* Ensure that any Internet use of the content includes a hyperlink
directly to the original question on the source site on the Network
(e.g., http://stackoverflow.com/questions/12345)

* Visually display or otherwise clearly indicate the author names for
every question and answer used.

* Ensure that any Internet use of the content includes a hyperlink for
each author name directly back to his or her user profile page on the
source site on the Network (e.g.,
http://stackoverflow.com/users/12345/username), directly to the Stack
Exchange domain, in standard HTML (i.e. not through a Tinyurl or other
such indirect hyperlink, form of obfuscation or redirection), without
any “nofollow” command or any other such means of avoiding detection by
search engines, and visible even with JavaScript disabled."

(https://archive.org/details/stackexchange)


======ANSWERS-FORUMS======

Stack Exchange Inc. generously made the data used to construct the STS
2015 answer-answer statement pairs available under a Creative Commons
Attribution-ShareAlike (cc-by-sa) 3.0 license.

The license is reproduced below from:
https://archive.org/details/stackexchange

The STS.input.answers-forums.txt file should be redistributed with
this LICENSE text and the accompanying files in
LICENSE.answers-forums.zip. The tsv files in the zip file contain the
additional information that's needed to comply with the license.

--

All user content contributed to the Stack Exchange network is cc-by-sa
3.0 licensed, intended to be shared and remixed. We even provide all
our data as a convenient data dump.

http://creativecommons.org/licenses/by-sa/3.0/

But our cc-by-sa 3.0 licensing, while intentionally permissive, does
*require attribution*:

"Attribution — You must attribute the work in the manner specified by
the author or licensor (but not in any way that suggests that they
endorse you or your use of the work)."

Specifically, the attribution requirements are as follows:

1. Visually display or otherwise indicate the source of the content as
   coming from the Stack Exchange Network. This requirement is
   satisfied with a discreet text blurb, or some other unobtrusive but
   clear visual indication.

2. Ensure that any Internet use of the content includes a hyperlink
   directly to the original question on the source site on the Network
   (e.g., http://stackoverflow.com/questions/12345)

3. Visually display or otherwise clearly indicate the author names for
   every question and answer so used.

4. Ensure that any Internet use of the content includes a hyperlink
   for each author name directly back to his or her user profile page
   on the source site on the Network (e.g.,
   http://stackoverflow.com/users/12345/username), directly to the
   Stack Exchange domain, in standard HTML (i.e. not through a Tinyurl
   or other such indirect hyperlink, form of obfuscation or
   redirection), without any “nofollow” command or any other such
   means of avoiding detection by search engines, and visible even
   with JavaScript disabled.

Our goal is to maintain the spirit of fair attribution. That means
attribution to the website, and more importantly, to the individuals
who so generously contributed their time to create that content in the
first place!

For more information, see the Stack Exchange Terms of Service:
http://stackexchange.com/legal/terms-of-service
stsbenchmark/correlation.pl
ADDED
@@ -0,0 +1,119 @@
#!/usr/bin/perl

=head1 $0

=head1 SYNOPSIS

 correlation.pl gs system

 Outputs the Pearson correlation.

 Example:

   $ ./correlation.pl gs sys

 Author: Eneko Agirre, Aitor Gonzalez-Agirre

 Dec. 31, 2012

=cut

use Getopt::Long qw(:config auto_help);
use Pod::Usage;
use warnings;
use strict;
use Math::Complex;

pod2usage if $#ARGV != 1 ;

if (-e $ARGV[1]) {
    my $continue = 0;
    my %filtered;
    my $do = 0;
    my %a ;
    my %b ;
    my %c ;

    open(I,$ARGV[0]) or die $! ;
    my $filter = 0;
    my $i = 0;
    while (<I>) {
        chomp ;
        next if /^\#/ ;
        if ($_ eq "") {
            $filter++;
            $filtered{$filter} = 1;
        }
        else {
            my @fields = (split(/\t/,$_)) ;
            my $score = $fields[4] ;
            warn "wrong range of score in gold standard: $score\n" if ($score > 5) or ($score < 0) ;
            $a{$i++} = $score ;
            $filter++;
        }
    }
    close(I) ;

    my $j = 0 ;

    open(I,$ARGV[1]) or die $! ;
    my $line = 1;
    while (<I>) {
        if(!defined($filtered{$line})) {
            chomp ;
            next if /^\#/ ;
            my @fields = (split(/\s+/,$_)) ;
            my ($score) = @fields ;
            $b{$j} = $score ;
            $c{$j} = 100;
            $continue = 1;
            $j++;
        }
        $line++;
    }
    close(I) ;

    if ($continue == 1) {
        my $sumw=0;

        my $sumwy=0;
        for(my $y = 0; $y < $i; $y++) {
            $sumwy = $sumwy + (100 * $a{$y});
            $sumw = $sumw + 100;
        }
        my $meanyw = $sumwy/$sumw;

        my $sumwx=0;
        for(my $x = 0; $x < $i; $x++) {
            $sumwx = $sumwx + ($c{$x} * $b{$x});
        }
        my $meanxw = $sumwx/$sumw;

        my $sumwxy = 0;
        for(my $x = 0; $x < $i; $x++) {
            $sumwxy = $sumwxy + $c{$x}*($b{$x} - $meanxw)*($a{$x} - $meanyw);
        }
        my $covxyw = $sumwxy/$sumw;

        my $sumwxx = 0;
        for(my $x = 0; $x < $i; $x++) {
            $sumwxx = $sumwxx + $c{$x}*($b{$x} - $meanxw)*($b{$x} - $meanxw);
        }
        my $covxxw = $sumwxx/$sumw;

        my $sumwyy = 0;
        for(my $x = 0; $x < $i; $x++) {
            $sumwyy = $sumwyy + $c{$x}*($a{$x} - $meanyw)*($a{$x} - $meanyw);
        }
        my $covyyw = $sumwyy/$sumw;

        my $corrxyw = $covxyw/sqrt($covxxw*$covyyw);

        printf "Pearson: %.5f\n", $corrxyw ;
    }
}
else {
    printf "Pearson: nan\n";
    exit(1);
}
stsbenchmark/readme.txt
ADDED
@@ -0,0 +1,174 @@
STS Benchmark: Main English dataset

Semantic Textual Similarity 2012-2017 Dataset

http://ixa2.si.ehu.eus/stswiki


STS Benchmark comprises a selection of the English datasets used in
the STS tasks organized by us in the context of SemEval between 2012
and 2017.

In order to provide a standard benchmark to compare among systems, we
organized it into train, development and test sets. The development
part can be used to develop and tune the hyperparameters of the
systems, and the test part should only be used once, for the final
system.

The benchmark comprises 8628 sentence pairs. This is the breakdown
according to genres and train-dev-test splits:

          train   dev  test total
 -------------------------------
 news      3299   500   500  4299
 caption   2000   625   525  3250
 forum      450   375   254  1079
 -------------------------------
 total     5749  1500  1379  8628

For reference, this is the breakdown according to the original names
and task years of the datasets:

 genre     file           years   train  dev  test
 -------------------------------------------------
 news      MSRpar         2012     1000  250   250
 news      headlines      2013-16  1999  250   250
 news      deft-news      2014      300    0     0
 captions  MSRvid         2012     1000  250   250
 captions  images         2014-15  1000  250   250
 captions  track5.en-en   2017        0  125   125
 forum     deft-forum     2014      450    0     0
 forum     answers-forums 2015        0  375     0
 forum     answer-answer  2016        0    0   254

In addition to the standard benchmark, we also include other datasets
(see readme.txt in the "companion" directory).


Introduction
------------

Given two sentences of text, s1 and s2, the systems need to compute
how similar s1 and s2 are, returning a similarity score between 0 and
5. The dataset comprises naturally occurring pairs of sentences drawn
from several domains and genres, annotated by crowdsourcing. See the
papers by Agirre et al. (2012; 2013; 2014; 2015; 2016; 2017).

Format
------

Each file is encoded in utf-8 (a superset of ASCII) and has the
following tab-separated fields:

 genre filename year score sentence1 sentence2

Optionally, there might be some license-related fields after
sentence2.
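The tab-separated layout above can be read with a short Python helper. This is a minimal illustrative sketch, not part of the official distribution (the `load_sts` name is my own); it keeps the six documented fields and ignores any trailing license-related ones:

```python
def load_sts(path):
    """Parse an STS Benchmark file (tab-separated, utf-8).

    Keeps the six documented fields; any optional license-related
    fields after sentence2 are ignored.
    """
    pairs = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            genre, filename, year, score, s1, s2 = fields[:6]
            pairs.append({
                "genre": genre,
                "filename": filename,
                "year": year,
                "score": float(score),  # gold similarity in [0, 5]
                "sentence1": s1,
                "sentence2": s2,
            })
    return pairs
```

Note that despite the .csv extension the files are tab-separated, so splitting on tabs directly avoids surprises that a generic CSV reader's default quoting rules can cause.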
NOTE: Given that some sentence pairs have been reused here and
elsewhere, systems should NOT use the following datasets to develop or
train their systems (see below for more details on the datasets):

- Any of the datasets in SemEval STS competitions, including SemEval
  2014 task 1 (also known as SICK).
- The test part of MSR-Paraphrase (development and train are fine).
- The text of the videos in MSR-Video.


Evaluation script
-----------------

The official evaluation metric is the Pearson correlation coefficient.
Given an output file comprising the system scores (one per line) in a
file called sys.txt, you can use the evaluation script as follows:

 $ perl correlation.pl sts-dev.txt sys.txt
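In correlation.pl every pair receives the same weight (100), so its weighted formula reduces to the standard Pearson coefficient. As a sanity check outside Perl, the same quantity can be sketched in Python; this is an illustrative re-implementation, not the official scorer:

```python
import math

def pearson(gold, system):
    """Standard (unweighted) Pearson correlation of two score lists."""
    n = len(gold)
    assert n == len(system) and n > 0, "need equal-length, non-empty lists"
    mean_g = sum(gold) / n
    mean_s = sum(system) / n
    # Population covariance and variances; the 1/n factors cancel in
    # the final ratio, matching the Perl script's computation.
    cov = sum((g - mean_g) * (s - mean_s)
              for g, s in zip(gold, system)) / n
    var_g = sum((g - mean_g) ** 2 for g in gold) / n
    var_s = sum((s - mean_s) ** 2 for s in system) / n
    return cov / math.sqrt(var_g * var_s)
```

For example, scores that are a positive linear transform of the gold scores yield a correlation of 1.0, which is why Pearson is insensitive to the scale of system outputs.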
Other
-----

Please check http://ixa2.si.ehu.eus/stswiki

We recommend that interested researchers join the (low traffic)
mailing list:

 http://groups.google.com/group/STS-semeval

Notes on datasets and licenses
------------------------------

If using this data in your research, please cite (Agirre et al. 2017)
and the STS website: http://ixa2.si.ehu.eus/stswiki.

Please see LICENSE.txt


Organizers of tasks by year
---------------------------

2012 Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre

2013 Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre,
     Weiwei Guo

2014 Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
     Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau,
     Janyce Wiebe

2015 Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
     Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse
     Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, Janyce
     Wiebe

2016 Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor
     Gonzalez-Agirre, Rada Mihalcea, German Rigau, Janyce Wiebe

2017 Eneko Agirre, Daniel Cer, Mona Diab, Iñigo Lopez-Gazpio, Lucia
     Specia


References
----------

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre.
SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity.
Proceedings of SemEval 2012.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei
Guo. *SEM 2013 Shared Task: Semantic Textual Similarity. Proceedings
of *SEM 2013.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, Janyce
Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity.
Proceedings of SemEval 2014.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse
Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, Janyce Wiebe.
SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and
Pilot on Interpretability. Proceedings of SemEval 2015.

Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor
Gonzalez-Agirre, Rada Mihalcea, German Rigau, Janyce Wiebe.
SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and
Cross-Lingual Evaluation. Proceedings of SemEval 2016.

Eneko Agirre, Daniel Cer, Mona Diab, Iñigo Lopez-Gazpio, Lucia Specia.
SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and
Crosslingual Focused Evaluation. Proceedings of SemEval 2017.

Clive Best, Erik van der Goot, Ken Blackler, Teofilo Garcia, and David
Horby. 2005. Europe Media Monitor - System Description. In EUR Report
22173-En, Ispra, Italy.

Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier.
Collecting Image Annotations Using Amazon's Mechanical Turk. In
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and
Language Data with Amazon's Mechanical Turk.
stsbenchmark/sts-dev.csv
ADDED
The diff for this file is too large to render. See raw diff.

stsbenchmark/sts-test.csv
ADDED
The diff for this file is too large to render. See raw diff.

stsbenchmark/sts-train.csv
ADDED
The diff for this file is too large to render. See raw diff.