Commit 36e6aef by iamahern ("Create README.md"), parent a83595a
---
license: mit
task_categories:
- text2text-generation
language:
- en
tags:
- text-to-sql
pretty_name: Spider-Syn
size_categories:
- 1K<n<10K
---

# Dataset Card for Spider-Syn

[Spider-Syn](https://github.com/ygan/Spider-Syn) is a human-curated variant of the [Spider](https://yale-lily.github.io/spider) text-to-SQL dataset.
It was created to test the robustness of text-to-SQL models to synonym substitution.

The source Git repository for Spider-Syn is located here: https://github.com/ygan/Spider-Syn

Details of the data perturbation methods and objectives are described in the ACL 2021 paper: [arXiv](https://arxiv.org/abs/2106.01065)
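Each example pairs an original Spider question with a synonym-substituted counterpart. A minimal sketch of loading such JSON records into pandas follows; the field names and sample values here are illustrative assumptions, not the dataset's documented schema:

```python
import io

import pandas as pd

# Toy records mimicking a JSON array of paired questions.
# Field names ("question", "syn_question", etc.) are assumptions for
# illustration; check the actual files for the real schema.
sample = """[
  {"db_id": "concert_singer",
   "question": "How many singers do we have?",
   "syn_question": "How many vocalists do we have?",
   "query": "SELECT count(*) FROM singer"},
  {"db_id": "pets_1",
   "question": "How many pets are owned?",
   "syn_question": "How many companion animals are owned?",
   "query": "SELECT count(*) FROM pets"}
]"""

# pandas reads a JSON array of objects directly into a DataFrame.
df = pd.read_json(io.StringIO(sample))
print(df[["db_id", "syn_question"]])
```

With the real files, `io.StringIO(sample)` would be replaced by a path to the downloaded JSON.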


## Paper Abstract

> Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism. In this work, we investigate the robustness of text-to-SQL models to synonym substitution. In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn are modified from Spider, by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. We observe that the accuracy dramatically drops by eliminating such explicit correspondence between NL questions and table schemas, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks. Finally, we present two categories of approaches to improve the model robustness. The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training. We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and the first category of approaches are more effective.
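The failure mode the abstract describes can be illustrated with a toy lexical schema linker, a deliberately simplified stand-in for the schema-linking mechanisms the paper studies (the schema words and questions below are made up for the sketch):

```python
# Toy schema vocabulary for a hypothetical concert database.
SCHEMA = {"singer", "concert", "stadium", "name", "age"}


def lexical_links(question: str) -> set:
    """Link question tokens to schema names by exact string match.

    This mimics the brittle lexical matching the paper critiques:
    it only links a token if it literally equals a schema word.
    """
    tokens = {t.strip("?.,!").lower() for t in question.split()}
    return tokens & SCHEMA


# Original question: the schema word "singer" matches directly.
print(lexical_links("What is the age of each singer?"))

# Synonym-substituted question: "vocalist" no longer matches "singer",
# so the linker silently loses the link to the singer table.
print(lexical_links("What is the age of each vocalist?"))
```

Because the synonym breaks the exact-match link, a model relying on this mechanism loses the question-to-schema correspondence, which is precisely the degradation Spider-Syn measures.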


## Citation Information
```
@inproceedings{gan-etal-2021-towards,
    title = "Towards Robustness of Text-to-{SQL} Models against Synonym Substitution",
    author = "Gan, Yujian  and
      Chen, Xinyun  and
      Huang, Qiuping  and
      Purver, Matthew  and
      Woodward, John R.  and
      Xie, Jinxia  and
      Huang, Pengsheng",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.195",
    doi = "10.18653/v1/2021.acl-long.195",
    pages = "2505--2515",
}
```