Update README.md
README.md

---
language: en
tags:
- tapex
- table-question-answering
license: mit
---

# TAPEX (large-sized model)

TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).

## Model description

TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
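
To make the pre-training objective concrete, here is a minimal sketch (not the authors' code) of what a pre-training example looks like: the model reads a synthetic SQL query concatenated with a flattened table and must output the query's execution result. The serialization below mirrors TAPEX's "col : ... row 1 : ..." flattening, but the helper function and the toy table are illustrative assumptions:

```python
# Illustrative sketch of a TAPEX-style pre-training example.
# The "col : ... row 1 : ..." format mirrors TAPEX's table
# serialization; the helper and the toy table are made up here.

def flatten_table(header, rows):
    """Serialize a table into a flat string: header, then each row."""
    text = "col : " + " | ".join(header)
    for i, row in enumerate(rows, start=1):
        text += f" row {i} : " + " | ".join(str(cell) for cell in row)
    return text

header = ["city", "country", "population"]
rows = [["paris", "france", "2.1m"], ["berlin", "germany", "3.6m"]]

# Input: synthetic SQL query + flattened table (TAPEX is uncased).
source = "select city where country = germany " + flatten_table(header, rows)
# Target: the result a SQL engine would return for this query.
target = "berlin"
print(source, "=>", target)
```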

TAPEX is based on the BART architecture: a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.

## Intended Uses

⚠️ This model checkpoint is **ONLY** intended for fine-tuning on downstream tasks, and you **CANNOT** use it to simulate neural SQL execution, i.e., to employ TAPEX to execute a SQL query on a given table. The checkpoint that can neurally execute SQL queries is available [here](https://huggingface.co/microsoft/tapex-large-sql-execution).

> The separation into two checkpoints for these two use cases is due to a known issue in BART-large; we recommend readers see [this comment](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564) for more details.

### How to Fine-tune

Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
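
For a quick orientation (a minimal sketch, not a replacement for the script above): the checkpoint loads as a standard BART seq2seq model in 🤗 Transformers, and `TapexTokenizer` takes care of flattening the table. The tiny table, question, and answer below are made up for illustration, and the snippet assumes a transformers version that ships `TapexTokenizer` (4.17+) plus pandas:

```python
# Sketch of a single supervised fine-tuning step (illustrative data).
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large")

# Toy downstream example: a table, a question, and its gold answer.
table = pd.DataFrame({"city": ["paris", "berlin"], "population": ["2.1m", "3.6m"]})
question = "what is the population of berlin?"
answer = "3.6m"

# The tokenizer flattens the table and concatenates it with the question;
# called with only `answer=`, it encodes the target sequence instead.
encoding = tokenizer(table=table, query=question, return_tensors="pt")
labels = tokenizer(answer=answer, return_tensors="pt").input_ids

# One optimizer step would minimize this seq2seq loss.
loss = model(**encoding, labels=labels).loss
print(loss)
```

In practice you would wrap examples like this in a dataset and training loop; the linked research-project script is the reference recipe.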

### BibTeX entry and citation info

```bibtex
@inproceedings{
  liu2022tapex,
  title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
  author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=O50443AsCP}
}
```