# kwang2049/TSDAE-askubuntu2nli_stsb

This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). It adapts knowledge from the NLI and STSb data to the specific domain AskUbuntu. The model was trained as follows:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.

The pooling method is CLS-pooling.
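
Concretely, CLS-pooling takes the final-layer hidden state of the `[CLS]` token (position 0) as the sentence embedding. A minimal illustration of the mechanics with plain `transformers`, shown here on bert-base-uncased rather than this model:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustration only: CLS-pooling keeps the hidden state at position 0 ([CLS]).
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
encoder = AutoModel.from_pretrained('bert-base-uncased')

batch = tokenizer(['This is the first sentence.'], return_tensors='pt')
with torch.no_grad():
    hidden_states = encoder(**batch).last_hidden_state  # (batch, seq_len, dim)
cls_embedding = hidden_states[:, 0]  # the CLS-pooled sentence embedding
```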

## Usage
The most convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models

dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls')  # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
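
The embeddings can then be compared directly, e.g. with cosine similarity. A short usage sketch, assuming a recent sentence-transformers release that provides `util.cos_sim`:

```python
from sentence_transformers import util

# Cosine similarity between the two embeddings computed above.
similarity = util.cos_sim(sentence_embeddings[0], sentence_embeddings[1])
print(float(similarity))
```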

## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
Then run the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on

dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls')  # Note this model uses CLS-pooling

@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
    return torch.Tensor(model.encode(sentences, show_progress_bar=False))

result = run_on(
    dataset,
    semb_fn=semb_fn,
    eval_type='test',
    data_eval_path='data-eval'
)
```

## Training
Please refer to [the TSDAE training page](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
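
For orientation, the unsupervised TSDAE step (step 2 above) looks roughly like the following. This is a minimal sketch based on the sentence-transformers TSDAE example; the corpus path and hyperparameters are placeholders, not the exact settings used for this model:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

# Build a fresh BERT encoder with CLS-pooling.
word_embedding_model = models.Transformer('bert-base-uncased')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls')
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Unlabeled in-domain sentences (e.g. from AskUbuntu); the path is a placeholder.
train_sentences = [line.strip() for line in open('askubuntu-sentences.txt')]

# DenoisingAutoEncoderDataset corrupts each sentence on the fly;
# DenoisingAutoEncoderLoss trains the encoder so a tied decoder can reconstruct it.
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True, drop_last=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path='bert-base-uncased', tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    weight_decay=0,
    scheduler='constantlr',
    optimizer_params={'lr': 3e-5},
    show_progress_bar=True,
)
```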

## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
    journal = "arXiv preprint arXiv:2104.06979",
    month = "4",
    year = "2021",
    url = "https://arxiv.org/abs/2104.06979",
}
```