Bachstelze committed
Commit dd41359
1 Parent(s): d995124
Update README.md
README.md CHANGED
@@ -45,7 +45,7 @@ alt="instruction BERT drawing" width="600"/>
 A minimalistic instruction model with an already well-analysed and pretrained encoder like BERT.
 So we can research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
 
-The
+The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
 We used the Huggingface API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
 
 ## Run the model with a longer output
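
The warm-starting referenced above amounts to tying two pretrained BERT checkpoints together as encoder and decoder via the Huggingface `EncoderDecoderModel` API. Below is a minimal sketch of that setup; the `bert-base-uncased` checkpoint and the generation settings are illustrative assumptions, not necessarily instructionBERT's exact configuration.

```python
# Minimal sketch of warm-starting a BERT-to-BERT encoder-decoder with the
# Huggingface EncoderDecoderModel API. The checkpoint name and generation
# settings are assumptions for illustration, not instructionBERT's exact setup.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Initialise both encoder and decoder from the same pretrained BERT checkpoint;
# the decoder additionally receives cross-attention layers and a causal LM head.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# BERT has no dedicated BOS/EOS tokens, so reuse [CLS]/[SEP] for generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size

# Quick smoke test: encode an instruction and generate a (still untrained) reply.
inputs = tokenizer("Explain what Bertology studies.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Setting the decoder-start, EOS, and pad token ids on the shared config is what makes `generate` usable directly after warm-starting, before any instruction fine-tuning.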