Bachstelze committed 770c8ab (parent: ad0ceb9): Update README.md

README.md:
---
language:
- en

tags:
- text2text-generation

widget:
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
  example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
  example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
  example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
  example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
  example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
  example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
  example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learned one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
  example_title: "Premise and hypothesis"

datasets:
- Open-Orca/SlimOrca-Dedup
- GAIR/lima
- nomic-ai/gpt4all-j-prompt-generations
- HuggingFaceH4/ultrachat_200k
- ZenMoore/RoleBench
- WizardLM/WizardLM_evol_instruct_V2_196
- c-s-ale/alpaca-gpt4-data
- THUDM/AgentInstruct

license: apache-2.0
---

# Model Card for the test-version of instructionBERT for Bertology

<img src="https://cdn-lfs-us-1.huggingface.co/repos/af/f0/aff0dca78d45453b348b539097bf576b294ce2fb0d535457e710a8d8dbe30a25/b8575c4fcac97f746ed06d2bde14bf62daf91cf3b33992dfbc8424017f2bf184?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27The_cinematic_puppet_Bert_from_sesame_street_carries_89f3c10a_273b.png%3B+filename%3D%22The_cinematic_puppet_Bert_from_sesame_street_carries_89f3c10a_273b.png%22%3B&response-content-type=image%2Fpng&Expires=1702654270&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwMjY1NDI3MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2FmL2YwL2FmZjBkY2E3OGQ0NTQ1M2IzNDhiNTM5MDk3YmY1NzZiMjk0Y2UyZmIwZDUzNTQ1N2U3MTBhOGQ4ZGJlMzBhMjUvYjg1NzVjNGZjYWM5N2Y3NDZlZDA2ZDJiZGUxNGJmNjJkYWY5MWNmM2IzMzk5MmRmYmM4NDI0MDE3ZjJiZjE4ND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=Cq74lOcJRv-w1JieDOg1uYIHbekEe2MccwtxQyRFb08%7ENvQHAVqBAqmjAz2XxIajDmtklq-vh38U75%7ElT9Y5OzYRqJ4JwBv73vLMM8zbKELafPPOGWVfEcAh8KFMW5DKLNuqzxBMvInMKK4ylJ6wdT%7EXHBZijUGzrNC7j1R3pgdiG1uh-ndQ7%7EuL-Vw3AU213qC5YUq%7E8IzD8h0cErf-aQP96WtK03Z-50yZmtwLc6L-2FTO95GT5AUKf6BPbuNwkgMW0zzG4oYjE5raGRwrMWKIbTW2nWQK-2oHm9Ojv5TNAo%7Elc75p3AL0xIKC6yUGIxT8L82DUUWaYIF9IoJnwQ__&Key-Pair-Id=KCD77M1F0VK2B" alt="instruction BERT drawing" width="600"/>

A minimalistic instruction model built on an already well-analysed, pretrained encoder: BERT.
This lets us study [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).

The training code is released in the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
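
In outline, such a warm start ties two pretrained BERT checkpoints together as encoder and decoder; a minimal sketch following the linked blog post (not the exact training script, which lives in the repository above):

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Warm-start a seq2seq model from two pretrained BERT checkpoints
# (here, encoder and decoder both start from the same base model).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# BERT has no dedicated decoder-start or EOS tokens, so reuse CLS/SEP.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```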

## Run the model with a longer output

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# load the fine-tuned seq2seq model and the corresponding tokenizer
model_name = "Bachstelze/instructionBERT"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# tokenize the instruction and generate up to 200 new tokens
input_text = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
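
Since the model exists for Bertology, the attention weights can be pulled from the same forward pass for inspection; a minimal sketch reusing `model`, `input_ids` and `output_ids` from the snippet above:

```python
import torch

# Request attention weights for analysis or visualisation
# (e.g. with BertViz, in the spirit of the linked Colab notebook).
with torch.no_grad():
    outputs = model(
        input_ids=input_ids,
        decoder_input_ids=output_ids,
        output_attentions=True,
    )

# Tuples with one tensor per layer, each of shape
# (batch, num_heads, query_len, key_len):
encoder_self_attentions = outputs.encoder_attentions
cross_attentions = outputs.cross_attentions
```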

## Training parameters

- base model: "bert-base-uncased"
- trained for 1 epoch
- batch size of 16
- 20,000 warm-up steps
- learning rate of 0.0001

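Expressed as Hugging Face `Seq2SeqTrainingArguments`, these hyperparameters would look roughly as follows; a hypothetical sketch (the `output_dir` name is illustrative, and the authoritative configuration is in the GitLab repository):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical sketch of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="instructionBERT-checkpoints",  # illustrative name
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=20000,
    learning_rate=1e-4,
)
```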

## Purpose of instructionBERT

InstructionBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.