---
language:
- en
tags:
- text2text-generation
widget:
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learned one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- Open-Orca/SlimOrca-Dedup
- GAIR/lima
- nomic-ai/gpt4all-j-prompt-generations
- HuggingFaceH4/ultrachat_200k
- ZenMoore/RoleBench
- WizardLM/WizardLM_evol_instruct_V2_196
- c-s-ale/alpaca-gpt4-data
- THUDM/AgentInstruct
license: apache-2.0
---
# Model Card for the test-version of instructionBERT for Bertology
![BERT illustration](./The_cinematic_puppet_Bert_from_sesame_street_carries_89f3c10a_273b.png)
A minimalistic instruction model built on an already well-analysed, pretrained encoder such as BERT.
This lets us research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
For this purpose we used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder).
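The following is a minimal sketch of that warm-starting step (assuming `bert-base-uncased` as both encoder and decoder; the authoritative setup is in the training repository):
```python
from transformers import BertTokenizer, EncoderDecoderModel

# warm-start encoder and decoder from the pretrained BERT checkpoint
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# the decoder needs explicit start/pad/end tokens before seq2seq training
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```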
## Run the model with a longer output
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# load the fine-tuned seq2seq model and corresponding tokenizer
model_name = "Bachstelze/instructionBERT"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# encode the instruction and generate up to 200 new tokens
input_text = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
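To look at the attention patterns mentioned above, the encoder attentions can be requested from a forward pass. This is a small sketch reusing `model`, `input_ids` and `output_ids` from the example above; the linked Colab notebook shows a fuller visualisation.
```python
# request attention weights for Bertology-style inspection
outputs = model(
    input_ids=input_ids,
    decoder_input_ids=output_ids,
    output_attentions=True,
)
# one tensor per encoder layer, shaped (batch, heads, seq_len, seq_len)
print(len(outputs.encoder_attentions), outputs.encoder_attentions[0].shape)
```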
## Training parameters
- base model: "bert-base-uncased"
- trained for 1 epoch
- batch size of 16
- 20000 warm-up steps
- learning rate of 0.0001
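Expressed as Hugging Face `Seq2SeqTrainingArguments`, these parameters roughly correspond to the following sketch (the output directory name is only illustrative; the real training script is in the linked repository):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="instructionBERT",   # illustrative path, not the real one
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=20000,
    learning_rate=1e-4,
    predict_with_generate=True,
)
```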
## Purpose of instructionBERT
InstructionBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications. |