---
language:
- en
license: mit
pipeline_tag: text-generation
model-index:
- name: chef-gpt-en
results: []
widget:
- text: 'ingredients>> salmon, lemon; recipe>>'
---
# chef-gpt-en
A GPT-2 model fine-tuned for recipe generation, trained on the [Food.com Recipes and Interactions](https://www.kaggle.com/datasets/shuyangli94/food-com-recipes-and-user-interactions/data) dataset.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "auhide/chef-gpt-en"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
chef_gpt = AutoModelForCausalLM.from_pretrained(MODEL_ID)

ingredients = ", ".join([
    "spaghetti",
    "tomatoes",
    "basil",
    "salt",
    "chicken",
])
prompt = f"ingredients>> {ingredients}; recipe>>"

# Tokenize the prompt and generate a recipe continuation.
tokens = tokenizer(prompt, return_tensors="pt")
recipe_ids = chef_gpt.generate(**tokens, max_length=124)

# Decode the generated token IDs back into text.
print(tokenizer.decode(recipe_ids[0], skip_special_tokens=True))
```
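The `ingredients>> …; recipe>>` prompt format is simple enough to wrap in small helpers. Here is a minimal sketch; the helper names are illustrative and not part of the model repository:

```python
def build_prompt(ingredients):
    """Build a chef-gpt-en prompt from a list of ingredient names."""
    return f"ingredients>> {', '.join(ingredients)}; recipe>>"


def extract_recipe(generated_text):
    """Strip the echoed prompt, keeping only the text after 'recipe>>'."""
    _, _, recipe = generated_text.partition("recipe>>")
    return recipe.strip()


prompt = build_prompt(["spaghetti", "tomatoes", "basil"])
print(prompt)  # ingredients>> spaghetti, tomatoes, basil; recipe>>
```

Since the model echoes the prompt in its output, `extract_recipe` is handy for keeping only the generated recipe text.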
Here is a sample result for the prompt above:
```
ingredients>> spaghetti, tomatoes, basil, salt, chicken; recipe>>Bring a large pot of water to a boil in a medium heat; add enough water to cover the bottom of the pot. Squeeze cooked pasta out of the water,
```