---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- nroggendorff/eap
language:
- en
license: mit
tags:
- trl
- sft
- art
- code
- adam
- mistral
model-index:
- name: eap
results: []
pipeline_tag: text-generation
---
# Edgar Allan Poe LLM
EAP is a language model fine-tuned on the [EAP dataset](https://huggingface.co/datasets/nroggendorff/eap) with supervised fine-tuning (SFT) using the [TRL](https://github.com/huggingface/trl) (Transformer Reinforcement Learning) library. It is based on [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
## Features
- Fine-tuned with SFT via the TRL library (see the training sketch below)
- Supports the English language
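## Training
The rough shape of an SFT run with TRL's `SFTTrainer` is sketched below. The actual hyperparameters and trainer configuration used for this model are not published, so the values here are illustrative assumptions, not the author's setup:
```python
# Minimal SFT sketch with the TRL library; hyperparameters are
# illustrative assumptions, not the settings used to train this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# The EAP dataset; a "text" column is assumed here
dataset = load_dataset("nroggendorff/eap", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="mistral-eap",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
)
trainer.train()
```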
## Usage
To use the model, load it with the Hugging Face Transformers library (the 4-bit quantized load below requires `bitsandbytes` and a CUDA GPU):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the model to 4-bit NF4 at load time to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "nroggendorff/mistral-eap"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Mistral-Instruct models expect the [INST] ... [/INST] prompt format
prompt = "[INST] Write a poem about tomatoes in the style of Poe. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
```
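The prompt can also be built with the tokenizer's chat template instead of writing the `[INST]` tags by hand. A sketch, assuming the tokenizer ships Mistral's default instruct template; the sampling parameters are illustrative:
```python
# Build the [INST] ... [/INST] prompt from a chat message list
messages = [{"role": "user", "content": "Write a poem about tomatoes in the style of Poe."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Sample rather than greedy-decode for more varied verse
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```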
## License
This project is licensed under the MIT License.