---
license: other
language:
- en
pipeline_tag: text-generation
datasets:
- teknium/openhermes
---
# Model Card for Phi-Hermes-1.3B
Phi-1.5 fine-tuned on the Hermes dataset
## Model Details
### Model Sources
This model was trained on the OpenHermes dataset, made by me, which consists of over 240,000 mostly GPT-4-generated synthetic data points.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/KFV00TWHS6E0z_l82QDxV.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8M5xBh_ixVxdtPQnDuCkV.png)
## Uses
Let me know!
## How to Get Started with the Model
Phi does not support `device_map="auto"`, and it does not seem to run inference reliably in fp16, so use bf16.
Here is working inference code, though it can be improved:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bf16; fp16 inference is unreliable for Phi, and device_map="auto" is unsupported.
model = AutoModelForCausalLM.from_pretrained("teknium/Phi-Hermes-1.3B", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("teknium/Phi-Hermes-1.3B", trust_remote_code=True)

# Alpaca-style prompt (see the prompt format below); move inputs to the same device as the model.
inputs = tokenizer("### Instruction:\nWrite a negative review for the website, Twitter.\n### Response:\n", return_tensors="pt", return_attention_mask=False).to("cuda")
outputs = model.generate(**inputs, max_length=128, do_sample=True, temperature=0.2, top_p=0.9, use_cache=True, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs)[0])
```
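For interactive use, the same generation call can stream tokens to stdout as they are produced, using transformers' `TextStreamer`. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from above:

```python
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_length=128, do_sample=True, temperature=0.2, top_p=0.9,
               use_cache=True, repetition_penalty=1.2,
               eos_token_id=tokenizer.eos_token_id, streamer=streamer)
```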
The prompt format is Alpaca; the model is prompted like so:
```
### Instruction:
<prompt>
### Response:
```
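To avoid hand-writing that template, a small helper can wrap a raw instruction and strip the echoed prompt from the decoded output. This is a sketch, not part of the repo, and the function names are hypothetical; it reuses the `model` and `tokenizer` from the example above:

```python
def build_prompt(instruction: str) -> str:
    # Wrap a raw instruction in the Alpaca format the model was tuned on.
    return f"### Instruction:\n{instruction}\n### Response:\n"

def generate_response(instruction: str, **gen_kwargs) -> str:
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt", return_attention_mask=False).to(model.device)
    outputs = model.generate(**inputs, eos_token_id=tokenizer.eos_token_id, **gen_kwargs)
    text = tokenizer.batch_decode(outputs)[0]
    # Everything after the response marker is the model's answer.
    return text.split("### Response:\n", 1)[-1]

print(generate_response("Write a haiku about language models.",
                        max_length=128, do_sample=True, temperature=0.2,
                        top_p=0.9, repetition_penalty=1.2))
```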
## Training Details
### Training Procedure
Trained with Axolotl. View the wandb run for this model (hermes-phi-1 on wandb):
https://wandb.ai/teknium1/hermes-phi/runs/hermes-phi-1
## Evaluation
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sQqgzk6dM7mxbyVloFMa1.png)