---
license: apache-2.0
---

# cates_phi3_1

cates_phi3_1 is a supervised fine-tune (SFT) of microsoft/Phi-3-mini-4k-instruct trained on a custom dataset.
The model was created with [Phinetune]()

## Process
- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: fecia/cates
- Split: train
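
For reference, the sketch below shows how these settings might map to a standard [TRL](https://github.com/huggingface/trl) `SFTTrainer` run. This is an illustrative approximation, not the actual Phinetune pipeline, and dataset formatting details are assumptions.

```python
# Illustrative only: approximates the listed hyperparameters with TRL's SFTTrainer.
# The real Phinetune training code may differ.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("fecia/cates", split="train")

config = SFTConfig(
    output_dir="cates_phi3_1",
    learning_rate=1.41e-5,
    max_seq_length=2048,
)

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```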

## 💻 Usage
```python
# pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "fecia/cates_phi3_1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
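
Since the base model is instruction-tuned, chat-style prompting through the tokenizer's chat template generally works better than a raw string, assuming the fine-tune preserved Phi-3's template. A minimal example, continuing from the snippet above:

```python
# Build a chat-formatted prompt (assumes the Phi-3 chat template is still attached
# to the tokenizer of this fine-tune).
messages = [{"role": "user", "content": "Your example prompt here"}]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = generator(chat_prompt, max_new_tokens=50)
print(outputs[0]["generated_text"])
```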