Model Summary
Phi-3-mini-mango-1 is an instruct finetune of Phi-3-mini-4k-instruct, with a 4K context window and 3.8B parameters.
It is a first cut at finetuning Phi-3 (which is a great model!) to explore its properties and behaviour. More to follow.
You will need to update your local transformers to the latest version to run this model (4.41.0 or above):
pip install -U transformers
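If you want to confirm the upgrade took effect, a minimal sketch of a version check from Python (using the packaging library, which ships as a transformers dependency):

import transformers
from packaging import version

# Phi-3 support requires transformers 4.41.0 or above; fail early on older versions.
assert version.parse(transformers.__version__) >= version.parse("4.41.0"), \
    f"transformers {transformers.__version__} is too old; run: pip install -U transformers"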
GGUF Versions
There are GGUF format model files available at rhysjones/Phi-3-mini-mango-1-GGUF
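If you prefer the GGUF route, a minimal sketch using the llama-cpp-python bindings might look like the following. The filename Phi-3-mini-mango-1.Q4_K_M.gguf is an assumption for illustration; substitute whichever quantization you download from that repo.

from llama_cpp import Llama

# Hypothetical filename -- use the GGUF file you actually downloaded.
llm = Llama(model_path="Phi-3-mini-mango-1.Q4_K_M.gguf", n_ctx=4096)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How to explain Internet for a medieval knight?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])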
Chat Format
Phi-3-mini-mango uses the same chat format as the original Phi-3 Mini-4K-Instruct model. Note that it does not use a system prompt; instead, place any specific instructions as part of the first <|user|> prompt.
You can provide the prompt as a question with a generic template as follows:
<|user|>\nQuestion <|end|>\n<|assistant|>
For example:
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
where the model generates the text after <|assistant|>. For a few-shot prompt, it can be formatted as follows:
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
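You normally don't need to assemble these tags by hand; the tokenizer's chat template produces them for you. A minimal sketch (the repo id rhysjones/Phi-3-mini-mango-1 here is inferred from the GGUF repo name above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rhysjones/Phi-3-mini-mango-1")

messages = [{"role": "user", "content": "How to explain Internet for a medieval knight?"}]

# Renders the <|user|> ... <|end|> ... <|assistant|> structure shown above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)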
Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "rhysjones/Phi-3-mini-mango-1",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("rhysjones/Phi-3-mini-mango-1")

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,  # return only the newly generated text, not the prompt
    "temperature": 0.0,
    "do_sample": False,  # greedy decoding; temperature is ignored when sampling is off
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
Some applications/frameworks might not include a BOS token (<s>) at the start of the conversation. Please ensure that it is included, since it provides more reliable results.
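A minimal sketch of such a check (the repo id is inferred as above; the message content is just an example): encode the conversation, inspect the first token id, and prepend BOS if it is missing.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rhysjones/Phi-3-mini-mango-1")
messages = [{"role": "user", "content": "How to explain Internet for a medieval knight?"}]

# Encode the conversation and check whether it starts with BOS; prepend it if not.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
if input_ids[0] != tokenizer.bos_token_id:
    input_ids = [tokenizer.bos_token_id] + input_ids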
The model shares the same limitations as the base Phi-3 Mini-4K-Instruct model; see the Responsible AI considerations at https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#responsible-ai-considerations