license: mit
datasets:
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- LDJnr/Verified-Camel
- LDJnr/Pure-Dove
- LDJnr/Capybara
- meta-math/MetaMathQA
- Intel/orca_dpo_pairs
- argilla/ultrafeedback-binarized-preferences-cleaned
widget:
- example_title: Example interaction
text: Why is the sky blue?
inference:
parameters:
do_sample: true
temperature: 0.1
model-index:
- name: phi-2-orange-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.86
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.32
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.72
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.84
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.69
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.62
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
name: Open LLM Leaderboard
This is rhysjones/phi-2-orange-v2, quantized with the help of an importance matrix so that it retains more quality at each quantization level, with lower-bit quantizations available for devices with less memory.
Kalomaze's "groups_merged.txt" was used as the calibration data for the importance matrix, with the context size set to 2,048.
Here's a table that gives an approximate HellaSwag score for each quantization (measured over 1,000 tasks). Because the tasks are randomly sampled, the figures may be slightly imprecise:
Quantization | HellaSwag |
---|---|
IQ1_S | 32.5% |
IQ2_XXS | 56.3% |
IQ2_XS | 64.7% |
IQ2_S | 67.0% |
IQ2_M | 69.1% |
Q2_K_S | 65.3% |
Q2_K | 69.2% |
IQ3_XXS | Untested |
IQ3_XS | Untested |
IQ3_S | Untested |
IQ3_M | Untested |
Q3_K_M | 73.8% |
IQ4_XS | 74.0% |
IQ4_NL | 73.6% |
Q4_0 | 74.1% |
Q4_K_M | 74.4% |
Q5_K_M | Untested |
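As a quick illustration, one of the lower-bit quantizations can be run locally with llama-cpp-python. This is a minimal sketch, not part of the original card: the GGUF filename below is a placeholder for whichever quantization file you download from this repository.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path is a placeholder; substitute the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-2-orange-v2.IQ2_M.gguf",  # placeholder filename
    n_ctx=2048,            # Phi-2 supports a 2,048-token context
    chat_format="chatml",  # the model was fine-tuned with ChatML prompts
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Be short and direct in your answers."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```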
Original model card below.
Phi-2 Orange Version 2
A two-step finetune of Phi-2, with a bit more zest.
This is an improved version of the original Phi-2-Orange that uses an updated training process on the same datasets.
It also uses the latest updated version of Microsoft's Phi-2 model, making it directly usable within Hugging Face's Transformers library (without needing trust_remote_code).
Prompt Format
Phi-2 Orange v2 uses ChatML as the prompt format.
(Update 12th March 2024: fixed eos_token issue)
It's recommended to always prompt with a system instruction (use whatever system prompt you like):
```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
```
For example, if you find the model's output to be overly verbose, instruct it to be short and concise:
```
<|im_start|>system
You are a helpful assistant. Be short and direct in your answers.<|im_end|>
<|im_start|>user
Was Tom Hanks in the movie Forrest Gump? If so, who did he play and give details of the plot.<|im_end|>
<|im_start|>assistant
```
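Since the updated Phi-2 weights load directly in Transformers, a ChatML prompt like the ones above can be built and run as follows. This is a minimal sketch, not from the original card, and it assumes the tokenizer ships a ChatML chat template; if it does not, assemble the prompt string manually as shown above.

```python
# Minimal sketch with Hugging Face Transformers.
# Assumes the tokenizer provides a ChatML chat template (otherwise build the
# prompt string manually, as in the examples above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant for Python which outputs in Markdown format."},
    {"role": "user", "content": "Write a function to calculate the Fibonacci sequence"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings mirror the widget parameters in the model card metadata.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```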
Evaluations
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
Metric | Value |
---|---|
Average | 63.67 |
AI2 Reasoning Challenge (25-Shot) | 61.86 |
HellaSwag (10-Shot) | 76.32 |
MMLU (5-Shot) | 55.72 |
TruthfulQA (0-shot) | 54.84 |
Winogrande (5-shot) | 75.69 |
GSM8k (5-shot) | 57.62 |
YALL - Yet Another LLM Leaderboard
Evaluation from mlabonne's alternative LLM leaderboard:
Metric | Value |
---|---|
Average | 49.64 |
AGIEval | 34.55 |
GPT4All | 70.96 |
TruthfulQA | 54.87 |
Bigbench | 38.17 |
Limitations
This model shares the same limitations as the underlying Phi-2 model, details of which are found here.