|
--- |
|
language: |
|
- en |
|
license: llama2 |
|
library_name: transformers |
|
tags: |
|
- merge |
|
- mergekit |
|
- lazymergekit |
|
datasets: |
|
- teknium/openhermes |
|
- cognitivecomputations/dolphin |
|
base_model: |
|
- cognitivecomputations/dolphin-llama2-7b |
|
- Tensoic/Llama-2-openhermes |
|
pipeline_tag: text-generation |
|
model-index: |
|
- name: OpenDolphinHermes_Llama2_7B |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: AI2 Reasoning Challenge (25-Shot) |
|
type: ai2_arc |
|
config: ARC-Challenge |
|
split: test |
|
args: |
|
num_few_shot: 25 |
|
metrics: |
|
- type: acc_norm |
|
value: 55.03 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: HellaSwag (10-Shot) |
|
type: hellaswag |
|
split: validation |
|
args: |
|
num_few_shot: 10 |
|
metrics: |
|
- type: acc_norm |
|
value: 78.74 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU (5-Shot) |
|
type: cais/mmlu |
|
config: all |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 52.25 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: TruthfulQA (0-shot) |
|
type: truthful_qa |
|
config: multiple_choice |
|
split: validation |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: mc2 |
|
value: 46.1 |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: Winogrande (5-shot) |
|
type: winogrande |
|
config: winogrande_xl |
|
split: validation |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 73.16 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GSM8k (5-shot) |
|
type: gsm8k |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 20.17 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
# OpenDolphinHermes_Llama2_7B |
|
|
|
|
|
<p align="center"> |
|
<img src="https://huggingface.co/sethuiyer/OpenDolphinHermes_Llama2_7B/resolve/main/dolphin_hermes.webp" height="256px" alt="OpenDolphinHermes">
|
</p> |
|
|
|
A mergekit SLERP merge of the following two models:
|
* [cognitivecomputations/dolphin-llama2-7b](https://huggingface.co/cognitivecomputations/dolphin-llama2-7b) |
|
* [Tensoic/Llama-2-openhermes](https://huggingface.co/Tensoic/Llama-2-openhermes) |
|
|
|
## 🧩 Configuration |
|
|
|
```yaml |
|
slices: |
|
- sources: |
|
- model: cognitivecomputations/dolphin-llama2-7b |
|
layer_range: [0, 32] |
|
- model: Tensoic/Llama-2-openhermes |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: Tensoic/Llama-2-openhermes |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 |
|
dtype: bfloat16 |
|
``` |
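
To reproduce the merge, you can save the YAML above to a file and pass it to mergekit's `mergekit-yaml` command, or use its Python API. The sketch below is a minimal example assuming `pip install mergekit`; the exact API surface (`MergeConfiguration`, `run_merge`, `MergeOptions`) may differ between mergekit versions, so treat it as a starting point rather than the exact invocation used for this model.

```python
# Minimal sketch of reproducing this merge via mergekit's Python API.
# Assumes `pip install mergekit`; API details may vary across versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
slices:
  - sources:
      - model: cognitivecomputations/dolphin-llama2-7b
        layer_range: [0, 32]
      - model: Tensoic/Llama-2-openhermes
        layer_range: [0, 32]
merge_method: slerp
base_model: Tensoic/Llama-2-openhermes
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))
run_merge(
    merge_config,
    out_path="./OpenDolphinHermes_Llama2_7B",  # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```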
|
|
|
## Prompt Template (ChatML)
|
```text |
|
<|im_start|>system |
|
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. |
|
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. |
|
Please ensure that your responses are socially unbiased and positive in nature. |
|
|
|
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. |
|
If you don't know the answer to a question, please don't share false information. |
|
<|im_end|> |
|
<|im_start|>user |
|
{prompt}
|
<|im_end|> |
|
<|im_start|>assistant |
|
``` |
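
For reference, here is a minimal Python sketch that assembles a prompt in this format by hand; `build_prompt` and `DEFAULT_SYSTEM` are illustrative names, not part of the model or any library. In practice, `tokenizer.apply_chat_template` (used in the Usage section below) should produce the same layout from a list of messages.

```python
# Minimal sketch: manually assemble a ChatML prompt matching the template above.
# `build_prompt` and `DEFAULT_SYSTEM` are illustrative, not part of any library.
DEFAULT_SYSTEM = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)

def build_prompt(user_message: str, system: str = DEFAULT_SYSTEM) -> str:
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user_message}\n<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_prompt("What is a large language model?"))
```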
|
|
|
## Open LLM Leaderboard Comparison
|
|
|
| # | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|-------|--------:|----:|----------:|-----:|-----------:|-----------:|------:|
| 0 | meta-llama/Llama-2-13b-hf | 55.69 | 59.39 | 82.13 | 55.77 | 37.38 | 76.64 | 22.82 |
| 1 | sethuiyer/OpenDolphinHermes_Llama2_7B | 54.24 | 55.03 | 78.74 | 52.25 | 46.10 | 73.16 | 20.17 |
| 2 | togethercomputer/Llama-2-7B-32K-Instruct | 50.02 | 51.11 | 78.51 | 46.11 | 44.86 | 73.88 | 5.69 |
| 3 | togethercomputer/LLaMa-2-7B-32K | 47.07 | 47.53 | 76.14 | 43.33 | 39.23 | 71.90 | 4.32 |
|
|
|
## Why? |
|
|
|
I wanted a Llama-2 7B model that performs close to the base Llama-2 13B model. As the table above shows, this merge closes most of that gap at roughly half the parameter count.
|
|
|
## 💻 Usage |
|
|
|
```python |
|
# Install dependencies (the "!" prefix is for notebook environments such as Colab)
!pip install -qU transformers accelerate
|
|
|
from transformers import AutoTokenizer |
|
import transformers |
|
import torch |
|
|
|
model = "sethuiyer/OpenDolphinHermes_Llama2_7B" |
|
messages = [{"role": "user", "content": "What is a large language model?"}] |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model) |
|
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) |
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model=model, |
|
torch_dtype=torch.float16, |
|
device_map="auto", |
|
) |
|
|
|
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) |
|
print(outputs[0]["generated_text"]) |
|
``` |
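
If the model does not fit in GPU memory at float16, 4-bit quantized loading through bitsandbytes is one option. This is a hedged sketch rather than something the model requires: it assumes `bitsandbytes` is installed and a CUDA GPU is available.

```python
# Optional sketch: 4-bit quantized loading via bitsandbytes to reduce memory use.
# Assumes `pip install bitsandbytes` and a CUDA GPU; not required by the model.
import torch
import transformers
from transformers import BitsAndBytesConfig

pipeline = transformers.pipeline(
    "text-generation",
    model="sethuiyer/OpenDolphinHermes_Llama2_7B",
    device_map="auto",
    model_kwargs={
        "quantization_config": BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.float16,
        )
    },
)
```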
|
|
|
Output: |
|
```text |
|
A large language model is a type of artificial intelligence system that has been trained on a massive amount of data, often millions or even billions of words, to learn the patterns and relationships between words and phrases. |
|
These models can then be used to generate new text, understand and translate languages, and perform various natural language processing tasks. |
|
They have become increasingly popular in recent years due to advances in machine learning technology and their ability to achieve high levels of accuracy and performance on natural language processing tasks. |
|
Examples of large language models include GPT-2, BERT, and T5. |
|
``` |
|
## Thanks |
|
Thanks to Google Colab for the compute. |
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__OpenDolphinHermes_Llama2_7B).
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |54.24| |
|
|AI2 Reasoning Challenge (25-Shot)|55.03| |
|
|HellaSwag (10-Shot) |78.74| |
|
|MMLU (5-Shot) |52.25| |
|
|TruthfulQA (0-shot) |46.10| |
|
|Winogrande (5-shot) |73.16| |
|
|GSM8k (5-shot) |20.17| |
|
|
|
|