---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

# OpenELM

*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. 



## Usage

Below we provide an example of loading the model via the [Hugging Face Hub](https://huggingface.co/docs/hub/):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# OpenELM uses the LLaMA 2 tokenizer: obtain access to "meta-llama/Llama-2-7b-hf",
# then see https://huggingface.co/docs/hub/security-tokens to get an access token
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="hf_xxxx")

model_path = "apple/OpenELM-450M"

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.cuda().eval()
prompt = "Once upon a time there was"
tokenized_prompt = tokenizer(prompt)
prompt_tensor = torch.tensor(tokenized_prompt["input_ids"], device="cuda").unsqueeze(0)
output_ids = model.generate(prompt_tensor, max_new_tokens=256, repetition_penalty=1.2, pad_token_id=0)
output_ids = output_ids[0].tolist()
output_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(f'{model_path=}, {prompt=}\n')
print(output_text)

# below is the output:
"""
model_path='apple/OpenELM-450M', prompt='Once upon a time there was'

Once upon a time there was a little girl who lived in the woods. She had a big heart and she loved to play with her friends. One day, she decided to go for a walk in the woods. As she walked, she saw a beautiful tree. It was so tall that it looked like a mountain. The tree was covered with leaves and flowers.
The little girl thought that this tree was very pretty. She wanted to climb up to the tree and see what was inside. So, she went up to the tree and climbed up to the top. She was very excited when she saw that the tree was full of beautiful flowers. She also
"""
```
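
For repeated prompts, the generation steps above can be wrapped in a small helper. This is a minimal convenience sketch of our own (the name `run_generation` is not part of the release); it assumes `model` and `tokenizer` were loaded as shown above.

```python
def run_generation(model, tokenizer, prompt: str, max_new_tokens: int = 256) -> str:
    # Tokenize the prompt and move the ids onto the same device as the model.
    input_ids = torch.tensor(
        tokenizer(prompt)["input_ids"], device=model.device
    ).unsqueeze(0)
    # Same decoding settings as the example above: a mild repetition penalty
    # and pad_token_id=0 to silence the missing-pad-token warning.
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        repetition_penalty=1.2,
        pad_token_id=0,
    )
    return tokenizer.decode(output_ids[0].tolist(), skip_special_tokens=True)

print(run_generation(model, tokenizer, "Once upon a time there was"))
```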


## Main Results

### Zero-Shot

| **Model Size**                                                              | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA**  | **SciQ**  | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M)                   | 26.45     | 45.08     | **53.98** | 46.71         | 69.75     | **84.70** | **53.91**      | 54.37       |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56     | **52.07**     | **70.78** | 84.40     | 52.72          | **55.11**   |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M)                   | 27.56     | 48.06     | 55.78     | 53.97         | 72.31     | 87.20     | 58.01          | 57.56       |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34**     | **72.63** | **88.00** | **58.96**      | **59.95**   |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B)                   | 32.34     | **55.43** | 63.58     | 64.81         | **75.57** | **90.60** | 61.72          | 63.44       |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23     | **70.00** | **71.20**     | 75.03     | 89.30     | **62.75**      | **65.50**   |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B)                       | 35.58     | 59.89     | 67.40     | 72.44         | 78.24     | **92.70** | 65.51          | 67.39       |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct)     | **39.42** | **61.74** | **68.17** | **76.36**     | **79.00** | 92.50     | **66.85**      | **69.15**   |

### LLM360

| **Model Size**                                                              | **ARC-c** | **HellaSwag** | **MMLU**  | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M)                   | 27.65     | 47.15         | 25.72     | **39.24**      | **53.83**      | 38.72       |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58**     | **26.70** | 38.72          | 53.20          | **40.54**   |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M)                   | 30.20     | 53.86         | **26.01** | 40.18          | 57.22          | 41.50       |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31**     | 25.41     | **40.48**      | **58.33**      | **43.41**   |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B)                   | 36.69     | 65.71         | **27.05** | 36.98          | 63.22          | 45.93       |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83**     | 25.65     | **45.95**      | **64.72**      | **49.94**   |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B)                       | 42.24     | 73.28         | **26.76** | 34.98          | 67.25          | 48.90       |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct)     | **47.70** | **76.87**     | 24.80     | **38.76**      | **67.96**      | **51.22**   |


### OpenLLM Leaderboard

| **Model Size**                                                              | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU**  | **PIQA**  | **RACE**  | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M)                   | 27.65     | **66.79**       | 47.15         | 25.72     | 69.75     | 30.91     | **39.24**      | **53.83**      | 45.13       |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01           | **51.58**     | **26.70** | **70.78** | 33.78     | 38.72          | 53.20          | **46.66**   |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M)                   | 30.20     | **68.63**       | 53.86         | **26.01** | 72.31     | 33.11     | 40.18          | 57.22          | 47.69       |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44           | **59.31**     | 25.41     | **72.63** | **36.84** | **40.48**      | **58.33**      | **49.25**   |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B)                   | 36.69     | **71.74**       | 65.71         | **27.05** | **75.57** | 36.46     | 36.98          | 63.22          | 51.68       |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02           | **71.83**     | 25.65     | 75.03     | **39.43** | **45.95**      | **64.72**      | **54.40**   |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B)                       | 42.24     | **73.29**       | 73.28         | **26.76** | 78.24     | **38.76** | 34.98          | 67.25          | 54.35       |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct)     | **47.70** | 72.33           | **76.87**     | 24.80     | **79.00** | 38.47     | **38.76**      | **67.96**      | **55.73**   |

See the technical report for more results and comparisons.

## Evaluation

### Setup

Install the following dependencies:

```bash

# install public lm-eval-harness

harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch on 2024-04-01 
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
# quote the specifiers so the shell does not treat '>' as redirection
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'

```
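
As a quick sanity check (our own addition, not part of the original setup), the installed versions can be printed to confirm that the pins above took effect:

```python
# Verify that the pinned dependencies are importable and recent enough.
import datasets
import tokenizers
import transformers

print("datasets:", datasets.__version__)          # installed from the pinned git SHA
print("tokenizers:", tokenizers.__version__)      # expect >= 0.15.2
print("transformers:", transformers.__version__)  # expect >= 4.38.2
```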

### Evaluate OpenELM

```bash

# OpenELM-270M
hf_model=apple/OpenELM-270M

# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True (see the sketch after this block)
add_bos_token=True
batch_size=1

mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

```
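
To see why `add_bos_token` matters, the snippet below (a minimal sketch of our own, reusing the LLaMA 2 tokenizer access from the usage example) shows that the tokenizer prepends a beginning-of-sequence token by default; lm-eval-harness disables this unless the flag is passed, which would shift the evaluated likelihoods:

```python
from transformers import AutoTokenizer

# Same gated tokenizer as in the usage example above.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="hf_xxxx")

# Default behavior: a BOS token is prepended to the input ids.
with_bos = tokenizer("Once upon a time")["input_ids"]
print(with_bos[0] == tokenizer.bos_token_id)  # True

# What lm-eval-harness does without add_bos_token=True: no BOS token.
without_bos = tokenizer("Once upon a time", add_special_tokens=False)["input_ids"]
print(without_bos[0] == tokenizer.bos_token_id)  # False
```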


## Bias, Risks, and Limitations

Our OpenELM models are not trained with any safety guarantees; their outputs can be inaccurate, harmful, or biased, and they may produce objectionable responses to user prompts. Users and developers should therefore conduct extensive safety testing and implement filtering suited to their specific needs.