---
license: apple-ascl
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# Model Card for DCLM-Baseline-7B
DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 7B | 2.6T | 32 | 4096 | 32 | 8192 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apple Sample Code License
- **Contact:** [email protected]
- **Date:** June 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
## Using the Model
First, install open_lm:
```bash
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then:
```python
from open_lm.hf import *  # registers the OpenLM architecture with transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B-8k")
model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B-8k")

# Tokenize a prompt and sample a short continuation.
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs["input_ids"], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
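Because this checkpoint supports an 8192-token context, longer prompts can be passed directly. Below is a minimal sketch of long-context usage; the `long_document` placeholder and the truncation settings are illustrative assumptions, not part of the official example.
```python
# Hypothetical long-context usage: keep the prompt within the 8192-token window,
# leaving room for the newly generated tokens.
long_document = "..."  # placeholder for a long input text
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=8192 - 64)
output = model.generate(inputs["input_ids"], max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```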
## Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 2e-3 (peak)
- **Weight Decay:** 0.05
- **Batch Size:** 2048 sequences
- **Sequence Length:** 8192 tokens
- **Total Training Tokens:** 2.6T
- **Hardware:** Trained on H100 GPUs
For more detailed training information, please refer to Section 3.4 and Appendix F of the DCLM paper.
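As a rough illustration of the optimization setup listed above (AdamW, peak learning rate 2e-3, weight decay 0.05), here is a minimal PyTorch sketch; the warmup length, total step count, and cosine decay below are assumptions for illustration and are not taken from this card or the paper.
```python
import math
import torch

# Hypothetical skeleton mirroring the hyperparameters listed above.
model = torch.nn.Linear(4096, 4096)  # stand-in for the actual 7B transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=0.05)
warmup_steps, total_steps = 5_000, 100_000  # illustrative values only

def lr_lambda(step: int) -> float:
    # Linear warmup followed by cosine decay to zero (assumed schedule).
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```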
To ensure our trained model is broadly useful, including for math and coding tasks, we combine our 3.8T [DCLM-BASELINE](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) with the [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) and [ProofPile2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) data to arrive at a 4.1T token dataset.
An additional 100B tokens of training were done on the same dataset using [Dataset Decomposition](https://arxiv.org/abs/2405.13226) to extend the context length from 2k to 8k.
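At a high level, Dataset Decomposition splits documents into power-of-two length chunks and groups chunks by length, so that each batch is drawn from sequences of a single length. The toy sketch below illustrates only that bucketing idea; the function name, bucket sizes, and chunking policy are illustrative assumptions, not the paper's implementation.
```python
import math
from collections import defaultdict

def decompose_by_length(token_sequences, max_len=8192):
    """Toy bucketing: split each document into power-of-two length chunks
    and group chunks by length, so batches can be drawn from one bucket."""
    buckets = defaultdict(list)
    for tokens in token_sequences:
        i = 0
        while i < len(tokens):
            # Largest power-of-two chunk that fits, capped at max_len.
            remaining = min(len(tokens) - i, max_len)
            chunk_len = 2 ** int(math.log2(remaining)) if remaining > 1 else 1
            buckets[chunk_len].append(tokens[i:i + chunk_len])
            i += chunk_len
    return buckets

# Example: documents of length 10 and 3 yield chunks of length 8, 2 and 2, 1.
print({k: [len(c) for c in v] for k, v in decompose_by_length([list(range(10)), list(range(3))]).items()})
```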
## Evaluation
Here are the evaluation results for DCLM-Baseline-7B on various tasks, using the [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite.
| Task | Score |
|------|-------|
| MMLU (zero-shot) | 0.5535 |
| MMLU (few-shot) | 0.6369 |
| HellaSwag (zero-shot) | 0.7933 |
| HellaSwag | 0.8103 |
| Jeopardy | 0.5252 |
| TriviaQA | 0.5703 |
| GSM8K (CoT) | 0.1024 |
| AGI Eval SAT Math (CoT) | 0.2227 |
| AQuA (CoT) | 0.1061 |
| SVAMP (CoT) | 0.5133 |
| BigBench QA Wikidata | 0.7344 |
| ARC Easy | 0.8249 |
| ARC Challenge | 0.6126 |
| BigBench Misconceptions | 0.6849 |
| COPA | 0.8800 |
| SIQA | 0.8270 |
| CommonsenseQA | 0.7993 |
| PIQA | 0.8161 |
| OpenBookQA | 0.4500 |
| BigBench Novel Concepts | 0.6563 |
| BigBench Strange Stories | 0.7759 |
| BigBench Strategy QA | 0.6540 |
| LAMBADA | 0.7553 |
| Winograd | 0.9011 |
| Winogrande | 0.7395 |
| BigBench Conlang Translation | 0.1220 |
| BigBench Language Identification | 0.5216 |
| BigBench Conceptual Combinations | 0.6796 |
| BigBench Elementary Math QA | 0.3500 |
| BigBench Dyck Languages | 0.3470 |
| AGI Eval LSAT AR | 0.2609 |
| BigBench CS Algorithms | 0.5379 |
| BigBench Logical Deduction | 0.3653 |
| BigBench Operators | 0.5000 |
| BigBench Repeat Copy Logic | 0.5313 |
| Simple Arithmetic (no spaces) | 0.3000 |
| Simple Arithmetic (with spaces) | 0.3070 |
| MathQA | 0.3108 |
| LogiQA | 0.4147 |
| PubMedQA | 0.7170 |
| SQuAD | 0.6317 |
| AGI Eval LSAT RC | 0.7015 |
| AGI Eval LSAT LR | 0.5373 |
| CoQA | 0.4981 |
| BigBench Understanding Fables | 0.7090 |
| BoolQ | 0.8284 |
| AGI Eval SAT EN | 0.8252 |
| Winogender MC (Female) | 0.6333 |
| Winogender MC (Male) | 0.5833 |
| Enterprise PII Classification | 0.8091 |
| BBQ | 0.6420 |
| GPQA Main | 0.2612 |
| GPQA Diamond | 0.2172 |
Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.
## Comparison
Below are comparisons of this model with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ❌ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ❌ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ❌ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ❌ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ❌ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ❌ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ❌ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✅ | 44.1 | 27.4 | 25.1 |
| OLMo-1.7 | 7B | 2.1T | ✅ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✅ | **50.2** | **57.1** | **40.4** |
| **Models we trained** | | | | | | |
| FineWeb edu | 7B | 0.14T | ✅ | 38.7 | 26.3 | 22.1 |
| FineWeb edu | 7B | 0.28T | ✅ | 41.9 | 37.3 | 24.5 |
| **DCLM-7B-8k** | 7B | 2.5T | ✅ | **57.1** | **63.7** | **45.4** |
## Limitations and Biases
While DCLM-Baseline-7B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```