---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I am continuously improving the structure of these model descriptions so that they provide comprehensive information and help you find the model that best fits your needs.

# mpt-7b-chat - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)

# Note: Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer. While I am working diligently to make the updated models available for you, please be aware of the following:

**Stay Informed:** Application software using the llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp. They will likely provide instructions on how to integrate the new models.

**Monitor Upload Times:** Please keep a close watch on the upload times of the available files on my Hugging Face model pages. This will help you identify which files have already been updated and are ready for download, ensuring you have the most current Falcon models at your disposal.

**Download Promptly:** Once the updated Falcon models are available on my Hugging Face page, be sure to download them promptly to ensure compatibility with the latest [llama.cpp](https://github.com/ggerganov/llama.cpp) versions.

Please understand that this change specifically affects Falcon and Starcoder models; other models remain unaffected. Consequently, software providers may not emphasize this change as prominently.

As a solo operator of this page, I'm doing my best to expedite the process, but please bear with me as this may take some time.


# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
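As a minimal, hedged sketch of consuming one of these GGUF files from Python, the example below uses the `llama-cpp-python` bindings (not part of the original card); the quantized filename is a placeholder for whichever variant you actually download from this repository:

```python
# Minimal sketch, assuming the llama-cpp-python package is installed
# (pip install llama-cpp-python) and a GGUF file from this repo has been
# downloaded locally. The filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mpt-7b-chat.Q5_K_M.gguf",  # placeholder: use your downloaded quant
    n_ctx=2048,  # MPT-7B-Chat was trained with a 2048-token context
)

output = llm("Here is a recipe for vegan banana bread:\n", max_tokens=100)
print(output["choices"][0]["text"])
```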

# Quantization variants

A number of quantized files are available. Here is how to choose the one that best fits your needs:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
For example, Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing certain parts of the model affects quality more than quantizing others. Quantizing some parts more aggressively and others less gives you a more capable model at the same file size, or a smaller file and lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K quant you will find it very hard to notice any quality difference from the original model; in fact, asking the model the same question twice may produce bigger differences than the quantization itself.


# Original Model Card:
# MPT-7B-Chat

MPT-7B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)


This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

May 5, 2023

## Model License

_CC-By-NC-SA-4.0_ (non-commercial use only)

## Documentation

* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

### Example Dialogue

**SamIAm85**:
>I want you to come up with a tweet based on this summary of the article:
>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
>MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
>It is open source and it matches the quality of LLaMA-7B.
>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!"

**MPT-7B-Instruct**:
>Here's my suggestion for a tweet:
>"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 1T tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"

**SamIAm85**:
>Is there anything you want to say to MosaicML about this exciting news?

**MPT-7B-Instruct**:
> As an AI language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
> This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
> Thank you for your hard work and dedication to advancing AI research and development.

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-chat',
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16, # Load model weights in bfloat16
  trust_remote_code=True
)
```

Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)
```

This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases


| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |

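If you want to cross-check these values against the checkpoint itself, a small sketch like the following can print them from the remote config; the attribute names are an assumption based on the llm-foundry `MPTConfig`, and `print(config)` shows the authoritative list.

```python
# Sketch for inspecting architecture hyperparameters. Attribute names such as
# d_model or n_layers are assumptions based on the llm-foundry MPTConfig and
# may differ, hence the defensive getattr with a fallback message.
import transformers

name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)

for field in ('d_model', 'n_heads', 'n_layers', 'max_seq_len', 'vocab_size'):
    print(field, getattr(config, field, '(not present in this config)'))
```
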
### Training Configuration

This model was trained on 8 A100-80GB GPUs for about 8.2 hours, followed by training for 6.7 hours on 32 A100-40GB GPUs, using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by Sam Havens and the MosaicML NLP team.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.


## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).


## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-7b},
    note    = {Accessed: 2023-03-28}, % change this date
    urldate = {2023-03-28} % change this date
}
```

## Please consider supporting my work

**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)
</center>