jacobfulano committed
Commit
495b105
1 Parent(s): 6d09da8

Update README.md

Files changed (1)
  1. README.md +78 -1
README.md CHANGED
@@ -56,7 +56,84 @@ _CC-By-NC-SA-4.0_ (non-commercial use only)
  > This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
  > Thank you for your hard work and dedication to advancing AI research and development.

+ ## How to Use
+
+ This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
+
+ ```python
+ import transformers
+ model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-chat', trust_remote_code=True)
+ ```
+ Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
+ This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
+ `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
+
+ To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and cast it to `bfloat16`:
+ ```python
+ import torch
+ import transformers
+
+ config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-chat', trust_remote_code=True)
+ config.attn_config['attn_impl'] = 'triton'
+
+ model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-chat', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
+ model.to(device='cuda:0')
+ ```
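+
+ Depending on the revision of the bundled MPT code, the config may also expose an `init_device` field for constructing the model directly on an accelerator rather than initializing it on CPU and moving it afterwards. A minimal sketch under that assumption:
+
+ ```python
+ import torch
+ import transformers
+
+ config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-chat', trust_remote_code=True)
+ config.attn_config['attn_impl'] = 'triton'
+ config.init_device = 'cuda:0'  # assumes the custom config supports direct-on-GPU initialization
+
+ model = transformers.AutoModelForCausalLM.from_pretrained(
+     'mosaicml/mpt-7b-chat',
+     config=config,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True
+ )
+ ```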
+
+ Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
+
+ ```python
+ config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-chat', trust_remote_code=True)
+ config.update({"max_seq_len": 4096})
+ model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-chat', config=config, trust_remote_code=True)
+ ```
+
+ This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
+
+ ```python
+ from transformers import AutoTokenizer
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
+ ```
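+
+ Once the model and tokenizer are loaded, text can be generated with the standard Hugging Face `generate` API. As a minimal sketch (the prompt, device placement, and sampling settings below are illustrative, not prescriptive):
+
+ ```python
+ import torch
+ import transformers
+ from transformers import AutoTokenizer
+
+ # Load the tokenizer and the chat model (see the loading options above)
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
+ model = transformers.AutoModelForCausalLM.from_pretrained(
+     'mosaicml/mpt-7b-chat',
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True
+ )
+ model.to('cuda:0')
+ model.eval()
+
+ # Illustrative prompt; format conversations the same way they were formatted during finetuning
+ prompt = "What is the capital of France?"
+ inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
+ with torch.no_grad():
+     output_ids = model.generate(
+         **inputs,
+         max_new_tokens=128,
+         do_sample=True,
+         temperature=0.8,
+         top_p=0.95,
+         pad_token_id=tokenizer.eos_token_id,
+     )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```
+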
+ ## Model Description
+
+ The architecture is a modification of a standard decoder-only transformer.
+
+ The model has been modified from a standard transformer in the following ways:
+ * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
+ * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
+ * It does not use biases
+
+ | Hyperparameter  | Value |
+ |-----------------|-------|
+ | n_parameters    | 6.7B  |
+ | n_layers        | 32    |
+ | n_heads         | 32    |
+ | d_model         | 4096  |
+ | vocab size      | 50432 |
+ | sequence length | 2048  |
+
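+ These architecture and size choices are visible on the configuration object loaded with the model; as a rough sanity check (field names follow the custom `MPTConfig` shipped with the checkpoint and may differ across revisions):
+
+ ```python
+ import transformers
+
+ config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-chat', trust_remote_code=True)
+
+ # Core sizes from the table above
+ print(config.n_layers, config.n_heads, config.d_model, config.vocab_size, config.max_seq_len)
+
+ # ALiBi in place of positional embeddings, and no bias terms
+ print(config.attn_config['alibi'], config.no_bias)
+ ```
+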
+ ## Limitations and Biases
+
+ _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
+
+ MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
+ MPT-7B-Chat was trained on various public datasets.
+ While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

  ## Acknowledgements

- This model was finetuned by Sam Havens and the MosaicML NLP team
+ This model was finetuned by Sam Havens and the MosaicML NLP team.
+
+ ## Citation
+
+ Please cite this model using the following format:
+
+ ```
+ @online{MosaicML2023Introducing,
+     author  = {MosaicML NLP Team},
+     title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
+     year    = {2023},
+     url     = {www.mosaicml.com/blog/mpt-7b},
+     note    = {Accessed: 2023-03-28}, % change this date
+     urldate = {2023-03-28} % change this date
+ }
+ ```