Zangs3011 committed

Commit 5322ca1
1 Parent(s): 76728c2

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +158 -121

README.md CHANGED
@@ -1,199 +1,236 @@
1
  ---
2
- library_name: transformers
3
- tags: []
 
 
 
 
4
  ---
5
 
6
- # Model Card for Model ID
7
 
8
- <!-- Provide a quick summary of what the model is/does. -->
9
 
 
10
 
 
11
 
12
- ## Model Details
13
 
14
- ### Model Description
15
 
16
- <!-- Provide a longer summary of what this model is. -->
 
 
17
 
18
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
19
 
20
- - **Developed by:** [More Information Needed]
21
- - **Funded by [optional]:** [More Information Needed]
22
- - **Shared by [optional]:** [More Information Needed]
23
- - **Model type:** [More Information Needed]
24
- - **Language(s) (NLP):** [More Information Needed]
25
- - **License:** [More Information Needed]
26
- - **Finetuned from model [optional]:** [More Information Needed]
27
 
28
- ### Model Sources [optional]
 
 
 
29
 
30
- <!-- Provide the basic links for the model. -->
31
 
32
- - **Repository:** [More Information Needed]
33
- - **Paper [optional]:** [More Information Needed]
34
- - **Demo [optional]:** [More Information Needed]
35
 
36
- ## Uses
37
 
38
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
 
40
- ### Direct Use
41
 
42
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
 
44
- [More Information Needed]
45
 
46
- ### Downstream Use [optional]
47
 
48
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
 
50
- [More Information Needed]
 
 
 
51
 
52
- ### Out-of-Scope Use
53
 
54
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
 
56
- [More Information Needed]
57
 
58
- ## Bias, Risks, and Limitations
59
 
60
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
 
62
- [More Information Needed]
63
 
64
- ### Recommendations
65
 
66
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
 
 
 
67
 
68
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
 
70
  ## How to Get Started with the Model
71
 
72
- Use the code below to get started with the model.
73
 
74
- [More Information Needed]
75
 
76
  ## Training Details
77
 
78
  ### Training Data
79
 
80
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
-
82
- [More Information Needed]
83
 
84
- ### Training Procedure
85
 
86
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
 
88
- #### Preprocessing [optional]
89
 
90
- [More Information Needed]
91
 
 
92
 
93
  #### Training Hyperparameters
94
 
95
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
-
97
- #### Speeds, Sizes, Times [optional]
98
-
99
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
-
101
- [More Information Needed]
102
-
103
- ## Evaluation
104
-
105
- <!-- This section describes the evaluation protocols and provides the results. -->
106
 
107
- ### Testing Data, Factors & Metrics
108
 
109
- #### Testing Data
110
 
111
- <!-- This should link to a Dataset Card if possible. -->
112
 
113
- [More Information Needed]
114
 
115
- #### Factors
116
-
117
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
-
119
- [More Information Needed]
120
-
121
- #### Metrics
122
-
123
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
-
125
- [More Information Needed]
126
-
127
- ### Results
128
-
129
- [More Information Needed]
130
-
131
- #### Summary
132
-
133
-
134
-
135
- ## Model Examination [optional]
136
 
137
- <!-- Relevant interpretability work for the model goes here -->
138
 
139
- [More Information Needed]
140
 
141
- ## Environmental Impact
142
 
143
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
 
145
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
 
147
- - **Hardware Type:** [More Information Needed]
148
- - **Hours used:** [More Information Needed]
149
- - **Cloud Provider:** [More Information Needed]
150
- - **Compute Region:** [More Information Needed]
151
- - **Carbon Emitted:** [More Information Needed]
152
 
153
- ## Technical Specifications [optional]
154
 
155
- ### Model Architecture and Objective
 
 
156
 
157
- [More Information Needed]
 
 
 
 
 
 
158
 
159
  ### Compute Infrastructure
160
 
161
- [More Information Needed]
162
-
163
  #### Hardware
164
 
165
- [More Information Needed]
166
 
167
  #### Software
168
 
169
- [More Information Needed]
170
-
171
- ## Citation [optional]
172
-
173
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
-
175
- **BibTeX:**
176
-
177
- [More Information Needed]
178
-
179
- **APA:**
180
-
181
- [More Information Needed]
182
-
183
- ## Glossary [optional]
184
 
185
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
 
187
- [More Information Needed]
188
 
189
- ## More Information [optional]
190
 
191
- [More Information Needed]
192
 
193
- ## Model Card Authors [optional]
194
 
195
- [More Information Needed]
196
 
197
- ## Model Card Contact
198
 
199
- [More Information Needed]
 
 
1
  ---
2
+ datasets:
3
+ - tiiuae/falcon-refinedweb
4
+ language:
5
+ - en
6
+ inference: false
7
+ license: apache-2.0
8
  ---
9
 
10
+ # 🚀 Falcon-7B
11
 
12
+ **Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
13
 
14
+ *Paper coming soon* 😊.
15
 
16
+ 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
17
 
 
18
 
19
+ ## Why use Falcon-7B?
20
 
21
+ * **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
22
+ * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
23
+ * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
24
 
25
+ ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
26
 
27
+ 🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
28
 
29
+ ```python
30
+ from transformers import AutoTokenizer, AutoModelForCausalLM
31
+ import transformers
32
+ import torch
33
 
34
+ model = "tiiuae/falcon-7b"
35
 
36
+ tokenizer = AutoTokenizer.from_pretrained(model)
37
+ pipeline = transformers.pipeline(
38
+ "text-generation",
39
+ model=model,
40
+ tokenizer=tokenizer,
41
+ torch_dtype=torch.bfloat16,
42
+ trust_remote_code=True,
43
+ device_map="auto",
44
+ )
45
+ sequences = pipeline(
46
+ "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
47
+ max_length=200,
48
+ do_sample=True,
49
+ top_k=10,
50
+ num_return_sequences=1,
51
+ eos_token_id=tokenizer.eos_token_id,
52
+ )
53
+ for seq in sequences:
54
+ print(f"Result: {seq['generated_text']}")
55
 
56
+ ```
57
 
58
+ 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
59
 
60
+ For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
61
 
62
+ You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
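The 16GB guideline follows from the weights alone: 7B parameters in `bfloat16` take roughly 7e9 × 2 bytes ≈ 14GB before activations and the KV cache are counted. A minimal sketch (illustrative only, assuming a single CUDA GPU) to check both requirements before loading the model:

```python
import torch

# Falcon needs PyTorch >= 2.0 when used through `transformers`.
major = int(torch.__version__.split(".")[0])
assert major >= 2, f"PyTorch 2.0+ required, found {torch.__version__}"

# Rough memory estimate: 7B parameters x 2 bytes (bfloat16) is ~14 GB of weights,
# hence the ~16 GB guideline once activations and the KV cache are added.
weights_gb = 7e9 * 2 / 1024**3
print(f"Approximate weight footprint in bfloat16: {weights_gb:.1f} GB")

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0 memory: {total_gb:.1f} GB")
```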
63
 
64
+ # Model Card for Falcon-7B
65
 
66
+ ## Model Details
67
 
68
+ ### Model Description
69
 
70
+ - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
71
+ - **Model type:** Causal decoder-only;
72
+ - **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
73
+ - **License:** Apache 2.0.
74
 
75
+ ### Model Source
76
 
77
+ - **Paper:** *coming soon*.
78
 
79
+ ## Uses
80
 
81
+ ### Direct Use
82
 
83
+ Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.).
84
 
85
+ ### Out-of-Scope Use
86
 
87
+ Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
88
 
89
+ ## Bias, Risks, and Limitations
90
+
91
+ Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
92
+
93
+ ### Recommendations
94
 
95
+ We recommend that users of Falcon-7B finetune it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
96
 
97
  ## How to Get Started with the Model
98
 
 
99
 
100
+ ```python
101
+ from transformers import AutoTokenizer, AutoModelForCausalLM
102
+ import transformers
103
+ import torch
104
+
105
+ model = "tiiuae/falcon-7b"
106
+
107
+ tokenizer = AutoTokenizer.from_pretrained(model)
108
+ pipeline = transformers.pipeline(
109
+ "text-generation",
110
+ model=model,
111
+ tokenizer=tokenizer,
112
+ torch_dtype=torch.bfloat16,
113
+ trust_remote_code=True,
114
+ device_map="auto",
115
+ )
116
+ sequences = pipeline(
117
+ "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
118
+ max_length=200,
119
+ do_sample=True,
120
+ top_k=10,
121
+ num_return_sequences=1,
122
+ eos_token_id=tokenizer.eos_token_id,
123
+ )
124
+ for seq in sequences:
125
+ print(f"Result: {seq['generated_text']}")
126
+
127
+ ```
128
 
129
  ## Training Details
130
 
131
  ### Training Data
132
 
133
+ Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
 
 
134
 
135
+ | **Data source** | **Fraction** | **Tokens** | **Sources** |
136
+ |--------------------|--------------|------------|-----------------------------------|
137
+ | [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
138
+ | Books | 7% | 110B | |
139
+ | Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
140
+ | Code | 3% | 45B | |
141
+ | RefinedWeb-French | 3% | 45B | massive web crawl |
142
+ | Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
143
 
 
144
 
145
+ The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
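As an illustration (the sample sentence is arbitrary), the same tokenizer can be loaded directly from the 7B repository:

```python
from transformers import AutoTokenizer

# Falcon-7B and Falcon-40B share one BPE tokenizer with a 65,024-token vocabulary.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

ids = tokenizer("Falcon was trained on RefinedWeb enhanced with curated corpora.")["input_ids"]
print(len(ids), ids[:10])
print(tokenizer.decode(ids))
```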
146
 
147
+ ### Training Procedure
148
 
149
+ Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
150
 
151
  #### Training Hyperparameters
152
 
153
+ | **Hyperparameter** | **Value** | **Comment** |
154
+ |--------------------|------------|-------------------------------------------|
155
+ | Precision | `bfloat16` | |
156
+ | Optimizer | AdamW | |
157
+ | Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
158
+ | Weight decay | 1e-1 | |
159
+ | Z-loss | 1e-4 | |
160
+ | Batch size | 2304 | 30B tokens ramp-up |
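As a rough illustration of how these numbers fit together (an editor's sketch, not the actual training code), the warm-up and decay can be converted into step counts by assuming the final batch of 2304 sequences of 2048 tokens throughout:

```python
import math
import torch

# Illustrative sketch of the schedule in the table above (not the original training code).
TOKENS_PER_STEP = 2304 * 2048
WARMUP_STEPS = int(4e9 / TOKENS_PER_STEP)     # ~4B-token warm-up
TOTAL_STEPS = int(1.5e12 / TOKENS_PER_STEP)   # ~1,500B-token run
PEAK_LR, MIN_LR = 6e-4, 1.2e-5

model = torch.nn.Linear(16, 16)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=PEAK_LR, weight_decay=0.1)

def lr_lambda(step: int) -> float:
    if step < WARMUP_STEPS:                    # linear warm-up
        return step / max(1, WARMUP_STEPS)
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    cosine = 0.5 * (1 + math.cos(math.pi * min(1.0, progress)))
    return (MIN_LR + (PEAK_LR - MIN_LR) * cosine) / PEAK_LR  # cosine decay to 1.2e-5

# (The z-loss regularization of the softmax normalizer, weight 1e-4, is omitted here.)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```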
 
 
 
161
 
 
162
 
163
+ #### Speeds, Sizes, Times
164
 
165
+ Training happened in early March 2023 and took about two weeks.
166
 
 
167
 
168
+ ## Evaluation
169
 
170
+ *Paper coming soon*.
171
 
172
+ See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
173
 
 
174
 
175
+ ## Technical Specifications
176
 
177
+ ### Model Architecture and Objective
178
 
179
+ Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
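In other words, the loss is the cross-entropy between the logits at each position and the token that actually follows. A minimal sketch with `transformers` (labels are simply the input ids; the library shifts them by one position internally):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

batch = tokenizer("Falcon-7B predicts the next token.", return_tensors="pt").to(model.device)
# For causal language modeling, labels are the input ids themselves;
# the model shifts them by one position before computing cross-entropy.
outputs = model(**batch, labels=batch["input_ids"])
print(outputs.loss)  # average next-token cross-entropy
```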
 
 
 
 
180
 
181
+ The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
182
 
183
+ * **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
184
+ * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
185
+ * **Decoder-block:** parallel attention/MLP with a single layer norm.
186
 
187
+ | **Hyperparameter** | **Value** | **Comment** |
188
+ |--------------------|-----------|----------------------------------------|
189
+ | Layers | 32 | |
190
+ | `d_model` | 4544 | Increased to compensate for multiquery |
191
+ | `head_dim` | 64 | Reduced to optimise for FlashAttention |
192
+ | Vocabulary | 65024 | |
193
+ | Sequence length | 2048 | |
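To make the decoder-block layout above concrete, here is a schematic sketch (an editor's illustration, not the reference implementation): standard multi-head attention and a plain MLP stand in for Falcon's multiquery attention with rotary embeddings and FlashAttention, and no causal mask is shown; the widths follow the table (`d_model` = 4544, `head_dim` = 64, hence 71 heads):

```python
import torch
from torch import nn

class ParallelDecoderBlock(nn.Module):
    """Schematic Falcon-style block: one layer norm feeding attention and MLP in parallel."""

    def __init__(self, d_model: int = 4544, n_heads: int = 71):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # single layer norm per block
        # Stand-in for multiquery attention with rotary embeddings / FlashAttention.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Attention and MLP read the same normalized input and both outputs are
        # summed into the residual stream (parallel, not sequential).
        return x + attn_out + self.mlp(h)

x = torch.randn(1, 8, 4544)
print(ParallelDecoderBlock()(x).shape)  # torch.Size([1, 8, 4544])
```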
194
 
195
  ### Compute Infrastructure
196
 
 
 
197
  #### Hardware
198
 
199
+ Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
200
 
201
  #### Software
202
 
203
+ Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
204
 
 
205
 
206
+ ## Citation
207
 
208
+ *Paper coming soon* 😊. In the meantime, you can use the following information to cite it:
209
+ ```
210
+ @article{falcon40b,
211
+ title={{Falcon-40B}: an open large language model with state-of-the-art performance},
212
+ author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
213
+ year={2023}
214
+ }
215
+ ```
216
 
217
+ To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
218
 
219
+ ```
220
+ @article{refinedweb,
221
+ title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
222
+ author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
223
+ journal={arXiv preprint arXiv:2306.01116},
224
+ eprint={2306.01116},
225
+ eprinttype = {arXiv},
226
+ url={https://arxiv.org/abs/2306.01116},
227
+ year={2023}
228
+ }
229
+ ```
230
 
231
+ ## License
232
 
233
+ Falcon-7B is made available under the Apache 2.0 license.
234
 
235
+ ## Contact
236