joaogante committed on
Commit
ba6b3c3
1 Parent(s): 070818e

Update README.md

Files changed (1)
  1. README.md +66 -20
README.md CHANGED
@@ -1,3 +1,5 @@
 
 
  ---
  title: "Assisted Generation: a new direction toward low-latency text generation"
  thumbnail: /blog/assets/assisted-generation/thumbnail.png
@@ -10,23 +12,35 @@ authors:
  <!-- {blog_metadata} -->
  <!-- {authors} -->

- Large language models are all the rage these days, with many companies investing significant resources to scale them up and unlock new capabilities. However, as humans with ever-decreasing attention spawns, we also dislike their slow response times. Latency is critical for a good user experience, and smaller models are often used despite their lower quality (e.g. in [code completion](https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html)).

  Why is text generation so slow? What’s preventing you from deploying low-latency large language models without going bankrupt? In this blog post, we will revisit the bottlenecks for autoregressive text generation and introduce a new decoding method to tackle the latency problem. You’ll see that by using our new method, assisted generation, you can reduce latency up to 10x in commodity hardware!

  ## Understanding text generation latency

- The core of modern text generation is straightforward to understand. Let’s look at the central piece, the ML model. Its input contains a text sequence, which includes the text generated so far, and potentially other model-specific components (for instance, Whisper also has an audio input). The model takes the input and runs a forward pass: the input is fed to the model and passed sequentially along its layers until the unnormalized log probabilities for the next token are predicted (also known as logits). A token may consist in entire words, sub-words, or even individual characters, depending on the model. The [illustrated GPT-2](https://jalammar.github.io/illustrated-gpt2/) is a great reference if you’d like to dive deeper into this part of text generation.
  <!-- [GIF 1 -- FWD PASS] -->
- <video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_1_1080p.mov"></video>

- A model forward pass gets you the logits for the next token, which you can freely manipulate (e.g. set the probability of undesirable words or sequences to 0). The following step in text generation is to select the next token from these logits. Common strategies include picking the most likely token, known as greedy decoding, or sampling from this distribution, also called multinomial sampling. Chaining model forward passes with next token selection iteratively gets you text generation. This explanation is the tip of the iceberg when it comes to decoding methods; please refer to [our blog post on text generation](https://huggingface.co/blog/how-to-generate) for an in-depth exploration.
  <!-- [GIF 2 -- TEXT GENERATION] -->
- <video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_2_1080p.mov"></video>

- From the description above, the latency bottleneck in text generation is clear: running a model forward pass for large models is slow, and you may need to do hundreds of them in a sequence. But let’s dive deeper: why are forward passes slow? Forward passes are typically dominated by matrix multiplications and, after a quick visit to the [corresponding wikipedia section](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Communication-avoiding_and_distributed_algorithms), you call tell that memory bandwidth is the limitation in this operation (e.g. from the GPU RAM to the GPU compute cores). In other words, *the bottleneck in the forward pass comes from loading the model layer weights into the computation cores of your device, not from performing the computations themselves*.
  At the moment, you have three main avenues you can explore to get the most out of text generation, all tackling the performance of the model forward pass. First, you have the hardware-specific model optimizations. For instance, your device may be compatible with [Flash Attention](https://github.com/HazyResearch/flash-attention), which speeds up the attention layer through a reorder of the operations, or [INT8 quantization](https://huggingface.co/blog/hf-bitsandbytes-integration), which reduces the size of the model weights.
@@ -68,7 +82,7 @@ These three types of improvements can be used in tandem, resulting in [high thro

  ## Language decoder forward pass, revisited

- You’ve read above that each model forward pass yields the logits for the next token, but that’s actually an incomplete description. During text generation, the typical iteration consists in the model receiving as input the latest generated token, plus cached internal computations for all other previous inputs, returning the next token logits. Caching is used to avoid redundant computations, resulting in faster forward passes, but it’s not mandatory (and can be used partially). When caching is disabled, the input contains the entire sequence of tokens generated so far and the output contains the logits corresponding to the next token for *all positions* in the sequence! The logits at position N correspond to the distribution for the next token if the input consisted in the first N tokens, ignoring all subsequent tokens in the sequence. In the particular case of greedy decoding, if you pass the generated sequence as input and apply the argmax operator to the resulting logits, you will obtain the generated sequence back.


  ```python
@@ -91,7 +105,13 @@ This means that you can use a model forward pass for a different purpose: in add

  <!-- [GIF 3 -- FWD CONFIRMATION] -->
- <video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_3_1080p.mov"></video>


  Let’s consider for a second that you have access to a magical latency-free oracle model that generates the same sequence as your model, for any given input. For argument’s sake, it can’t be used directly; it’s limited to being an assistant to your generation procedure. Using the property described above, you could use this assistant model to get candidate output tokens followed by a forward pass with your model to confirm that they are indeed correct. In this utopian scenario, the latency of text generation would be reduced from `O(n)` to `O(1)`, with `n` being the number of generated tokens. For long generations, we're talking about several orders of magnitude.
@@ -117,9 +137,15 @@ Wrapping all up, here’s our original implementation of the assisted generation
  6. Adjust the number of candidate tokens to be produced in the next iteration — our original heuristic increases it by `2` if ALL tokens match and decreases it by `1` otherwise.

  <!-- [GIF 4 -- ASSISTED GENERATION] -->
- <video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_4_1080p.mov"></video>

- We’ve designed the API in 🤗 Transformers such that this process is hassle-free for you. All you need to do is to pass the assistant model under the new `assistant_model` keyword argument and reap the latency gains! At the time of the release of this blog post, assisted generation is limited to a batch size of 1.


  ```python
@@ -142,13 +168,16 @@ print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
  ```


- Is the additional internal complexity worth it? Let’s have a look at the latency numbers for the greedy decoding case (results for sampling are in the next section), considering a batch size of 1. These results were pulled directly out of 🤗 Transformers without any additional optimizations, so you should be able to reproduce them in your setup.

  <!-- [SPACE WITH GREEDY DECODING PERFORMANCE NUMBERS] -->
- <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.23.0/gradio.js"></script>

- <gradio-app src="https://huggingface.co/spaces/joaogante/assisted_generation_benchmarks"></gradio-app>


  Glancing at the collected numbers, we see that assisted generation can deliver significant latency reductions in diverse settings, but it is not a silver bullet – you should benchmark it before applying it to your use case. We can conclude that assisted generation:
@@ -166,17 +195,15 @@ Drawing samples from a probability distribution for the next token will cause ou

  <!-- [TEMPERATURE RESULTS, SHOW THAT LATENCY INCREASES STEADILY WITH TEMP] -->
  <div align="center">
- <img src="/blog/assets/assisted-generation/temperature.png"/>
  </div>

- Why do you see it for yourself, so get a feeling of assisted generation?

  <!-- [DEMO] -->
- <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.23.0/gradio.js"></script>
-
- <gradio-app src="https://huggingface.co/spaces/joaogante/assisted_generation_demo"></gradio-app>
  ## Future directions
@@ -187,8 +214,27 @@ Initially released under our 🤗 Transformers library, to be used with the `.ge
  Finally, assisted generation resurfaces a crucial question in text generation. The field has been evolving under the constraint that all new tokens are the result of a fixed amount of compute, for a given model: one token per homogeneous forward pass, in pure autoregressive fashion. This blog post reinforces the idea that it doesn’t have to be the case: large subsections of the generated output can equally well be generated by models that are a fraction of the size. For that, we’ll need new model architectures and decoding methods – we’re excited to see what the future holds!
  ## Acknowledgements
  I'd like to thank Sylvain Gugger, Nicolas Patry, and Lewis Tunstall for sharing many valuable suggestions to improve this blog post. Finally, kudos to Chunte Lee for designing the gorgeous cover you can see in our web page.
-
- <!-- [ADD CITATION INFO] -->

+ ### As seen on https://huggingface.co/blog/assisted-generation
+
  ---
  title: "Assisted Generation: a new direction toward low-latency text generation"
  thumbnail: /blog/assets/assisted-generation/thumbnail.png
 
  <!-- {blog_metadata} -->
  <!-- {authors} -->

+ Large language models are all the rage these days, with many companies investing significant resources to scale them up and unlock new capabilities. However, as humans with ever-decreasing attention spans, we also dislike their slow response times. Latency is critical for a good user experience, and smaller models are often used despite their lower quality (e.g. in [code completion](https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html)).
  Why is text generation so slow? What’s preventing you from deploying low-latency large language models without going bankrupt? In this blog post, we will revisit the bottlenecks for autoregressive text generation and introduce a new decoding method to tackle the latency problem. You’ll see that by using our new method, assisted generation, you can reduce latency up to 10x in commodity hardware!
  ## Understanding text generation latency
+ The core of modern text generation is straightforward to understand. Let’s look at the central piece, the ML model. Its input contains a text sequence, which includes the text generated so far, and potentially other model-specific components (for instance, Whisper also has an audio input). The model takes the input and runs a forward pass: the input is fed to the model and passed sequentially along its layers until the unnormalized log probabilities for the next token are predicted (also known as logits). A token may consist of entire words, sub-words, or even individual characters, depending on the model. The [illustrated GPT-2](https://jalammar.github.io/illustrated-gpt2/) is a great reference if you’d like to dive deeper into this part of text generation.
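
To make the input/output contract above concrete, here is a minimal numpy sketch with a stand-in "model" (just an embedding table and a linear head with random weights; a real LM stacks transformer layers in between, but the shapes are the point):

```python
import numpy as np

# Toy stand-in for a language model: embedding lookup followed by a linear head.
rng = np.random.default_rng(0)
vocab_size, hidden = 100, 16
embedding = rng.standard_normal((vocab_size, hidden))
lm_head = rng.standard_normal((hidden, vocab_size))

input_ids = np.array([42, 7, 13])      # the tokenized input sequence
hidden_states = embedding[input_ids]   # (seq_len, hidden)
logits = hidden_states @ lm_head       # (seq_len, vocab_size)

# The last position holds the unnormalized log probabilities for the next token.
next_token_logits = logits[-1]
print(logits.shape)  # (3, 100)
```

Note that the forward pass produces one row of logits per input position, not just for the last one; this detail becomes important later in the post.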
  <!-- [GIF 1 -- FWD PASS] -->
+ <figure class="image table text-center m-0 w-full">
+ <video
+ style="max-width: 90%; margin: auto;"
+ autoplay loop muted playsinline
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov"
+ ></video>
+ </figure>

+ A model forward pass gets you the logits for the next token, which you can freely manipulate (e.g. set the probability of undesirable words or sequences to 0). The following step in text generation is to select the next token from these logits. Common strategies include picking the most likely token, known as greedy decoding, or sampling from their distribution, also called multinomial sampling. Chaining model forward passes with next token selection iteratively gets you text generation. This explanation is the tip of the iceberg when it comes to decoding methods; please refer to [our blog post on text generation](https://huggingface.co/blog/how-to-generate) for an in-depth exploration.
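
The two selection strategies above can be sketched in a few lines of numpy, using a toy 4-token vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])  # toy next-token logits, 4-token vocabulary

# Greedy decoding: pick the most likely token.
greedy_token = int(np.argmax(logits))

# Multinomial sampling: softmax the logits into probabilities, then sample.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
sampled_token = int(rng.choice(len(logits), p=probs))

print(greedy_token)  # 0 -- the highest-logit token is always chosen
print(sampled_token)  # varies with the random seed
```

Greedy decoding is deterministic given the logits, while multinomial sampling yields a different continuation on each run; that distinction matters when we discuss assisted generation with sampling below.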
  <!-- [GIF 2 -- TEXT GENERATION] -->
+ <figure class="image table text-center m-0 w-full">
+ <video
+ style="max-width: 90%; margin: auto;"
+ autoplay loop muted playsinline
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov"
+ ></video>
+ </figure>

+ From the description above, the latency bottleneck in text generation is clear: running a model forward pass for large models is slow, and you may need to do hundreds of them in a sequence. But let’s dive deeper: why are forward passes slow? Forward passes are typically dominated by matrix multiplications and, after a quick visit to the [corresponding wikipedia section](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Communication-avoiding_and_distributed_algorithms), you can tell that memory bandwidth is the limitation in this operation (e.g. from the GPU RAM to the GPU compute cores). In other words, *the bottleneck in the forward pass comes from loading the model layer weights into the computation cores of your device, not from performing the computations themselves*.
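
A back-of-the-envelope calculation makes the memory-bandwidth argument tangible. If every forward pass must stream all weights from GPU RAM to the compute cores, latency per token is bounded below by (model size in bytes) / (memory bandwidth). The numbers below are illustrative assumptions, not measurements:

```python
# Assumed figures: a 7B-parameter model in fp16 on a GPU with ~2 TB/s of
# memory bandwidth.
params = 7e9           # number of model parameters
bytes_per_param = 2    # fp16 weights
bandwidth = 2e12       # bytes per second of GPU memory bandwidth

# Lower bound on per-token latency from weight loading alone.
min_seconds_per_token = params * bytes_per_param / bandwidth
print(f"{min_seconds_per_token * 1000:.1f} ms/token")  # 7.0 ms/token
```

Under these assumptions, no amount of extra compute makes a single forward pass faster than ~7 ms; only loading fewer bytes (smaller or quantized weights) or amortizing each load over more tokens can.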
  At the moment, you have three main avenues you can explore to get the most out of text generation, all tackling the performance of the model forward pass. First, you have the hardware-specific model optimizations. For instance, your device may be compatible with [Flash Attention](https://github.com/HazyResearch/flash-attention), which speeds up the attention layer through a reorder of the operations, or [INT8 quantization](https://huggingface.co/blog/hf-bitsandbytes-integration), which reduces the size of the model weights.


  ## Language decoder forward pass, revisited
+ You’ve read above that each model forward pass yields the logits for the next token, but that’s actually an incomplete description. During text generation, the typical iteration consists in the model receiving as input the latest generated token, plus cached internal computations for all other previous inputs, returning the next token logits. Caching is used to avoid redundant computations, resulting in faster forward passes, but it’s not mandatory (and can be used partially). When caching is disabled, the input contains the entire sequence of tokens generated so far and the output contains the logits corresponding to the next token for *all positions* in the sequence! The logits at position N correspond to the distribution for the next token if the input consisted of the first N tokens, ignoring all subsequent tokens in the sequence. In the particular case of greedy decoding, if you pass the generated sequence as input and apply the argmax operator to the resulting logits, you will obtain the generated sequence back.


  ```python
 


  <!-- [GIF 3 -- FWD CONFIRMATION] -->
+ <figure class="image table text-center m-0 w-full">
+ <video
+ style="max-width: 90%; margin: auto;"
+ autoplay loop muted playsinline
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_3_1080p.mov"
+ ></video>
+ </figure>


  Let’s consider for a second that you have access to a magical latency-free oracle model that generates the same sequence as your model, for any given input. For argument’s sake, it can’t be used directly; it’s limited to being an assistant to your generation procedure. Using the property described above, you could use this assistant model to get candidate output tokens followed by a forward pass with your model to confirm that they are indeed correct. In this utopian scenario, the latency of text generation would be reduced from `O(n)` to `O(1)`, with `n` being the number of generated tokens. For long generations, we're talking about several orders of magnitude.
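
The validation step can be illustrated with a toy sketch. Here `big_model_next` is a hypothetical stand-in for the large model's greedy next-token choice; in reality, a single forward pass over the whole candidate sequence yields these choices for every position at once, which is exactly where the savings come from:

```python
# Toy stand-in for the large model's greedy choice: a deterministic rule over
# integer "tokens" (purely illustrative, not a real model).
def big_model_next(prefix):
    return (sum(prefix) + 1) % 10 if prefix else 1

def validate(prefix, candidates):
    """Accept the longest run of candidate tokens that matches what the big
    model would have generated, then append the big model's own next token
    (so each validation round always gains at least one token)."""
    accepted = []
    for token in candidates:
        expected = big_model_next(prefix + accepted)
        if token != expected:
            break
        accepted.append(token)
    accepted.append(big_model_next(prefix + accepted))
    return accepted

# An assistant that guesses the first two tokens right and the third wrong:
print(validate([1], [2, 4, 9]))  # [2, 4, 8]
```

Three tokens came out of what would effectively be one large-model forward pass: two validated candidates plus the big model's correction at the first mismatch.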
 
  6. Adjust the number of candidate tokens to be produced in the next iteration — our original heuristic increases it by `2` if ALL tokens match and decreases it by `1` otherwise.
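
Step 6's heuristic can be sketched in a few lines (the floor of `1` candidate is an assumption for the sketch, keeping the assistant from being disabled entirely):

```python
def adjust_num_candidates(num_candidates, num_matches):
    """Sketch of the heuristic above: grow by 2 when every candidate token
    was accepted, shrink by 1 (never below 1) otherwise."""
    if num_matches == num_candidates:
        return num_candidates + 2
    return max(1, num_candidates - 1)

print(adjust_num_candidates(5, 5))  # 7 -- all candidates matched, be greedier
print(adjust_num_candidates(5, 3))  # 4 -- a mismatch occurred, be more cautious
```

The idea is a simple feedback loop: when the assistant is doing well, ask it for longer continuations; when it stumbles, fall back toward one candidate at a time.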
  <!-- [GIF 4 -- ASSISTED GENERATION] -->
+ <figure class="image table text-center m-0 w-full">
+ <video
+ style="max-width: 90%; margin: auto;"
+ autoplay loop muted playsinline
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_4_1080p.mov"
+ ></video>
+ </figure>

+ We’ve designed the API in 🤗 Transformers such that this process is hassle-free for you. All you need to do is to pass the assistant model under the new `assistant_model` keyword argument and reap the latency gains! At the time of the release of this blog post, assisted generation is limited to a batch size of `1`.


  ```python
 
  ```


+ Is the additional internal complexity worth it? Let’s have a look at the latency numbers for the greedy decoding case (results for sampling are in the next section), considering a batch size of `1`. These results were pulled directly out of 🤗 Transformers without any additional optimizations, so you should be able to reproduce them in your setup.


  <!-- [SPACE WITH GREEDY DECODING PERFORMANCE NUMBERS] -->
+ <script
+ type="module"
+ src="https://gradio.s3-us-west-2.amazonaws.com/3.28.2/gradio.js"
+ ></script>

+ <gradio-app space="joaogante/assisted_generation_benchmarks"></gradio-app>


  Glancing at the collected numbers, we see that assisted generation can deliver significant latency reductions in diverse settings, but it is not a silver bullet – you should benchmark it before applying it to your use case. We can conclude that assisted generation:
 
  <!-- [TEMPERATURE RESULTS, SHOW THAT LATENCY INCREASES STEADILY WITH TEMP] -->
  <div align="center">
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/temperature.png"/>
  </div>
 
+ Why don't you see it for yourself, and get a feeling of assisted generation?

  <!-- [DEMO] -->
+ <gradio-app space="joaogante/assisted_generation_demo"></gradio-app>
 
 
  ## Future directions
 
  Finally, assisted generation resurfaces a crucial question in text generation. The field has been evolving under the constraint that all new tokens are the result of a fixed amount of compute, for a given model: one token per homogeneous forward pass, in pure autoregressive fashion. This blog post reinforces the idea that it doesn’t have to be the case: large subsections of the generated output can equally well be generated by models that are a fraction of the size. For that, we’ll need new model architectures and decoding methods – we’re excited to see what the future holds!

+
+ ## Related Work
+
+ After the original release of this blog post, it came to my attention that other works have explored the same core principle (use a forward pass to validate longer continuations). In particular, have a look at the following works:
+ - [Blockwise Parallel Decoding](https://proceedings.neurips.cc/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf), by Google Brain
+ - [Speculative Sampling](https://arxiv.org/abs/2302.01318), by DeepMind
+
+
+ ## Citation
+ ```bibtex
+ @misc {gante2023assisted,
+ author = { {Joao Gante} },
+ title = { Assisted Generation: a new direction toward low-latency text generation },
+ year = 2023,
+ url = { https://huggingface.co/blog/assisted-generation },
+ doi = { 10.57967/hf/0638 },
+ publisher = { Hugging Face Blog }
+ }
+ ```
+
+
  ## Acknowledgements
  I'd like to thank Sylvain Gugger, Nicolas Patry, and Lewis Tunstall for sharing many valuable suggestions to improve this blog post. Finally, kudos to Chunte Lee for designing the gorgeous cover you can see in our web page.