arxiv:2306.15595

Extending Context Window of Large Language Models via Positional Interpolation

Published on Jun 27, 2023
· Submitted by akhaliq on Jun 28, 2023
#1 Paper of the day
Authors: Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
Abstract

We present Position Interpolation (PI), which extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 tokens with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, from LLaMA 7B to 65B. Meanwhile, models extended by Position Interpolation preserve quality relatively well on tasks within their original context window. To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length, which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least ∼600× smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain their original architecture and can reuse most pre-existing optimization and infrastructure.
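
To make the down-scaling described above concrete, here is a minimal sketch of computing RoPE rotation angles with Position Interpolation, assuming an original context of 2048 extended to 8192; the function names and parameter values are illustrative only, not the authors' implementation.

```python
import torch

def rope_inverse_frequencies(head_dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies theta_i = base^(-2i/d)."""
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

def rope_angles(positions: torch.Tensor, head_dim: int,
                original_max_len: int = 2048, extended_max_len: int = 8192,
                interpolate: bool = True) -> torch.Tensor:
    """Angles m * theta_i used to rotate query/key feature pairs.

    With Position Interpolation, position indices are linearly down-scaled by
    original_max_len / extended_max_len, so positions in [0, extended_max_len)
    map back into the range [0, original_max_len) seen during pretraining,
    instead of extrapolating past it.
    """
    inv_freq = rope_inverse_frequencies(head_dim)
    pos = positions.float()
    if interpolate:
        pos = pos * (original_max_len / extended_max_len)  # linear down-scaling
    return torch.outer(pos, inv_freq)  # shape: (seq_len, head_dim // 2)

# Position 8191 of the extended 8k window is rotated like position ~2047.75
# of the original 2k window, keeping attention scores in a familiar range.
angles = rope_angles(torch.arange(8192), head_dim=128)
```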

Community

Amazing

So cool, RoPE has so much potential

KaikoKenDev has discovered something similar!


Great idea: using interpolation to achieve extrapolation

Extending Context Windows in LLMs with Position Interpolation

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

LongRoPE (Feb 2024) extends this initial paper: https://arxiv.org/html/2402.13753v1


Models citing this paper 24


Datasets citing this paper 0

No datasets cite this paper


Spaces citing this paper 24

Collections including this paper 6