VPTQ Online Demo

VPTQ (Vector Post-Training Quantization) is an advanced compression technique that dramatically reduces the size of large language models such as the 70B and 405B Llama models. VPTQ compresses these models to 1-2 bits per weight within a few hours, enabling them to run effectively on GPUs with limited memory.
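As a rough illustration, here is a minimal sketch of loading a VPTQ-compressed model for inference with the open-source `vptq` Python package; the specific checkpoint name, prompt, and generation settings are assumptions for illustration, not part of this demo.

```python
# Minimal sketch: running a VPTQ-compressed Llama model with the `vptq` package.
# The checkpoint name below is an illustrative assumption; substitute any
# VPTQ-quantized model repository you have access to.
import transformers
import vptq

model_name = "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft"  # assumed example

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = vptq.AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",  # place the quantized layers on the available GPU(s)
)

prompt = "Explain vector post-training quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights occupy only 1-2 bits each, the memory footprint of a 70B-class model drops to a fraction of its FP16 size, which is what makes inference on memory-limited GPUs feasible.

For more information, visit the following links: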