Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free. arXiv:2410.10814 (Oct 2024)
MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More. arXiv:2410.06270 (Oct 8, 2024)
SVG: 3D Stereoscopic Video Generation via Denoising Frame Matrix. arXiv:2407.00367 (Jun 29, 2024)
SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models. arXiv:2405.14917 (May 23, 2024)
How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study. arXiv:2404.14047 (Apr 22, 2024)
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs. arXiv:2402.04291 (Feb 6, 2024)