- Unified Normalization for Accelerating and Stabilizing Transformers
  Paper • 2208.01313 • Published • 1
- Interpret Vision Transformers as ConvNets with Dynamic Convolutions
  Paper • 2309.10713 • Published • 1
- Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
  Paper • 2003.00152 • Published • 1
- SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization
  Paper • 2405.11582 • Published • 12
Collections including paper arxiv:2309.10713
- Trellis Networks for Sequence Modeling
  Paper • 1810.06682 • Published • 1
- Pruning Very Deep Neural Network Channels for Efficient Inference
  Paper • 2211.08339 • Published • 1
- LAPP: Layer Adaptive Progressive Pruning for Compressing CNNs from Scratch
  Paper • 2309.14157 • Published • 1
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces
  Paper • 2312.00752 • Published • 138
- The Impact of Depth and Width on Transformer Language Model Generalization
  Paper • 2310.19956 • Published • 9
- Retentive Network: A Successor to Transformer for Large Language Models
  Paper • 2307.08621 • Published • 170
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 12
- Attention Is All You Need
  Paper • 1706.03762 • Published • 41
- Replacing softmax with ReLU in Vision Transformers
  Paper • 2309.08586 • Published • 17
- Softmax Bias Correction for Quantized Generative Models
  Paper • 2309.01729 • Published • 1
- The Closeness of In-Context Learning and Weight Shifting for Softmax Regression
  Paper • 2304.13276 • Published • 1
- Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
  Paper • 2306.12929 • Published • 12
- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 14
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 25
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 6
- Conditional Diffusion Distillation
  Paper • 2310.01407 • Published • 20