- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
Collections including paper arxiv:2310.16944
- Zephyr: Direct Distillation of LM Alignment
  Paper • 2310.16944 • Published • 121
- Exponentially Faster Language Modelling
  Paper • 2311.10770 • Published • 118
- System 2 Attention (is something you might need too)
  Paper • 2311.11829 • Published • 39
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 48
- FinGPT: Large Generative Models for a Small Language
  Paper • 2311.05640 • Published • 27
- LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
  Paper • 2311.05556 • Published • 81
- Distributed Deep Learning in Open Collaborations
  Paper • 2106.10207 • Published • 2
- Datasets: A Community Library for Natural Language Processing
  Paper • 2109.02846 • Published • 10
- Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements
  Paper • 2210.01970 • Published • 11
- Zephyr: Direct Distillation of LM Alignment
  Paper • 2310.16944 • Published • 121
- Datasets: A Community Library for Natural Language Processing
  Paper • 2109.02846 • Published • 10
- HuggingFace's Transformers: State-of-the-art Natural Language Processing
  Paper • 1910.03771 • Published • 16