OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
Abstract
The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at https://github.com/apple/corenet. Additionally, OpenELM models can be found on HuggingFace at: https://huggingface.co/apple/OpenELM.
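To make the abstract's "layer-wise scaling" concrete, here is a minimal sketch of the general idea: rather than giving every transformer layer the same width, the number of attention heads and the FFN expansion ratio are interpolated between a minimum and a maximum value across the layer stack, so early layers are narrower and later layers are wider under a similar total parameter budget. The function name and the bound values below are illustrative assumptions, not the exact OpenELM configuration.

```python
# Hypothetical sketch of layer-wise scaling (illustrative, not the OpenELM source).
def layerwise_scaling(num_layers: int,
                      base_heads: int,
                      head_scale=(0.5, 1.0),   # assumed min/max head scaling factors
                      ffn_ratio=(0.5, 4.0)):   # assumed min/max FFN multipliers
    """Return a (num_heads, ffn_multiplier) pair for each transformer layer."""
    configs = []
    for i in range(num_layers):
        # t goes from 0.0 at the first layer to 1.0 at the last layer.
        t = i / max(num_layers - 1, 1)
        heads = max(1, round(base_heads * (head_scale[0] + t * (head_scale[1] - head_scale[0]))))
        ffn_mult = ffn_ratio[0] + t * (ffn_ratio[1] - ffn_ratio[0])
        configs.append((heads, round(ffn_mult, 2)))
    return configs

# Example: a 16-layer model with 16 base heads gets narrow early layers
# and progressively wider later layers.
for layer, (heads, mult) in enumerate(layerwise_scaling(16, 16)):
    print(f"layer {layer:2d}: heads={heads:2d}, ffn_multiplier={mult}")
```

The design intuition is that uniform-width layers spend parameters where they contribute little; a linear interpolation of per-layer widths is one simple way to redistribute that budget.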
Community
Great to see developments in the small language model family.
"I sense a great disturbance in the source, as if millions of developers suddenly cried out in excitement and were suddenly empowered. I fear something remarkable has happened. The 'Apples' and 'Metas' of the tech empire have opened their vaults, joining the open-source Resistance. This is the beginning of a new collaboration, a new hope for innovation." May the source be with you
I cross-posted this to LinkedIn as I thought it was too funny not to share!
@MichaelBarryUK
check it out -- would like to tag your LI profile:
https://www.linkedin.com/feed/update/urn:li:activity:7188878136430710784?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7188878136430710784%2C7188905002994671616%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287188905002994671616%2Curn%3Ali%3Aactivity%3A7188878136430710784%29
Apple realized their golden age of being considered the king of innovation is long over. Now they're lagging far behind in the generative AI race.
Got a plain-english rewrite of the paper here if anyone is interested: https://www.aimodels.fyi/papers/arxiv/openelm-efficient-language-model-family-open-source
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Stable LM 2 1.6B Technical Report (2024)
- LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models (2024)
- Long-Context Language Modeling with Parallel Context Encoding (2024)
- Jamba: A Hybrid Transformer-Mamba Language Model (2024)
- Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
I have featured this paper on my blog - https://ajithp.com/2024/05/04/openelm-apples-groundbreaking-open-language-model/
Unlocking New Levels of Language Modeling with OpenELM!
Links:
Subscribe: https://www.youtube.com/@Arxflix
Twitter: https://x.com/arxflix
LMNT (Partner): https://lmnt.com/