---
license: apache-2.0
language:
- en
library_name: transformers
---
<div align="center"><img src="https://github.com/BudEcosystem/boomer/blob/main/assets/boomer-logo.png" width=200></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>
----
## Introduction 🎉
We are open-sourcing one of our early pretraining experiments with a custom architecture and dataset. This 1.1B-parameter model is pre-trained from scratch on a custom-curated dataset of 41B tokens. The architecture experiments include the addition of flash attention and a larger intermediate dimension in the MLP layers. The dataset combines wiki, stories, arXiv, math, and code data. The model is available on Hugging Face as [Boomer1B](https://huggingface.co/budecosystem/boomer-1b).
<div align="center"><img src="https://github.com/BudEcosystem/boomer/blob/main/assets/boomer-arch.jpg" width=500></div>
## Getting Started on GitHub 💻
Ready to dive in? Here's how you can get started with our models on GitHub.
Install the necessary dependencies with the following command:
```bash
pip install -r requirements.txt
```
### Generate responses
With the dependencies installed, you can generate responses using our generate.py script, which downloads the model from the Hugging Face Hub and runs inference on a given prompt. Here's an example of usage:
```bash
python generate.py --base_model 'budecosystem/boomer-1b' --prompt="the president of India is"
```
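If you'd rather call the model directly from Python, the snippet below is a minimal sketch of the same inference flow using the standard `transformers` API. It assumes `torch` and `accelerate` are installed; the generation settings are illustrative, not the defaults of generate.py.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "budecosystem/boomer-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision fits easily on a single GPU
    device_map="auto",          # requires the accelerate package
)

prompt = "The president of India is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```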
### Fine-tuning 🎯
You can also fine-tune the model on your own data using the provided training script. Here's an example command:
```bash
torchrun --nproc_per_node 4 train.py \
--base_model budecosystem/boomer-1b \
--data_path dataset.json \
--output_dir output \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 2 \
--num_train_epochs 1 \
--learning_rate 2e-5 \
--fp16 True \
--logging_steps 10 \
--deepspeed ds_config.json
```
## Model details
| Parameter | Value |
| :------------- | :----: |
| n_layers | 4 |
| n_heads | 32 |
| d_model | 4096 |
| vocab_size | 32000 |
| sequence_length | 4096 |
| intermediate_size | 11008 |
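As a quick sanity check, you can read these values back from the config published on the Hub. The field names below assume a standard `transformers` causal-LM config and may differ if the repository ships a custom architecture class:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("budecosystem/boomer-1b")
print(config.num_hidden_layers)        # n_layers, expected 4
print(config.num_attention_heads)      # n_heads, expected 32
print(config.hidden_size)              # d_model, expected 4096
print(config.vocab_size)               # expected 32000
print(config.max_position_embeddings)  # sequence length, expected 4096
print(config.intermediate_size)        # MLP intermediate size, expected 11008
```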
### Tokenizer
The model uses a SentencePiece tokenizer, which is known for handling open-vocabulary language tasks efficiently.
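A quick way to inspect the tokenizer is to load it from the Hub and tokenize a sample sentence (a minimal sketch, assuming the tokenizer files are bundled with the model repository):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("budecosystem/boomer-1b")
encoded = tokenizer("Democratizing access to LLMs", return_tensors="pt")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))
```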
### Training details
The model was trained on 4 A100 80GB GPUs for approximately 250 hours.
| Hyperparameter | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 2 |
| gradient_accumulation_steps | 2 |
| learning_rate | 2e-4 |
| optimizer | AdamW |
| betas | 0.9, 0.95 |
| fp16 | True |
| GPUs | 4x A100 80GB |
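For reference, here is a rough sketch of how these hyperparameters map onto `transformers` `TrainingArguments`; the actual training script may configure things differently:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    adam_beta1=0.9,               # AdamW betas from the table above
    adam_beta2=0.95,
    fp16=True,
    logging_steps=10,
    deepspeed="ds_config.json",   # optional; requires the deepspeed package
)
```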
## Evaluations
We evaluated the pre-trained model on a few standard benchmarks:
| Model Name | ARC | MMLU | HumanEval | HellaSwag | BBH | DROP | GSM8K |
|:----------:|:-----:|:-----:|:---------:|:---------:|:-----:|:----:|:-----:|
| Boomer1B   | 22.35 | 25.92 | 6.1       | 31.66     | 28.65 | 6.13 | 1.5   |
### Why use BOOMER?
- Retrieval augmentation
- Inference at the edge
- Language modeling use cases
### Final thought on Boomer!
This isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey.
### Acknowledgements
We'd like to thank the open-source community and the researchers whose foundational work laid the path for BOOMER. Special shoutout to our dedicated team who have worked relentlessly to curate the dataset and fine-tune the model to perfection.