Beingjoy committed on
Commit
bb25b84
1 Parent(s): 06ef4bd

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -4,6 +4,7 @@ language:
 - en
 library_name: transformers
 ---
+
 <div align="center"><img src="https://github.com/BudEcosystem/boomer/blob/main/assets/boomer-logo.png" width=200></div>
 
 
@@ -15,7 +16,7 @@ library_name: transformers
 
 We are open-sourcing one of our early experiments of pretraining with custom architecture and datasets. This 1.1B parameter model is pre-trained from scratch using a custom-curated dataset of 41B tokens. The model's architecture experiments contain the addition of flash attention and a higher intermediate dimension of the MLP layer. The dataset is a combination of wiki, stories, arxiv, math and code. The model is available on huggingface [Boomer1B](https://huggingface.co/budecosystem/boomer-1b)
 
-<div align="center"><img src="https://github.com/BudEcosystem/boomer/blob/main/assets/boomer-arch.jpg" width=500></div>
+<div align="center"><img src="https://accubits-assests.s3.ap-south-1.amazonaws.com/boomer/boomer-arch.jpg" width=500></div>
 
 ## Getting Started on GitHub 💻
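
The README paragraph in the diff points to the model on the Hugging Face Hub and declares `library_name: transformers`. A minimal loading sketch, assuming the checkpoint exposes the standard transformers causal-LM interface; the prompt and generation settings here are illustrative:

```python
# Minimal sketch: load budecosystem/boomer-1b with the standard
# transformers causal-LM API (repo id taken from the README link above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "budecosystem/boomer-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative generation call; tune max_new_tokens etc. as needed.
inputs = tokenizer("The universe is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```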