matlok committed on
Commit f48f821
Parent: 6520dc9

make the readme easier to read

Files changed (1)
  1. README.md +11 -7
README.md CHANGED
@@ -4,13 +4,7 @@ license: unknown
 
 ## Merging models like lego blocks using ddare and ties
 
-If you want to fine-tune, here's an example Unsloth fine tuning guide for:
-
-- [Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing#scrollTo=LjY75GoYUCB8)
-
-## How do I generate my own model merges?
-
-The code below merges the following HuggingFace TinyLlama models:
+This model was merged with the following HuggingFace TinyLlama models using ties:
 
 - TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 - Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct
@@ -18,6 +12,16 @@ The code below merges the following HuggingFace TinyLlama models:
 - Tensoic/TinyLlama-1.1B-3T-openhermes
 - Josephgflowers/TinyLlama-3T-Cinder-v1.3
 
+## How do I fine-tune this model?
+
+Please refer to the Unsloth fine-tuning guide for:
+
+- [Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
+
+## How do I generate my own model merges?
+
+Here's the standalone python script we used with logs below:
+
 ```python3
 import transformers
 import torch
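
The merge script referenced in the README is cut off after its first imports in the diff view above. As a rough illustration only, here is a minimal sketch of a ties-style merge (trim each task vector, elect a per-parameter sign, then average the agreeing entries) over the listed TinyLlama checkpoints using plain `transformers` and `torch`. This is not the author's standalone script: the choice of base model, the `density` and `lam` values, and the output path `tinyllama-ties-merge` are all illustrative assumptions.

```python3
# Hypothetical sketch of a TIES-style merge (trim -> elect sign -> disjoint mean).
# Not the script referenced in the commit; base/donor split, density, and scaling
# values are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
donor_ids = [
    "Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct",
    "Tensoic/TinyLlama-1.1B-3T-openhermes",
    "Josephgflowers/TinyLlama-3T-Cinder-v1.3",
]
density = 0.5   # fraction of task-vector entries kept per donor (assumption)
lam = 1.0       # scaling applied to the merged task vector (assumption)

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)
base_sd = base.state_dict()

# Collect trimmed task vectors (donor - base), keeping only the largest-magnitude
# entries. One trimmed copy per donor stays in RAM, which is fine at 1.1B params.
task_vectors = {name: [] for name in base_sd}
for donor_id in donor_ids:
    donor_sd = AutoModelForCausalLM.from_pretrained(
        donor_id, torch_dtype=torch.float32
    ).state_dict()
    for name, base_param in base_sd.items():
        if name not in donor_sd or not torch.is_floating_point(base_param):
            continue
        tv = donor_sd[name] - base_param
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        task_vectors[name].append(
            torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv))
        )

# Elect a sign per parameter, then average only the entries that agree with it.
merged_sd = {}
for name, base_param in base_sd.items():
    tvs = task_vectors[name]
    if not tvs:
        merged_sd[name] = base_param
        continue
    stacked = torch.stack(tvs)                     # [num_donors, ...]
    elected_sign = torch.sign(stacked.sum(dim=0))  # sign with the larger total mass
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    total = (stacked * agree).sum(dim=0)
    count = agree.sum(dim=0).clamp(min=1)
    merged_sd[name] = base_param + lam * total / count

base.load_state_dict(merged_sd)
base.save_pretrained("tinyllama-ties-merge")
AutoTokenizer.from_pretrained(base_id).save_pretrained("tinyllama-ties-merge")
```

Keeping only the sign-consistent, high-magnitude updates is the point of ties: it lets several instruction-tuned donors be combined without their task vectors largely cancelling each other out.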
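
For the fine-tuning pointer, the linked Colab covers the full Alpaca + TinyLlama + RoPE-scaling workflow; the fragment below is only a hedged sketch of the first step, loading a merged checkpoint with Unsloth and attaching LoRA adapters. The local path `tinyllama-ties-merge`, the sequence length, and every LoRA hyperparameter are assumptions, and keyword arguments can differ between Unsloth releases.

```python3
# Hypothetical sketch only: load a merged checkpoint with Unsloth and wrap it with
# LoRA adapters before training. Paths and hyperparameters are assumptions; follow
# the linked Colab notebook for the complete, tested workflow.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tinyllama-ties-merge",  # local path from the merge sketch above (assumption)
    max_seq_length=2048,                # TinyLlama's native context window
    load_in_4bit=True,                  # 4-bit weights keep the 1.1B model small in VRAM
)

# Train only low-rank adapter weights instead of the full model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

From here the model can be handed to a standard supervised fine-tuning loop (the notebook uses `trl`'s `SFTTrainer`), which is what the guide walks through step by step.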