jsfs11 committed
Commit: 688126a
Parent: 5bfa402

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -11,14 +11,14 @@ base_model:
 - meta-llama/Meta-Llama-3-8B-Instruct
 ---
 
-# PruneMELLama8bTEST-22_30
+# meta-LLama3-8b-PruneME-TEST-22_30
 
 This model was pruned after being analyzed with [PruneMe](https://github.com/arcee-ai/PruneMe)
 
 *INFO:root:Layer 22 to 30 has the minimum average distance of 0.26598974609375. Consider examining this layer more closely for potential optimization or removal.*
 
 
-PruneMELLama8bTEST-22_30 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+meta-LLama3-8b-PruneME-TEST-22_30 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 
@@ -45,7 +45,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "jsfs11/PruneMELLama8bTEST-22_30"
+model = "jsfs11/meta-LLama3-8b-PruneME-TEST-22_30"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
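The PruneMe log line quoted in the README reports the block of consecutive layers whose removal would perturb the hidden representation least. A minimal sketch of that kind of analysis on synthetic hidden states, assuming the common cosine-distance formulation (the function names and shapes here are illustrative, not PruneMe's actual API):

```python
# Illustrative sketch, NOT PruneMe's real implementation: for each candidate
# block of n consecutive layers, measure the average cosine distance between
# the hidden states entering and leaving the block; the block with the
# smallest distance changes the representation least and is the most
# plausible candidate for pruning.
import numpy as np


def avg_cosine_distance(h_in: np.ndarray, h_out: np.ndarray) -> float:
    """Mean cosine distance between paired hidden-state vectors
    (each array has shape [tokens, hidden_dim])."""
    num = np.sum(h_in * h_out, axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1)
    return float(np.mean(1.0 - num / den))


def best_block_to_prune(hidden_states, n):
    """hidden_states[i] is the output of layer i. Returns (start_layer,
    distance) for the n-layer block whose removal would change the
    representation least, i.e. the block with the minimum average distance."""
    return min(
        ((i, avg_cosine_distance(hidden_states[i], hidden_states[i + n]))
         for i in range(len(hidden_states) - n)),
        key=lambda t: t[1],
    )
```

On a 32-layer model like Meta-Llama-3-8B-Instruct with an 8-layer window, a result of roughly `(22, 0.266)` would correspond to the "Layer 22 to 30" log line above, assuming this is the distance PruneMe reports.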