# PruneMELLama8bTEST-22_30

This model was pruned after being analyzed with [PruneMe](https://github.com/arcee-ai/PruneMe).

*INFO:root:Layer 22 to 30 has the minimum average distance of 0.26598974609375. Consider examining this layer more closely for potential optimization or removal.*
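The log line above is PruneMe's block-importance result: for every window of consecutive layers, it measures how much the residual stream changes across that window, and the window with the smallest average distance is the best pruning candidate. The following is an illustrative sketch of that idea (not PruneMe's actual code), using synthetic activations and cosine distance; in the real workflow the hidden states come from running the model over a calibration dataset.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # 1 - cosine similarity, computed per token vector.
    num = (a * b).sum(axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return 1.0 - num / den

def best_block_to_prune(hidden_states: np.ndarray, block_size: int):
    # hidden_states: (num_layers + 1, num_tokens, hidden_dim) — the
    # residual stream before layer 0 and after each layer. A block whose
    # input and output are nearly identical changes the representation
    # little, so removing it should hurt the model least.
    num_layers = hidden_states.shape[0] - 1
    distances = {}
    for start in range(num_layers - block_size + 1):
        d = cosine_distance(hidden_states[start],
                            hidden_states[start + block_size])
        distances[(start, start + block_size)] = float(d.mean())
    return min(distances, key=distances.get), distances

# Toy residual stream for a 32-layer model (33 recorded states); the
# cumulative sum makes successive states correlated, like a real model.
rng = np.random.default_rng(0)
states = np.cumsum(rng.normal(size=(33, 16, 64)) * 0.1, axis=0)
block, dists = best_block_to_prune(states, block_size=8)
print(f"Layer {block[0]} to {block[1]} has the minimum average distance "
      f"of {dists[block]:.4f}.")
```

For this model, the scan reported layers 22 to 30 (a window of 8 layers) as the block with the minimum average distance.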
PruneMELLama8bTEST-22_30 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
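In the usual PruneMe-to-mergekit workflow, the identified block is dropped with a `passthrough` merge that stitches together the layers on either side of it. The exact configuration used for this model is not shown in the card, but a plausible reconstruction — assuming the 32-layer Meta-Llama-3-8B-Instruct and removal of layers 22–30 — would look like:

```yaml
# Hypothetical mergekit config (not the verified one for this model):
# keep layers 0-21 and 30-31, skipping the low-distance block 22-30.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 22]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [30, 32]
merge_method: passthrough
dtype: bfloat16
```

Because both slices come from the same model, the "merge" is really a self-merge that prunes the redundant block, yielding a 24-layer model.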