---
license: other
license_name: other
license_link: LICENSE
---

[![Screenshot-2024-05-20-at-4-21-39-PM](https://i.ibb.co/Jkzm3cZ/Screenshot-2024-05-20-at-4-21-39-PM.png)](https://ibb.co/ThHYWwy)

Model merged with the [Solo Merge Method](https://medium.com/@puffanddmx82/enhancing-language-models-with-dynamic-attention-version-2-84ef8adc3646).

Keep in mind that this merge's accuracy on the questions you care about may vary.

Regardless of whether the idea behind this new merge method is good or bad, I believe that actually realizing what I had in mind is significant in itself.

Once again, there is no single right answer among the well-known LLMs. The right answer is the one you choose based on your own evidence, gathered from many real, random tests by actual humans.

It is fine to rely on evaluation scores, but with an LLM, what matters most is how the model actually feels after you run your own random fact-checking tests.

The gap is bigger than I thought...

If you keep going after fastening the first button wrong, you could end up in a black hole from which you can never escape...

By the time you realize it, it’s already too late...

When evaluating an LLM, don't just trust others; trust yourself, verified by your own fact checks.

### Models Merged

The following models were included in the merge:

* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
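
For illustration only, here is a minimal sketch of a plain linear (weight-averaging) merge of the three listed models using `transformers` and `torch`. This is not the Solo Merge Method from the linked article; the equal 1/3 weights, the bfloat16 dtype, and the output path are assumptions made for the sketch.

```python
# Minimal sketch of a plain linear (weight-averaging) merge of the three
# source models. NOTE: this is NOT the Solo Merge Method; the equal 1/3
# weights, bfloat16 dtype, and output path are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

sources = [
    "MLP-KTLim/llama-3-Korean-Bllossom-8B",
    "NousResearch/Meta-Llama-3-8B-Instruct",
    "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1",
]
weights = [1 / 3, 1 / 3, 1 / 3]  # assumed equal weighting

# Load the first model; it serves as the container for the merged weights
# (all three checkpoints share the same Llama-3-8B parameter layout).
merged = AutoModelForCausalLM.from_pretrained(sources[0], torch_dtype=torch.bfloat16)
merged_state = {k: v * weights[0] for k, v in merged.state_dict().items()}

# Accumulate the weighted parameters of the remaining models one at a time.
for src, w in zip(sources[1:], weights[1:]):
    model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
    for k, v in model.state_dict().items():
        merged_state[k] += w * v
    del model  # free memory before loading the next checkpoint

merged.load_state_dict(merged_state)
merged.save_pretrained("./linear-merge-llama-3-8b")  # assumed output path
AutoTokenizer.from_pretrained(sources[0]).save_pretrained("./linear-merge-llama-3-8b")
```

In practice a dedicated tool such as `mergekit` handles this more robustly (sharded loading, dtype handling, per-tensor weighting), but the arithmetic above is the core of a simple linear merge.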