---
base_model:
- beomi/Llama-3-Open-Ko-8B
- aaditya/Llama3-OpenBioLLM-8B
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v2.2-8B
- asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
library_name: transformers
tags:
- mergekit
- merge
---
# U-GO-GIRL-Llama-3-KoEn-8B
<a href="https://ibb.co/cr8X8zd"><img src="https://i.ibb.co/Tg0q0z5/ugoo.png" alt="ugoo" border="0"></a>
**Is U-GO_GIRL the top-tier AI magic you’ve been craving?**
Experience the pinnacle of artificial intelligence with U-GO_GIRL.
Keep in mind that answer accuracy on your questions may vary with this merge.
When evaluating an LLM, don't just take others' word for it; fact-check the outputs yourself.
Buy me a cup of coffee if you'd like me to do more work like this for you.
[Toonation Donation](https://toon.at/donate/asiansoul)
ETH/USDT(ERC20) Donation : 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0
## Use
Korean, English, medical, writing, coding, etc.
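Since the card declares `library_name: transformers`, loading the model with the standard Llama 3 chat template should look roughly like the sketch below. The repo id `asiansoul/U-GO-GIRL-Llama-3-KoEn-8B`, the example prompt, and the sampling settings are illustrative assumptions, not values taken from this card.

```python
# Minimal inference sketch (assumed repo id and settings; adjust as needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asiansoul/U-GO-GIRL-Llama-3-KoEn-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful bilingual (Korean/English) assistant."},
    {"role": "user", "content": "고혈압 환자가 피해야 할 음식을 알려줘."},  # example medical question in Korean
]

# Build the Llama 3 chat prompt and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```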
### Models Mixed
The following models were included in the merge:
* [asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B](https://huggingface.co/asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B)
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
## Citation
**Language Model**
```text
@misc{bllossom,
  author = {JayLee aka "asiansoul"},
  title = {Merge with Dare, Solo},
  year = {2024},
  howpublished = {\url{https://medium.com/@puffanddmx82/enhancing-language-models-with-dynamic-attention-version-2-84ef8adc3646}},
}
```