
What is it? A 2x7B MoE model intended for roleplay.

You will sometimes get GPT-like responses; just skip them and reroll (gacha time). Overall, I think it's good enough for roleplaying.
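
Since rerolling is just resampling, here is a minimal reroll sketch, assuming the standard transformers API; the prompt and sampling settings below are placeholders, not recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model_id = "Alsebay/RainyMotip-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "You are a tavern keeper in a fantasy town. A stranger walks in."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# "Gacha time": resample with a different seed until the reply isn't GPT-like.
for seed in (1, 2, 3):
    set_seed(seed)
    output = model.generate(
        **inputs, max_new_tokens=200, do_sample=True,
        temperature=0.9, top_p=0.95,
    )
    print(f"--- roll {seed} ---")
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```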

You may want to see this: https://huggingface.co/Alsebay/My_LLMs_Leaderboard

This model is a Mixture of Experts (MoE) made with the following models (see the inspection sketch after the list):

  • udkai/Turdus
  • Kquant03/Samlagast-7B-laser-bf16
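
If you want to double-check the merged layout, the checkpoint config can be inspected. A minimal sketch, assuming the merge uses the Mixtral-style MoE architecture that 2x7B merges typically ship as (the attribute names come from that config class):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Alsebay/RainyMotip-2x7B")
print(config.model_type)           # expected "mixtral" for this kind of merge
print(config.num_local_experts)    # expected 2, one per source model
print(config.num_experts_per_tok)  # how many experts are routed per token
```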

If you use it, please let me know whether it's good or not. Thank you :)

GGUF version here:

https://huggingface.co/Alsebay/RainyMotip-2x7B-GGUF
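
A minimal sketch for running a GGUF quant locally with llama-cpp-python; the quant filename below is an assumption, pick a real file from the repo:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="RainyMotip-2x7B.Q4_K_M.gguf",  # placeholder filename, use an actual file from the repo
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if llama.cpp was built with GPU support
)

out = llm(
    "You are a wandering bard. Describe the town square.",
    max_tokens=200,
    temperature=0.9,
)
print(out["choices"][0]["text"])
```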

Want more quantizations?

mradermacher made a full set of them. Check it out :) https://huggingface.co/mradermacher/RainyMotip-2x7B-GGUF

Model size: 12.9B params (BF16, Safetensors)