# test_tiny_mixtral

test_tiny_mixtral is a Mixture of Experts (MoE) model made with the following models using LazyMergekit:
- openaccess-ai-collective/tiny-mistral
- openaccess-ai-collective/tiny-mistral
- openaccess-ai-collective/tiny-mistral
- openaccess-ai-collective/tiny-mistral
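All four experts are the same tiny-mistral base, so this merge is a structural test rather than a specialized model. For intuition, a Mixtral-style MoE layer routes each token to its top-2 experts and mixes their outputs. The sketch below is purely illustrative (toy dimensions, random weights, a simplified routing order), not the merged model's actual code:

```python
# Illustrative top-2 MoE routing sketch (toy sizes, not the real implementation).
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 8, 64)                # (batch, seq, hidden_dim)
router = torch.nn.Linear(64, 4, bias=False)   # 4 experts, as in this merge
experts = torch.nn.ModuleList([torch.nn.Linear(64, 64) for _ in range(4)])

logits = router(hidden)                        # (1, 8, 4) routing scores per token
weights, idx = torch.topk(logits, k=2, dim=-1) # pick the 2 best experts per token
weights = F.softmax(weights, dim=-1)           # renormalize over the chosen experts

out = torch.zeros_like(hidden)
for slot in range(2):                          # each of the two routing slots
    for e in range(4):
        mask = idx[..., slot] == e             # tokens sent to expert e in this slot
        if mask.any():
            out[mask] += weights[..., slot][mask].unsqueeze(-1) * experts[e](hidden[mask])
```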
## 🧩 Configuration
```yaml
base_model: openaccess-ai-collective/tiny-mistral
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: openaccess-ai-collective/tiny-mistral
    positive_prompts:
      - "math"
    # You can add negative_prompts if needed
  - source_model: openaccess-ai-collective/tiny-mistral
    positive_prompts:
      - "science"
  - source_model: openaccess-ai-collective/tiny-mistral
    positive_prompts:
      - "writing"
    # You can add negative_prompts if needed
  - source_model: openaccess-ai-collective/tiny-mistral
    positive_prompts:
      - "general"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "JSpergel/test_tiny_mixtral"
tokenizer = AutoTokenizer.from_pretrained(model)

# Text-generation pipeline, loaded in 4-bit (requires a GPU and bitsandbytes)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the tokenizer's chat template before generating
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
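The 4-bit path above requires a CUDA GPU for bitsandbytes. Since the experts are tiny, the model should also run in full precision on CPU; a minimal sketch using plain `AutoModelForCausalLM` (the prompt text here is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JSpergel/test_tiny_mixtral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # defaults to float32 on CPU

inputs = tokenizer("Mixture of Experts models work by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```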