# mlx-community/Qwen2.5-Coder-7B-Instruct-4bit
The model mlx-community/Qwen2.5-Coder-7B-Instruct-4bit was converted to MLX format from Qwen/Qwen2.5-Coder-7B-Instruct using mlx-lm version 0.18.1.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Qwen2.5-Coder-7B-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
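Since this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template before generation. A minimal sketch, assuming the tokenizer returned by `load()` exposes the standard `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-Coder-7B-Instruct-4bit")

# Wrap the user message in the model's chat template so the instruct
# model sees the expected formatting (assumption: apply_chat_template
# is available on the loaded tokenizer, as in recent mlx-lm releases).
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```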