
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Qwen1.5-MoE-A2.7B - bnb 4bits

Original model description:

license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
- moe

Qwen1.5-MoE-A2.7B

Introduction

Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.

For more details, please refer to our blog post and GitHub repo.

Model Details

Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture in which the model is upcycled from a dense language model. For instance, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total, of which 2.7B are activated at runtime. It achieves performance comparable to Qwen1.5-7B while requiring only 25% of the training resources, and we observed an inference speed 1.74 times that of Qwen1.5-7B.
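
As a quick illustration (not part of the original card), the sketch below inspects the MoE layout from the model config. It assumes a transformers build with Qwen2MoE support (see Requirements below); the field names num_experts and num_experts_per_tok are assumptions about that config class.

```python
# A minimal sketch, assuming a transformers build that includes Qwen2MoE support.
# The MoE-specific field names (num_experts, num_experts_per_tok) are assumptions
# about the Qwen2MoE config; getattr guards against builds where they differ.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

print(config.model_type)                              # expected: "qwen2_moe"
print(config.num_hidden_layers)                       # number of transformer blocks
print(getattr(config, "num_experts", "n/a"))          # experts per MoE layer, if exposed
print(getattr(config, "num_experts_per_tok", "n/a"))  # experts routed per token, if exposed
```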

Requirements

The code for Qwen1.5-MoE has been merged into the latest Hugging Face transformers. We advise you to install transformers from source with pip install git+https://github.com/huggingface/transformers; otherwise you might encounter the following error:

KeyError: 'qwen2_moe'.
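
To check whether your environment will hit this error, a small sanity check along the following lines may help (a sketch, not from the original card); it only fetches the config file, and the expectation that an outdated build raises KeyError: 'qwen2_moe' follows from the error quoted above.

```python
# A minimal sketch: verify that the installed transformers build recognizes the
# qwen2_moe architecture before downloading any weights.
import transformers
from transformers import AutoConfig

print(transformers.__version__)

try:
    AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")  # fetches config.json only
    print("qwen2_moe is supported by this transformers build")
except KeyError as err:
    # Older builds fail here with KeyError: 'qwen2_moe'; reinstall from source:
    #   pip install git+https://github.com/huggingface/transformers
    print(f"missing architecture support: {err}")
```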

Usage

We do not advise using base language models for text generation directly. Instead, apply post-training such as SFT, RLHF, or continued pretraining to this model.
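
Since this repository hosts a bitsandbytes 4-bit quantization, here is a sketch (not taken from the original card) of how such weights might be loaded. It points at the original Qwen checkpoint and quantizes on the fly via BitsAndBytesConfig; substitute this repository's id to use the pre-quantized weights instead.

```python
# A minimal sketch (assumptions: recent transformers, bitsandbytes installed, CUDA GPU).
# Uses the original Qwen checkpoint with on-the-fly 4-bit quantization; to use the
# pre-quantized weights, point model_id at this repository instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen1.5-MoE-A2.7B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Base models are best used for further post-training (SFT, RLHF, continued
# pretraining) rather than direct text generation, as noted above.
```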
