
afrideva/Dimensity-3B-GGUF

Quantized GGUF model files for Dimensity-3B from Dimensity

| Name | Quant method | Size |
|------|--------------|------|
| dimensity-3b.fp16.gguf | fp16 | 5.59 GB |
| dimensity-3b.q2_k.gguf | q2_k | 1.20 GB |
| dimensity-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| dimensity-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| dimensity-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| dimensity-3b.q6_k.gguf | q6_k | 2.30 GB |
| dimensity-3b.q8_0.gguf | q8_0 | 2.97 GB |
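
These files can be fetched programmatically with the huggingface_hub client. The snippet below is a minimal sketch, assuming the repo id afrideva/Dimensity-3B-GGUF and the q4_k_m variant; any filename from the table above can be substituted.

```python
# Minimal sketch: download one quantized GGUF file from the Hub.
# Assumes repo id afrideva/Dimensity-3B-GGUF; swap `filename` for any
# of the quantization variants listed above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="afrideva/Dimensity-3B-GGUF",
    filename="dimensity-3b.q4_k_m.gguf",
)
print(model_path)  # local path to the cached GGUF file
```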

Original Model Card:

Dimensity-3B

Model Details

Dimensity-3B is a fine-tuned StableLM-based model trained on a variety of conversational data. It contains 3 billion parameters.

Intended Uses

This model is intended for conversational AI applications. It can engage in open-ended dialogue by generating responses to user prompts.

Factors

Training Data

The model was trained on a large dataset of over 100 million conversational exchanges extracted from Reddit comments, customer support logs, and other online dialogues.

Prompt Template

The model was fine-tuned using the following prompt template:

### Human: {prompt} 

### Assistant:

This prompts the model to take on an assistant role.
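
For reference, here is a minimal sketch of applying this template with the llama-cpp-python bindings against one of the GGUF files above. The model path, context size, and generation settings are illustrative assumptions, not values from the original card.

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded
# q4_k_m file; path and generation settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="dimensity-3b.q4_k_m.gguf", n_ctx=2048)

# Fill the card's prompt template with a user message.
prompt = "### Human: What is the capital of France?\n\n### Assistant:"

out = llm(
    prompt,
    max_tokens=128,
    stop=["### Human:"],  # stop before the model starts a new turn
)
print(out["choices"][0]["text"].strip())
```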

Ethical Considerations

As the model was trained on public conversational data, it may generate responses that contain harmful stereotypes or toxic content. The model should be used with caution in sensitive contexts.

Caveats and Recommendations

This model is designed for open-ended conversation. It may sometimes generate plausible-sounding but incorrect information. Outputs should be validated against external sources.

Model size: 2.8B params
Architecture: stablelm
