
QuantFactory/glm-4-9b-chat-abliterated-GGUF

This is a quantized version of byroneverson/glm-4-9b-chat-abliterated, created with llama.cpp.
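For reference, here is a minimal sketch of loading one of these GGUF files with the llama-cpp-python bindings (this is not part of the original card; the Q4_K_M filename pattern and the parameters below are assumptions, adjust them to whichever quantization level you download):

```python
# Minimal sketch: run a GGUF quantization of this model via llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download a matching GGUF file from the repo and load it.
# The filename glob is an assumption; pick the quant level you want.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/glm-4-9b-chat-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # assumed 4-bit file; adjust as needed
    n_ctx=4096,               # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```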

Original Model Card

GLM 4 9B Chat - Abliterated

Check out the Jupyter notebook for details on how this model was abliterated from glm-4-9b-chat.

The Python package tiktoken is required to quantize this model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).
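For context, a rough sketch of what that conversion step looks like with llama.cpp's converter (the local path, output filename, and f16 intermediate below are assumptions; the fork linked above automates this):

```python
# Sketch of converting the original model to GGUF with llama.cpp's
# convert_hf_to_gguf.py. Paths and flags are illustrative assumptions.
import subprocess

# tiktoken must be importable, or loading the GLM-4 tokenizer fails.
subprocess.run(["pip", "install", "tiktoken"], check=True)

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",   # script from the llama.cpp repo
        "./glm-4-9b-chat-abliterated",       # assumed local checkout of the original model
        "--outfile", "glm-4-9b-chat-abliterated.f16.gguf",
        "--outtype", "f16",                  # lower-bit files come from llama-quantize afterwards
    ],
    check=True,
)
```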


Format: GGUF
Model size: 9.4B params
Architecture: chatglm

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

