---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
  - zh
  - en
tags:
  - glm
  - chatglm
  - thudm
  - chat
  - abliterated
library_name: transformers
---

# QuantFactory/glm-4-9b-chat-abliterated-GGUF

This is a quantized version of byroneverson/glm-4-9b-chat-abliterated, created using llama.cpp.
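To try the quantized model locally, you can download one of the GGUF files from this repo and run it with llama.cpp. A minimal sketch, assuming a `Q4_K_M` quantization file exists in the repo (the exact filename is an assumption; pick whichever quantization the repo actually provides):

```shell
# Download one quantization from this repo (hypothetical filename).
huggingface-cli download QuantFactory/glm-4-9b-chat-abliterated-GGUF \
  glm-4-9b-chat-abliterated.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI binary.
./llama-cli -m glm-4-9b-chat-abliterated.Q4_K_M.gguf -cnv
```

Smaller quantizations (e.g. `Q4_K_M`) trade some quality for lower memory use; larger ones (e.g. `Q8_0`) are closer to the original weights.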

## Original Model Card

## GLM 4 9B Chat - Abliterated

Check out the Jupyter notebook for details on how this model was abliterated from glm-4-9b-chat.

The Python package "tiktoken" is required to quantize the model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).
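The conversion step described above can be sketched with llama.cpp's own tooling. This is a hedged outline, not the exact commands used here: it assumes a local llama.cpp checkout, a local copy of the abliterated HF model, and hypothetical output filenames:

```shell
# tiktoken is needed because GLM-4's tokenizer depends on it during conversion.
pip install tiktoken

# Convert the HF model directory to an f16 GGUF (paths are placeholders).
python llama.cpp/convert_hf_to_gguf.py ./glm-4-9b-chat-abliterated \
  --outfile glm-4-9b-chat-abliterated.f16.gguf

# Quantize the f16 GGUF down to a smaller format, e.g. Q4_K_M.
./llama.cpp/llama-quantize glm-4-9b-chat-abliterated.f16.gguf \
  glm-4-9b-chat-abliterated.Q4_K_M.gguf Q4_K_M
```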
