---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
- chat
- abliterated
library_name: transformers
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/glm-4-9b-chat-abliterated-GGUF
This is a quantized version of [byroneverson/glm-4-9b-chat-abliterated](https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated), created using llama.cpp.
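As with other GGUF quantizations, the files in this repo can be run locally with llama.cpp. A minimal sketch, assuming a recent llama.cpp build that provides the `llama-cli` binary and the `huggingface-cli` tool from `huggingface_hub`; the quant filename below is illustrative, so check the repo's file list for the actual names:

```shell
# Download one quant file from this repo (filename is an assumption;
# pick the actual quant you want from the repo's "Files" tab).
huggingface-cli download QuantFactory/glm-4-9b-chat-abliterated-GGUF \
  glm-4-9b-chat-abliterated.Q4_K_M.gguf --local-dir .

# Start an interactive chat with llama.cpp's CLI
# (-cnv = conversation mode, -c = context length).
llama-cli -m glm-4-9b-chat-abliterated.Q4_K_M.gguf \
  -p "You are a helpful assistant." -cnv -c 4096
```

Any llama.cpp-compatible frontend (e.g. `llama-server` for an OpenAI-style HTTP API) can load the same GGUF file.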
# Original Model Card

# GLM 4 9B Chat - Abliterated
Check out the <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/blob/main/abliterate-glm-4-9b-chat.ipynb">Jupyter notebook</a> for details on how this model was abliterated from glm-4-9b-chat.
The Python package `tiktoken` is required to quantize the model into GGUF format, so I created <a href="https://huggingface.co/spaces/byroneverson/gguf-my-repo-plus-tiktoken">a fork of GGUF My Repo (+tiktoken)</a>.
![Logo](https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/resolve/main/logo.png "Logo")