BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF

Asalamu Alaikum! This model was converted to GGUF format from internlm/internlm2_5-7b-chat using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Description (per TheBloke)

This repo contains GGUF format model files.

These files were quantised using ggml-org/gguf-my-repo [https://huggingface.co/spaces/ggml-org/gguf-my-repo]
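If you want to reproduce a conversion like this locally rather than through the space, the sketch below shows the usual llama.cpp workflow. The script and tool names (convert_hf_to_gguf.py, llama-quantize) come from a recent llama.cpp checkout and have changed between versions, so treat the exact commands as an illustration:

# Download the original weights, convert to an fp16 GGUF, then quantise to Q8_0 (paths are examples).
huggingface-cli download internlm/internlm2_5-7b-chat --local-dir internlm2_5-7b-chat
python convert_hf_to_gguf.py internlm2_5-7b-chat --outtype f16 --outfile internlm2_5-7b-chat-f16.gguf
./llama-quantize internlm2_5-7b-chat-f16.gguf internlm2_5-7b-chat-q8_0.gguf Q8_0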

About GGUF (per TheBloke)

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

  • llama.cpp. The source project for GGUF. Offers a CLI and a server option.
  • text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  • KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
  • GPT4All, a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
  • LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
  • LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
  • Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
  • ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit d0cee0d.

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

Explanation of quantisation methods

The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
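As a sanity check on these figures, the Q4_K case can be worked out by hand (assuming the usual 16-bit super-block scale and min, a detail not stated above): a super-block holds 8 × 32 = 256 weights, giving 256 × 4 = 1024 bits of quants, plus 8 × (6 + 6) = 96 bits of block scales and mins, plus 2 × 16 = 32 bits for the super-block scale and min, for a total of 1152 bits over 256 weights, i.e. 4.5 bpw.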

Refer to the Provided Files table below to see what files use which methods, and how.

Provided Files (Not Including iMatrix Quantization)

| Quant method | Bits | Example Size | Max RAM required | Use case |
| --- | --- | --- | --- | --- |
| Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| Q3_K_S | 3 | 3.16 GB | 5.66 GB | very small, high quality loss |
| Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| Q3_K_L | 3 | 3.82 GB | 6.32 GB | small, substantial quality loss |
| Q4_0 | 4 | 4.11 GB | 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss |
| Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| Q5_0 | 5 | 5.00 GB | 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended |
| Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
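For example, layers can be offloaded with llama.cpp's -ngl (--n-gpu-layers) flag; this assumes a GPU-enabled build, and the right layer count depends on how much VRAM you have:

# Offload all layers to the GPU; lower the -ngl value if the model does not fit in VRAM.
llama-cli -m internlm2_5-7b-chat-q8_0.gguf -ngl 99 -p "The meaning to life and the universe is"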


Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF --hf-file internlm2_5-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"
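Since this is a chat model, you will likely want llama-cli's conversation mode rather than a one-shot prompt. In recent llama.cpp builds that is the -cnv flag (in that mode -p sets the system prompt); flag names have shifted between versions, so check llama-cli --help for your build:

llama-cli --hf-repo BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF --hf-file internlm2_5-7b-chat-q8_0.gguf -cnv -p "You are a helpful assistant."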

Server:

llama-server --hf-repo BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF --hf-file internlm2_5-7b-chat-q8_0.gguf -c 2048
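The server listens on http://localhost:8080 by default and exposes an OpenAI-compatible chat completions endpoint, so with a recent llama.cpp build a request along these lines should work:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello, who are you?"}]}'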

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
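For example, on Linux with an NVIDIA GPU the same step would be (using the LLAMA_CUDA=1 flag mentioned above; newer llama.cpp releases have moved to a CMake-based build, so check the repo's build docs if make is no longer supported):

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make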

Step 3: Run inference through the main binary.

./llama-cli --hf-repo BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF --hf-file internlm2_5-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo BenevolenceMessiah/internlm2_5-7b-chat-Q8_0-GGUF --hf-file internlm2_5-7b-chat-q8_0.gguf -c 2048