---
pipeline_tag: text-generation
license: apache-2.0
language:
  - zh
  - en
---

Model Card for Breeze-7B-Instruct-64k-v0_1 with ExLlamaV2 Quantization

Original model: https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1

This is a quantized version of MediaTek-Research/Breeze-7B-Instruct-64k-v0_1 in the ExLlamaV2 (exl2) format.

You are currently on the main branch, which provides only the measurement.json file used during ExLlamaV2 quantization; a short sketch of how that file can be reused is given below. The quantized weights themselves live in the branches listed in the table that follows, so please pick the variant that suits you.
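As a hedged illustration (not part of the original card), one way to fetch measurement.json from this branch and reuse it in a new ExLlamaV2 quantization pass is sketched below; the repository id and all paths are placeholders.

```python
# Minimal sketch: fetch measurement.json from the main branch so it can be
# reused for another ExLlamaV2 quantization run (skipping the measurement pass).
# The repo_id below is a placeholder for this repository's actual id.
from huggingface_hub import hf_hub_download

repo_id = "kennylam/Breeze-7B-Instruct-64k-v0_1-exl2"  # placeholder, adjust to this repo
measurement_path = hf_hub_download(
    repo_id=repo_id,
    filename="measurement.json",
    revision="main",
)

# The downloaded file would then be passed to ExLlamaV2's convert.py via -m, e.g.:
#   python convert.py -i <original_model_dir> -o <work_dir> -cf <output_dir> \
#       -b 6.0 -m <path_to_measurement.json>
print(measurement_path)
```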

| Branch | Description |
|-----------|--------------------|
| 8.0bpw-h8 | 8 bits per weight |
| 6.0bpw-h6 | 6 bits per weight |
| 5.0bpw-h6 | 5 bits per weight |
| 4.0bpw-h6 | 4 bits per weight |
| 3.0bpw-h6 | 3 bits per weight |
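For convenience, here is a rough usage sketch, not taken from the original card, showing one way to download a specific bpw branch and run it with the ExLlamaV2 Python API. The repository id, branch name, prompt, and sampling settings are assumptions, and the ExLlamaV2 API may differ slightly between versions.

```python
# Sketch: download one quantization branch and run a short generation with ExLlamaV2.
# Repo id and revision are placeholders; pick a branch from the table above.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = snapshot_download(
    repo_id="kennylam/Breeze-7B-Instruct-64k-v0_1-exl2",  # placeholder for this repo
    revision="6.0bpw-h6",                                  # chosen quantization branch
)

# Load the quantized model, splitting it across available GPUs as needed.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

# Simple greedy-ish sampling for a quick smoke test.
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("你好，請自我介紹。", settings, 200))
```

Lower-bpw branches trade output quality for a smaller VRAM footprint, so choose the largest bpw that fits your hardware.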

Citation

@article{breeze7b2024,
  title={},
  author={},
  journal={arXiv},
  year={2024}
}