justinthelaw committed
Commit: e1e25d4
Parent: 7ee0383

fix README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 base_model: mistralai/Mistral-7B-v0.1
 datasets:
 - teknium/OpenHermes-2.5
-- VMWare/open-instruct
+- vmware/open-instruct
 ---
 
 # Hermes-2-Pro-Mistral-7B GPTQ 4-bit 32g Group Size
@@ -32,7 +32,7 @@ datasets:
 <!-- description start -->
 ## Description
 
-This repo contains GPTQ 4-bit, 128g Group Size, quantized model files for the recently released upgrade of [Hermes-2-Pro-Mistral-7B](https://huggingface.co/justinthelaw/Hermes-2-Pro-Mistral-7B-4bit-128g-instruct).
+This repo contains GPTQ 4-bit, 32g Group Size, quantized model files for the Nous Research [Hermes-2-Pro-Mistral-7B](https://huggingface.co/justinthelaw/Hermes-2-Pro-Mistral-7B-4bit-128g-instruct) fine-tune of the [Mistral-7b-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) model.
 
 <!-- README_GPTQ.md-provided-files start -->
 ## GPTQ parameters
@@ -41,7 +41,7 @@ Models are released as sharded safetensors files.
 
 | Bits | GS | GPTQ Dataset | Seq Len | Size |
 | ---- | -- | ----------- | ------- | ---- |
-| 4 | 128 | [VMWare Open Instruct](https://huggingface.co/datasets/vmware/open-instruct) | 128,000 | 2.28 GB
+| 4 | 32 | [VMWare Open Instruct](https://huggingface.co/datasets/vmware/open-instruct) | 128,000 | 4.57 GB
 
 <!-- README_GPTQ.md-provided-files end -->
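To make the "GS" (group size) column concrete: in group-wise quantization, each contiguous group of weights shares one scale and zero-point, so a smaller group size (32 vs. 128) stores more per-group parameters and reconstructs weights more faithfully, at the cost of a larger file (4.57 GB vs. 2.28 GB above). The following is a minimal, illustrative NumPy sketch of 4-bit group-wise round-to-nearest quantization; it is not the GPTQ algorithm itself (GPTQ additionally applies Hessian-based error correction), and the function names are hypothetical.

```python
import numpy as np

def quantize_groupwise(w, bits=4, group_size=32):
    """Illustrative group-wise quantization (not GPTQ's full algorithm).

    Each contiguous group of `group_size` weights shares one fp scale and
    one fp zero-point; the weights themselves are stored as `bits`-bit ints.
    """
    qmax = 2 ** bits - 1                      # 15 for 4-bit
    groups = w.reshape(-1, group_size)        # one row per group
    lo = groups.min(axis=1, keepdims=True)    # per-group zero-point
    hi = groups.max(axis=1, keepdims=True)
    scale = (hi - lo) / qmax                  # per-group scale
    q = np.round((groups - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_groupwise(q, scale, lo):
    """Reconstruct approximate weights from ints + per-group scale/zero."""
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)

q, scale, lo = quantize_groupwise(w, bits=4, group_size=32)
w_hat = dequantize_groupwise(q, scale, lo).ravel()

# Rounding error is bounded by half a quantization step per group,
# and the step shrinks as the group size shrinks.
max_err = np.abs(w - w_hat).max()
```

Halving the group size roughly doubles the number of stored scales and zero-points, which is why the 32g file above is about twice the size of the 128g file.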