intervitens committed
Commit daa1e59
1 Parent(s): 387fa84

Update README.md

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -8,6 +8,18 @@ language:
  - en
  inference: false
  ---
+
+
+ Quantized using samples of 8192 tokens from the default ExllamaV2 dataset.
+
+ Requires ExllamaV2 version 0.0.11 and up.
+
+ Original model link: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+
+ Original model README below.
+
+ ***
+
  # Model Card for Mixtral-8x7B
  The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
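
For reference, a minimal inference sketch for an ExllamaV2-quantized model such as this one, assuming the `exllamav2` Python package (0.0.11-era basic API); the model directory, prompt, and sampling values below are placeholders, not part of the original card:

```python
# Minimal ExllamaV2 inference sketch (assumes exllamav2 >= 0.0.11 is installed
# and the quantized model has been downloaded locally; the path is a placeholder).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Mixtral-8x7B-Instruct-v0.1-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache while weights load
model.load_autosplit(cache)                # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # example sampling values
settings.top_p = 0.9

# Mixtral-Instruct prompt format: [INST] ... [/INST]
prompt = "[INST] Explain mixture-of-experts models in one paragraph. [/INST]"

generator.warmup()
output = generator.generate_simple(prompt, settings, num_tokens=200)
print(output)
```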