---
license: apache-2.0
---

# The Quantized Meta LLaMA 3 70B Instruct Model

Original Base Model: `meta-llama/Meta-Llama-3-70B-Instruct`<br>
Link: [https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

## Quantization Configurations
```json
"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
        "version": 1
    },
    "group_size": 128,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
},
```
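The configuration above describes 4-bit symmetric GPTQ quantization with a group size of 128 weights per scaling group. As a rough, back-of-the-envelope sketch of why this matters for a 70B-parameter model (the parameter count and the overhead-free arithmetic below are simplifying assumptions; per-group scales and zero-points add a few percent on top):

```python
# Rough weight-memory estimate for a 70B-parameter model.
# fp16 stores 2 bytes per weight; 4-bit GPTQ stores 0.5 bytes per weight
# (ignoring the small per-group overhead for scales and zero-points).
bits = 4            # "bits" from the quantization config above
params = 70e9       # assumed parameter count (70B)

fp16_gib = params * 2 / 1024**3
int4_gib = params * bits / 8 / 1024**3

print(f"fp16 weights: ~{fp16_gib:.0f} GiB")   # ~130 GiB
print(f"4-bit weights: ~{int4_gib:.0f} GiB")  # ~33 GiB
```

With `"use_exllama": true` and ExLlama kernel version 1, GPTQ checkpoints like this one are typically loadable through `AutoModelForCausalLM.from_pretrained(..., device_map="auto")` once `optimum` and `auto-gptq` are installed; this reflects the standard Transformers GPTQ workflow rather than anything stated in this card.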
## Source Code
The quantization source code is available at [https://github.com/vkola-lab/medpodgpt/tree/main/quantization](https://github.com/vkola-lab/medpodgpt/tree/main/quantization).