luow-amd committed · Commit 1da6eaf · 1 parent: d349040

Update Readme.md

Files changed: README.md (+73, −3)
---
license: llama3.1
---
# Meta-Llama-3.1-8B-Instruct-FP8-KV
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
## Quantization Strategy
- ***Quantized Layers***: All linear layers excluding "lm_head"
- ***Weight***: FP8 symmetric per-tensor
- ***Activation***: FP8 symmetric per-tensor
- ***KV Cache***: FP8 symmetric per-tensor
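As a rough illustration of what "FP8 symmetric per-tensor" means (this is a hypothetical sketch, not Quark's actual code): a single scale is chosen for the whole tensor so that its absolute maximum maps to the FP8-E4M3 maximum representable value (448), and values are scaled symmetrically around zero with no zero-point.

```python
import numpy as np

# Hypothetical sketch of FP8-E4M3 symmetric per-tensor scaling
# (not the actual Quark implementation).
FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_per_tensor_fp8(w: np.ndarray):
    # One scale for the entire tensor ("per-tensor"), no zero-point ("symmetric")
    scale = np.abs(w).max() / FP8_E4M3_MAX
    q = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    # Real FP8 would additionally round q onto the E4M3 grid; we keep it
    # in float here to illustrate only the scaling step.
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

w = np.array([-1.5, 0.25, 3.0, -0.75])
q, s = quantize_per_tensor_fp8(w)
w_hat = dequantize(q, s)  # exact here only because FP8 rounding is skipped
```

Per-tensor scaling is the cheapest granularity at inference time (one multiplier per tensor), at the cost of some precision versus per-channel schemes.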
## Quick Start
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
2. Run the quantization script in the example folder using the following command line:
```sh
# MODEL_DIR: a local model checkpoint folder, or the Hub ID meta-llama/Meta-Llama-3.1-8B-Instruct
export MODEL_DIR="meta-llama/Meta-Llama-3.1-8B-Instruct"

# single GPU
python3 quantize_quark.py \
  --model_dir $MODEL_DIR \
  --output_dir llama31_8b_amd \
  --quant_scheme w_fp8_a_fp8 \
  --kv_cache_dtype fp8 \
  --num_calib_data 128 \
  --model_export quark_safetensors

# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
  --model_dir $MODEL_DIR \
  --output_dir llama31_8b_amd \
  --quant_scheme w_fp8_a_fp8 \
  --kv_cache_dtype fp8 \
  --num_calib_data 128 \
  --multi_gpu \
  --model_export quark_safetensors
```
## Evaluation
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be referenced in quantize_quark.py.
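For intuition, perplexity is the exponential of the average negative log-likelihood the model assigns to the ground-truth tokens; lower is better, and a small PPL increase after quantization indicates small accuracy loss. The minimal sketch below illustrates the metric itself (it is not the exact routine in quantize_quark.py, which evaluates a real model over wikitext2).

```python
import math

# Illustrative definition of perplexity (not the quantize_quark.py routine):
# PPL = exp(mean negative log-likelihood over the evaluated tokens).
def perplexity(token_logprobs):
    """token_logprobs: natural-log probabilities the model assigned
    to each ground-truth token in the evaluation text."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.5 to every token has perplexity 2:
uniform = [math.log(0.5)] * 10
```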

#### Evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Meta-Llama-3.1-8B-Instruct</strong></td>
    <td><strong>Meta-Llama-3.1-8B-Instruct-FP8-KV (this model)</strong></td>
  </tr>
  <tr>
    <td>Perplexity-wikitext2</td>
    <td>7.2169</td>
    <td>7.2752</td>
  </tr>
</table>
60
+ #### License
61
+ Copyright (c) 2018-2024 Advanced Micro Devices, Inc. All Rights Reserved.
62
+
63
+ Licensed under the Apache License, Version 2.0 (the "License");
64
+ you may not use this file except in compliance with the License.
65
+ You may obtain a copy of the License at
66
+
67
+ http://www.apache.org/licenses/LICENSE-2.0
68
+
69
+ Unless required by applicable law or agreed to in writing, software
70
+ distributed under the License is distributed on an "AS IS" BASIS,
71
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
72
+ See the License for the specific language governing permissions and
73
+ limitations under the License.