Qwen

yangapku committed: update readme

Commit 68e72e3 (1 parent: 193987f)

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -43,9 +43,9 @@ For more details about the open-source model of Qwen-7B, please refer to the [Gi

 ## 依赖项(Dependency)

-运行Qwen-7B-Chat,请确保机器环境pytorch版本不低于1.12,再执行以下pip命令安装依赖库
+运行Qwen-7B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库

-To run Qwen-7B-Chat, please make sure that pytorch version is not lower than 1.12, and then execute the following pip commands to install the dependent libraries.
+To run Qwen-7B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.

 ```bash
 pip install transformers==4.31.0 accelerate tiktoken einops
@@ -292,9 +292,9 @@ Qwen-7B-Chat also has the capability to be used as a [HuggingFace Agent](https:/

 ## 量化(Quantization)

-如希望使用更低精度的量化模型,如4比特和8比特的模型,我们提供了简单的示例来说明如何快速使用量化模型。在开始前,确保你已经安装了`bitsandbytes`。请注意:`bitsandbytes`的安装要求是:
+如希望使用更低精度的量化模型,如4比特和8比特的模型,我们提供了简单的示例来说明如何快速使用量化模型。在开始前,确保你已经安装了`bitsandbytes`。请注意,`bitsandbytes`的安装要求是:

-We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have implemented `bitsandbytes`. Note that the requirements for `bitsandbytes` is:
+We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have implemented `bitsandbytes`. Note that the requirements for `bitsandbytes` are:

 ```
 **Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
@@ -309,7 +309,7 @@ Windows users should find another option, which might be [bitsandbytes-windows-w
 Then you only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained`. See the example below:

 ```python
-from transformers import BitsAndBytesConfig
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig

 # quantization configuration for NF4 (4 bits)
 quantization_config = BitsAndBytesConfig(
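
The hunk above ends before the configuration is actually passed to `AutoModelForCausalLM.from_pretrained`. As a reference, here is a minimal sketch of how such an NF4 loading snippet typically looks with the `transformers` + `bitsandbytes` API; the checkpoint id `Qwen/Qwen-7B-Chat`, the specific `BitsAndBytesConfig` arguments, and the device settings are assumptions, not text taken from the README.

```python
# Minimal NF4 loading sketch (assumed values, not copied from the README).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 weight format from bitsandbytes
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype is an assumption
)
# For Int8 instead, BitsAndBytesConfig(load_in_8bit=True) plays the same role.

# Pass the configuration to from_pretrained, as the README text describes.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",                    # assumed checkpoint id
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
).eval()
```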
 
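Once the dependencies from the first hunk are installed and a model has been loaded (quantized or not), generation follows the usual Qwen-7B-Chat pattern. The sketch below assumes the `chat()` helper exposed by the model's `trust_remote_code` implementation and an illustrative prompt; it is a usage example, not code from the README.

```python
# Usage sketch: load tokenizer and model, then run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
).eval()

# chat() returns the reply and the updated dialogue history.
response, history = model.chat(tokenizer, "Hello! What can you do?", history=None)
print(response)
```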