Update README.md
README.md
CHANGED
@@ -76,8 +76,8 @@ Overall, InternLM-20B comprehensively outperforms open-source models in the 13B
76   | To load the InternLM 7B Chat model using Transformers, use the following code:
77   | ```python
78   | >>> from transformers import AutoTokenizer, AutoModelForCausalLM
79 - | >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-20b
80 - | >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-20b
81   | >>> model = model.eval()
82   | >>> inputs = tokenizer(["Coming to the beautiful nature, we found"], return_tensors="pt")
83   | >>> for k,v in inputs.items():
@@ -146,8 +146,8 @@ In its architecture, InternLM 20B opts for a deep structure with the layer count set to 60, …
146   | Load the InternLM 20B model with the following code:
147   | ```python
148   | >>> from transformers import AutoTokenizer, AutoModelForCausalLM
149 - | >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-20b
150 - | >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-20b
151   | >>> model = model.eval()
152   | >>> inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
153   | >>> for k,v in inputs.items():
76   | To load the InternLM 7B Chat model using Transformers, use the following code:
77   | ```python
78   | >>> from transformers import AutoTokenizer, AutoModelForCausalLM
79 + | >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)
80 + | >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True).cuda()
81   | >>> model = model.eval()
82   | >>> inputs = tokenizer(["Coming to the beautiful nature, we found"], return_tensors="pt")
83   | >>> for k,v in inputs.items():
146   | Load the InternLM 20B model with the following code:
147   | ```python
148   | >>> from transformers import AutoTokenizer, AutoModelForCausalLM
149 + | >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)
150 + | >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True).cuda()
151   | >>> model = model.eval()
152   | >>> inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
153   | >>> for k,v in inputs.items():
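Both hunks cut off at the line `>>> for k,v in inputs.items():`, so the loop body is not visible in this diff. In snippets of this kind the loop typically moves each tokenized input onto the GPU before generation; since the actual body is truncated here, that completion is an assumption. A minimal runnable sketch of the pattern, using a hypothetical `FakeTensor` stand-in so it executes without torch or a CUDA device:

```python
class FakeTensor:
    """Stand-in for torch.Tensor so the pattern can run without a GPU."""

    def __init__(self, data, device="cpu"):
        self.data = data
        self.device = device

    def cuda(self):
        # torch.Tensor.cuda() returns a copy of the tensor on the CUDA device
        return FakeTensor(self.data, device="cuda")


# tokenizer(...) returns a dict-like BatchEncoding of tensors;
# a plain dict stands in for it here
inputs = {
    "input_ids": FakeTensor([1, 2, 3]),
    "attention_mask": FakeTensor([1, 1, 1]),
}

# the truncated loop, completed under the assumption that it
# moves every input tensor onto the GPU in place
for k, v in inputs.items():
    inputs[k] = v.cuda()

print([v.device for v in inputs.values()])  # -> ['cuda', 'cuda']
```

Replacing values for existing keys while iterating `items()` is safe in Python because the dict's size does not change; with real torch tensors the same loop would leave `inputs` ready to pass to `model.generate(**inputs)`.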