Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in terms of overall capabilities, and on inference evaluation sets, it approaches or even surpasses the performance of Llama-65B.
## Import from Transformers

To load the InternLM 20B model using Transformers, use the following code:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-20b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-20b", trust_remote_code=True)
```