Cedille committed on
Commit 3d4e8d1
1 Parent(s): f758e10

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")
  model = AutoModelForCausalLM.from_pretrained("Cedille/de-anna")
  ```
  ### Lower memory usage
- Loading a model with Huggingface requires two copies of the weights, so 48+ GB of RAM for [GPT_J models](https://huggingface.co/docs/transformers/v4.15.0/model_doc/gptj) in float32 precision.
+ Loading a model with Huggingface requires two copies of the weights, so 48+ GB of RAM for [GPT-J models](https://huggingface.co/docs/transformers/v4.15.0/model_doc/gptj) in float32 precision.
  The first trick would be to load the model with the specific argument below to load only one copy of the weights.
  ```
  from transformers import AutoTokenizer, AutoModelForCausalLM
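
The hunk is cut off before the README names the "specific argument" it refers to. As a minimal sketch of single-copy loading, assuming the argument in question is `low_cpu_mem_usage=True` (a standard `from_pretrained` flag, not confirmed by the diff above):

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")

# low_cpu_mem_usage=True avoids materialising a second full copy of the
# float32 weights during loading; recent transformers releases may require
# the accelerate package for this flag.
model = AutoModelForCausalLM.from_pretrained(
    "Cedille/de-anna",
    low_cpu_mem_usage=True,
)
```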