[AUTOMATED] Model Memory Requirements
You will need about 87.25 GB of VRAM to load this model for inference in float16/bfloat16 (largest layer or residual group: 2.72 GB; training with Adam: 348.99 GB), and about 21.81 GB of VRAM to load it in int4 (largest layer or residual group: 696.02 MB; training with Adam: 87.25 GB).
These calculations were measured from the Model Memory Utility Space on the Hub.
The minimum recommended VRAM needed for this model assumes using Accelerate or device_map="auto", and is denoted by the size of the "largest layer".
When performing inference, expect to add up to an additional 20% to this, as found by EleutherAI. More tests will be performed in the future to get a more accurate benchmark for each model.
When training with Adam, you can expect roughly 4x the reported model size to be used (1x for the model, 1x for the gradients, and 2x for the optimizer states); see the sketch after the table below.
Results:

| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|---|---|---|---|
| float32 | 5.44 GB | 174.49 GB | 697.97 GB |
| float16/bfloat16 | 2.72 GB | 87.25 GB | 348.99 GB |
| int8 | 1.36 GB | 43.62 GB | 174.49 GB |
| int4 | 696.02 MB | 21.81 GB | 87.25 GB |
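To make the rules of thumb above concrete, here is a small sketch applying them to the float16/bfloat16 row (the 1.2 inference factor and the 4x Adam factor come from the notes above; nothing else is assumed):

```python
# Sketch: applying the rules of thumb above to the float16/bfloat16 row.
model_size_gb = 87.25  # "Total Size" for float16/bfloat16 from the table

# Inference: up to ~20% overhead on top of the weights (EleutherAI estimate)
inference_peak_gb = model_size_gb * 1.2  # ~104.7 GB

# Training with Adam: ~4x the model size
# (1x weights + 1x gradients + 2x optimizer states)
adam_training_gb = model_size_gb * 4     # 349.0 GB, matching the table

print(f"inference peak ~ {inference_peak_gb:.1f} GB")
print(f"Adam training ~ {adam_training_gb:.1f} GB")
```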
While using torch.int8, I am getting the error below:
ValueError: Can't instantiate MixtralForCausalLM model under dtype=torch.int8 since it is not a floating point dtype
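For context, the failing call presumably looked something like this (the checkpoint name is an assumption for illustration, not the poster's actual code); torch_dtype only accepts floating-point dtypes, which is why int8 is rejected:

```python
# Hypothetical repro: passing an integer dtype to torch_dtype raises the
# ValueError above, because torch_dtype must be a floating-point dtype.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # assumed checkpoint, for illustration
    torch_dtype=torch.int8,         # fails: int8 is not a floating-point dtype
)
```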
@parvezkhan To load the model in int8 precision, you need to pass a BitsAndBytesConfig. Please check out the relevant documentation section about it: https://huggingface.co/docs/transformers/quantization#bitsandbytes
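A minimal sketch of what that looks like (the checkpoint name is assumed; adjust it to your model):

```python
# Minimal 8-bit loading sketch with bitsandbytes.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Request 8-bit quantization through the config instead of torch_dtype
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # assumed checkpoint, for illustration
    quantization_config=quantization_config,
    device_map="auto",              # let Accelerate place the layers
)
```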
@ybelkada Thanks for the input. I think it makes a lot of sense to pass a BitsAndBytesConfig. However, I am running into an issue with bitsandbytes:
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
Below is my setup:
GPU: NVIDIA A100
Python: 3.10
CUDA: 12.3
Based on these issues (https://github.com/TimDettmers/bitsandbytes/issues/1022, https://github.com/TimDettmers/bitsandbytes/issues/956), it doesn't support CUDA 12.3, unless I am missing something :)
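One way to narrow this down (a diagnostic sketch, not a fix; it only reports what the environment exposes) is to compare the CUDA version torch was built against with what bitsandbytes finds:

```python
# Diagnostic sketch: report what torch sees, then compare it with the
# output of the diagnostic the error message suggests.
import torch

print("torch CUDA build:", torch.version.cuda)    # e.g. "12.1"
print("GPU visible:", torch.cuda.is_available())  # should be True on the A100
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

# Then run, from the shell:
#   python -m bitsandbytes
# and check whether it locates the CUDA libraries on LD_LIBRARY_PATH.
```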