Quantization configuration
  • Bits: 4
  • Group Size: 128
  • Damp Percent: 0.01
  • Desc Act: false
  • Static Groups: false
  • Sym: false
  • True Sequential: false
  • LM Head: true
  • Model Name or Path: null
  • Model File Base Name: model
  • Quant Method: gptq
  • Checkpoint Format: gptq
  • Meta:
    • Quantizer: intel/auto-round:0.1
    • Packer: autogptq:0.8.0.dev1
    • Iters: 400
    • LR: 0.0025
    • MinMax LR: 0.0025
    • Enable MinMax Tuning: true
    • Use Quant Input: false
    • Scale Dtype: torch.float16
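The fields above mirror the GPTQ `quantize_config.json` written by intel/auto-round and packed with AutoGPTQ. As a minimal loading sketch, assuming the checkpoint is a causal language model hosted on the Hub (the repository id below is a placeholder, not taken from this card), the 4-bit weights can usually be loaded through 🤗 Transformers with `optimum` and `auto-gptq` installed:

```python
# Minimal loading sketch for a GPTQ-packed 4-bit checkpoint (group size 128).
# Requires: transformers, optimum, auto-gptq. The repo id is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/your-autoround-gptq-model"  # placeholder, replace with the actual repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # place layers on available devices
    torch_dtype="auto",  # keep the checkpoint's fp16 scales
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```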
Downloads last month: 3,447
Safetensors
  • Model size: 204M params
  • Tensor types: F32, I32, FP16
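In a GPTQ-packed safetensors file, the I32 tensors typically hold the packed 4-bit weights and zero points, the FP16 tensors the per-group scales, and F32 any remaining unquantized parameters. A small sketch to confirm the dtypes locally, assuming the file follows the `model` base name from the config above:

```python
# Sketch: list tensor names, shapes, and dtypes, loading each tensor to CPU one at a time.
# Assumes the file is named "model.safetensors" (per "Model File Base Name: model" above).
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)
```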