styalai/phi-2_quantize_gptq
Tags: Text Generation · Transformers · Safetensors · phi · custom_code · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
Model Card for Model ID
Model Details
This is Microsoft's Phi-2 model, quantized to 4-bit precision with GPTQ.
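A minimal sketch of loading this quantized checkpoint with the `transformers` library. It assumes a GPTQ backend such as `auto-gptq` (with `optimum`) is installed alongside `transformers` and that a GPU is available; the exact package requirements and the Phi-2 `Instruct:`/`Output:` prompt format are assumptions, not guarantees from this card.

```python
# Sketch: load the 4-bit GPTQ quantization of Phi-2 and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "styalai/phi-2_quantize_gptq"

# trust_remote_code=True is needed because the repo is tagged `custom_code`.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place the quantized weights on GPU if available
    trust_remote_code=True,
)

prompt = "Instruct: Explain GPTQ quantization in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the weights are already quantized, no `GPTQConfig` is passed at load time; the quantization settings are read from the checkpoint's config.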
Downloads last month: 7
Safetensors model size: 601M params
Tensor types: I32 · FP16