
PyTorch int8-quantized version of gpt2-large.

Usage

Download the `.bin` file locally and load it with:

```python
import torch

# Deserializes the full pickled model object (not just a state_dict).
# On PyTorch >= 2.6, torch.load defaults to weights_only=True, so
# weights_only=False is required to load a pickled model object.
model = torch.load("path/to/pytorch_model_quantized.bin", weights_only=False)
```

The rest of the usage follows the original gpt2-large instructions.
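For context, a checkpoint like this is typically produced with PyTorch's dynamic int8 quantization, which replaces `nn.Linear` layers with int8 counterparts and pickles the whole module. The sketch below demonstrates the round trip on a tiny stand-in model (the stand-in module and file name are illustrative, not taken from this repository):

```python
import torch
import torch.nn as nn

# Stand-in float model; the actual checkpoint was presumably produced
# from gpt2-large in the same way.
float_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Dynamic int8 quantization: weights stored as int8, activations
# quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

# Save the full module object, then reload it as shown in the Usage
# section above (weights_only=False deserializes the pickled module).
torch.save(quantized, "model_quantized.bin")
reloaded = torch.load("model_quantized.bin", weights_only=False)

out = reloaded(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 4])
```

This is why the card's loading snippet is a single `torch.load` call rather than `from_pretrained`: the file holds a complete quantized module, not a plain state dict.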