Model Card for traclm-v2-7b-instruct-GPTQ
This repo contains a GPTQ quantization of TRAC-MTRY/traclm-v2-7b-instruct, allowing the model to run on low-resource hardware.
Read more about GPTQ quantization here.
Read more about the unquantized model here.
Prompt Format
This model was fine-tuned with the Alpaca prompt format. It is highly recommended that you use the same format for all interactions with the model; failing to do so will significantly degrade its performance.
Standard Alpaca Format:
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Response:\n
Input Field Variant:
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Input:\n{input}\n\n### Response:\n
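The two templates above can be assembled with a small helper. This is a minimal sketch; the `build_prompt` function name and the example instruction text are illustrative, not part of the model card, while the system line and section markers are taken verbatim from the templates above.

```python
# Build the Alpaca-style prompt strings described in this model card.
# The system preamble and "### ..." markers match the templates above;
# the helper itself is an illustrative convenience, not an official API.

SYSTEM = (
    "### System:\nBelow is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n\n\n"
)

def build_prompt(instruction, input_text=None):
    """Return the standard Alpaca prompt, or the input-field variant
    when input_text is provided."""
    prompt = SYSTEM + "### Instruction:\n" + instruction + "\n\n"
    if input_text is not None:
        prompt += "### Input:\n" + input_text + "\n\n"
    return prompt + "### Response:\n"

# Standard format:
print(build_prompt("Summarize the key points of the passage."))
# Input-field variant:
print(build_prompt("Summarize the key points of the passage.",
                   "Some source passage..."))
```

The resulting string is what you would pass to the tokenizer; the model's generated text follows the final `### Response:\n` marker.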