Original model: Llama-3-MAAL-8B-Instruct-v0.1
GPTQ quantization of Llama-3-MAAL-8B-Instruct-v0.1.
Files located in the main branch:
- 8-bit GPTQ model (see the loading sketch below)
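A minimal sketch of loading the 8-bit GPTQ weights from the main branch with Hugging Face Transformers (which handles GPTQ checkpoints when a GPTQ backend such as optimum/auto-gptq is installed). The repository id below is a placeholder, not the actual repo name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this repository's actual id.
repo_id = "your-username/Llama-3-MAAL-8B-Instruct-v0.1-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision="main",    # the quantized files live in the main branch
    device_map="auto",  # place layers on the available GPU(s)
)

prompt = "Introduce yourself briefly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```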