
LLaMA 3 8B, capable of outputting Traditional Chinese

✨ Recommend using LM Studio for this model

I tried using Ollama to run it, but the outputs became quite delulu (incoherent), so for now I'm sticking with LM Studio :)

The performance isn't actually that great, but it can answer some basic questions. Sometimes it just acts really dumb though :(
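If you'd still like to try Ollama anyway, a GGUF file can be imported with a Modelfile. This is a generic sketch, not from this repo; the quant filename is an assumption, so adjust it to the actual file you downloaded:

```
# Hypothetical Modelfile for importing this repo's 4-bit GGUF into Ollama.
# The filename below is an assumption; point FROM at your actual .gguf file.
FROM ./Meta-Llama-3-8B-CHT.Q4_0.gguf

# Llama 3 instruct prompt template (standard Llama 3 special tokens)
TEMPLATE """<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop <|eot_id|>
```

Then register and run it with `ollama create llama3-cht -f Modelfile` followed by `ollama run llama3-cht`. Note that a wrong or missing prompt template is a common cause of incoherent output when importing GGUFs into Ollama.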

LLaMA 3.1 can actually output Chinese pretty well, so this repo can be ignored.

Model details:

- Format: GGUF (4-bit quantization)
- Size: 8.03B params
- Architecture: llama


Model: suko/Meta-Llama-3-8B-CHT