
All-MiniLM-L6-v2-GGUF

Original Model

sentence-transformers/all-MiniLM-L6-v2

Run with LlamaEdge

  • LlamaEdge version: v0.8.2 and above

  • Context size: 256

  • Vector size: 384

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:all-MiniLM-L6-v2-ggml-model-f16.gguf \
      llama-api-server.wasm \
      --prompt-template embedding \
      --ctx-size 256 \
      --model-name all-MiniLM-L6-v2
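
  • Send a test request

    Once the server is running, embeddings can be requested through the OpenAI-compatible /v1/embeddings endpoint served by llama-api-server.wasm. A minimal sketch, assuming the default 0.0.0.0:8080 listen address and an arbitrary example sentence; each returned embedding vector has 384 dimensions:

    curl -X POST http://localhost:8080/v1/embeddings \
      -H 'Content-Type: application/json' \
      -d '{"model": "all-MiniLM-L6-v2", "input": ["What is the capital of France?"]}'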
    

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| all-MiniLM-L6-v2-Q2_K.gguf | Q2_K | 2 | 19.2 MB | smallest, significant quality loss - not recommended for most purposes |
| all-MiniLM-L6-v2-Q3_K_L.gguf | Q3_K_L | 3 | 20.5 MB | small, substantial quality loss |
| all-MiniLM-L6-v2-Q3_K_M.gguf | Q3_K_M | 3 | 19.9 MB | very small, high quality loss |
| all-MiniLM-L6-v2-Q3_K_S.gguf | Q3_K_S | 3 | 19.2 MB | very small, high quality loss |
| all-MiniLM-L6-v2-Q4_0.gguf | Q4_0 | 4 | 19.7 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| all-MiniLM-L6-v2-Q4_K_M.gguf | Q4_K_M | 4 | 21 MB | medium, balanced quality - recommended |
| all-MiniLM-L6-v2-Q4_K_S.gguf | Q4_K_S | 4 | 20.7 MB | small, greater quality loss |
| all-MiniLM-L6-v2-Q5_0.gguf | Q5_0 | 5 | 21 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| all-MiniLM-L6-v2-Q5_K_M.gguf | Q5_K_M | 5 | 21.7 MB | large, very low quality loss - recommended |
| all-MiniLM-L6-v2-Q5_K_S.gguf | Q5_K_S | 5 | 21.5 MB | large, low quality loss - recommended |
| all-MiniLM-L6-v2-Q6_K.gguf | Q6_K | 6 | 24.2 MB | very large, extremely low quality loss |
| all-MiniLM-L6-v2-Q8_0.gguf | Q8_0 | 8 | 25 MB | very large, extremely low quality loss - not recommended |
| all-MiniLM-L6-v2-ggml-model-f16.gguf | f16 | 16 | 45.9 MB | largest, no quantization (f16 conversion of the original model) |

Quantized with llama.cpp b2334
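
Individual GGUF files can be fetched with huggingface-cli before starting the server. The Q5_K_M file and the target directory below are only example choices; any file from the table above works the same way, including the f16 file referenced in the run command:

    huggingface-cli download second-state/All-MiniLM-L6-v2-Embedding-GGUF \
      all-MiniLM-L6-v2-Q5_K_M.gguf \
      --local-dir .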

