
# FLUX.1-dev-GGUF

## Original Model

[black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)

## Run with `sd-api-server`

- `sd-api-server` version: 0.1.4

- Run as LlamaEdge service (a sample client request follows this list)

  ```bash
  # start an OpenAI-compatible image-generation server
  wasmedge --dir .:. sd-api-server.wasm \
    --model-name flux1-dev \
    --diffusion-model flux1-dev-Q4_0.gguf \
    --vae ae.safetensors \
    --clip-l clip_l.safetensors \
    --t5xxl t5xxl-Q8_0.gguf
  ```
  - Run with LoRA

    Assume that the LoRA model is located in the `lora-models` directory:

    ```bash
    # preopen the LoRA directory so the wasm module can read it,
    # then start the server with LoRA support enabled
    wasmedge --dir .:. \
      --dir lora-models:lora-models \
      sd-api-server.wasm \
      --model-name flux1-dev \
      --diffusion-model flux1-dev-Q4_0.gguf \
      --vae ae.safetensors \
      --clip-l clip_l.safetensors \
      --t5xxl t5xxl-Q8_0.gguf \
      --lora-model-dir lora-models
    ```

    For details, see [flux_with_lora.md](https://github.com/LlamaEdge/sd-api-server/blob/main/examples/flux_with_lora.md).
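
Once the server is up, images can be requested through its OpenAI-compatible image-generation endpoint. A minimal sketch, assuming the default listen address of `0.0.0.0:8080`; the prompt text is illustrative:

```bash
# smoke-test request against a locally running sd-api-server
# (assumes the default port 8080; adjust if you changed the socket address)
curl -X POST 'http://localhost:8080/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "flux1-dev",
    "prompt": "A lighthouse on a rocky cliff at sunset"
  }'
```

When the server is started with `--lora-model-dir`, the linked `flux_with_lora.md` example applies a LoRA by referencing it inside the prompt with stable-diffusion.cpp's `<lora:name:weight>` syntax, e.g. `"prompt": "<lora:my-lora:1.0> ..."` for a hypothetical `lora-models/my-lora.safetensors`.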

## Quantized GGUF Models

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| ae.safetensors | f32 | 32 | 335 MB |
| clip_l-Q8_0.gguf | Q8_0 | 8 | 131 MB |
| clip_l.safetensors | f16 | 16 | 246 MB |
| flux1-dev-Q2_K.gguf | Q2_K | 2 | 4.15 GB |
| flux1-dev-Q3_K.gguf | Q3_K | 3 | 5.35 GB |
| flux1-dev-Q4_0.gguf | Q4_0 | 4 | 6.93 GB |
| flux1-dev-Q4_1.gguf | Q4_1 | 4 | 7.67 GB |
| flux1-dev-Q4_K.gguf | Q4_K | 4 | 6.93 GB |
| flux1-dev-Q5_0.gguf | Q5_0 | 5 | 8.40 GB |
| flux1-dev-Q5_1.gguf | Q5_1 | 5 | 9.14 GB |
| flux1-dev-Q8_0.gguf | Q8_0 | 8 | 12.6 GB |
| flux1-dev.safetensors | f16 | 16 | 23.8 GB |
| t5xxl-Q2_K.gguf | Q2_K | 2 | 1.61 GB |
| t5xxl-Q3_K.gguf | Q3_K | 3 | 2.10 GB |
| t5xxl-Q4_0.gguf | Q4_0 | 4 | 2.75 GB |
| t5xxl-Q4_1.gguf | Q4_1 | 4 | 3.06 GB |
| t5xxl-Q4_K.gguf | Q4_K | 4 | 2.75 GB |
| t5xxl-Q5_0.gguf | Q5_0 | 5 | 3.36 GB |
| t5xxl-Q5_1.gguf | Q5_1 | 5 | 3.67 GB |
| t5xxl-Q8_0.gguf | Q8_0 | 8 | 5.20 GB |
| t5xxl_fp16.safetensors | f16 | 16 | 9.79 GB |

*Quantized with stable-diffusion.cpp master-e71ddce.*
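
The files referenced by the run commands above can be fetched individually with `huggingface-cli`. A sketch, assuming the `huggingface_hub` package is installed:

```bash
# download the Q4_0 diffusion model plus the VAE and text encoders
# used in the run commands above into the current directory
huggingface-cli download second-state/FLUX.1-dev-GGUF \
  flux1-dev-Q4_0.gguf ae.safetensors clip_l.safetensors t5xxl-Q8_0.gguf \
  --local-dir .
```

As with other GGUF quantizations, the lower-bit variants trade output quality for a smaller download and memory footprint; the f16 `flux1-dev.safetensors` is the largest, highest-fidelity option in this repo.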
