# FLUX.1-dev ControlNet Inpainting - Beta
This repository hosts the Beta release of the Inpainting ControlNet checkpoint for FLUX.1-dev, an improved successor to the alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha model, developed by the AlimamaCreative Team.
## Key Enhancements
Our latest inpainting model brings significant improvements compared to the previous version:
- 1024 Resolution Support: Directly processes and generates 1024x1024 images without additional upscaling steps, yielding higher-quality, more detailed results.
- Enhanced Detail Generation: Fine-tuned to capture and reproduce finer details in inpainted areas.
- Improved Prompt Control: Offers more precise control over generated content through enhanced prompt interpretation.
## Showcase

The following images were generated using a ComfyUI workflow (click here to download) with these settings: `control-strength = 1.0`, `control-end-percent = 1.0`, `true_cfg = 1.0`.
Each comparison shows the image & prompt input alongside the Alpha and Beta outputs. Prompts used:

- 'Write a few lines of words "alimama creative" on the wooden board'
- "a girl with big beautiful white wing"
- "red hair"
- " " (empty prompt)
- "Albert Einstein"
- "Ravello Outdoor Sectional Sofa Set with Coffee Table"
## ComfyUI Usage Guidelines

Download the example ComfyUI workflow here.

Using the `t5xxl-FP16` and `flux1-dev-fp8` models for 30-step inference at 1024px on an H20 GPU:
- GPU memory usage: 27GB
- Inference time: 48 seconds (`true_cfg = 3.5`), 26 seconds (`true_cfg = 1`)
Different results can be achieved by adjusting the following parameters:

| Parameter | Recommended Range | Effect |
|---|---|---|
| `control-strength` | 0.6 - 1.0 | Controls how much influence the ControlNet has on the generation. Higher values result in stronger adherence to the control image. |
| `control-end-percent` | 0.35 - 1.0 | Determines at which step in the denoising process the ControlNet influence ends. Lower values allow more creative freedom in later steps. |
| `true-cfg` (Classifier-Free Guidance Scale) | 1.0 or 3.5 | Influences how closely the generation follows the prompt. Higher values increase prompt adherence but may reduce image quality. |
- More comprehensive full-image prompts can lead to better overall results. For example, in addition to describing the area to be repaired, you can also describe the background, atmosphere, and style of the entire image. This approach can make the generated results more harmonious and natural.
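The effect of `true-cfg` can be sketched numerically. A value of 1.0 effectively leaves the conditional prediction unchanged, while 3.5 extrapolates away from the unconditional prediction, strengthening prompt adherence. The sketch below applies the textbook classifier-free guidance formula to toy arrays; the assumption that the pipeline uses standard CFG (and the function name `apply_cfg`) is for illustration only:

```python
def apply_cfg(cond, uncond, scale):
    """Textbook classifier-free guidance: push the prediction away from the
    unconditional output by `scale` times the conditional difference."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

# Toy stand-ins for the model's conditional / unconditional predictions.
cond = [1.0, 2.0, 3.0]
uncond = [0.5, 1.0, 1.5]

print(apply_cfg(cond, uncond, 1.0))  # scale 1.0 -> [1.0, 2.0, 3.0], the conditional prediction unchanged
print(apply_cfg(cond, uncond, 3.5))  # scale 3.5 -> [2.25, 4.5, 6.75], prompt-driven difference amplified
```

This also explains the timing table above: at `true_cfg = 3.5` each denoising step needs both a conditional and an unconditional forward pass, roughly doubling inference time versus `true_cfg = 1`.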
## Diffusers Integration

1. Install the required diffusers version:

```shell
pip install diffusers==0.30.2
```

2. Clone this repository:

```shell
git clone https://github.com/alimama-creative/FLUX-Controlnet-Inpainting.git
```

3. Configure `image_path`, `mask_path`, and `prompt` in `main.py`, then execute:

```shell
python main.py
```
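As a rough illustration of the inputs inpainting works with, the snippet below builds a masked conditioning image by blanking out the region to repaint, so the ControlNet only sees the context to preserve. The masking convention (nonzero mask = repaint, masked pixels set to `fill`) and the helper name `make_control_image` are assumptions for illustration, not the repository's exact preprocessing:

```python
def make_control_image(image, mask, fill=0):
    """Blank out masked pixels so only the kept context conditions generation.

    image: H x W grid of pixel values (nested lists).
    mask:  H x W grid of 0 (keep) / 1 (repaint) flags.
    """
    return [
        [fill if m else px for px, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

# 2x2 toy image: the top-right and bottom-left pixels are marked for repainting.
image = [[10, 20], [30, 40]]
mask = [[0, 1], [1, 0]]
print(make_control_image(image, mask))  # [[10, 0], [0, 40]]
```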
## Model Specifications

- Training dataset: 15M images from LAION2B and proprietary sources
- Optimal inference resolution: 1024x1024
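If your source images are not 1024x1024, you will typically resize them while keeping dimensions the model accepts. The multiple-of-16 constraint below reflects the usual FLUX latent patching (8x VAE downsampling times a patch size of 2) and is stated here as an assumption; the helper name is illustrative:

```python
def snap_to_multiple(width, height, multiple=16):
    """Round dimensions down to the nearest accepted multiple (assumed 16)."""
    return (width - width % multiple, height - height % multiple)

print(snap_to_multiple(1000, 750))    # (992, 736)
print(snap_to_multiple(1024, 1024))   # (1024, 1024) - already valid
```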
## License
Our model weights are released under the FLUX.1 [dev] Non-Commercial License.
## Model Tree for ckpt/FLUX.1-dev-Controlnet-Inpainting-Beta

Base model: black-forest-labs/FLUX.1-dev