Animagine XL 3.0 Base
Overview
Animagine XL 3.0 Base is the foundational version of the sophisticated anime text-to-image model Animagine XL 3.0. This base version encompasses the first two stages of the model's development, focusing on establishing core functionalities and refining key aspects, and it lays the groundwork for the full capabilities realized in Animagine XL 3.0. As part of the broader Animagine XL 3.0 project, it employs a two-stage development process rooted in transfer learning. This approach effectively addresses problems that appear in the UNet after the first stage of training, such as broken anatomy.
However, this model is not recommended for inference; it is intended as a foundation to build upon. For inference, please use Animagine XL 3.0.
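As a rough sketch of that workflow, the base checkpoint can be loaded as the starting weights for a further fine-tuning run. The repository id below is an assumption (point it at wherever the base checkpoint is actually hosted), and the training loop itself (for example with kohya-ss sd-scripts) is omitted.

```python
# Rough sketch only: load the base checkpoint as the starting weights for a
# further fine-tuning run. The repository id is an assumption -- replace it
# with the actual location of the Animagine XL 3.0 Base checkpoint.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0-base",  # assumed repo id
    torch_dtype=torch.float32,              # keep full precision for training
)

# A fine-tuning script (e.g. kohya-ss sd-scripts or a custom diffusers
# training loop) would continue training from these components.
unet = pipe.unet
text_encoder_1 = pipe.text_encoder
text_encoder_2 = pipe.text_encoder_2
vae = pipe.vae  # usually kept frozen
```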
Model Details
- Developed by: Linaqruf
- Model type: Diffusion-based text-to-image generative model
- Model Description: Animagine XL 3.0 Base forms the foundational phase of the sophisticated anime image generation model. This version focuses on building core competencies in anime imagery, emphasizing foundational concept understanding and initial prompt interpretation. It's designed to establish the groundwork for advanced features seen in the full Animagine XL 3.0 model.
- License: Fair AI Public License 1.0-SD
- Finetuned from model: Animagine XL 2.0
Usage Guidelines
Tag Ordering
Prompting works a bit differently in this iteration. For optimal results, it is recommended to follow the structured prompt template, because the model was trained on prompts ordered like this:

`1girl/1boy, character name, from what series, everything else in any order`

For example, a prompt following this template might look like `1girl, hatsune miku, vocaloid, smile, outdoors, night`.
Special Tags
Like the previous iteration, this model was trained with special tags to steer the result toward quality, content rating, and the date the source posts were created. The model can still do the job without these special tags, but it is recommended to use them, as they make the model easier to steer.
Quality Modifiers
Quality Modifier | Score Criterion |
---|---|
`masterpiece` | > 150 |
`best quality` | 100-150 |
`high quality` | 75-100 |
`medium quality` | 25-75 |
`normal quality` | 0-25 |
`low quality` | -5-0 |
`worst quality` | < -5 |
Rating Modifiers
Rating Modifier | Rating Criterion |
---|---|
`rating: general` | General |
`rating: sensitive` | Sensitive |
`rating: questionable`, `nsfw` | Questionable |
`rating: explicit`, `nsfw` | Explicit |
Year Modifier
These tags help steer the result toward modern or vintage anime art styles, ranging from `newest` to `oldest`.
Year Tag | Year Range |
---|---|
`newest` | 2022 to 2023 |
`late` | 2019 to 2021 |
`mid` | 2015 to 2018 |
`early` | 2011 to 2014 |
`oldest` | 2005 to 2010 |
Recommended settings
To guide the model towards generating high-aesthetic images, use negative prompts like:

```
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
```
For higher quality outcomes, prepend prompts with:

```
masterpiece, best quality
```
However, be careful when using `masterpiece` and `best quality`, because many high-scoring images in the dataset are NSFW. It is better to add `nsfw` and `rating: sensitive` to the negative prompt and `rating: general` to the positive prompt. It is also recommended to use a lower classifier-free guidance (CFG) scale of around 5-7, sampling steps below 30, and Euler Ancestral (Euler a) as the sampler.
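Purely as an illustration of the settings above, here is a minimal `diffusers` sketch. The repository id, the example character tags, and the exact scheduler setup are assumptions rather than part of this card, and inference is recommended with the full Animagine XL 3.0 rather than this base checkpoint.

```python
# Minimal sketch (not an official example): applying the recommended settings
# with diffusers. The repository id below is an assumption -- point it at the
# Animagine XL 3.0 checkpoint you actually want to run.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",  # assumed repo id; use the full model for inference
    torch_dtype=torch.float16,
)
# Euler Ancestral (Euler a) sampler, as recommended above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Prompt follows the tag-ordering template: 1girl/1boy, character, series, everything else,
# with quality and rating tags prepended.
prompt = (
    "masterpiece, best quality, rating: general, "
    "1girl, hatsune miku, vocaloid, smile, outdoors, night"
)
negative_prompt = (
    "nsfw, rating: sensitive, lowres, bad anatomy, bad hands, text, error, "
    "missing fingers, extra digit, fewer digits, cropped, worst quality, "
    "low quality, normal quality, jpeg artifacts, signature, watermark, "
    "username, blurry, artist name"
)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=6.0,        # CFG in the recommended 5-7 range
    num_inference_steps=28,    # below 30 steps, as recommended
    width=832, height=1216,    # one of the supported resolutions (see the table below)
).images[0]
image.save("output.png")
```

The width and height should be picked from the supported dimensions listed in the Multi Aspect Resolution table below.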
Multi Aspect Resolution
This model supports generating images at the following dimensions:
Dimensions | Aspect Ratio |
---|---|
1024 x 1024 | 1:1 Square |
1152 x 896 | 9:7 |
896 x 1152 | 7:9 |
1216 x 832 | 19:13 |
832 x 1216 | 13:19 |
1344 x 768 | 7:4 Horizontal |
768 x 1344 | 4:7 Vertical |
1536 x 640 | 12:5 Horizontal |
640 x 1536 | 5:12 Vertical |
Training and Hyperparameters
- Animagine XL 3.0 was trained on 2x A100 GPUs with 80GB memory each for 21 days, or over 500 GPU hours. The full training process encompassed three stages; this base version covers the first two:
- Feature Alignment Stage: Utilized 1.2M images to acquaint the model with basic anime concepts.
- Refining UNet Stage: Employed 2.5k curated images to fine-tune only the UNet.
Hyperparameters
Stage | Epochs | UNet Learning Rate | Train Text Encoder | Text Encoder Learning Rate | Batch Size | Mixed Precision | Noise Offset |
---|---|---|---|---|---|---|---|
Feature Alignment Stage | 10 | 7.5e-6 | True | 3.75e-6 | 48 x 2 | fp16 | N/A |
Refining UNet Stage | 10 | 2e-6 | False | N/A | 48 | fp16 | 0.0357 |
Model Comparison
Training Config
Configuration Item | Animagine XL 2.0 | Animagine XL 3.0 |
---|---|---|
GPU | A100 80G | 2 x A100 80G |
Dataset | 170k + 83k images | 1,271,990 + 3,500 images |
Shuffle Separator | N/A | True |
Global Epochs | 20 | 20 |
Learning Rate | 1e-6 | 7.5e-6 |
Batch Size | 32 | 48 x 2 |
Train Text Encoder | True | True |
Train Special Tags | True | True |
Image Resolution | 1024 | 1024 |
Bucket Resolution | 2048 x 512 | 2048 x 512 |
Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook
Limitations
While "Animagine XL 3.0" represents a significant advancement in anime text-to-image generation, it's important to acknowledge its limitations to understand its best use cases and potential areas for future improvement.
- Concept Over Artstyle Focus: The model prioritizes learning concepts rather than specific art styles, which might lead to variations in aesthetic appeal compared to its predecessor.
- Non-Photorealistic Design: Animagine XL 3.0 is not designed for generating photorealistic or realistic images, focusing instead on anime-style artwork.
- Anatomical Challenges: Despite improvements, the model can still struggle with complex anatomical structures, particularly in dynamic poses, resulting in occasional inaccuracies.
- Dataset Limitations: The training dataset of 1.2 million images may not encompass all anime characters or series, limiting the model's ability to generate lesser-known or newer characters.
- Natural Language Processing: The model is not optimized for interpreting natural language, requiring more structured and specific prompts for best results.
- NSFW Content Risk: Using high-quality tags like 'masterpiece' or 'best quality' carries a risk of generating NSFW content inadvertently, due to the prevalence of such images in high-scoring training datasets.
These limitations highlight areas for potential refinement in future iterations and underscore the importance of careful prompt crafting for optimal results. Understanding these constraints can help users better navigate the model's capabilities and tailor their expectations accordingly.
Acknowledgements
We extend our gratitude to the entire team and community that contributed to the development of Animagine XL 3.0, including our partners and collaborators who provided resources and insights crucial for this iteration.
- Main: For the open source grant supporting our research, thank you so much.
- Cagliostro Lab Collaborator: For helping with quality checking during pretraining and curating datasets during fine-tuning.
- Kohya SS: For providing the essential training script and merging our PR for `keep_tokens_separator`, or Shuffle Separator.
- Camenduru Server Community: For invaluable insights, support, and quality checking.
- NovelAI: For inspiring how to build and label the dataset using tag ordering.
Collaborators
License
Animagine XL 3.0 now uses the Fair AI Public License 1.0-SD, compatible with Stable Diffusion models. Key points:
- Modification Sharing: If you modify Animagine XL 3.0, you must share both your changes and the original license.
- Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
- Distribution Terms: Any distribution must be under this license or another with similar rules.
- Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
The choice of this license aims to keep Animagine XL 3.0 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.