
Animagine XL 3.0

by cagliostrolab

Open source · 117k downloads · 776 likes

3.6 (778 reviews) · Image · API & Local
About

Animagine XL 3.0 is an advanced open-source anime image generation model based on Stable Diffusion XL. It excels at creating detailed and stylized images, with significant improvements in hand anatomy, concept comprehension, and prompt interpretation. Designed to learn concepts rather than adhere strictly to aesthetic criteria, it delivers greater fidelity to anime styles and specific universe elements. Perfect for artists, content creators, or enthusiasts, it generates high-quality illustrations suited to both modern and retro styles. What sets it apart is its ability to produce consistent and nuanced results while remaining accessible through user-friendly interfaces like Gradio or Google Colab.

Documentation

Animagine XL 3.0


Animagine XL 3.0 is the latest version of the sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 2.0. Developed based on Stable Diffusion XL, this iteration boasts superior image generation with notable improvements in hand anatomy, efficient tag ordering, and enhanced knowledge about anime concepts. Unlike the previous iteration, we focused on training the model to learn concepts rather than just aesthetics.

Model Details

  • Developed by: Cagliostro Research Lab
  • Model type: Diffusion-based text-to-image generative model
  • Model Description: Animagine XL 3.0 is engineered to generate high-quality anime images from textual prompts. It features enhanced hand anatomy, better concept understanding, and prompt interpretation, making it the most advanced model in its series.
  • License: CreativeML Open RAIL++-M License
  • Finetuned from model: Animagine XL 2.0

Gradio & Colab Integration

Animagine XL 3.0 is accessible through user-friendly platforms such as Gradio and Google Colab:

  • Gradio Web UI: Open In Spaces
  • Google Colab: Open In Colab

🧨 Diffusers Installation

To use Animagine XL 3.0, install the required libraries as follows:

```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```

Example script for generating images with Animagine XL 3.0:

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL,
)

# Load the fixed fp16 VAE to avoid artifacts at half precision
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Linaqruf/animagine-xl-3.0",
    vae=vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Define prompts and generate an image
prompt = "1girl, arima kana, oshi no ko, solo, upper body, v, smile, looking at viewer, outdoors, night"
negative_prompt = "nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=7,
    num_inference_steps=28,
).images[0]
image.save("output.png")
```

Usage Guidelines

Tag Ordering

Prompting differs slightly in this iteration. For optimal results, follow the structured prompt template, since the model was trained on captions in this order:

```text
1girl/1boy, character name, from what series, everything else in any order.
```
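The ordering above can be captured in a small helper. This is an illustrative sketch (the function name and arguments are not part of the model); the model only ever sees the final comma-separated string.

```python
def build_prompt(subject, character, series, *extra_tags):
    """Assemble a prompt following the recommended tag order:
    subject (1girl/1boy), character name, series, then everything else."""
    parts = [subject, character, series, *extra_tags]
    # Drop empty entries so optional fields can be omitted
    return ", ".join(p for p in parts if p)

print(build_prompt("1girl", "arima kana", "oshi no ko", "solo", "smile"))
# 1girl, arima kana, oshi no ko, solo, smile
```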

Special Tags

Like the previous iteration, this model was trained with special tags that steer results toward quality, content rating, and the period when the content was created. The model works without these tags, but using them is recommended because they make its output easier to control.

Quality Modifiers

| Quality Modifier | Score Criterion |
|------------------|-----------------|
| masterpiece      | > 150           |
| best quality     | 100-150         |
| high quality     | 75-100          |
| medium quality   | 25-75           |
| normal quality   | 0-25            |
| low quality      | -5 to 0         |
| worst quality    | < -5            |
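A hypothetical helper mapping a numeric score to its modifier tag, following the table above (the boundary handling where ranges touch is an assumption, not specified by the model card):

```python
def quality_tag(score):
    """Return the quality modifier for a numeric score,
    per the Quality Modifiers table (boundaries approximate)."""
    if score > 150:
        return "masterpiece"
    if score >= 100:
        return "best quality"
    if score >= 75:
        return "high quality"
    if score >= 25:
        return "medium quality"
    if score >= 0:
        return "normal quality"
    if score >= -5:
        return "low quality"
    return "worst quality"
```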

Rating Modifiers

| Rating Modifier              | Rating Criterion |
|------------------------------|------------------|
| rating: general              | General          |
| rating: sensitive            | Sensitive        |
| rating: questionable, nsfw   | Questionable     |
| rating: explicit, nsfw       | Explicit         |

Year Modifier

These tags help to steer the result toward modern or vintage anime art styles, ranging from newest to oldest.

| Year Tag | Year Range   |
|----------|--------------|
| newest   | 2022 to 2023 |
| late     | 2019 to 2021 |
| mid      | 2015 to 2018 |
| early    | 2011 to 2014 |
| oldest   | 2005 to 2010 |
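The year ranges can be looked up programmatically; this sketch (names are illustrative) simply encodes the table above:

```python
# Year ranges from the Year Modifier table, newest to oldest
YEAR_TAGS = [
    ("newest", 2022, 2023),
    ("late", 2019, 2021),
    ("mid", 2015, 2018),
    ("early", 2011, 2014),
    ("oldest", 2005, 2010),
]

def year_tag(year):
    """Return the year tag covering a given year, or None if out of range."""
    for tag, lo, hi in YEAR_TAGS:
        if lo <= year <= hi:
            return tag
    return None
```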

Recommended settings

To guide the model towards generating high-aesthetic images, use negative prompts like:

```text
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
```

For higher quality outcomes, prepend prompts with:

```text
masterpiece, best quality
```

However, exercise caution when using `masterpiece` and `best quality`, as these tags frequently co-occur with NSFW content in the training data. It's safer to add `nsfw, rating: sensitive` to the negative prompt and `rating: general` to the positive prompt. A lower classifier-free guidance scale (CFG) of around 5-7, fewer than 30 sampling steps, and the Euler Ancestral (Euler a) sampler are recommended.
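The recommendations above can be collected into a single set of keyword arguments for the diffusers pipeline call shown earlier. This is a convenience sketch; the helper name, the chosen CFG value of 6.0 within the suggested 5-7 range, and the combined negative prompt are assumptions, not fixed settings:

```python
# Negative prompt from the section above, plus the suggested rating tags
NEGATIVE_PROMPT = (
    "nsfw, rating: sensitive, lowres, bad anatomy, bad hands, text, error, "
    "missing fingers, extra digit, fewer digits, cropped, worst quality, "
    "low quality, normal quality, jpeg artifacts, signature, watermark, "
    "username, blurry, artist name"
)

def recommended_kwargs(prompt):
    """Return keyword arguments for pipe(...) using the recommended settings."""
    return {
        "prompt": "masterpiece, best quality, rating: general, " + prompt,
        "negative_prompt": NEGATIVE_PROMPT,
        "guidance_scale": 6.0,       # within the suggested 5-7 range
        "num_inference_steps": 28,   # below 30, as recommended
    }
```

These would be used as `image = pipe(**recommended_kwargs("1girl, solo")).images[0]` with the pipeline configured above (which already uses the Euler a scheduler).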

Multi Aspect Resolution

This model supports generating images at the following dimensions:

| Dimensions  | Aspect Ratio    |
|-------------|-----------------|
| 1024 x 1024 | 1:1 Square      |
| 1152 x 896  | 9:7             |
| 896 x 1152  | 7:9             |
| 1216 x 832  | 19:13           |
| 832 x 1216  | 13:19           |
| 1344 x 768  | 7:4 Horizontal  |
| 768 x 1344  | 4:7 Vertical    |
| 1536 x 640  | 12:5 Horizontal |
| 640 x 1536  | 5:12 Vertical   |
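When a requested size is not one of these buckets, one simple heuristic is to snap to the supported resolution with the closest aspect ratio. This sketch is an assumption about usage, not part of the model itself:

```python
# Supported (width, height) buckets from the table above
SUPPORTED_RESOLUTIONS = {
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
}

def nearest_supported(width, height):
    """Snap an arbitrary size to the supported bucket whose
    aspect ratio is closest (a simple heuristic)."""
    target = width / height
    return min(SUPPORTED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))
```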

Training and Hyperparameters

  • Animagine XL 3.0 was trained on 2 x A100 80GB GPUs for 21 days (over 500 GPU hours). The training process encompassed three stages:
    • Base:
      • Feature Alignment Stage: used 1.2M images to acquaint the model with basic anime concepts.
      • Refining UNet Stage: used a curated set of 2.5k images to fine-tune only the UNet.
    • Curated:
      • Aesthetic Tuning Stage: used 3.5k high-quality curated images to refine the model's art style.

Hyperparameters

| Stage                   | Epochs | UNet Learning Rate | Train Text Encoder | Text Encoder Learning Rate | Batch Size | Mixed Precision | Noise Offset |
|-------------------------|--------|--------------------|--------------------|----------------------------|------------|-----------------|--------------|
| Feature Alignment Stage | 10     | 7.5e-6             | True               | 3.75e-6                    | 48 x 2     | fp16            | N/A          |
| Refining UNet Stage     | 10     | 2e-6               | False              | N/A                        | 48         | fp16            | 0.0357       |
| Aesthetic Tuning Stage  | 10     | 1e-6               | False              | N/A                        | 48         | fp16            | 0.0357       |

Model Comparison

Training Config

| Configuration Item | Animagine XL 2.0  | Animagine XL 3.0         |
|--------------------|-------------------|--------------------------|
| GPU                | A100 80G          | 2 x A100 80G             |
| Dataset            | 170k + 83k images | 1,271,990 + 3,500 images |
| Shuffle Separator  | N/A               | True                     |
| Global Epochs      | 20                | 20                       |
| Learning Rate      | 1e-6              | 7.5e-6                   |
| Batch Size         | 32                | 48 x 2                   |
| Train Text Encoder | True              | True                     |
| Train Special Tags | True              | True                     |
| Image Resolution   | 1024              | 1024                     |
| Bucket Resolution  | 2048 x 512        | 2048 x 512               |

Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook

Limitations

While "Animagine XL 3.0" represents a significant advancement in anime text-to-image generation, it's important to acknowledge its limitations to understand its best use cases and potential areas for future improvement.

  1. Concept Over Artstyle Focus: The model prioritizes learning concepts rather than specific art styles, which might lead to variations in aesthetic appeal compared to its predecessor.
  2. Non-Photorealistic Design: Animagine XL 3.0 is not designed for generating photorealistic or realistic images, focusing instead on anime-style artwork.
  3. Anatomical Challenges: Despite improvements, the model can still struggle with complex anatomical structures, particularly in dynamic poses, resulting in occasional inaccuracies.
  4. Dataset Limitations: The training dataset of 1.2 million images may not encompass all anime characters or series, limiting the model's ability to generate lesser-known or newer characters.
  5. Natural Language Processing: The model is not optimized for interpreting natural language, requiring more structured and specific prompts for best results.
  6. NSFW Content Risk: Using high-quality tags like 'masterpiece' or 'best quality' carries a risk of generating NSFW content inadvertently, due to the prevalence of such images in high-scoring training datasets.

These limitations highlight areas for potential refinement in future iterations and underscore the importance of careful prompt crafting for optimal results. Understanding these constraints can help users better navigate the model's capabilities and tailor their expectations accordingly.

Acknowledgements

We extend our gratitude to the entire team and community that contributed to the development of Animagine XL 3.0, including our partners and collaborators who provided resources and insights crucial for this iteration.

  • Main: For the open-source grant supporting our research, thank you so much.
  • Cagliostro Lab Collaborators: For quality checking during pretraining and curating datasets during fine-tuning.
  • Kohya SS: For providing the essential training scripts and merging our PR on keep_tokens_separator (the Shuffle Separator).
  • Camenduru Server Community: For invaluable insights, support, and quality checking.
  • NovelAI: For inspiring our approach to building and labeling the dataset using tag ordering.

Collaborators

  • Linaqruf
  • DamarJati
  • Asahina2K
  • ItsMeBell
  • Zwicky18
  • NekoFi
  • Scipius2121
  • Raelina

License

This model is licensed under the CreativeML Open RAIL++-M License.

To ensure full compatibility with the upstream SDXL ecosystem and standard usage rights, this model adheres strictly to the original SDXL terms, which include:

  • ✅ Permitted: Commercial use, modifications, distribution, private use
  • ❌ Prohibited: Illegal activities, harmful content generation, discrimination, exploitation

Note: This license supersedes any previous community license tags (e.g., FAIPL) applied to earlier versions of this repository, ensuring full compatibility with the standard SDXL ecosystem.

Please refer to the full license agreement for complete details.

Capabilities & Tags
diffusers · safetensors · text-to-image · stable-diffusion · stable-diffusion-xl · en · endpoints_compatible