
illustrious

by calcuis

Open source · 4k downloads · 59 likes

2.2 (59 reviews) · Image · API & Local
About

Illustrious is an AI model specialized in generating images and visual content, available in quantized versions (gguf) and optimized for less powerful machines. It stands out for its compatibility with older systems thanks to optimizations such as the *lowvram* mode and lightweight formats (fp8, bf16), while maintaining satisfactory rendering quality. Its main use cases include artistic creation, illustration, and generating visuals for creative projects, with added flexibility from its integration into tools like ComfyUI. The model distinguishes itself by its ability to run on older hardware while leveraging advanced compression techniques to speed up loading times.
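As a rough illustration of why the quantized formats matter on weaker machines, a checkpoint's on-disk size scales with bits per weight. The sketch below uses a hypothetical parameter count purely for illustration; it is not a measurement of illustrious itself.

```python
def approx_size_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk checkpoint size: params * bits / 8 bytes."""
    return num_params * bits_per_weight / 8 / 1e9

# Hypothetical 2.6-billion-parameter diffusion model (illustrative only):
params = 2.6e9
for name, bits in [("bf16", 16), ("fp8", 8), ("gguf ~q4", 4.5)]:
    print(f"{name}: ~{approx_size_gb(params, bits):.1f} GB")
```

Halving the bits per weight halves the download and the memory footprint, which is why fp8 and gguf quantizations are the formats recommended for older hardware.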

Documentation

gguf quantized and fp8 scaled versions of illustrious (test pack)

setup (in general)

  • drag gguf file(s) to diffusion_models folder (./ComfyUI/models/diffusion_models)
  • drag clip or encoder(s), i.e., illustrious_g_clip and illustrious_l_clip, to text_encoders folder (./ComfyUI/models/text_encoders)
  • drag vae decoder(s), i.e., vae, to vae folder (./ComfyUI/models/vae)
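The drag-and-drop steps above amount to copying each file into the matching ComfyUI model folder. A minimal sketch, assuming hypothetical filenames for the downloaded files (only the folder names come from the steps above):

```python
import shutil
from pathlib import Path

# Destination folders from the setup steps above, relative to the ComfyUI root.
# The filenames are hypothetical placeholders for the downloaded files.
DESTINATIONS = {
    "diffusion_models": ["illustrious-q4_0.gguf"],
    "text_encoders": ["illustrious_g_clip.gguf", "illustrious_l_clip.gguf"],
    "vae": ["illustrious_vae.safetensors"],
}

def install(downloads: Path, comfy_root: Path) -> None:
    """Copy downloaded model files into the ComfyUI model folders."""
    for folder, names in DESTINATIONS.items():
        dest = comfy_root / "models" / folder
        dest.mkdir(parents=True, exist_ok=True)
        for name in names:
            src = downloads / name
            if src.exists():
                shutil.copy2(src, dest / name)
```

Usage would be something like `install(Path("~/Downloads").expanduser(), Path("./ComfyUI"))`.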

run it straight (no installation needed)

  • get the comfy pack with the new gguf-node (pack)
  • run the .bat file in the main directory

workflow

  • drag any workflow json file into the active browser tab; or
  • drag any generated output file (i.e., picture, video, etc., which contains the workflow metadata) into the active browser tab
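The second option works because ComfyUI embeds the workflow JSON in the metadata of its output files; for PNG images it lives in tEXt chunks. A stdlib-only sketch for extracting it (the chunk-key names are the ones ComfyUI conventionally uses; treat them as an assumption):

```python
import json
import struct
import zlib

def png_text_chunks(path: str) -> dict:
    """Read tEXt chunks from a PNG file; ComfyUI conventionally stores its
    workflow JSON under the 'workflow' and 'prompt' keys of such chunks."""
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                out[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return out

# e.g. workflow = json.loads(png_text_chunks("output.png")["workflow"])
```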

review

  • use tag/word(s) as input for more accurate results with these legacy models; not very convenient (compared to recent models) at the very beginning
  • credit should be given to the contributors from the civitai platform
  • fast-illustrious gguf was quantized from the fp8 scaled safetensors while illustrious gguf was quantized from the original bf16 (this is just an attempt to test: does the trimmed model with 50% fewer tensors really load faster? please test it yourself; btw, some models have a unique structure/feature affecting loader performance, so it is never one size fits all)
  • fp8 scaled files work fine with this model, including the vae and clips
  • good to run on old machines, i.e., 9xx series or before (legacy mode [--disable-cuda-malloc --lowvram] supported); compatible with the new gguf-node
  • disclaimer: some models (original files) are provided by someone else and we might not easily spot the creator/contributor(s) behind them unless they are specified in the source; we would rather leave the field blank than write anonymous/unnamed/unknown; if it is your work, do let us know and we will credit it properly; thanks for everything
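The legacy-mode flags mentioned in the review can be assembled into a launch command. A minimal sketch; the `old_gpu` switch is an illustrative assumption, not a ComfyUI feature (ComfyUI does not auto-detect this for you):

```python
import sys

def launch_args(old_gpu: bool) -> list:
    """Build a ComfyUI launch command; --disable-cuda-malloc and --lowvram
    are the legacy-mode flags noted in the review above."""
    args = [sys.executable, "main.py"]
    if old_gpu:  # e.g. a GTX 9xx-series card or older
        args += ["--disable-cuda-malloc", "--lowvram"]
    return args

# e.g. subprocess.run(launch_args(old_gpu=True), cwd="ComfyUI")
```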

reference

  • wai creator
  • comfyui comfyanonymous
  • gguf-node (pypi|repo|pack)
Capabilities & Tags
gguf · gguf-node · text-to-image · en
Links & Resources
Specifications

  • Category: Image
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Rating: 2.2

Try illustrious

Access the model directly