
Z-Image-Turbo Training Adapter

by ostris

Open source · 49k downloads · 132 likes

2.7 (132 reviews) · Image · API & Local

About
About

The Z-Image-Turbo Training Adapter is a tool designed to fine-tune the Z-Image-Turbo model while preserving its step distillation, a technique that accelerates image generation by reducing the number of denoising steps. Acting as a distillation corrector, it slows the breakdown of this optimization during short training sessions, enabling the addition of new styles, concepts, or characters without degrading the model's performance. Ideal for quick fine-tuning sessions, it prevents artifacts that might otherwise arise from prolonged training. Once adaptation is complete, the adapter can be removed, leaving the original model intact and capable of generating images at optimal speed. Its approach relies on low-rank adaptation (LoRA) training with a low learning rate, preserving existing knowledge while mitigating the loss of distillation.

Documentation

Z-Image-Turbo Training Adapter

This is a training adapter designed for fine-tuning Tongyi-MAI/Z-Image-Turbo. It was made for use with AI Toolkit but could potentially be used in other trainers as well. If you are implementing it into training code and have questions, I am always happy to help; just reach out. It can also be used as a general de-distillation LoRA for inference, to remove the "Turbo" from "Z-Image-Turbo".

Why is it needed?

When you train directly on a step-distilled model, the distillation breaks down very quickly, and you lose the step distillation in an unpredictable way. A de-distillation training adapter slows this process significantly, allowing you to do short training runs while preserving the step distillation (speed).

What is the catch?

This is really just a hack to significantly slow the breakdown of the distillation when fine-tuning a distilled model. The distillation will still break down over time. In practice, this adapter works great for shorter runs, such as styles, concepts, and characters. A long training run, however, will likely break the distillation down to the point where artifacts appear once the adapter is removed.

How was it made?

I generated thousands of images at various sizes and aspect ratios using Tongyi-MAI/Z-Image-Turbo. Then I simply trained a LoRA on those images at a low learning rate (1e-5). This allowed the distillation to break down while preserving the model's existing knowledge.
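The core mechanic described above, training only low-rank factors on frozen base weights at a very low learning rate, can be sketched with a toy numpy example. The shapes, data, and loss here are purely illustrative stand-ins (the real training operated on diffusion-model weights and image data), but the structure is standard LoRA: a random down-projection, a zero-initialized up-projection, and SGD at the 1e-5 learning rate the README reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer standing in for a frozen diffusion-model weight matrix.
# All shapes and data here are illustrative, not the real model's.
x = rng.normal(size=(8, 4))          # inputs
W = rng.normal(size=(4, 3))          # frozen base weights (never updated)
y = rng.normal(size=(8, 3))          # regression targets

# LoRA factors: down-projection is random, up-projection starts at zero,
# so the adapter initially contributes nothing (standard LoRA init).
rank = 2
down = rng.normal(size=(4, rank)) * 0.1
up = np.zeros((rank, 3))

lr = 1e-5  # the low learning rate the README reports using

def loss(down, up):
    pred = x @ (W + down @ up)       # base weights plus low-rank update
    return np.mean((pred - y) ** 2)

initial = loss(down, up)
for _ in range(200):
    pred = x @ (W + down @ up)
    g = 2 * (pred - y) / pred.size   # dL/dpred for mean-squared error
    g_dW = x.T @ g                   # gradient w.r.t. the effective weights
    g_up = down.T @ g_dW             # chain rule into the LoRA factors
    g_down = g_dW @ up.T
    up -= lr * g_up                  # only the LoRA factors move;
    down -= lr * g_down              # W itself never changes
final = loss(down, up)
print(initial, final)
```

The low learning rate means each step nudges the adapter only slightly, which is what lets the base model's existing knowledge survive while the distillation gradually breaks down.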

How does it work?

Since this adapter has already broken down the distillation, a LoRA trained on top of it no longer breaks down the distillation itself: your LoRA learns only the subject you are training. At inference/sampling time, the training adapter is removed, leaving your new information on the distilled model so that it runs at distilled speeds. Attached is an example of a short training run on a character, with and without this adapter.
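The train-with, remove-at-inference workflow reduces to simple weight arithmetic, which the following numpy sketch illustrates. The matrix sizes and the `lora_delta` helper are hypothetical stand-ins for the real model's weights and adapters; the point is only that subtracting the de-distillation adapter leaves the base weights plus your new LoRA.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative weight matrix; real diffusion-model weights are far larger.
W_base = rng.normal(size=(4, 4))            # the distilled (Turbo) weights

def lora_delta(d_in, d_out, rank, rng):
    """A low-rank weight update, delta_W = down @ up (hypothetical helper)."""
    return rng.normal(size=(d_in, rank)) @ rng.normal(size=(rank, d_out))

delta_dedistill = lora_delta(4, 4, 2, rng)  # this training adapter
delta_subject = lora_delta(4, 4, 2, rng)    # your newly trained LoRA

# During training: the de-distillation adapter sits under your LoRA, so the
# LoRA learns only the new subject rather than re-learning de-distillation.
W_train = W_base + delta_dedistill + delta_subject

# At inference: drop the training adapter; your LoRA now rides directly on
# the distilled weights and generates at distilled (Turbo) speed.
W_infer = W_train - delta_dedistill
print(np.allclose(W_infer, W_base + delta_subject))
```

Because LoRA updates are additive, removing the training adapter is exact: what remains is precisely the distilled model plus the subject you trained.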

(Image: zimage_adapter, comparison of a short training run with and without the adapter)

Capabilities & Tags
diffusers · text-to-image · lora
Links & Resources
Specifications
Category: Image
Access: API & Local
License: Open Source
Pricing: Open Source
Rating: 2.7

Try the Z-Image-Turbo Training Adapter

Access the model directly