
Z-Image-Turbo Training Adapter

by ostris

Open source · 49k downloads · 132 likes

2.7 (132 reviews) · Image · API & Local
About

The Z-Image-Turbo Training Adapter is a tool designed for fine-tuning the Z-Image-Turbo model while preserving its step distillation, the technique that makes its image generation fast. It acts as a distillation corrector, slowing the loss of that optimization during short training runs, so you can add new styles, concepts, or characters without degrading the model's performance. It is best suited to short fine-tuning sessions; over long runs the distillation still degrades, and artifacts can appear once the adapter is removed. After adaptation, the adapter can be removed, leaving the original model intact and still generating images at its optimized speed. The approach relies on LoRA training at a low learning rate, which lets the distillation break down inside the adapter while preserving the model's existing knowledge.

Documentation

Z-Image-Turbo Training Adapter

This is a training adapter designed to be used for fine-tuning Tongyi-MAI/Z-Image-Turbo. It was made for use with AI Toolkit but could potentially be used in other trainers as well. If you are implementing it into training code and have questions, I am always happy to help; just reach out. It can also be used as a general de-distillation LoRA for inference to remove the "Turbo" from "Z-Image-Turbo".
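As a rough illustration of that inference use, here is a minimal sketch using Hugging Face diffusers. It assumes the model loads through the generic DiffusionPipeline, that the pipeline supports standard LoRA loading, and that the adapter ships as a regular LoRA checkpoint; the repo id and the higher step count are assumptions, not details from this page:

```python
# Minimal sketch, not confirmed usage: assumes Z-Image-Turbo loads via
# diffusers' generic DiffusionPipeline and supports standard LoRA APIs.
# The adapter repo id and step count below are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

# Applying the adapter "de-distills" the model, so we sample with a
# conventional step count instead of the few steps Turbo normally needs.
pipe.load_lora_weights("ostris/z-image-turbo-training-adapter")  # assumed repo id

image = pipe(
    "a watercolor fox in a misty forest",
    num_inference_steps=28,  # normal, non-turbo step count (assumption)
).images[0]
image.save("de_distilled_sample.png")
```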

Why is it needed?

When you train directly on a step-distilled model, the distillation breaks down very quickly, and you lose the step distillation in an unpredictable way. A de-distill training adapter slows this process down significantly, allowing you to do short training runs while preserving the step distillation (speed).

What is the catch?

This is really just a hack to significantly slow down the breakdown of the distillation when fine-tuning a distilled model. The distillation will still degrade over time. In practice, that means this adapter works great for shorter runs such as styles, concepts, and characters. However, a long training run will likely break the distillation down to the point where artifacts are produced when the adapter is removed.

How was it made?

I generated thousands of images at various sizes and aspect ratios using Tongyi-MAI/Z-Image-Turbo. Then I simply trained a LoRA on those images at a low learning rate (1e-5). This allowed the distillation to break down while preserving the model's existing knowledge.
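For intuition, the data-generation step described above might look roughly like the sketch below. The prompt list, resolutions, and output paths are invented for illustration, and it again assumes generic diffusers loading works for this model:

```python
# Hedged sketch of the dataset-generation step described above: sample
# many images from the base turbo model at varied sizes/aspect ratios.
# Prompts, sizes, counts, and paths are illustrative assumptions.
import itertools
import pathlib
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

out_dir = pathlib.Path("zimage_dataset")
out_dir.mkdir(exist_ok=True)

prompts = ["a city street at dusk", "a studio portrait", "a mountain lake"]
sizes = [(1024, 1024), (1216, 832), (832, 1216)]  # varied aspect ratios

for i, (prompt, (w, h)) in enumerate(itertools.product(prompts, sizes)):
    image = pipe(prompt, width=w, height=h).images[0]
    image.save(out_dir / f"{i:05d}.png")

# A LoRA is then trained on these self-generated images at a low
# learning rate (1e-5), e.g. with AI Toolkit, per the text above.
```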

How does it work?

Since this adapter has already broken down the distillation, a LoRA trained on top of it will not absorb that breakdown itself; your LoRA only learns the subject you are training. When it comes time to run inference / sampling, we remove this training adapter, which leaves your new information on the distilled model and lets it run at distilled speeds. Attached is an example of a short training run on a character with and without this adapter.

[Image: zimage_adapter, a comparison of a short character training run with and without the adapter]
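To make that workflow concrete, inference after training might look like the sketch below: the training adapter is deliberately not loaded, only the LoRA you trained on top of it. The LoRA path, prompt, and low step count are hypothetical:

```python
# Hedged sketch of the inference step described above. The de-distill
# training adapter is NOT loaded here, only your newly trained LoRA,
# so the new concept runs on the intact, fast distilled model.
# The LoRA path, prompt, and step count are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

# Load only your fine-tuned LoRA; leave the de-distill adapter out.
pipe.load_lora_weights("path/to/my_character_lora.safetensors")  # hypothetical

image = pipe(
    "my_character riding a bicycle",
    num_inference_steps=8,  # distilled models need few steps (assumption)
).images[0]
image.save("character_sample.png")
```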

Specifications

Category: Image
Access: API & Local
License: Open Source
Pricing: Open Source
Rating: 2.7
