

FLUX.1 Turbo Alpha

by alimama-creative

Open source · 25k downloads · 640 likes

Rating: 3.5 (640 reviews) · Category: Image · Access: API & Local
About

FLUX.1 Turbo Alpha is an optimized image generation model derived from an eight-step distillation of the FLUX.1-dev model. It excels at text-to-image (T2I) generation and can be integrated with tools like ControlNet for advanced features such as inpainting, while significantly reducing generation time. Its performance is particularly well-suited for high-resolution images (1024x1024) and demanding prompts, thanks to training on a large, filtered dataset that ensures high aesthetic quality. The model stands out for its adversarial approach, using a multi-head discriminator to refine results, and strikes a good balance between speed and fidelity compared to the original version. It is ideal for applications requiring fast inference without compromising visual quality.

Documentation

A Chinese version of this README is also available.

This repository provides an 8-step distilled LoRA for the FLUX.1-dev model, released by the AlimamaCreative Team.

Description

This checkpoint is an 8-step distilled LoRA trained on top of the FLUX.1-dev model. We use a multi-head discriminator to improve distillation quality. The model can be used for T2I, the inpainting ControlNet, and other FLUX-based models. Recommended settings are guidance_scale=3.5 and lora_scale=1. A version with fewer steps will be released later.

  • Text-to-image generation.

  • Inpainting with alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta: the distilled LoRA adapts well to the inpainting ControlNet, and the accelerated output follows the original model's results closely.
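As a rough sketch of pairing the Turbo LoRA with the inpainting ControlNet (assumptions: diffusers' FluxControlNetModel and FluxControlNetInpaintPipeline classes are used here for illustration; the Alimama repository ships its own pipeline code, so the exact class and arguments may differ):

```python
def run_turbo_inpainting(prompt, image, mask, device="cuda"):
    """Hypothetical sketch: Turbo LoRA + Alimama inpainting ControlNet."""
    import torch
    from diffusers import FluxControlNetModel, FluxControlNetInpaintPipeline

    # Inpainting ControlNet named in this README
    controlnet = FluxControlNetModel.from_pretrained(
        "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta",
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxControlNetInpaintPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        controlnet=controlnet,
        torch_dtype=torch.bfloat16,
    ).to(device)

    # The 8-step Turbo LoRA drops in exactly as in the T2I case
    pipe.load_lora_weights("alimama-creative/FLUX.1-Turbo-Alpha")
    pipe.fuse_lora()

    return pipe(
        prompt=prompt,
        image=image,
        mask_image=mask,
        control_image=image,
        guidance_scale=3.5,       # recommended setting from this card
        num_inference_steps=8,    # distilled step count
    ).images[0]
```

The function is only a template; in practice you would pass PIL images for `image` and `mask` and keep the same guidance scale and step count as the T2I example below.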

How to use

diffusers

This model can be used directly with diffusers.

Python
import torch
from diffusers.pipelines import FluxPipeline

model_id = "black-forest-labs/FLUX.1-dev"
adapter_id = "alimama-creative/FLUX.1-Turbo-Alpha"

# Load the base FLUX.1-dev pipeline in bfloat16
pipe = FluxPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Attach the 8-step Turbo LoRA and fuse it into the base weights
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "A DSLR photo of a shiny VW van that has a cityscape painted on it. A smiling sloth stands on grass in front of the van and is wearing a leather jacket, a cowboy hat, a kilt and a bowtie. The sloth is holding a quarterstaff and a big book."

# 8 inference steps instead of the ~50 typically used with FLUX.1-dev
image = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=1024,
    width=1024,
    num_inference_steps=8,
    max_sequence_length=512,
).images[0]
image.save("flux_turbo.png")

ComfyUI

  • T2I turbo workflow: click here
  • Inpainting controlnet turbo workflow: click here

Training Details

The model is trained on 1M images from open-source and internal sources, filtered for an aesthetic score of 6.3+ and a resolution greater than 800. We use adversarial training to improve quality: our method freezes the original FLUX.1-dev transformer as the discriminator backbone and adds multiple heads to every transformer layer. We fix the guidance scale at 3.5 during training and use a time shift of 3.
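The multi-head discriminator described above can be sketched in toy form. This is a hypothetical, heavily simplified illustration: the class name, tiny dimensions, and random tensors (standing in for the frozen FLUX.1-dev backbone's per-layer activations) are all placeholders, not the real training code.

```python
import torch
import torch.nn as nn

class PerLayerHeads(nn.Module):
    """One small trainable real/fake head per frozen transformer layer."""

    def __init__(self, num_layers: int, hidden_dim: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(hidden_dim), nn.Linear(hidden_dim, 1))
            for _ in range(num_layers)
        )

    def forward(self, layer_features):
        # layer_features: list of (batch, tokens, hidden_dim) tensors,
        # one per layer; pool over tokens, then score each layer.
        logits = [
            head(feats.mean(dim=1))
            for head, feats in zip(self.heads, layer_features)
        ]
        return torch.cat(logits, dim=-1)  # (batch, num_layers)

# Random tensors standing in for the frozen backbone's activations
feats = [torch.randn(2, 16, 32) for _ in range(4)]
disc = PerLayerHeads(num_layers=4, hidden_dim=32)
scores = disc(feats)  # shape (2, 4): one logit per layer per sample
```

The design point is that only the lightweight heads are trained; the full FLUX.1-dev transformer supplies features but stays fixed, which keeps the discriminator cheap relative to training a separate network.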

Mixed precision: bf16

Learning rate: 2e-5

Batch size: 64

Image size: 1024x1024

Capabilities & Tags
diffusers · safetensors · Text-to-Image · FLUX · Stable Diffusion · text-to-image · en
Specifications

  • Category: Image
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Rating: 3.5
