
tiny sd

by segmind

Open source · 7k downloads · 87 likes

Rating: 2.4 (87 reviews) · Image · API & Local

About
About

The tiny-sd model is an optimized and lightweight version of the Stable Diffusion model, designed to generate images from text more quickly and efficiently. Distilled from Realistic Vision V4.0, it leverages an enhanced subset of artistic data to produce realistic and detailed results. Its key capabilities include generating high-resolution images (512x512) with reduced latency, up to 80% faster than base models, while maintaining comparable visual quality. This model is particularly well-suited for applications requiring real-time or large-scale image generation, such as creative tools, prototypes, or automated pipelines. What sets it apart is its balance between performance and accessibility, offering a lightweight alternative without sacrificing result fidelity.

Documentation

license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V4.0
datasets:
  • recastai/LAION-art-EN-improved-captions
tags:
  • stable-diffusion
  • stable-diffusion-diffusers
  • text-to-image
  • diffusers
inference: true

Text-to-image Distillation

This pipeline was distilled from SG161222/Realistic_Vision_V4.0 on a subset of the recastai/LAION-art-EN-improved-captions dataset. Below are some example images generated with the tiny-sd model.

[Image: grid of validation images generated with tiny-sd]

This pipeline is based on the paper. Training code can be found here.
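To make the distillation idea concrete, here is a toy sketch of the output-matching objective: a small student is trained to reproduce a larger teacher's prediction under a mean-squared-error loss. The "networks" below are hypothetical stand-in scalar maps, not the actual UNets or training code.

```python
import random

random.seed(0)

def teacher(x):            # stand-in for the large Realistic Vision V4.0 UNet
    return 0.5 * x

def student(x, w):         # stand-in for the smaller tiny-sd UNet
    return w * x

xs = [random.gauss(0, 1) for _ in range(1024)]  # fake noised latents
w, lr = 0.0, 0.1                                # student weight, SGD step size

for _ in range(200):
    # gradient of mean((student - teacher)^2) with respect to w
    grad = sum(2 * (student(x, w) - teacher(x)) * x for x in xs) / len(xs)
    w -= lr * grad

mse = sum((student(x, w) - teacher(x)) ** 2 for x in xs) / len(xs)
# w converges toward the teacher's 0.5, driving the distillation MSE to ~0
```

The real recipe trains a smaller UNet against the teacher's noise predictions on LAION-art captions, but the optimization loop follows this same match-the-teacher pattern.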

Pipeline usage

You can use the pipeline like so:

Python
from diffusers import DiffusionPipeline
import torch

# Load the distilled checkpoint in half precision.
pipeline = DiffusionPipeline.from_pretrained("segmind/tiny-sd", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 inference needs a GPU; use float32 on CPU

prompt = "Portrait of a pretty girl"
image = pipeline(prompt).images[0]
image.save("my_image.png")

Training info

These are the key hyperparameters used during training:

  • Steps: 125000
  • Learning rate: 1e-4
  • Batch size: 32
  • Gradient accumulation steps: 4
  • Image resolution: 512
  • Mixed-precision: fp16
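With a batch size of 32 and 4 gradient-accumulation steps, each optimizer update effectively averages over 32 × 4 = 128 samples. A minimal sketch of that mechanic, with hypothetical stand-in gradients (not the actual trainer):

```python
batch_size = 32
grad_accum_steps = 4
effective_batch = batch_size * grad_accum_steps  # update spans 128 samples

micro_grads = [0.4, 0.2, 0.3, 0.1]   # fake per-micro-batch mean gradients
accum = 0.0
steps_taken = 0
for i, g in enumerate(micro_grads, start=1):
    accum += g / grad_accum_steps    # average contributions across micro-batches
    if i % grad_accum_steps == 0:    # step only once per 4 micro-batches
        steps_taken += 1             # optimizer.step() would go here
        accum = 0.0                  # reset for the next accumulation window
```

Accumulation trades wall-clock time for memory: it reaches a large effective batch without holding 128 samples' activations on the GPU at once.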

Speed Comparison

We have observed that the distilled models are up to 80% faster than the base SD1.5 models. Below is a comparison on an A100 80GB.

[Figure: inference-speed comparison of tiny-sd vs. base SD1.5 on an A100 80GB]

Here is the code for benchmarking the speeds.
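The linked benchmark script is not reproduced on this page, but the measurement pattern is straightforward. A minimal sketch with a stand-in workload; for real numbers you would substitute a call to the diffusers pipeline:

```python
import time

def benchmark(fn, n_warmup=1, n_runs=3):
    """Mean wall-clock seconds per call of fn, after warmup runs."""
    for _ in range(n_warmup):
        fn()                      # warmup: the first call often pays one-off costs
    t0 = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - t0) / n_runs

# Stand-in workload; for the actual comparison, fn would wrap
# pipeline(prompt) for tiny-sd and for the base SD1.5 pipeline.
mean_s = benchmark(lambda: sum(i * i for i in range(100_000)))
```

Warmup matters especially for GPU pipelines, where the first call includes model loading, CUDA context setup, and kernel compilation that should not be counted against per-image latency.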

Capabilities & Tags

diffusers · stable-diffusion · stable-diffusion-diffusers · text-to-image · endpoints_compatible

Specifications

  • Category: Image
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Rating: 2.4
