
BK-SDM-Small-2M

by nota-ai

Open source · 3k downloads · 15 likes

Rating: 1.5 (15 reviews) · Image · API & Local

About
About

The BK SDM Small 2M model is an optimized and compressed version of Stable Diffusion, specialized in generating images from text. Trained on a dataset ten times larger than its predecessors (2.3 million image-text pairs from LAION-Aesthetics), it delivers higher-quality generation while maintaining a lightweight and efficient architecture. Its key strengths lie in the rapid synthesis of realistic images that closely align with provided descriptions, featuring clear object distinction and enhanced visual details. Ideal for applications requiring a balance between performance and limited resources, it stands out for its ability to produce consistent results even with reduced denoising steps. This model is particularly suited for developers and creators seeking to integrate a high-performance image generation solution without relying on heavy infrastructure.

Documentation

BK-SDM-2M Model Card

BK-SDM-{Base-2M, Small-2M, Tiny-2M} are pretrained with 10× more data (2.3M LAION image-text pairs) compared to our previous release.

  • Block-removed Knowledge-distilled Stable Diffusion Model (BK-SDM) is an architecturally compressed SDM for efficient text-to-image synthesis.
  • The previous BK-SDM-{Base, Small, Tiny} were obtained via distillation pretraining on 0.22M LAION pairs.
  • Resources for more information: Paper, GitHub, Demo.

Examples with 🤗Diffusers library.

Inference code with the default PNDM scheduler and 50 denoising steps:

Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-small-2m", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a black vase holding a bouquet of roses"
image = pipe(prompt).images[0]
image.save("example.png")

Compression Method

BK-SDM-2M retains the U-Net architecture and distillation pretraining of BK-SDM; the only difference is a 10× increase in the number of training pairs.

  • Training Data: 2,256,472 image-text pairs (i.e., 2.3M pairs) from LAION-Aesthetics V2 6.25+.
  • Hardware: A single NVIDIA A100 80GB GPU
  • Gradient Accumulations: 4
  • Batch: 256 (=4×64)
  • Optimizer: AdamW
  • Learning Rate: a constant learning rate of 5e-5 for 50K-iteration pretraining
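The batch arithmetic above (4 gradient accumulations × a per-step batch of 64 = an effective batch of 256) can be sketched as a generic PyTorch accumulation loop; the linear model and random data below are placeholders for illustration, not the actual BK-SDM distillation code:

```python
import torch
from torch import nn

# Placeholder stand-ins for the real U-Net and LAION data loader.
model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # constant LR, as in the card

ACCUM_STEPS = 4   # gradient accumulations
MICRO_BATCH = 64  # samples per forward/backward pass
EFFECTIVE_BATCH = ACCUM_STEPS * MICRO_BATCH  # 4 x 64 = 256

optimizer.zero_grad()
for _ in range(ACCUM_STEPS):
    x = torch.randn(MICRO_BATCH, 8)
    loss = model(x).pow(2).mean()
    # Scale the loss so accumulated gradients average over the full 256-sample batch.
    (loss / ACCUM_STEPS).backward()
optimizer.step()
optimizer.zero_grad()
```

Accumulating four micro-batches before each optimizer step is what lets a single A100 80GB train with an effective batch of 256.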

Experimental Results

The following table shows the zero-shot results on 30K samples from the MS-COCO validation split. After generating 512×512 images with the PNDM scheduler and 25 denoising steps, we downsampled them to 256×256 for evaluating generation scores.

  • Our models were drawn at the 50K-th training iteration.
| Model | FID↓ | IS↑ | CLIP Score↑ (ViT-g/14) | # Params, U-Net | # Params, Whole SDM |
|---|---|---|---|---|---|
| Stable Diffusion v1.4 | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B |
| BK-SDM-Base (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B |
| BK-SDM-Base-2M (Ours) | 14.81 | 34.17 | 0.2883 | 0.58B | 0.76B |
| BK-SDM-Small (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B |
| BK-SDM-Small-2M (Ours) | 17.05 | 33.10 | 0.2734 | 0.49B | 0.66B |
| BK-SDM-Tiny (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B |
| BK-SDM-Tiny-2M (Ours) | 17.53 | 31.32 | 0.2690 | 0.33B | 0.50B |

Effect of Different Data Sizes for Training BK-SDM-Small

Increasing the number of training pairs improves the IS and CLIP scores over training progress. The MS-COCO 256×256 30K benchmark was used for evaluation.

[Figure: training progress with different data sizes]

Furthermore, with the growth in data volume, visual results become more favorable (e.g., better image-text alignment and clear distinction among objects).

[Figure: visual results with different data sizes]

Additional Visual Examples

[Figure: additional visual examples]

Uses

Follow the usage guidelines of Stable Diffusion v1.

Acknowledgments

  • We express our gratitude to Microsoft for Startups Founders Hub for generously providing the Azure credits used during pretraining.
  • We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by CompVis, Runway, and Stability AI.
  • Special thanks to the contributors to LAION, Diffusers, and Gradio for their valuable support.

Citation

Bibtex
@article{kim2023architectural,
  title={BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={arXiv preprint arXiv:2305.15798},
  year={2023},
  url={https://arxiv.org/abs/2305.15798}
}
Bibtex
@article{kim2023bksdm,
  title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)},
  year={2023},
  url={https://openreview.net/forum?id=bOVydU0XKC}
}

This model card was written by Bo-Kyeong Kim and is based on the Stable Diffusion v1 model card.

Capabilities & Tags

diffusers · safetensors · stable-diffusion · stable-diffusion-diffusers · text-to-image · endpoints_compatible

Specifications

Category: Image
Access: API & Local
License: Open Source
Pricing: Open Source
Rating: 1.5
