by stabilityai
SDXL Turbo Ryzen AI is an optimized version of the SDXL-Turbo model designed specifically for AMD Ryzen AI processors, leveraging the XDNA™ 2 architecture of AMD’s NPUs. This model balances the precision of FP16 with the efficiency of INT8, delivering fast and effective image generation while minimizing energy consumption. It excels in creating artistic works, design applications, and educational tools, remaining accessible for both non-commercial and commercial use under certain conditions. What sets it apart is its exclusive hardware optimization, enabling smooth performance on devices equipped with Ryzen AI. However, like all generative models, it may produce non-compliant content, requiring responsible use and additional verification.
This repository hosts the AMD Ryzen™ AI optimized version of SDXL-Turbo created in collaboration with AMD. This ONNX-ported model is the world’s first Block FP16 model with the UNET and VAE decoder completely in Block FP16. Built for the AMD XDNA™ 2 based NPU, this model combines the accuracy of FP16 with the performance of INT8.
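To illustrate the idea behind a block floating-point format (a shared exponent per block of values, with the values themselves stored as INT8 mantissas), here is a minimal NumPy sketch. This is purely illustrative and is not AMD's actual Block FP16 format; the block size, rounding, and exponent selection are assumptions for the example.

```python
import numpy as np

def block_fp_quantize(x, block_size=32):
    """Quantize a 1-D float array into a toy block floating-point format:
    each block shares one exponent, and values are stored as INT8 mantissas.
    Simplified illustration only, not AMD's Block FP16 implementation."""
    n = len(x)
    pad = (-n) % block_size
    xp = np.pad(x.astype(np.float32), (0, pad))
    blocks = xp.reshape(-1, block_size)
    # Choose one shared exponent per block from its largest magnitude.
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(np.maximum(max_abs, 1e-38)))
    scale = 2.0 ** exp / 127.0  # map the block's range onto INT8
    mantissa = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return mantissa, exp, n

def block_fp_dequantize(mantissa, exp, n):
    """Reconstruct float values from INT8 mantissas and shared exponents."""
    scale = 2.0 ** exp / 127.0
    return (mantissa.astype(np.float32) * scale).reshape(-1)[:n]
```

Because each block stores only one exponent plus INT8 mantissas, memory and compute costs approach INT8 while the shared exponent preserves a wide dynamic range, which is the trade-off the model card describes.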
This model can be demoed using the Amuse AI application.
Amuse settings: open in "EZ Mode"; toggle Balanced Mode; check "AMD XDNA™ 2 Stable Diffusion Offload".
Please note: This model is released under the Stability Community License. Visit Stability AI to learn more, or contact us for commercial licensing details.
Refer to the SDXL-Turbo Model card for more details.
For research purposes, we recommend our generative-models GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference).
Repository: https://github.com/Stability-AI/generative-models
SDXL Turbo Paper: https://stability.ai/research/adversarial-diffusion-distillation
Intended uses include the following:
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
All uses of the model must be in accordance with our Acceptable Use Policy.
The model was not trained to produce factual or true representations of people or events; using the model to generate such content is therefore out of scope.
As part of our safety-by-design and responsible AI deployment approach, we take deliberate measures to ensure integrity starting at the early stages of development, and we implement safety measures throughout the development of our models. We have implemented safety mitigations intended to reduce the risk of certain harms; however, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.
For more about our approach to Safety, please visit our Safety page.
Our integrity evaluation methods include structured evaluations and red-team testing for certain harms. Testing was conducted primarily in English and may not cover all possible harms.
Please report any issues with the model, or contact us.