by Laxhar
Open source · 11k downloads · 66 likes
NoobAI XL Vpred 1.0 is an AI image generation model specialized in creating images from text. It stands out for its v-prediction approach, which differs from classic (eps-prediction) models and gives it a distinctive ability to generate detailed, coherent visuals. Trained on complete datasets such as Danbooru and e621, it excels at producing stylized images, particularly in artistic and anime styles, with careful attention to the quality and relevance of the tags used. Its main use cases include artistic creation, digital illustration, and generating visual content for creative projects. What sets it apart is its prediction method and its optimization for specific parameters, yielding results that are more precise and better suited to the needs of advanced users.
This image generation model, based on Laxhar/noobai-XL_v1.0, leverages full Danbooru and e621 datasets with native tags and natural language captioning.
Implemented as a v-prediction model (distinct from eps-prediction), it requires specific parameter configurations, detailed in the following sections.
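For readers unfamiliar with the distinction: in the standard v-parameterization (general diffusion notation, not specific to this model card), the network predicts v = α_t·ε − σ_t·x₀ instead of the noise ε, and the denoised sample is recovered as x₀ = α_t·x_t − σ_t·v. A minimal numeric sketch, assuming a variance-preserving schedule with α_t² + σ_t² = 1:

```python
def v_target(x0, eps, alpha_t, sigma_t):
    # v-prediction target: v = alpha_t * eps - sigma_t * x0
    # (an eps-prediction model would regress on eps directly).
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    # Recover the denoised sample from a v prediction,
    # valid when alpha_t**2 + sigma_t**2 == 1:
    #   x0 = alpha_t * x_t - sigma_t * v
    return alpha_t * x_t - sigma_t * v

# Round-trip check with a variance-preserving pair (0.8, 0.6):
alpha_t, sigma_t = 0.8, 0.6
x0, eps = 1.0, 0.5
x_t = alpha_t * x0 + sigma_t * eps  # forward diffusion at step t
v = v_target(x0, eps, alpha_t, sigma_t)
assert abs(x0_from_v(x_t, v, alpha_t, sigma_t) - x0) < 1e-9
```

This is why the scheduler below must be told `prediction_type="v_prediction"`: interpreting a v-prediction model's output as ε produces badly miscalibrated samples.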
Special thanks to my teammate euge for the coding work, and we're grateful for the technical support from many helpful community members.
(If you haven't installed reForge) Install reForge by following the instructions in the repository;
Launch WebUI and use the model as usual!
SAMPLE with NODES
Note that the dev branch is not stable and may contain bugs. To switch to the dev branch:

```shell
git switch dev
git pull
```
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

# NoobAI XL Vpred is a v-prediction model: the scheduler must be configured
# for v_prediction with zero-terminal-SNR beta rescaling.
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
Note: Please make sure Git is installed and your environment is properly configured on your machine.
Recommended positive prompt prefix:
masterpiece, best quality, newest, absurdres, highres, safe,
Recommended negative prompt:
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
Recommended tag ordering:
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>, <other tags>
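The recommended tag ordering can be assembled programmatically. A minimal sketch (`build_prompt` is a hypothetical helper, not part of any released tooling; the `artist:` prefix follows the sample prompt earlier in this card):

```python
def build_prompt(subject, characters=(), series=(), artists=(),
                 special=(), general=(), other=()):
    # Assemble tags in the recommended order:
    # <1girl/1boy/1other/...>, <character>, <series>, <artists>,
    # <special tags>, <general tags>, <other tags>
    parts = [
        subject,
        *characters,
        *series,
        *("artist:" + a for a in artists),  # artist tags use the artist: prefix
        *special,
        *general,
        *other,
    ]
    return ", ".join(p for p in parts if p)

print(build_prompt("1girl", artists=("nixeu",), general=("high contrast",)))
# 1girl, artist:nixeu, high contrast
```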
For quality tags, we evaluated image popularity and mapped each image's percentile rank to a tag as shown below. The ultimate goal is for quality tags to effectively track user preferences in recent years.
| Percentile Range | Quality Tags |
|---|---|
| > 95th | masterpiece |
| > 85th, <= 95th | best quality |
| > 60th, <= 85th | good quality |
| > 30th, <= 60th | normal quality |
| <= 30th | worst quality |
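The percentile-to-tag mapping above can be expressed as a small helper (a sketch for illustration; `quality_tag` is not an official utility):

```python
def quality_tag(percentile):
    # Map an image's popularity percentile (0-100) to its training-time
    # quality tag, following the table above.
    if percentile > 95:
        return "masterpiece"
    if percentile > 85:
        return "best quality"
    if percentile > 60:
        return "good quality"
    if percentile > 30:
        return "normal quality"
    return "worst quality"
```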
| Tag | Description |
|---|---|
| very awa | Top 5% of images in terms of aesthetic score by waifu-scorer |
| worst aesthetic | Bottom 5% of images in terms of aesthetic score by both waifu-scorer and aesthetic-shadow-v2 |
| ... | ... |
There are two types of date tags: year tags and period tags. For year tags, use the year xxxx format, e.g., year 2021. For period tags, please refer to the following table:
| Year Range | Period tag |
|---|---|
| 2005-2010 | old |
| 2011-2014 | early |
| 2014-2017 | mid |
| 2018-2020 | recent |
| 2021-2024 | newest |
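The year-to-period mapping can be sketched as a small helper (`period_tag` is hypothetical, not official tooling; note the table lists 2014 under both "early" and "mid", and this sketch resolves the overlap by returning the first matching range):

```python
PERIODS = [
    ((2005, 2010), "old"),
    ((2011, 2014), "early"),
    ((2014, 2017), "mid"),      # 2014 overlaps with "early" in the table
    ((2018, 2020), "recent"),
    ((2021, 2024), "newest"),
]

def period_tag(year):
    # Return the period tag for a given year, or None if out of range.
    for (lo, hi), tag in PERIODS:
        if lo <= year <= hi:
            return tag
    return None
```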
Communication
QQ Groups:
Discord: Laxhar Dream Lab SDXL NOOB
How to train a LoRA on v-pred SDXL model
This tutorial is intended for LoRA trainers using sd-scripts.
article link: https://civitai.com/articles/8723
Utility Tool
Laxhar Lab is training a dedicated ControlNet model for NoobXL, and the models are being released progressively. So far, the normal, depth, and canny variants have been released.
Model link: https://civitai.com/models/929685
This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.
We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.
Hey everyone! Don’t keep the cool stuff to yourself! 🚀
If you find new tricks, wild art combos, magic prompts, or train fun LoRAs, share them openly!
Post in DC/TG/X/group chats—let’s all grow together.
No more secret models/prompts like the old days.
Open sharing = more fun for all! 💡✨
PS: We're not trying to lock things down! Back in the 1.5/n3 days, tons of secret models/prompts popped up (ugh, messy vibes).
Let’s break that cycle! Sharing = faster progress, wilder ideas, and way more hype.
No gatekeeping—post your 'secret sauce' in public spaces. Everyone wins!
To foster a thriving open-source community, users MUST comply with the following requirements:
Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.
Narugo1992: Thanks to narugo1992 and the deepghs team for open-sourcing various training sets, image processing tools, and models.
OnomaAI: Thanks to OnomaAI for open-sourcing a powerful base model.
V-Prediction: Thanks to the following individuals for their detailed instructions and experiments.
Community: aria1th261, neggles, sdtana, chewing, irldoggo, reoe, kblueleaf, Yidhar, ageless, 白玲可, Creeper, KaerMorh, 吟游诗人, SeASnAkE, zwh20081, Wenaka~喵, 稀里哗啦, 幸运二副, 昨日の約, 445, EBIX, Sopp, Y_X, Minthybasis, Rakosz