SDXL Turbo vs SDXL

The 0.9 VAE model, right? There is an extra SDXL VAE provided AFAIK, but if these are baked into the main models, the 0.9 VAE shouldn't be needed separately. Not cherry picked.

Which is funny, because I sort of forgot about lllite after it released — ControlNet seems to do much better with traditional SDXL models (I assume because of either the step count or CFG, but maybe it's something intrinsic to the Turbo architecture?).

"SDXL requires at least 8GB of VRAM" — I have a lowly MX250 in a laptop, which has 2 GB of VRAM.

SDXL Lightning ⚡: a swift advancement over SDXL Turbo.

Mouth open vs. mouth closed, etc. Play around with the denoise to see what's best for you: 0.5 denoiser strength, start denoising at 0.5.

Actually, SDXL used 4 prompt boxes. Plus there's so much more you can do with SD 1.5 at its current state.

In late 2023, SDXL Turbo made its debut. Honestly, I use both. With ComfyUI it generates images with no issues, but it's about 5x slower overall than SD 1.5. Turbo is faster than Lightning because it needs fewer sampling steps.

You can encode then decode back to a normal KSampler with a 1.5 model. I ran SDXL 1.0 on my RTX 2060 laptop (6 GB VRAM) on both A1111 and ComfyUI.

It says that as long as the total pixel count is the same as 1024*1024 it's fine — which is not the case. It is a big jump over 1.5, though.

Forgot to mention: you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder. I managed to do it with a regular KSampler using Euler a and sgm_uniform at 1 CFG.

I made a custom 1.5 model based off 38 pictures of me, using Photon_v1 as a base model, and it turned out great. Settings for the original 512x768 image: beautiful girl wearing casual clothes with a cheeky smile, (selfie in a mirror:1.3), (straight on:1.5), (upper body:1.6), background mountains by a lake, flash, high contrast, smile, happy.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

SDXL Resolution Cheat Sheet (SDXL was released in July 2023). Enthusiasts do have the opportunity to train the desired functions.

(No negative prompt.) Prompt for Midjourney: a viking warrior, facing the camera, medieval village on fire, rain, distant shot, full body --ar 9:16 --s 750.

So far I've just tested this with the Dreamshaper SDXL Turbo model, but others are reporting 1-2 seconds per image, if that. Install the TensorRT fix.

While not exactly the same, to simplify understanding, it's basically like upscaling but without making the image any larger. (The match changed; it was weird.) It's manageable.

Well, from my experience with SDXL 0.9 vs. base SD 1.5, you can fine-tune and use an SD 1.5 model for a specific use case much more easily than SDXL. SDXL is trained on 1024*1024 = 1,048,576-pixel images in multiple aspect ratios, so your input size should not be greater than that pixel count.

When it comes to sampling steps, Dreamshaper SDXL Turbo does not possess any advantage over LCM: for Turbo it was 4 steps, and for LCM it was 6 steps.

A1111 took forever to generate an image without the refiner, and the UI was very laggy. I removed all the extensions, but nothing really changed and the image always got stuck at 98% — I don't know why.

SDXL vs SDXL Refiner - Img2Img Denoising Plot. You should use a negative prompt: put things you like in the positive and things you don't like in the negative. Live drawing.

With ComfyUI the below image took 0.93 seconds. TLDR: Results 1, Results 2, Unprompted 1, Unprompted 2, with links to the checkpoints used at the bottom. At this moment I tagged lcm-lora-sd1.5, and it appears in the info.
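For anyone who'd rather test these Turbo speed claims outside ComfyUI or A1111: here's a minimal diffusers sketch. The model id and the 1-4 step / CFG-off settings come from the official SDXL Turbo model card, not from this thread; the prompt is just an example.

```python
import torch
from diffusers import AutoPipelineForText2Image

# SDXL Turbo's official checkpoint; fp16 keeps VRAM use roughly half of fp32.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Turbo is distilled for 1-4 steps with CFG disabled (guidance_scale=0.0);
# 512x512 is its native resolution, unlike base SDXL's 1024x1024.
image = pipe(
    "a viking warrior, facing the camera, medieval village on fire, rain",
    num_inference_steps=4,   # 1 also works; 4 trades a little speed for quality
    guidance_scale=0.0,
    width=512,
    height=512,
).images[0]
image.save("turbo.png")
```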
Running A1111 with the recommended settings (CFG 2, 3-7 steps, R-ESRGAN 4x+ to upscale from 512 to 1024). Those who have the hardware should just try it (or use one of the free online SDXL generators) and draw their own conclusions.

Compared to SD 1.5 it uses more resources, power, and time, and most of the time it fails. CFG scale: from 1 to 2. I know about masks, of course.

Probably the first model where these SDXL Turbo renders look reasonably good. Both sd_xl_turbo_1.0.safetensors and sd_xl_turbo_1.0_fp16.safetensors loaded fine in InvokeAI (using the sd_xl_base config). Fine-tuned 1.5 vs. raw SDXL makes it clear where the future is.

For each prompt I generated 4 images and selected the one I liked the most: SDXL Turbo, LCM-LoRA 1.5, LCM-SDXL, LCM. CFG set to 7 for all, resolution set to 1152x896 for all. I have also compared it against SDXL Turbo and LCM-LoRA.

SDXL 0.9 is working right now (experimental). Currently it is WORKING in SD.Next: install SD.Next as usual and start with the param --backend diffusers.

YMMV, but I've found lllite actually works loads better than cnet with Turbo models — but maybe I misunderstood the author. Yep. However, it comes with a trade-off of slower speed, due to its requirement of a 4-step sampling process.

Download the model through the web UI interface — do not use the .safetensor version (it just won't work right now).

Hello, does anybody know a method to combine SD Turbo and AnimateDiff? Overall, it's a smart move. I still use 1.5 because of inpainting. The "original" one was SD 1.5.
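A quick note on the resolutions quoted above (1152x896 and friends): they all keep the total pixel count near SDXL's 1024*1024 training budget. Below is a back-of-the-envelope helper illustrating that rule of thumb — my own sketch, not Stability AI's official bucket list; the snap-to-64 step reflects what SDXL's VAE and U-Net expect.

```python
# Compute width/height for a target aspect ratio while keeping total pixels
# near 1024*1024, rounded to multiples of 64.
def sdxl_dims(aspect_w: int, aspect_h: int, area: int = 1024 * 1024) -> tuple[int, int]:
    ratio = aspect_w / aspect_h
    h = (area / ratio) ** 0.5
    w = h * ratio
    snap = lambda x: max(64, round(x / 64) * 64)
    return snap(w), snap(h)

print(sdxl_dims(1, 1))    # (1024, 1024)
print(sdxl_dims(4, 3))    # (1152, 896) - the resolution used in the comparison above
print(sdxl_dims(9, 16))   # (768, 1344)
```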
"Flustered" is an addition by One Button Prompt, as stated under that particular prompt.

Turbo needs a range of 50% to 80% denoise for latent upscaling using the same seed number. Both Turbo and Lightning are faster than the standard SDXL models while retaining a fair amount of the quality. I played with it all night; the quality is surprisingly good. AnimateDiff could be cool.

I extracted that full aspect-ratio list from SDXL. SDXL gives you good results with minimal prompting. On the AI Horde, SDXL is the second most requested model after Anything Diffusion (people gotta have their waifus, I guess).

SDXL is a new checkpoint, but it also introduces a new thing called a refiner. SDXL takes around 30 seconds on my machine and Turbo takes around 7.

My prediction: highly trained fine-tunes like RealisticVision, Juggernaut, etc. will put up a good fight against base SDXL in many ways. DALL-E 3 also is just better with text. The average score for each prompt was subtracted from the score for each image.

You'll end up with "sd_xl_turbo_1.0.safetensors" (13.9 GB). I want to assume the "fp16" stands for floating point 16, but I'd rather hear from someone who actually knows. In addition to that, I checked out the CivitAI one too, and there it has a "pruned" version that is 6.46 GB and a full version that is 12.92 GB.

1.5 still wins on usability though; XL has longer generation times and the models take up far more space.

You can run a 0.2-denoise pass to fix the blur and soft details. You can also just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE — maybe there is an obvious solution, but I don't know it.

Currently I'm using the Adafactor optimizer with a cosine scheduler for SDXL full fine-tuning, and I'm stuck at a local minimum; the loss graph goes up and down and I have no ideas except lowering the learning rate, which might help with the mouth. I've tried to "force" the normal Kohya SDXL method, but the result was horrible (just a blurry picture), and I've also tried converting the model into LCM using Kohya. I then turned that into a LoRA using Kohya and used it with other 1.5 models, and that worked well too.

I'm glad it's there for people to make use of, but I find it flows better when I completely type a long prompt (or finish drawing a sketch for sketch-to-image), then hit generate and get an instant render.

SDXL for better initial resolution and composition. Last, I also performed the same test with a resize by a scale of 2: SDXL vs SDXL Refiner - 2x Img2Img Denoising Plot. After the SD 1.5 examples were added into the comparison, the way I see it so far is: SDXL is superior at fantasy/artistic and digitally illustrated images. You can run it locally.

SDXL Turbo + SDXL Refiner workflow for more detailed image generation. Turbo isn't just distillation though, and the merges between the Turbo version and baseline XL strike a good middle ground IMO; with those you can do at 8 steps what used to need like 25, so it's just fast enough that you can iterate interactively over your prompts on low-end hardware without sacrificing prompt adherence. If more fine-tuned Turbo checkpoints keep showing up on Civitai, then I think you can safely predict where the future belongs. Some technicals: XL Turbo flourishes in the 5-steps, 2-3 CFG range, while CFG 1 is too muddy and 4 looks burnt.

There are many great SDXL models doing a superb job with photorealism and people, and I've completely deleted all of the 1.5 models from my drive, as they produce inferior results compared to the SDXL 1.0 models I've used. But at this point, 1.5 it is — I'll wait for 2 months and see if it gets any better. Too scared of a proper comparison, eh? The most likely reason is that you used an inferior sampler, CFG, or step count.

Stay away from SDXL when first starting out if hard-drive space is a concern. "High budget" is from the SDXL style selector.

What you see as "behind" was simply the length of time SD 1.5 had, where the community developed other components that were added after the fact.

Try making the switch to ComfyUI — it's easier than it looks, and way faster than A1111. CUI can do a batch of 4 and stay within the 12 GB. I tend to be quite descriptive; maybe I need to simplify. Also, don't bother with 512x512; those resolutions don't work well on SDXL.

I made a bunch of XYZ plots, and here are the best settings I found for Hyper-SDXL-1step-lora: steps 3-5 (hires steps 4-6), CFG 1.0, strength 1.0, sampler DPM++ 2M SDE SGMUniform (these samplers give a softer look: LMS Karras, Euler A Turbo/SGMUniform).

Idk man, 1.5 is still better at realism, but that might change in the future. I do notice that with long prompts SDXL does get weird.

0.5 denoise, end denoising at 1 — adds contrast and detail and improves the image over base.

Unclip, on the other hand, is the idea of passing an image (as a CLIP image embedding) instead of a text prompt. XL is pretty new, so a lot of features are still lacking compared to 1.5.
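Since the base + refiner workflow keeps coming up: here's the standard diffusers "ensemble of experts" sketch of it. This is the documented library usage, not the exact ComfyUI workflow the commenters describe; the 0.8 handover point is the commonly cited default, so tune it like the denoising plots above.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16",
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share weights to save VRAM
).to("cuda")

prompt = "a king with royal robes and jewels, gold crown, photorealistic"

# The base model handles the first 80% of denoising and hands over a latent...
latent = base(prompt, num_inference_steps=30, denoising_end=0.8,
              output_type="latent").images
# ...and the refiner finishes the last 20%, adding fine detail.
image = refiner(prompt, num_inference_steps=30, denoising_start=0.8,
                image=latent).images[0]
image.save("base_plus_refiner.png")
```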
If we go the SDXL Turbo road, we lose control to a single greedy organization, just so it can become an even bigger monopoly. Both are good, I would say.

The main difference (for you) is what resolution they output at — XL models' optimal resolution is 1024px²; 1.x models are 512px²; I think 2.x models are 640px² or 768px² or something.

It might be another way to handle details like eyes open vs. closed.

A big plus for SD 1.5 is the amount of LoRAs and specialized models that are available. 1.5 does have more LoRAs for now, but almost all the fine-tuned models you see are still on 1.5. An advanced 1.5-to-SDXL workflow should work too.

All images were generated with the following settings — steps: 20. Most people are just upscaling to get the highest quality.

SDXL is exceeding all expectations in so many ways and so many areas, but it feels like SD… It probably never will be, unless someone has a ridiculously fast GPU where the difference in time is negligible.

As an upgrade from its predecessors (such as SD 1.5, 2.0, and 2.1), SDXL boasts remarkable improvements in image quality, aesthetics, and versatility.

Tested on ComfyUI; workflow attached. Thank you for the information.

I've noticed artifacts as well, but thought they were because of LoRAs, not enough steps, or sampler problems. I might test a lower denoise, but I remember it looking bad. 1.5 is still better at making realism, but that might change in the future.

1-step Turbo has slightly less quality than SDXL at 50 steps, while 4-step Turbo has significantly more quality than SDXL at 50 steps.

With SDXL picking up steam, I downloaded a swath of the most popular Stable Diffusion models on CivitAI to use for comparison against each other. As far as the models themselves, SDXL was immediately better in most ways than SD 1.5 — it already IS more capable in many ways.

SDXL distilled is an SDXL with a reduced quantity of tokens; basically it removes tokens that are not often used in language models, so it may not catch fringe words you ask it to create, but it will be faster and more efficient on common words. Compare base models.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

It's a toss-up between a checkpoint and a LoRA so, in all fairness, it's not an ideal comparison. For example: Phoenix SDXL Turbo.

SDXL Turbo has a non-commercial license. With sd_xl_turbo_1.0.safetensors I could only get black or other uniformly colored images out. Sampling method on ComfyUI: LCM.

LoRA based on the new SDXL Turbo: you can use the Turbo LoRA with any Stable Diffusion XL checkpoint — a few seconds per image (4 seconds with an NVIDIA RTX 3060 at 1024x768 resolution). Tested on A1111 webui.

So what are the parameters used for SDXL? Please post them for image #1 (CFG, steps, sampler, etc.). This image was generated using CFG 7, 30 steps, sampler DPM++ 2M Karras. Wow, not bad!

Go to Tools / Convert Model tools / Model Type: SDXL 1 / Model Type: Diffusers (from the file-type pull-down on the window, pick a file) — it'll want model_index.json.

1.5 seconds per image for SDXL is excellent. Another low-effort comparison: a heavily fine-tuned model, probably with some post-processing, against a base model with a bad prompt.
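The "speed LoRA on any SDXL checkpoint" idea above can be reproduced in diffusers. The sketch below uses the LCM-LoRA this thread also mentions (the repo id is the official latent-consistency one); swap in whichever Turbo-style LoRA you actually use — the loading pattern is the same.

```python
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

# Any SDXL checkpoint works here; base SDXL is just the obvious default.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

# Swap the scheduler and attach the speed LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# 4 steps at very low CFG, matching the LCM settings quoted in this thread.
image = pipe(
    "a king with royal robes and a gold crown, photorealistic",
    num_inference_steps=4, guidance_scale=1.0,
).images[0]
image.save("lcm_lora_sdxl.png")
```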
I started from SD 1.5 in NMKD, then changed the model to SDXL Turbo and used it as the base image. Steps: 30 (the last image was 50 steps, because SDXL does best at 50+ steps); sampler: DPM++ 2M SDE Karras.

On the one hand it avoids the flood of NSFW models from SD 1.5, which may have a negative impact on Stability's business model. So yes — not like 2.1, with its fixed NSFW filter that could not be bypassed.

You could use OpenPose in SD 1.5 and then bring the results into SDXL and use them for depth-map reference (since SDXL behaves better with the depth ControlNet). The two most important things for me are the ability to train LoRAs easily, and ControlNet — and those aren't established yet.

Use one GPU (a slower one) to do the SDXL Turbo step, and use ComfyUI netdist to run the SD 1.5 refine on another GPU — as sketched below.

I finally came up with a setting that actually does give a positive output in SD.Next. Start with Cascade stage C, 896 x 1152, 42 compression; instead of the latent going to the stage B conditioner, VAE-decode using stage C. No need to reprompt. Then a pass of 1.5 (at about 1.27 it/s). It's not perfect, but still better on average compared to SDXL by a decent margin.

Stable Diffusion XL (SDXL) is a state-of-the-art, open-source generative AI model developed by Stability AI. It is specially designed for generating highly realistic images, legible text, and more.

*XL models are based on SDXL; unlabeled ones are (typically) based on non-SDXL models (SD 1.x and the vanishingly rare 2.x ones).

I've been using Photoshop for 20 years, so that is kinda a given ;) The ControlNets work differently in SDXL as well, though. In most cases this process runs pretty fast, even on older PCs. Install the TensorRT plugin (TensorRT for A1111).

This is just a comparison of the current state of SDXL 1.0 with the current state of SD 1.5. 1.5 is way better at producing women with really, really big boobs, which I need in my 'work'.

For the base SDXL model you must have both the checkpoint and refiner models. LCM gives good results with 4 steps, while SDXL Turbo gives them in 1 step.

With SD 1.5 you get quick gens that you then work on with ControlNet, inpainting, upscaling, maybe even manual editing in Photoshop — and then you get something that follows your prompt.

With sd_xl_turbo_1.0_fp16.safetensors I got gray images at 1 step. 80% will look weird, but it's good to see it. With a higher config it seems to have decent results.

- Setup -

Yes, mm_sdxl and Hotshot — I couldn't get results close to what I can obtain with the SD 1.5 AnimateDiff models. There is also the whole checkpoint format now: they are exactly the same weights as before, and safetensors is just safer :) You can use safetensors the same as before in ComfyUI etc.

I used seed 1000000007, the LCM sampler, and the sgm_uniform scheduler. Same settings for upscaling. The standard resolution for 1.5 output is about 512x512 px, with an upscale process afterwards. You can't mix and match models. Every other single combination I've tried has produced, at best…

It's really cool, but unfortunately really limited currently, as it has coherency issues and is "native" at only 512x512. Negative prompt is part of the SDXL generation's prompt. Sampling steps: 4.

Dreamshaper SDXL Turbo vs. Dreamshaper SDXL with LCM.

LCM LoRA vs SDXL Lightning: can anyone explain to me why SDXL Lightning is better/faster than LCM LoRA? I'm overwhelmed by the amount of new techniques, and in this case I don't understand what the benefit of the SDXL Lightning LoRA is. Fuck SD 1.5.
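Here's a single-process sketch of that two-stage idea (a fast SDXL Turbo draft, then an SD 1.5 pass to refine). The comment above splits the stages across two GPUs with ComfyUI netdist; this just chains two diffusers pipelines on one machine, and the 1.5 repo id is a stand-in for whatever 1.5 checkpoint you prefer (Photon, Dreamshaper, ...).

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

turbo = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
# Stand-in id: substitute any SD 1.5 checkpoint you actually use.
refiner = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "beautiful girl wearing casual clothes with a cheeky smile, selfie"

# One-step Turbo draft: fast, good composition, soft details.
draft = turbo(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]

# Low-denoise 1.5 pass keeps the composition and sharpens details
# (the 0.2-0.5 denoise range echoed by several comments in this thread).
image = refiner(prompt, image=draft, strength=0.4,
                num_inference_steps=20).images[0]
image.save("turbo_then_15_refine.png")
```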
Guess which non-SD-1.5, non-inbred, non-Korean-overtrained model this is. I was amazed at the accuracy of the results.

Prompt for SDXL: a young viking warrior standing in front of a burning village, intricate details, close-up shot, tousled hair, night, rain, bokeh.

I can't say how good SDXL 1.0 is compared to 1.5, with all the tutorials and compatible nodes available (i.e., AnimateDiff currently works smoother with SD 1.5). Everyone is getting hyped about SDXL for a good reason.

(Longer for more faces.) Stable Diffusion: 2-3 seconds, plus 3-10 seconds for background processes, per image.

Hi guys — today Stability AI released their new SDXL Turbo model, which can inference an image in as little as 1 step.

I've actually been doing something different, where I start with a CGI character and bring it straight into SDXL.

Realistic portrait of an 80-year-old woman looking straight into the camera, scarf, dark hair. MODEL: SDXL beta & DreamStudio AI beta. PROMPT: photography of a woman of 80 years looking straight into the camera, scarf, dark hair, realistic, black and white, studio portrait, 50mm, f/5.6.

LCM LoRA is much easier though, and is model agnostic.

SD 1.5 vs SDXL Comparison. Hardware: Nvidia EVGA 1080 Ti FTW3 (11 GB). SDXL Turbo.

Honestly, you can probably just swap out the model and put in the Turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and tbh it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick and choose from.

The way SSD-1B scores higher than SDXL makes me think the simulacra aesthetic model, or something similar, was used in the distillation process.

With SD 1.5, my 16GB of system RAM simply isn't enough to prevent about 20GB of data being "cached" to the internal SSD every single time the base model is loaded.

SDXL base vs Realistic Vision 5.1. At 2-4 steps I got images only slightly resembling what I asked for.

I run on an 8GB card with 16GB of RAM and I see 800+ seconds when doing 2K upscales with SDXL, whereas the same thing with 1.5 would take maybe 120 seconds.

SD 1.5? It's ancient now, and SDXL Turbo seems more promising, with better efficiency than SD 1.5. I did try using SDXL 1.0.
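The CGI-to-SDXL and depth-reference ideas above both boil down to conditioning SDXL on a depth map. A hedged sketch with the diffusers SDXL depth ControlNet follows — the ControlNet repo id is the one published in Hugging Face's diffusers org, and the depth image path is a hypothetical placeholder for whatever your 1.5/OpenPose pass or CGI render produced.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")

depth_map = load_image("depth.png")  # hypothetical pre-computed depth image

image = pipe(
    "a young viking warrior in front of a burning village, night, rain, bokeh",
    image=depth_map,
    controlnet_conditioning_scale=0.5,  # how strongly the depth map constrains layout
    num_inference_steps=30,
).images[0]
image.save("sdxl_depth_controlnet.png")
```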
SDXL is still good for farming training data for your custom 1.5 models. Workflow is better than a video for others to diagnose issues or borrow concepts. Much appreciated if you can post the JSON workflow, or a picture generated from this workflow, so it's easier to set up.

While, of course, SDXL struggles a bit. One of the generated images needed the boobs fixed, so I went back to SD 1.5 with LCM at 4 steps and low denoise.

For my use case, SDXL has been monumentally better than SD 1.5. The next night I made a custom SDXL model based on JuggernautXL_v8 with the same 38 pictures.

SDXL-Lightning is spectacular! It's not a new model but a new method! For anyone who wants to know more, I've written an article explaining how it works, what improvements it brings, and the best way to use it to get the most out of it.

Look at the prompts and see how well each one follows: 1st DreamBooth vs 2nd LoRA, 3rd DreamBooth vs 3rd LoRA. Raw output, ADetailer not used, 1024x1024, 20 steps, DPM++ 2M SDE Karras, same training dataset.

For the rest of the world who want to expand their horizons, SDXL is a more versatile model that offers many advantages (see "SDXL 1.0: a semi-technical introduction/summary for beginners").

On some of the SDXL-based models on CivitAI, they work fine. SDXL was trained using negative prompts; all the tests they did used negative prompts. Not using negative prompts handicaps it.

diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co). Thanks for sharing this setup.

Also note that the biggest difference between SDXL and SD 1.5 is where you'll be spending your energy. SDXL was trained on a lot of 1024x1024 images, so this shouldn't happen at the recommended resolutions. This same pattern might apply to LoRAs as well.

Download the custom SDXL Turbo model. I personally prefer SDXL; it seems better straight up.

Turbo diffuses the image in one step, while Lightning usually diffuses the image in 2-8 steps (for comparison, standard SDXL models usually take 20-40 steps to diffuse the image completely).

This seemed to add more detail all the way up to 0.85 denoise, although it produced some weird paws on some of the steps. SD.Next (Vlad): 1.1 seconds (about 1 second); ComfyUI: 0.93 seconds.

SDXL is certainly another big jump, but will the base model be able to compete with the already-existing fine-tuned models? Will people actually make the jump, or will it get mostly ignored like 2.1 was? Will be interested to see all the SD 1.5 vs SDXL comparisons over the next few days and weeks.
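The denoising-plot experiments referenced throughout this thread ("detail up to 0.85", the img2img denoising plots) are easy to replicate: fix the seed and sweep only the img2img strength. A minimal sketch, assuming a pre-rendered first-pass image on disk:

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

src = load_image("base_render.png")  # hypothetical first-pass image
prompt = "realistic portrait of an 80-year-old woman, scarf, studio portrait"

# Same seed for every run, so the only variable is the denoise level.
for strength in (0.3, 0.5, 0.7, 0.85):
    gen = torch.Generator("cuda").manual_seed(42)
    out = pipe(prompt, image=src, strength=strength,
               num_inference_steps=30, generator=gen).images[0]
    out.save(f"denoise_{strength:.2f}.png")
```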
Step 1: Download the SDXL Turbo checkpoint.
Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop the sample image into ComfyUI.
Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

SD XL is quite new. Anyone have ComfyUI workflows for img2img with SDXL Turbo? If so, could you kindly share some of your workflows, please?

On a related note, another neat thing is how SAI trained the model. SDXL Turbo is part of the core SAI model set, so my bet is on that.

Maybe I'm just too much of an old-timer, but I find that live real-time generation is more of a distraction than a boost in productivity.

Today, we herald a superior and swifter checkpoint: SDXL Lightning.

Fine-tuning/training on SDXL isn't worth it in most cases — too little benefit compared to SD 1.5. SDXL Turbo fine-tune/merging? I didn't find any method for training the SDXL Turbo model. These Hyper LoRAs seem to work perfectly fine with other LoRAs and fine-tuned models.

To use it: switch your A1111 to the dev branch (recommended: use a new or copied A1111 install) — in your A1111 folder run CMD, type "git checkout dev", and press ENTER.

You need to use --medvram (or even --lowvram), and perhaps even the --xformers argument, on 8GB. I was unable to use SDXL with my 3070 in A1111, just like you.

DALL-E 3 understands prompts extremely well because the text is pre-parsed by GPT under the hood, I'm fairly certain. I'm blown away. The 0.9 version should truly be recommended.

To use it, you need the SDXL 1.0 model. One image takes about 8-12 seconds for me. For SD 1.5 I used Dreamshaper 6, since it's one of the most popular and versatile models.

Yes, you'd usually get multiple subjects with 1.5 at resolutions higher than 512 pixels, because the model was trained on 512x512. Someone else on here was able to upscale from 512 to 2048 in under a second; you can speed this up even more for multiple images with dual GPUs.

1.5 is still leagues better than SDXL. But still, lots of requests for SDXL despite its licensing.

I'm commonly asked whether Stable Diffusion XL (SDXL) DreamBooth is better than SDXL LoRA. Here are same-prompt comparisons.

Instead, we need to focus on fully open-sourced models that fine-tuners truly own, such as SDXL and SD 1.5. Let's make sure we grow the ecosystem around open-source models, not SDXL Turbo or SVD.
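For the 8GB-card crowd: diffusers has rough equivalents of A1111's --medvram/--lowvram flags. A hedged sketch — these are documented diffusers methods, but the exact VRAM savings on your card are an assumption, not a measurement from this thread:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
)

# Keep sub-modules in system RAM and move each to the GPU only while it runs;
# roughly what --medvram does in A1111. (Note: no .to("cuda") with offload.)
pipe.enable_model_cpu_offload()
# Decode the VAE in slices/tiles so the final 1024x1024 decode doesn't spike VRAM.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

image = pipe("a cinematic photo of a king on a throne",
             num_inference_steps=30).images[0]
image.save("low_vram_sdxl.png")
```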
I'm a long prompter — a BREAK-between-75-tokens type of guy. Although there are even more SD 1.5 models doing an even more superior job with photorealism, and with people in particular.