ComfyUI upscale workflows (free, from Reddit)


If you want to upscale to a specific size: Hires. fix and other upscaling methods like the Loopback Scaler script and SD Upscale.

Note: remember to add your models, VAE, LoRAs, etc. Put your folder in the top left text input.

Then I use the images from AnimateDiff as my keyframes.

How the workflow progresses: initial image generation, hands fix, watermark removal, Ultimate SD Upscale, eye detailer, save image. This workflow contains custom nodes from various sources, which can all be found using ComfyUI Manager. It's a 2x upscale workflow.

The other approach is to use a locked upscaler. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

For those also researching: Krea.

On Linux: 8+ GB VRAM, NVIDIA GPU and AMD GPU. Ugh.

My 1.5 txt2img workflow, if anyone would like to criticize or use it. For now, I have to manually copy the right prompts.

This creates a very basic image from a simple prompt and sends it as a source.

PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 JPG quality) and 256x384 (0 JPG quality)?

Second, it is important to note that for now it works only with white-background image inputs.

You'll find upscale models here: https://openmodeldb.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You can't go higher than 512-768 resolution either (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

It didn't work out.

I have a custom image resizer that ensures the input image matches the output dimensions. Furthermore, I know there are probably already pre-made workflows for ComfyUI, but I'd rather not use them, as I feel like I won't have any clue what anything really does.

1.5x-2x with either SDXL Turbo or SD1.5.
They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways.

I am working on 4 GB VRAM, so it takes quite some time to load a checkpoint each time I load a workflow.

I'll make this clearer in the documentation. ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), with 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Often times I just get meh results with not much interesting motion when I play around with the prompt boxes, so I'm just trying to get an idea of your methodology behind setting up and tweaking the prompt composition part of the flow.

Just curious if anyone knows of a workflow that could basically clean up/upscale screenshots from an animation from the late 90s (like Escaflowne or Rurouni Kenshin).

Required: on Windows: 8+ GB VRAM, NVIDIA GPU only.

openmodeldb.info/ including some 1:1 models for the reduction of JPEG artefacts, etc.

I played for a few days with ComfyUI and SDXL 1.0. But let me know if you need help replicating some of the concepts in my process.

AnimateLCM-I2V is also extremely useful for maintaining coherence at higher resolutions (with ControlNet and SD LoRAs active, I could easily upscale from a 512x512 source to 1024x1024 in a single pass).

In the workflow notes you will find some recommendations as well as links to the model, LoRA, and upscalers. If you see a few red boxes, be sure to read the Questions section on the page.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Resize down to what you want. Hope this helps.
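The mask-feathering trick described above (mask2image, blur the image, then image2mask) is easy to sketch outside ComfyUI. Here is a rough Pillow equivalent; the function name and blur radius are my own choices for illustration, not ComfyUI nodes:

```python
from PIL import Image, ImageFilter

def feather_mask(mask: Image.Image, radius: int = 4) -> Image.Image:
    """Feather a hard binary mask by round-tripping through an image blur.

    Mirrors the mask2image -> blur -> image2mask chain: a Gaussian blur
    softens the mask edge so the inpainted region blends instead of seaming.
    """
    gray = mask.convert("L")                                  # mask2image
    blurred = gray.filter(ImageFilter.GaussianBlur(radius))   # blur the image
    return blurred                                            # image2mask (grayscale acts as a soft mask)

# Hard-edged square mask: the white region marks the area to inpaint
hard = Image.new("L", (64, 64), 0)
hard.paste(255, (16, 16, 48, 48))
soft = feather_mask(hard, radius=4)
```

After feathering, the mask edge holds intermediate gray values rather than a hard 0/255 step, which is exactly what makes the inpaint boundary invisible.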
The final node is where ComfyUI takes those images and turns them into a video.

I'm still looking for an ultimate workflow that has an all-in-one feature.

Did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1.0. It has 5 parameters which will allow you to easily change the prompt and experiment.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass whatever image I like into the node.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

SD upscaler and upscale from that.

Here's the possible structure of that workflow: First Pass: SDXL Turbo for the initial image generation.

There are also other upscale methods that can upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.

Looking forward to seeing your workflow.

Also added a second part where I just use random noise in a Latent Blend.

Do the same comparison with images that are much more detailed, with characters and patterns.

I generate an image that I like, then mute the first KSampler and unmute Ult.

(I am unable to upload the full-sized image.)

Hey all, pretty new to the whole ComfyUI thing, using 1.5 models, but I need some advice on my workflow. ☺️🙌🏼🙌🏼
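Blending random noise into a latent, as mentioned above, is just a linear interpolation between two tensors. A minimal pure-Python sketch (the latent is flattened to a list and the 0.3 blend factor is an arbitrary example, not a recommended setting):

```python
import random

def latent_blend(a, b, t):
    """Linearly interpolate two latents (flattened to lists here):
    t=0 returns a unchanged, t=1 returns b; a small t injects a little of b."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

random.seed(0)
latent = [0.5] * 8                                  # stand-in for a latent from a KSampler
noise = [random.gauss(0.0, 1.0) for _ in latent]    # random noise to blend in
blended = latent_blend(latent, noise, 0.3)          # 30% noise, 70% original
```

Re-sampling a lightly noised latent like this is what lets the second pass invent fresh detail while keeping the overall composition.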
AP Workflow 9.0 for ComfyUI - Now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, 2 types of automatic image selectors, and the capability to automatically generate captions for an image directory.

In WebUI settings, open the ControlNet options and set 'Multi ControlNet: Max models amount' to 2 or more.

ComfyUI Txt2Video. Based on Sytan's SDXL 1.0 workflow.

Step three: Feed your source into the compositional input and your style into the style input.

Edit: I realized that the workflow loads just fine, but the prompts are sometimes not as expected.

Step one: Hook up IPAdapter x2.

My question is that I am not familiar with f8_unet, fp16, bf16, fp32, etc.

The images were created with ComfyUI.

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement image2image in a pipeline that includes multi-ControlNet, and in a way that all generations automatically get passed through something like SD upscale without me having to run the upscaling as a separate step.

New SD_4XUpscale_Conditioning node vs. model upscale (4x-UltraSharp.pth).

It's messy right now but does the job.

The initial latents are randomized fractal noise (custom node named Perlin Power Fractal Noise).

This is the workflow I use in ComfyUI to render 4k pictures with the DreamShaper XL model. I've struggled with Hires.
Sample again, denoise=0.4, but use a ControlNet relevant to your image so you don't lose too much of your original image, combining that with the iterative upscaler, and concat a secondary positive prompt telling the model to add detail or improve detail.

So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your version of the model that you see in the picture.

The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler.

This workflow was created to automate the process of converting roughs generated by A1111's t2i to higher resolutions by i2i.

If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

It's quite simple. Toggle whether the seed should be included in the file name or not.

These comparisons are done using ComfyUI with default node settings and fixed seeds.

Haha, thanks. Also make sure you install missing nodes with ComfyUI Manager.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

The Initial Workflow with Unsampler: A Step-by-Step Guide.

The apply_ref_when_disabled option can be set to True to allow the img_encoder to do its thing even when the end_percent is reached.

Don't.

- now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler.

Still working on the whole thing, but I got the idea down.

It depends on how large the face in your original composition is.

Can you please explain your process for the upscale?

Explore thousands of workflows created by the community.

The workflow is kept very simple for this test: load image, upscale, save image.

P2: This looks great.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. Two workflows included.

I think it was 3DS Max.

Also, if this is new and exciting to you, feel free to post.

Workflow - Choose images from batch to upscale.
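An iterative upscaler like the one mentioned above reaches the target size in several small hops, resampling at low denoise after each hop, instead of one big jump. The size schedule itself is simple to compute; a sketch under my own naming (the 512-to-1024-in-two-hops example is illustrative):

```python
def upscale_schedule(start, target, steps):
    """Return the intermediate (width, height) pairs for an iterative upscale:
    each hop multiplies the size by the same factor so 'steps' hops reach 'target'."""
    sw, sh = start
    tw, th = target
    factor = (tw / sw) ** (1.0 / steps)  # per-step scale factor
    sizes = []
    for i in range(1, steps + 1):
        sizes.append((round(sw * factor ** i), round(sh * factor ** i)))
    return sizes

# 512x512 -> 1024x1024 in two hops instead of one 2x jump
schedule = upscale_schedule((512, 512), (1024, 1024), 2)
```

Each intermediate size would get its own low-denoise sampling pass, which is why the method builds detail more gently than a single large upscale.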
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

I'm trying to upscale at this stage, but I can't get it to work. Upscaling is done with iterative latent scaling and a pass with 4x-UltraSharp.

You should insert an ImageScale node.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Through recommended YouTube videos, I learned that a good way to increase the size and quality of gens is to use iterative upscales, first in latent space and then an iterative upscale on the image itself, and also that you can generate pretty high-resolution images with Kohya's Deep Shrink.

Forget face swap.

x1.5 ~ x2 - no need for a model; it can be a cheap latent upscale.

A lot of people are just discovering this technology and want to show off what they created.

I'm using the mm_sd_v15_v2.ckpt motion model with Kosinkadink's Evolved.

Hires. fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing details) or even blurry and smeary.

Flexible location photoshoot ComfyUI workflow.

Generally a workflow like this gives good results: generate the initial image at 512x768.

You are doing it wrong.

Rather a complex workflow; I rebuilt it with half the nodes.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Launch ComfyUI by running python main.py.

Step two: Set one to compositional and one to style weight.

🎧 Royalty-free BEST upscaler roundup and comparison.
I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz.

IPAdapter for all.

Look at this workflow.

That's because latent upscale turns the base image into noise (blur).

You upload an image -> unsample -> KSampler Advanced -> same recreation of the original image.

From the img2video of Stable Video Diffusion: with this ComfyUI workflow you can create an image with the prompt, negative prompt, and checkpoint (and VAE) that you want, and then a video will be created automatically from that image.

The best part is that it's all run entirely locally on your Mac, so there are no usage caps or GPU costs, and you can control the entire pipeline with custom Stable Diffusion models.

1.5x-2x using either SDXL Turbo or SD1.5.

Also, if this is new and exciting to you, feel free to post, but don't spam all your work.

Upscale by 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Merging two workflows (please help!!): I am new to ComfyUI, and it has been really tough to find the perfect workflow to work with.

If you see any red nodes, I recommend using ComfyUI Manager's "install missing custom nodes" function.

It is the first step in that direction.

This will allow detail to be built in during the upscale.

In the end, it was 30 steps using Heun and Karras that got the best results, though.

This will get to the low-resolution stage and stop.

Configure as in Step 1.

The reason I haven't raised issues on any of the repos is because I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

Breakdown of workflow content.

Install the ComfyUI dependencies.

Change the sampler to the Euler or DPM series (the DDIM series is not recommended for this setup).

SD1.5; don't need that many steps.

Forget face swap.
ComfyUI-Workflow-Component. The main features are: works with SDXL and SDXL Turbo, as well as earlier versions like SD1.5.

Yes, all-in-one workflows do exist, but they will never outperform a workflow with a focus.

You may plug them in to use with 1.5 base models.

Beginners' guide to ComfyUI 😊 We discussed the fundamental ComfyUI workflow in this post 😊 You can express your creativity with ComfyUI #ComfyUI #CreativeDesign #ImaginativePictures #Jarvislabs.

Combining the images into a grid does have an interesting result, though.

Here is a workflow that I use currently with Ultimate SD Upscale.

- adaptable, modular, with tons of features for tuning your initial image.

Pop the one you choose into models > upscale_models.

Allows you to choose the resolution of all output resolutions in the starter groups.

Do you have ComfyUI Manager?

For example, if you start with a 512x512 empty latent image, then apply a 4x model and then "upscale by" 0.5, it will output this resolution to the bus.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

TODO: add examples.

AP Workflow for ComfyUI - now with Face Swapper, Prompt Enricher (via OpenAI), Image2Image (single images and batches), FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

Repeat until you have an image you like that you want to upscale.

My primary goal was to fully utilise the 2-stage architecture of SDXL - so I have the base and refiner models working as stages in latent space.

Introducing ComfyUI Launcher!

Third Pass: Further upscale 1.5x-2x.

I really loved this workflow, which I got from Civitai. The workflow and all mentions in the video (including potential errors and fixes, proactively :)) are very informative; I want to thank you for your efforts.

Thanks tons! That's the one I'm referring to.

Here is my current 1.5 workflow.
So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image.

[Load VAE] and [Load LoRA] are not plugged in in this config for DreamShaper.

You guys have been very supportive, so I'm posting here first.

Krea.ai has 50 free uploads and unlimited for $24, compared to $40 for 200 upscales with Magnific.

It'll be perfect if it includes upscale too (though I can upscale it in an extra step in the Extras tab of Automatic1111).

AP Workflow 5.0.

If it's a close-up, then fix the face first.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Release: AP Workflow 9.0.

Text to image using a selection from the initial batch.

If you give me a workflow, I can cook something up quickly.

Belittling their efforts will get you banned.

Input sources -

Workflow (beware if OCD): P1.

Along with the normal image preview, other methods are: Latent Upscaled 2x.

For general upscaling of photos, go: Remacri 4x upscale.

After borrowing many ideas, and learning ComfyUI.

Opening the image in stable-diffusion-webui's PNG info, I can see that there are indeed two different sets of prompts in that file, and for some reason the wrong one is being chosen.

Use IP Adapter for the face. Just my two cents.

I uploaded the workflow to GH.

Welcome to the unofficial ComfyUI subreddit.

The processing time will clearly depend on the image resolution and the power of your computer.

The problem with simply upscaling them is that they are kind of 'dirtier', so a simple upscale doesn't really clean them up around the lines, and the colors are a bit dimmer/darker.

Overall: - image upscale is less detailed, but more faithful to the image you upscale.

json · cmcjas/SDXL_ComfyUI_workflows at main (huggingface.co).

And above all, BE NICE.

In my case, with an Nvidia RTX 2060 with 12 GB, the processing time to scale an image from 768x768 pixels to 16k was approximately 12 minutes.
One of the most powerful features of ComfyUI is that within seconds you can load an appropriate workflow for the task at hand.

Upscale x1.5.

Yes, I am following this channel on YouTube, and have already watched the workflow, but I will give it a try once more; maybe it helps.

Add them to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.

First, it is not pasting back the original image (like with your product workflow).

Here are details on the workflow I created: this is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

This workflow was built using the following custom nodes.

It uses CN tile with Ultimate SD Upscale.

My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention.

I send the output of AnimateDiff to UltimateSDUpscale.

Jan 11, 2024 · Even though the previous tests had their constraints, Unsampler adeptly addresses this issue, delivering a user experience within ComfyUI.

Is it the best way to install ControlNet? Because when I tried manually doing it...

- queue the prompt again - this will now run the upscaler and second pass.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch" and put in the image numbers you want to upscale.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Please keep posted images SFW.

You could try to pp your denoise at the start of an iterative upscale.

Because the upscale amount is determined by the upscale model itself.

First I generate an image with txt2img.

Will load images in two ways: 1) direct load from HDD; 2) load from a folder (picks the next image when generated).

Prediffusion -
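The "upscale by a model, then downscale by a fraction" arithmetic mentioned above (a 4x model followed by "upscale by" 0.5 turns 512 into 1024) is worth spelling out. A tiny helper, with names of my own choosing:

```python
def final_size(side, model_factor, rescale):
    """Pixel size after running an NxN image through an upscale model and then
    an "upscale by" node with a fractional value (e.g. 0.5 to divide by 2)."""
    return round(side * model_factor * rescale)

# 512 through a 4x model, then "upscale by" 0.5: 512 * 4 * 0.5 = 1024
assert final_size(512, 4, 0.5) == 1024
```

The point of the fractional "upscale by" is that the model's fixed factor (here 4x) rarely matches the size you actually want, so you rescale afterwards instead of fighting the model.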
Can you maybe help me with my workflow? AnimateDiff to get the starting file (this is 512x512), then the EbSynth utility, stage 1.2.

GFPGAN.

Can someone guide me to the best all-in-one workflow that includes a base model, refiner model, hi-res fix, and one LoRA?

1.5 - workflow in 1st comment.

I made one (FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer). I haven't really shared much and want to use others' ideas.

One alternate method is to use the API and a Python script.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

Simple ComfyUI img2img upscale workflow.

An example of the images you can generate with this workflow:

I know there is the ComfyAnonymous workflow, but it's lacking. So the input image changes quite a bit.

Then you can cut out the face and redo it with IP Adapter.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

Better than the abomination Disney is cooking.

Additionally, I need to incorporate FaceDetailer into the process.

If you want more resolution, you can simply add another Ultimate SD Upscale node.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

If you don't want the distortion, decode the latent, use "upscale image by", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

I like building my own things and seeing how they work out, then working with the tips of others to improve on the design.

- latent upscale looks much more detailed, but gets rid of the detail of the original image.
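The "API and a Python script" route works because a running ComfyUI server accepts a workflow in its API-JSON format over HTTP. A minimal sketch, assuming a local server on the default 127.0.0.1:8188 and a workflow exported with "Save (API Format)"; the helper names are mine:

```python
import json
import urllib.request
import uuid

def build_payload(workflow: dict, client_id: str) -> dict:
    """Wrap an API-format workflow the way ComfyUI's POST /prompt expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Queue the workflow on a running ComfyUI server and return its response."""
    payload = build_payload(workflow, uuid.uuid4().hex)
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# workflow = json.load(open("workflow_api.json"))  # exported via "Save (API Format)"
# queue_prompt(workflow)                           # queues it like pressing "Queue Prompt"
```

A loop over a folder of images plus a call like this is all it takes to get "every generation automatically upscaled" without clicking through the UI.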
Use 1.5 base models, and modify latent image dimensions and upscale values to work.

Hires fix 2x (two-pass img).

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted.

WORKFLOW (first image): To achieve that style, you have to generate the initial images with a cartoon-esque model and use a more realistic model during upscaling.

Ultimate Starter setup.

These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail.

Depending on the noise and strength, it ends up treating each square as an individual image.

For ControlNet Unit 1, set Model to "tile" and parameters: Weight 1.0.

This is an example of an image that I generated with the advanced workflow.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved).

Second Pass: Upscale 1.5x-2x.

The first one is very similar to the old workflow and is just called "simple".

FAST - Text to Video - LCM AnimateDiff SD 1.5.

Here's the sample JSON file for the workflow I was using to generate these images: sdxl_4k_workflow.

His embeds node does much the same thing but is more controllable, as you can weight each influence and even save the embeds and reuse them.

This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0.
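The "each square becomes its own image" effect described above comes from tiled upscaling: the image is split into overlapping tiles that are diffused independently and blended back, so at high denoise each tile can grow its own subject. The tile geometry itself is simple; a sketch of the grid computation (the tile size and overlap are illustrative values, not Ultimate SD Upscale's actual code):

```python
def tile_boxes(width, height, tile, overlap):
    """Return (left, top, right, bottom) boxes covering the image with
    overlapping tiles, the way tiled upscalers slice their input."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            boxes.append((left, top, min(left + tile, width), min(top + tile, height)))
    return boxes

# a 1024x1024 image with 512px tiles and 64px overlap -> a 3x3 grid of 9 tiles
boxes = tile_boxes(1024, 1024, 512, 64)
```

This is also why a tile ControlNet or a low denoise helps: both keep each tile anchored to its patch of the original image instead of letting it invent a new girl per square.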
If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

I resize with 4x-UltraSharp set to x2, and in ComfyUI this workflow uses a nearest/exact latent upscale.

After, you can use the same latent and tweak start and end to manipulate it.

In ComfyUI Manager, select "Install model", then scroll down to see the ControlNet models and download the 2nd ControlNet tile model (it specifically says in the description that you need this for tile upscale).

sharpen (radius 1 sigma 0.

If you have another Stable Diffusion UI, you might be able to reuse the dependencies.

The yellow nodes are componentized nodes, which are simply a collection of Loader, ClipTextEncode, and Upscaler, respectively. They work as a single node without a sampler, but of course can be part of a larger Comfy workflow.

It enables users to tweak aspects like hair color, facial expressions, and more, highlighting its flexibility and range of capabilities.

Press go 😉.

My nonscientific answer is that A1111 can do it in around 60 seconds at 30 steps using a 1.5-based model, and 30 seconds using 30 steps on SD 2.

Please feel free to criticize and tell me what I may be doing silly.

Image saving and postprocessing need was-node-suite-comfyui to be installed.

SD XL 1.0 Alpha + SD XL Refiner 1.0.

The batching is the key, though; I wonder if Matteo tried this.

Thanks! There is an imposter among us.

Now I am trying different start-up parameters for ComfyUI, like disabling smart memory, etc.