ComfyUI upscale examples: a Reddit digest


You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. If you want more detail, latent upscale is better, and noise injection will let more details in (you need noise in order to diffuse into details). Latent upscale does come up with new details, which is fine, and even beneficial for the second pass of a txt2img process, since the miniature first pass often has issues due to imperfections. If your image changes drastically on the second sample after upscaling, it's because you are denoising too much; on the other hand, I have to push the denoise to around 0.3 in order to get rid of jaggies, and unfortunately that diminishes the likeness during the Ultimate Upscale.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. Instead, I use a Tiled KSampler at a low denoise. In one bad result, all the hair strands came out super thick and contrasty, the lips looked plastic, and the upscale couldn't deal with the subject's unusual mouth expression because she was singing. There is also the approach of using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. I've so far achieved this with Ultimate SD Upscale, using the 4x-Ultramix_restore upscale model. Then change the first sampler's state to 'hold' (from 'sample'), unmute the second sampler, and queue the prompt again: this will now run the upscaler and the second pass. The downside is that it takes a very long time. If I feel I need to add detail, I'll do some image blending and advanced samplers to inject the old face into the process.

A few examples of my ComfyUI workflow make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, and so on. Image Processing is a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to. You may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy. Currently the extension still needs some improvement; for example, you can only use resolutions divisible by 256, like 1024, 1280, 1536, or 2048. Newcomers should familiarize themselves with easier-to-understand workflows first, as a workflow with this many nodes can be somewhat complex to follow in detail, despite the attempt at a clear structure. Again, I would really appreciate any Comfy 101 materials, resources, and creators, as well as your advice.
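For the latent-space route described above, the second pass can be sketched in ComfyUI's API (JSON) format, written here as a Python dict. This is a minimal sketch, not a drop-in workflow: the node ids ("4", "6", "7", "13", "20", "21") are made up for illustration, and in practice you would get this structure by exporting your own graph with "Save (API Format)".

    # Minimal sketch of a latent-upscale second pass (assumed node ids).
    second_pass = {
        "20": {  # stretch the first sampler's latent by 1.5x
            "class_type": "LatentUpscaleBy",
            "inputs": {
                "samples": ["13", 0],        # output 0 of the first KSampler (node "13")
                "upscale_method": "bislerp",
                "scale_by": 1.5,
            },
        },
        "21": {  # resample the stretched latent at a higher denoise
            "class_type": "KSampler",
            "inputs": {
                "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                "latent_image": ["20", 0],
                "seed": 42, "steps": 24, "cfg": 7.0,
                "sampler_name": "euler", "scheduler": "normal",
                "denoise": 0.55,             # high enough to repair the stretch artifacts
            },
        },
    }

The pixel-space variant of the same idea (decode, model upscale, re-encode) appears further down in this digest.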
Try an immediate VAEDecode after a latent upscale to see what I mean. Then plug the output from this into a 'latent upscale by' node set to whatever you want your end image to be at (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value in the following KSampler), then a second KSampler at 20+ steps if you want to benefit from the higher-res processing.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own: you construct an image-generation workflow by chaining different blocks (called nodes) together. In ComfyUI, we can break an approach into components and make adjustments at each part to find workflows that get rid of artifacts. Just download a workflow image, drag it inside ComfyUI, and you'll have the same workflow you see above. That said, I downloaded one workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

To install it as a ComfyUI custom node using ComfyUI Manager (the easy way): make sure you already have ComfyUI Manager (it's like an extension manager). You can use folders too, e.g. cascade/clip_model.safetensors and 1.5/clip_some_other_model.safetensors, which makes it easier to remember which one to choose when you're stringing together workflows.

SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: look at that beauty! Spaghetti no more. I started to use ComfyUI/SD locally a few days ago and I wanted to know how to get the best upscaling results. What is the best workflow you know of? Hires fix with an add-detail LoRA is one option; there is also a ComfyUI Fooocus Inpaint with Segmentation workflow. Thanks for your help. This is what I have so far (using the custom nodes to reduce the visual clutter).

Flux is a family of diffusion models by Black Forest Labs, in three variants: Flux.1 Pro, Flux.1 Dev, and Flux.1 Schnell. Overview: cutting-edge performance in image generation, with top-notch prompt following, visual quality, image detail, and output diversity. The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or ControlNets.

You'll notice that with SAG the city in the background makes more sense, and the sky doesn't have any city parts in it. Both of these are similar in speed. Hello: I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.
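When a dropped image loads nothing, it is usually because the PNG's metadata was stripped (many sites recompress uploads). A quick way to check is sketched below using Pillow, assuming the "prompt" and "workflow" text chunks that ComfyUI normally writes into its PNGs:

    import json
    from PIL import Image  # pip install pillow

    img = Image.open("workflow_image.png")
    chunks = getattr(img, "text", {}) or {}    # PNG tEXt/iTXt chunks; empty if stripped
    for key in ("workflow", "prompt"):         # the chunks ComfyUI usually embeds
        if key in chunks:
            print(key, "found:", len(json.loads(chunks[key])), "entries")
        else:
            print(key, "missing: the metadata was probably stripped")

If both chunks are missing, no amount of dragging will recover the workflow; you need the original, un-recompressed file.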
This repo contains examples of what is achievable with ComfyUI. I created this workflow to do just that. Requirements: a working ComfyUI installation (https://github.com/comfyanonymous/ComfyUI) and ComfyUI Manager (https://github.com/ltdrdata/ComfyUI-Manager).

Is there any actual point to your example about the 6 different models? This seems to inherently defeat the entire purpose of the 6 models, and would likely end up making the end result effectively random and uncontrollable, at least without extensive testing, though you could also simply train or find a model/LoRA that has a similar result more easily. For example, it's like performing sampling with the A model for only part of the steps.

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKV, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out. I hope this is due to your settings or because this is a WIP, since otherwise I'll stay away.

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case: latent upscale is different from pixel upscale. Latent quality is better, but the final image deviates significantly from the initial generation. Thanks! This one is with SAG; both are after two latent upscales. Just remember, for best results you should use a detailer after you do the upscale.

This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

You can run AnimateDiff at pretty reasonable resolutions with 8 GB of VRAM or less; with less VRAM, some ComfyUI optimizations kick in that decrease the VRAM required. On my 4090 with no optimizations kicking in, a 512x512, 16-frame animation takes around 8 GB of VRAM.

AH, I KNEW I was missing something that should be obvious! The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

For example, I can load an image, select a model (4xUltrasharp, for example), and select the final resolution (from 1024 to 1500, for example); the workflow then works out how much upscale it needs to reach that final resolution (for both a normal upscaler and an upscaler value that has been 4x-scaled by an upscale model). Example workflow of usage in ComfyUI: JSON / PNG. You just have to use the 'upscale by' node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply 'upscale by' 0.5, you get a 1024x1024 final image (512 x 4 x 0.5 = 1024). To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed upscale, for example 2.0 / 4.0 = 0.5.
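That arithmetic comes up constantly in these threads, so it is worth writing down once. A tiny helper; the function and variable names here are just illustrative:

    def downscale_factor(total_upscale: float, model_scale: float = 4.0) -> float:
        """Resize factor to apply after a fixed-scale upscale model."""
        return total_upscale / model_scale

    def final_size(start: int, model_scale: float, resize_by: float) -> float:
        """Final edge length after a model upscale plus a fractional resize."""
        return start * model_scale * resize_by

    print(downscale_factor(2.0, 4.0))   # 0.5: a 2x total upscale through a 4x model
    print(final_size(512, 4, 0.5))      # 1024.0: the 512 x 4 x 0.5 = 1024 example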
For now I got this prompt: "A gorgeous woman with long light-blonde hair wearing a low cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by artgerm and alphonse mucha, trending on Behance, very detailed, by the best painters".

It does not work with SDXL for me at the moment. I'll be adding LoRAs in my next iteration. That might be a great upscale if you want semi-cartoony output, but it's nowhere near realistic. Does anyone have any suggestions: would it be better to do an iterative upscale? Check the ComfyUI image examples in the link. You can find examples and workflows on his GitHub page, for example txt2img with latent upscale (partial denoise on upscale), a 48-frame animation with a 16-frame window. You don't need that many steps; from there you can use a 4x upscale model and run the sampler again at a low denoise if you want higher resolution. Explore its features, templates, and examples on GitHub.

I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows. TBH, I haven't used A1111 extensively, so my understanding of A1111 is not deep, and I don't know what doesn't work in A1111. Is there a way to copy normal webUI parameters (the usual PNG info) into ComfyUI directly with a simple Ctrl+C/Ctrl+V? Dragging and dropping A1111 PNGs into ComfyUI works most of the time.

This is more of a starter workflow, which supports img2img, txt2img, and a second-pass sampler; between the sample passes you can preview the latent in pixel space, mask what you want, and inpaint (it just adds a mask to the latent), and you can blend gradients with the loaded image, or start with an image that is only gradient. Look at this workflow.

Is there a workflow to upscale an entire folder of images, as is easily done in A1111 in the img2img module? Basically I want to choose a folder and process all the images inside it. Until now I was launching a pipeline on each image one by one, but is it possible to have an automatic iterative task to do this? One way to script this is sketched below.
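None of the threads settle on a single built-in answer, but ComfyUI does expose an HTTP endpoint (/prompt) that accepts a workflow exported via "Save (API Format)", so a folder can be processed with a small loop. A sketch, assuming a default local install, a hypothetical LoadImage node with id "10" in the exported file, and that the images already sit in ComfyUI's input folder:

    import glob
    import json
    import os
    import urllib.request

    COMFY = "http://127.0.0.1:8188"            # default ComfyUI address (assumption)

    with open("upscale_workflow_api.json") as f:
        template = json.load(f)                # exported with "Save (API Format)"

    for path in sorted(glob.glob("inputs/*.png")):
        wf = json.loads(json.dumps(template))  # cheap deep copy per image
        wf["10"]["inputs"]["image"] = os.path.basename(path)  # hypothetical LoadImage id
        req = urllib.request.Request(
            COMFY + "/prompt",
            data=json.dumps({"prompt": wf}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)            # queues the job; outputs land in ComfyUI/output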
Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app. The good thing is that no upscale is needed.

You can use mklink to link to your existing models, embeddings, LoRAs, and VAEs; a scripted version of the same trick is sketched at the end of this section. For example:

    F:\ComfyUI\models>mklink /D checkpoints F:\stable-diffusion-webui\models\Stable-diffusion

Usually I use two of my workflows: 'Latent upscale' followed by denoising at 0.5, and 'Upscaling with model' followed by denoising at 0.2 and resampling faces at 0.55. It's not that case in ComfyUI: you can load different checkpoints and LoRAs for each KSampler, Detailer, and even some upscaler nodes. Then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN.

Another recipe: upscale x1.5 ~ x2 (no need for a model; this can be a cheap latent upscale), then sample again at denoise 0.6, with either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9, end_percent 0.9 (euler). It's why you need at least ~0.5 denoise. I gave up on latent upscale: with it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. The upscale quality is mediocre, to say the least. I've played around with different upscale models in both applications, as well as settings. Hands are still bad, though. There are also 'face detailer' workflows for faces specifically. Haven't used it, but I believe this is correct.

In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself. Examples below are accompanied by a tutorial in my YouTube video: upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved); multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more); details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly, and it works like a charm. Here is an example of how to use the Inpaint ControlNet; the example input image can be found here. However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI.

ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion; it is a powerful and modular GUI for diffusion models with a graph interface. A reminder: you can right-click images in the LoadImage node and edit them with the mask editor. For ComfyUI there should be license information for each node, in my opinion ('Commercial use: yes / no / needs license'), and a workflow using a non-commercial node should show a warning in red; this could lead users to increase pressure on developers. There is also an UltimateSDUpscale node suite (as an extension). I want to replicate the 'upscale' feature inside 'extras' in A1111, where you can select a model and the final size of the image.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.
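Following up on the mklink tip: the same link can be created from Python, which is handy when provisioning a machine. A sketch with assumed paths; on Windows, creating symlinks requires administrator rights or developer mode:

    import os

    # Assumed paths: adjust to your own installs.
    src = r"F:\stable-diffusion-webui\models\Stable-diffusion"
    dst = r"F:\ComfyUI\models\checkpoints"

    if not os.path.exists(dst):
        os.symlink(src, dst, target_is_directory=True)  # the equivalent of `mklink /D`
        print("linked", dst, "->", src)

ComfyUI also ships an extra_model_paths.yaml.example in its repo for pointing at an existing A1111 model tree, which avoids links entirely.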
I can only make a stab at some of these, as I'm still very much learning. Images are too blurry and lack detail; it's like upscaling any regular image with some traditional method. The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. You can encode, then decode back to a normal KSampler with a 1.0 denoise.

I usually take my first sample result to pixel space, upscale by 4x, downscale by 2x, and sample from step 42 to step 48, then pass it to my third sampler for steps 52 to 58, before going to post with it. My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, the same seed, and most other settings, with the only differences among my (for example) four KSamplers in the #2 thru #n positions. Another option is a 2x upscale using Ultimate SD Upscale and a Tile ControlNet. That said, Upscayl is SIGNIFICANTLY faster for me.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and that is often too big to process), then send it back to VAE encode and sample it again.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model). "Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. However, I am curious how A1111 handles various processes at the latent level, which ComfyUI does extensively with its node-based approach. I was running some tests last night with SD 1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image. There is also SD 1.5 with LCM, at 4 steps and 0.5 denoise. Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly; you also can't go higher than 512 up to 768 resolution (which is quite a bit lower than 1024 plus upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

Hello: for more consistent faces, I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent-upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

The final node is where ComfyUI takes those images and turns them into a video. Specifically, the padded image is sent to the ControlNet as pixels via the "image" input, and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

Hi all, the title says it all: after launching a few batches of low-res images, I'd like to upscale all the good results. I do a first pass at low res (say, 512x512), then I use the IterativeUpscale custom node. The pixel-space loop from above is sketched next.
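The "[KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler]" chain looks like this in API form. Again a sketch with invented node ids, where "4" is assumed to be a CheckpointLoaderSimple (model/clip/vae outputs), "6"/"7" the prompts, and "3" the first sampler:

    pixel_second_pass = {
        "30": {"class_type": "VAEDecode",
               "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
        "31": {"class_type": "UpscaleModelLoader",
               "inputs": {"model_name": "4x-UltraSharp.pth"}},   # any 4x model you have
        "32": {"class_type": "ImageUpscaleWithModel",
               "inputs": {"upscale_model": ["31", 0], "image": ["30", 0]}},
        "33": {"class_type": "ImageScaleBy",                     # 4x * 0.5 = 2x total
               "inputs": {"image": ["32", 0],
                          "upscale_method": "bicubic", "scale_by": 0.5}},
        "34": {"class_type": "VAEEncode",
               "inputs": {"pixels": ["33", 0], "vae": ["4", 2]}},
        "35": {"class_type": "KSampler",
               "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                          "latent_image": ["34", 0],
                          "seed": 42, "steps": 20, "cfg": 7.0,
                          "sampler_name": "euler", "scheduler": "normal",
                          "denoise": 0.3}},                      # low denoise keeps composition
    }

The design point is the contrast with the latent route earlier: here the upscale happens on pixels, so the second sampler only needs a low denoise.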
The only issue is that it requires more VRAM, so many of us will probably be forced to decrease the resolutions to below 512x512. So if you want 2x: upscale using a 4x model (e.g. UltraSharp), then downscale. This is done after the refined image is upscaled and encoded into a latent. You can also go to around 0.2 denoise just to fix the blur and soft details; you can use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE (maybe there is an obvious solution, but I don't know it). The 16GB usage you saw was for your second, latent upscale pass. If I wanted any enhancements/details that latent upscaling could provide, I limit the upscale to around 1.25 to keep the process and VRAM usage lower.

Upscale Model Examples: here is an example of how to use upscale models like ESRGAN. Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. And when purely upscaling, the best upscaler is called LDSR. No matter what, Upscayl is a speed demon in comparison.

My problem is that my generation produces a 1-pixel line at the right/bottom of the image, which is weird/white. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Hello! I am hoping to find a ComfyUI workflow that allows me to use Tiled Diffusion + ControlNet Tile for upscaling images; can anyone point me toward a comfy workflow that does a good job of this? I am now just setting up ComfyUI and have issues (already, LOL) with opening the ComfyUI Manager from CivitAI: basically it doesn't open after downloading (v.22, the latest one available). I needed a workflow to upscale and interpolate the frames to improve the quality of the video. Point the install path in the Automatic1111 settings to the comfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or something like that. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

There are a lot of upscale variants in ComfyUI, and I've been using SD/ComfyUI for a few weeks now and find myself overwhelmed with the number of ways to do upscaling. I liked the ability in MJ to choose an image from the batch and upscale just that image. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users. The img2img pipeline has an image preprocess group that can add noise and gradient, and cut out a subject for various types of inpainting. My sample pipeline has three sample steps, with options to persist ControlNet and mask, regional prompting, and upscaling. You end up with images anyway after KSampling, so you can use those upscale nodes. ATM I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, as the docs say.

This ComfyUI nodes setup lets you use the Ultimate SD Upscale custom nodes in your ComfyUI AI generation routine. Depending on the noise and strength, it ends up treating each square as an individual image, so instead of one girl in an image you get 10 tiny girls stitched into one giant upscaled image. For example, you might prompt the model differently when it's rendering the smaller patches, removing the "kangaroo" entirely.
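A rough way to reason about that "10 tiny girls" failure mode is to count how many tiles the upscaler will sample, since each tile is its own little image as far as the sampler is concerned. This is a generic tile-grid estimate, not Ultimate SD Upscale's exact algorithm, and the default numbers are only illustrative:

    import math

    def tile_count(width: int, height: int, tile: int = 1024, overlap: int = 64) -> int:
        """Approximate number of tiles a tiled upscaler samples over an image."""
        step = tile - overlap
        cols = math.ceil((width - overlap) / step)
        rows = math.ceil((height - overlap) / step)
        return cols * rows

    print(tile_count(2048, 2048))  # 9 tiles with the defaults above
    print(tile_count(1024, 1024))  # 1 tile: no seams, no tiny clones

The fewer tiles (or the lower the denoise per tile), the less chance each patch invents its own subject.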
While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask-logic nodes behind the scenes. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. Making a bit of progress this week in ComfyUI.

From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. My postprocess includes a detailer sample stage and another big upscale. I have been generally pleased with the results I get from simply using additional samplers. Ty, I will try this.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. Second, you will need the Detailer SEGS or Face Detailer nodes from the ComfyUI-Impact Pack. I'm not entirely sure what Ultimate SD Upscale does, so I'll answer generally as to how I do upscales: by applying both a prompt to improve detail and an increase in resolution (indicated as a percentage, for example 200% or 300%).

They are images of workflows: if you download those workflow images and drag them to ComfyUI, it will display the workflow, since all the images in this repo contain metadata. I also combined ELLA in the workflow to make it easier to get what I want. I might open an issue on ComfyUI about that. This came together after borrowing many ideas while learning ComfyUI. What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. I thought it was cool and wanted to do that too.

Run your prompt; this will get to the low-resolution stage and stop. The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.
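That "pick image numbers, then rerun" switch maps naturally onto ComfyUI's built-in LatentFromBatch node, which slices one (or a few) latents out of a batch before the upscale pass. A sketch in the same API style as the earlier examples, with invented node ids:

    pick_and_upscale = {
        "40": {"class_type": "LatentFromBatch",      # keep only image #2 of the batch
               "inputs": {"samples": ["3", 0],       # ["3", 0] = batched first-pass latents
                          "batch_index": 2, "length": 1}},
        "41": {"class_type": "LatentUpscaleBy",
               "inputs": {"samples": ["40", 0],
                          "upscale_method": "bislerp", "scale_by": 1.5}},
        # ...followed by the usual second KSampler at a moderate denoise.
    }

Because only the selected latent flows downstream, the expensive second pass runs once per chosen image instead of once per batch, which is the MJ-style behavior people ask for above.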
