
ComfyUI inpaint nodes (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created; belittling their efforts will get you banned. And above all, BE NICE. Also, if this is new and exciting to you, feel free to post…

When using the Impact Pack's detailer, you can mask the area to inpaint and use MaskToSEGS with DetailerForEach to crop only the masked area and the surrounding area specified by crop_factor for inpainting. This is useful for getting good faces. An example is FaceDetailer / FaceDetailerPipe. If you set guide_size to a low value and force_inpaint to true, inpainting is done at the original size. The Impact Pack's detailer is pretty good.

And the parameter "force_inpaint", for example, is explained incorrectly.

Total VRAM 12282 MB, total RAM 32394 MB
xformers version: 0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 Laptop GPU
Using xformers cross attention

u/Auspicious_Firefly, I spent a couple of days testing this node suite and the model.

Use the WAS suite Number Counter node; it's the shiz. Primitive nodes aren't fit for purpose; they need to be remade, as they are buggy anyway.

Has anyone seen a workflow or nodes that detail or inpaint the eyes only? I know FaceDetailer, but I'm hoping there is some way of doing this with only the eyes. If there is no existing workflow or custom nodes that address this, I would love any tips on how I could potentially build it.

You were so close! As was said, there is one node that shouldn't be here, the one called "Set Latent Noise Mask". Only the custom node is a problem.

The thing that is insane is testing face fixing (used SD 1.5 just to see, to compare times): the initial image took 127.5 ms to generate and 9 seconds total to refine it.

I've been working really hard to make LCM work with KSampler, but the math and code are too complex for me, I guess.

The ComfyUI Impact Pack, Inspire Pack and other auxiliary packs have some nodes to control mask behaviour.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

I just recorded this video tutorial that explains, in just ten minutes, how to do very fast inpainting only on masked areas in ComfyUI.

I can get Comfy to load.

I've watched a video about resizing and outpainting an image with the inpaint ControlNet on Automatic1111.

Inpainting with an inpainting model. Inpainting a cat with the v2 inpainting model. Inpainting a woman with the v2 inpainting model. It also works with non-inpainting models: diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co).

Upscale the masked region to do an inpaint and then downscale it back to the original resolution when pasting it back in. EDIT: There is something already like this built in to WAS.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and…
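For what it's worth, that mask2image, blur, image2mask round trip boils down to blurring the mask so its hard edge becomes a gradient. A rough Python sketch of the same idea; the function name, the default radius, and the NumPy/PIL handling are illustrative assumptions, not any particular node's implementation:

```python
import numpy as np
from PIL import Image, ImageFilter

def feather_mask(mask: np.ndarray, radius: float = 8.0) -> np.ndarray:
    """Soften a hard 0/1 mask by blurring it, mimicking the
    mask -> image -> blur -> mask round trip described above."""
    # "mask2image": treat the mask as an 8-bit grayscale image
    img = Image.fromarray((mask * 255).astype(np.uint8), mode="L")
    # blur the image so the mask edge becomes a smooth gradient
    img = img.filter(ImageFilter.GaussianBlur(radius))
    # "image2mask": back to a float mask in [0, 1]
    return np.asarray(img, dtype=np.float32) / 255.0

# hypothetical usage: soft_mask = feather_mask(hard_mask, radius=12)
```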
Don't install ALL the suggested nodes from ComfyUI Manager's "install missing nodes" feature!!! It will lead to conflicting nodes with the same name and a crash.

But mine do include workflows, for the most part, in the video description. It's not the nodes. I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Modified PhotoshopToComfyUI nodes by u/NimaNrzi, supporting a modular Inpaint Mode that extracts mask information from Photoshop and imports it into ComfyUI original nodes.

I work with node-based scripting and 3D material systems in game engines like Unreal all the time.

It would require many specific image-manipulation nodes to cut out an image region, pass it through the model, and paste it back.

Thank you. I'm looking to do the same, but I don't have an idea of how Automatic's implementation of said ControlNet correlates with Comfy nodes.

I also didn't know about the CR Data Bus nodes.

Jan 20, 2024 · The resources for inpainting workflows are scarce and riddled with errors.

I can't figure out this node. It does some generation, but there is no info on how the image is fed to the sampler before denoising, there is no choice between original / latent noise / empty / fill, no resizing options, and no inpaint masked / whole picture choice; it just does the faces however it does them. I guess this is only for use like ADetailer in A1111, but I'd say it's even worse.

Please repost it to the OG question instead.

Node-based editors are unfamiliar to lots of people, so even with the ability to have images loaded in, people might get lost or just overwhelmed to the point where it turns them off, even though they could handle it (like how people have an "ugh" reaction to math).

I did not know about the comfy-art-venture nodes.

Any good options you guys can recommend for a masking node? Yeah, I've been reading and playing with it for a few days.

If you want to upscale everything at the same time, then you may as well just inpaint on the higher-res image, tbh.

ControlNet inpainting.

People who use nodes say that SD 1.5 BrushNet is the best inpainting model at the moment.

I don't think "if you're too newb to figure it out, try again later" is a productive way to introduce a technique.

Hi, is there an analogous workflow / custom nodes for WebUI's "Masked Only" inpainting option in ComfyUI? I am trying to experiment with AnimateDiff + inpainting, but inpainting in ComfyUI always generates on a subset of pixels of my original image, so the inpainted region always ends up low quality. This was not an issue with WebUI, where I can say, inpaint a cert…

Plug the VAE Encode latent output directly into the KSampler.

Yes, the current SDXL version is worse, but it is a step forward, and even in its current state it performs quite well.

The workflow offers many features and requires some custom nodes (listed in one of the info boxes and available via the ComfyUI Manager) and models (also listed with links), and, especially with the upscaler activated, it may not work on devices with limited VRAM.

I just published these two nodes that crop before inpainting and re-stitch after inpainting while leaving unmasked areas unaltered, similar to A1111's "inpaint masked only".
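To make the idea behind those crop-and-stitch nodes concrete, here is a hedged plain-Python sketch: take a bounding box around the mask plus some context, run the inpaint only on that crop, then paste the result back so unmasked pixels stay untouched. Everything in it (the inpaint_fn callback, the margin value, the HWC float arrays) is an assumption for illustration, not the actual node code:

```python
import numpy as np

def crop_inpaint_stitch(image, mask, inpaint_fn, margin=64):
    """Inpaint only the masked region (plus `margin` px of context) of an
    H x W x C float image, then stitch the result back into the original."""
    ys, xs = np.nonzero(mask > 0.5)
    if ys.size == 0:
        return image                                   # nothing to inpaint
    h, w = mask.shape
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, w)

    crop_img = image[y0:y1, x0:x1]
    crop_mask = mask[y0:y1, x0:x1]
    inpainted = inpaint_fn(crop_img, crop_mask)        # stand-in for the sampler

    out = image.copy()
    # keep original pixels outside the mask, take new pixels inside it
    out[y0:y1, x0:x1] = (crop_img * (1 - crop_mask[..., None])
                         + inpainted * crop_mask[..., None])
    return out
```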
You can right-click a node in ComfyUI and break out any input into different nodes; we use multi-purpose nodes for certain things because they are more flexible and can be cross-linked into multiple nodes.

The nodes are called "ComfyUI-Inpaint-CropAndStitch" in ComfyUI-Manager, or you can download them manually by going to the custom_nodes folder. The Inpaint Crop and Stitch nodes can be downloaded using ComfyUI-Manager; just look for "Inpaint-CropAndStitch". What those nodes are doing is inverting the mask to then stitch the rest of the image back into the result from the sampler. The main advantages these nodes offer are: they make it much faster to inpaint than when sampling the whole image, and they enable setting the right amount of context from the image for the prompt.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

I can't seem to get the custom nodes to load.

ComfyUI is so fun (inpaint workflow). I'm having a bunch of issues getting results; I'm still a total newcomer, but it looks hella fun.

So, in order to get a higher-resolution inpaint into a lower-resolution image, you would have to scale it up before sampling for the inpaint.

Downscale a high-resolution image to do a whole-image inpaint, and then upscale only the inpainted part back to the original high resolution.

I will start using that in my workflows.

Promptless Inpaint/Outpaint in ComfyUI made easier with canvas (IPAdapter + CN inpaint + reference only).

I would take it a step further: in Manager, before installing an entire node package, expose all of the nodes to be selected individually with a checkbox (or all of them, of course). Also the ability to unload via checkbox later.

Good luck out there!

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image. This speeds up inpainting by a lot and enables making corrections in large images with no editing.

Every workflow author uses an entirely different suite of custom nodes.

Some nodes might be called "Mask Refinement" or "Edge Refinement."
- Background Input Node: In a parallel branch, add a node to input the new background you want to use.
- Composite Node: Use a compositing node like "Blend," "Merge," or "Composite" to overlay the refined masked image of the person onto the new background.

Just install these nodes:
- Fannovel16 ComfyUI's ControlNet Auxiliary Preprocessors
- Derfuu Derfuu_ComfyUI_ModdedNodes
- EllangoK ComfyUI-post-processing-nodes
- BadCafeCode Masquerade Nodes

Excellent tutorial.

I am very well aware of how to inpaint/outpaint in ComfyUI - I use Krita.

Is there a switch node in ComfyUI? I have an inpaint node setup and a LoRA setup, but when I switch between node workflows, I have to connect the nodes each time. If there were a switch node like the one in the image, it would be easy to switch between workflows with just a click.

Inpainting with a standard Stable Diffusion model.

Maybe it will get fixed later on; it works fine with the mask nodes.

Since LoRAs are a patch on the model weights, they can also be merged into the model. You can also subtract model weights and add them, like in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI.
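The formula is plain per-tensor weight arithmetic; in ComfyUI it is wired up with the model-merging nodes, but over raw checkpoint state dicts it looks roughly like the sketch below. This assumes all three checkpoints share an architecture and key names; keys whose shapes differ (such as the inpaint model's extra UNet input channels) are simply left as the inpaint model's weights, which is a simplification rather than a statement about how any specific UI handles them:

```python
import torch

def add_difference(inpaint_sd, base_sd, other_sd, strength=1.0):
    """(inpaint_model - base_model) * strength + other_model, key by key."""
    merged = dict(inpaint_sd)  # start from the inpaint weights
    for key, w in other_sd.items():
        if key in inpaint_sd and key in base_sd and inpaint_sd[key].shape == w.shape:
            merged[key] = (inpaint_sd[key] - base_sd[key]) * strength + w
        # keys with mismatched shapes (e.g. the extra mask/latent input
        # channels of the inpaint UNet) keep the inpaint model's weights
    return merged

# hypothetical usage with plain .ckpt state dicts:
# sd = add_difference(torch.load("sd15_inpaint.ckpt")["state_dict"],
#                     torch.load("sd15_base.ckpt")["state_dict"],
#                     torch.load("my_model.ckpt")["state_dict"])
```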
Of course this can be done without extra nodes or by combining some other existing nodes, or in A1111, but this solution is the easiest, most flexible, and fastest to set up you'll see in ComfyUI (I believe :)).

Unfortunately, I think the underlying problem with inpaint makes this inadequate. If inpaint regenerates the entire boxed area near the mask, instead of just the mask, then pasting the old image over the new one means that the inpainted region won't mesh well with the old image; there will be a layer of disconnect.

Well, not entirely, although they still require more knowledge of how the AI "flows" when it works. But it's more the requirement of knowing how the AI model actually "thinks" in order to guide it with your node graph.

The description of a lot of parameters is "unknown".

I have a ComfyUI inpaint workflow set up based on SDXL, but it seems to go for maximum deviation from the source image. If I increase the start_at_step, then the output doesn't stay close to the original image; the output looks like the original image with the mask drawn over it.

See for yourself: a visible square of the cropped image with "Change channel count" set to "mask" or "RGB".

Coincidentally, I am trying to create an inpaint workflow right now, so this is perfect timing.

The number of unnecessary overlapping functions in the node packages is outrageous.

The amount of control you can have is frigging amazing with Comfy.

You should read the documentation on GitHub about those nodes and see what could do the same as what you are looking for.

I recently published a couple of nodes that automate and significantly improve inpainting by enabling the sampling to take place only on the masked area. The main advantages of inpainting only in a masked area with these nodes are: it's much faster than sampling the whole image, and it enables setting the right amount of context from the image for the prompt to be more accurately represented in the generated picture.

LaMa (2021), the inpainting technique that is the basis of this preprocessor node, came before LLaMA (2023), the LLM.

As a backend, ComfyUI has some advantages over Auto1111 at the moment, but it never implemented the image-guided ControlNet mode (as far as I know), and results with just the regular inpaint ControlNet are not good enough.

For a few days now there has been IP-Adapter and a corresponding ComfyUI node, which allows guiding SD via images rather than text.

There is a ton of misinfo in these comments.

It works great with an inpaint mask. The strength of this effect is model dependent.

Forgot to mention, you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "Unet" folder, which can be found in the models folder.

I'm not at home so I can't share a workflow. The one you use looks especially useful.

The following images can be loaded in ComfyUI to get the full workflow.

The nodes on top for the mask shenanigans are necessary for now; the Efficient KSampler seems to ignore the mask for the VAE part.

It's a good idea to use the 'Set Latent Noise Mask' node instead of the VAE inpainting node. There is only one thing wrong with your workflow: using both VAE Encode (for Inpainting) and Set Latent Noise Mask. If your image is in pixel world (as it is in your workflow), you should only use the former; if in latent land, only the latter.
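A simplified sketch of why using both is redundant: both paths end with a noise mask attached to the latent, they just start from different places. This is pseudocode in the spirit of the core nodes, not their actual implementation, and grow() is a hypothetical helper:

```python
# Simplified pseudocode of the two nodes' behaviour (not ComfyUI's real code).

def vae_encode_for_inpainting(vae, pixels, mask, grow_mask_by=6):
    # pixel-world path: grow the mask, blank the masked pixels to a neutral
    # grey *before* encoding, and remember the mask for the sampler
    grown = grow(mask, grow_mask_by)              # grow() is a hypothetical helper
    pixels = pixels * (1 - grown) + 0.5 * grown   # grey out the hole
    return {"samples": vae.encode(pixels), "noise_mask": grown}

def set_latent_noise_mask(latent, mask):
    # latent-land path: leave the already-encoded latent alone and only tell
    # the sampler where it is allowed to denoise
    latent = dict(latent)
    latent["noise_mask"] = mask
    return latent
```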
Keyboard shortcuts:
- Ctrl + A: Select all nodes
- Alt + C: Collapse/uncollapse selected nodes
- Ctrl + M: Mute/unmute selected nodes
- Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
- Delete/Backspace: Delete selected nodes
- Ctrl + Backspace: Delete the current graph
- Space: Move the canvas around when held and moving the cursor

Anyone who wants to learn ComfyUI: you'll need these skills for most imported workflows.

ComfyUI inpaint/outpaint/img2img made easier (updated GUI, more functionality) [Workflow Included]

"Masked content" and "Inpaint area" from Automatic1111 on ComfyUI: This question could be silly, but since the launch of SDXL I stopped using Automatic1111 and transitioned to ComfyUI. It wasn't hard, but I'm missing some config from the Automatic UI; for example, when inpainting in Automatic I usually used the "latent nothing" masked content option.

ComfyUI Inpaint Color Shenanigans (workflow attached): In a minimal inpainting workflow, I've found that the color of the area inside the inpaint mask does not match the rest of the 'no-touch' (not masked) rectangle (the mask edge is noticeable due to a color shift even though the content is consistent). Now, if you inpaint with "Change channel count" set to "mask" or "RGBA", the inpaint is fine; however, you get this square outline because the inpaint has a slightly duller tone.

Comfy UI's inpainting and masking ain't perfect.

In fact, it works better than the traditional approach.

I checked the documentation of a few nodes and I found that there is missing as well as wrong information, unfortunately.

The default mask editor in ComfyUI is a bit buggy for me (if I need to mask the bottom edge, for instance, the tool simply disappears once the edge goes over the image border, so I can't mask bottom edges). And having a different color "paint" would be great.

The workflow goes through a KSampler (Advanced).

Basically, the author of LCM (simianluo) used a diffusers model format, and that can be loaded with the deprecated UnetLoader node.

Link: Tutorial: Inpainting only on masked area in ComfyUI.

VAE inpainting needs to be run at 1.0 denoising, but set latent denoising can use the original background image because it just masks with noise instead of an empty latent.

ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor".

Is there a Discord or something to talk with experienced people?

This post hopes to bridge the gap by providing the following bare-bones inpainting examples with detailed instructions in ComfyUI.

While working on my inpainting skills with ComfyUI, I read up on the documentation for the node "VAE Encode (for inpainting)". It includes an option called "grow_mask_by", which is described as follows in the ComfyUI documentation: …
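In practice, growing a mask by N pixels is just a dilation applied before the noise mask is built, so the inpaint blends a little past the brush strokes. A hedged equivalent in plain Python, using max-pooling as the dilation; the (H, W) float-tensor layout and the default of 6 pixels are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def grow_mask(mask: torch.Tensor, pixels: int = 6) -> torch.Tensor:
    """Dilate an (H, W) mask in [0, 1] by `pixels` in every direction,
    similar in spirit to a grow_mask_by-style option."""
    if pixels <= 0:
        return mask
    m = mask[None, None]  # add batch and channel dims: (1, 1, H, W)
    grown = F.max_pool2d(m, kernel_size=2 * pixels + 1, stride=1, padding=pixels)
    return grown[0, 0]
```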
