Image size in ComfyUI: collected tips and answers from Reddit

I first get the prompt working as a list of the basic contents of your image.

ComfyShop has been introduced to the ComfyI2I family, and it works great. ComfyShop phase 1 is to establish the basic painting features for ComfyUI: enjoy a comfortable and intuitive painting app. To open ComfyShop, simply right click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

I have a workflow that is basically two user branches. The first branch has Txt to Image and then Image to SD Video with the new SD video models that came out; this way it is an end-to-end text-to-animation pipeline. Another workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops, and the denoise on the video generation KSampler is at 0.8 so that some of the structure of the originally generated image is retained.

In truth, "AI" never stole anything, any more than you "steal" from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head and getting any generative system to actually replicate it takes a considerable amount of skill and effort.

A bit of an obtuse take.

Want 10 images? Click the Queue Prompt button until the queue size is 10, or select Extra options and put 10 in Batch count.

Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

You set the height and the width to change the image size in pixel space. I can obviously pick a size when doing text-to-image, but when prompting off an existing image, my final image will always just be the same size as the inspiration image. I have a workflow I use fairly often where I convert or upscale images using ControlNet, and in this case the image from Comfy has some extra glitches. It's solvable; I've been working on a workflow for this for like two weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting. It's a challenging problem, so unless you really want to use this process, my advice would be to generate the subject smaller, then crop in and upscale instead. I know I can run the image-to-video portion with a 512 x 512 input image, but I'm struggling with downscaling the image by 2, so I'm going to work around it by downscaling the image up front.
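If you do the downscale yourself, keep the result friendly to the model: SD1.5/SDXL checkpoints and ComfyUI's Empty Latent Image node expect pixel dimensions that are multiples of 8, because the latent tensor is 8 times smaller on each side. The helper below is a minimal sketch of that arithmetic, not a node from any of these workflows; the function names are made up for illustration:

```python
# Minimal illustrative sketch: scale a source size down and snap it to
# multiples of 8, which SD1.5/SDXL and ComfyUI's Empty Latent Image expect.

def snap8(x: int) -> int:
    """Round down to the nearest multiple of 8 (minimum 8)."""
    return max(8, (x // 8) * 8)

def downscaled_size(width: int, height: int, factor: float = 0.5) -> tuple[int, int]:
    """Return (width, height) scaled by `factor`, snapped to multiples of 8."""
    return snap8(round(width * factor)), snap8(round(height * factor))

w, h = downscaled_size(1080, 1920)     # e.g. a phone photo -> (536, 960)
# The latent for SD1.5/SDXL is 8x smaller per side, with 4 channels:
latent_shape = (1, 4, h // 8, w // 8)  # (batch, channels, 120, 67)
print(w, h, latent_shape)
```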
A transparent PNG in the original size with only the newly inpainted part will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software. Save the new image. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-).

For inpainting on large images, one comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting.

New users of civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view". The one that is shown in the "post view" is a "preview JPEG" (even though it looks as if it is full size), which does not have the metadata.

With metadata disabled, no workflow metadata will be saved in any image. You will need to launch ComfyUI with this option each time, so modify your bat file or launch script. You probably still want an Exif viewer/remover/cleaner to double check images, since you haven't been using this setting and presumably have prior work to sanitize of metadata.

So I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, … I think the intended workflow here is to just press the Queue Prompt button several times; I do that a lot.

I have a ComfyUI workflow that produces great results, but when I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create, and also the exact same position of the body. When I do the same in Automatic1111, I get completely different people and different compositions for every image. It is not a problem with the seed, because I tried different seeds.

This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output. The option has been around for a long time with other UIs like Automatic1111 and Visions of Chaos.

You can't enter a latent image size larger than 8192. I would like to know if that is due to some reason other than images that large taking a long time. If I were to make some type of custom node, or modify the core node to allow a larger latent image size, would that break the whole process, or is there some larger reason that 8192 is the hard limit?

In an effort to generate images faster on my potato PC, I have tried to push the sampling step count as low as possible; I have managed to push it down to 3 steps with some nifty tricks I found.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

Probably not what you want, but the preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow. So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after this node in your workflow. Input your batched latent and VAE.

Batch index counts from 0 and is used to select a target in your batched images; Length defines the amount of images after the target to send ahead. E.g. batch index 2, Length 2 would send images number 3 and 4 to the preview image node in this example. This can pretty much be scaled to whatever batch size by repetition.
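In code terms, that selection behaves like a plain list slice: the zero-based batch index picks the first image, and Length is how many images go forward from there. A tiny illustrative sketch of those semantics (hypothetical, not the node's actual implementation):

```python
# Illustrative sketch of the batch index / Length semantics described above;
# the actual image chooser node may be implemented differently.

def select_from_batch(images: list, batch_index: int, length: int) -> list:
    """Send `length` images onward, starting at zero-based `batch_index`."""
    return images[batch_index : batch_index + length]

batch = ["image_1", "image_2", "image_3", "image_4"]
# Batch index 2, Length 2 -> images number 3 and 4, matching the example above.
print(select_from_batch(batch, 2, 2))  # ['image_3', 'image_4']
```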
There are a couple of ways to resize your photos or images so that they will work in ComfyUI. Automatic1111 would let you pick the final image size no matter what and give you options for crop, just resize, etc. How do I do the same with ComfyUI?

If we want to change the image size of our ComfyUI Stable Diffusion image generator, we type the width and height. Here, you can also set the batch size, which is how many images you generate in each run. Keep in mind that Stable Diffusion 1.5 is trained on 512 x 512 images.

As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image. See if you can get the image size to be used for the empty latent's (converted) height and width; you can just plug the width and height from a Get Image Size node directly into the nodes where you need them. First we calculate the ratios, or we use a text file where we …

If you just want to see the size of an image, you can open it in a separate tab of your browser and look up top to find the resolution.

To then view the generated images, click on View History and go through your generations by loading them. Or add the Image Gallery extension.

No, you don't erase the image. A mask adds a layer to the image that tells ComfyUI what area of the image to apply the prompt to.

The demo images aren't curated; all images just use the seed "3" with a basic prompt, so this is really useful for experimenting. However, my goal is to recreate the exact same image. I understand that the DPM++ 2M sampler can do this; at least in Auto1111 it repeats the same image all the time.

I am getting blurry images when using the "Realities Edge XL LCM+SDXLTurbo" model in ComfyUI (I was using SD webUI before). I got the same issue in SD webUI, but after using sdxl-vae-fp16-fix the images were good; when I try the same fix here, it is not working.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything). These will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL; in the process, we also discuss the SDXL architecture.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand.

For the tiny node font, I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes:

/* Put custom styles here */
.comfy-multiline-input { font-size: 10px; }

Copy that into user.css and change the font-size to something higher than 10px, and you should see a difference.

Howdy! I'm not too advanced with ComfyUI for SD generation yet (I started with ComfyUI 3 days ago), but I've made a lot of progress thanks to your help.

I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well. So I can't give a simple answer, but I'd say if you're still interested and need some help, we can join a Discord call or something and I can help. The hard part is knowing when the image is ready to be retrieved and getting the image.
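For that last part: if you drive ComfyUI through its HTTP API (the same API the web UI uses), the usual pattern is to POST the workflow to /prompt, poll /history/<prompt_id> until the entry appears, then download the files via /view. A rough sketch in the spirit of ComfyUI's bundled API example scripts, assuming a default local server on 127.0.0.1:8188:

```python
import json
import time
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"  # assumed default local ComfyUI address

def queue_prompt(workflow: dict) -> str:
    """Submit an API-format workflow JSON; returns the server's prompt_id."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{SERVER}/prompt", data=data)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def wait_for_images(prompt_id: str, poll_seconds: float = 1.0) -> list:
    """Poll /history until the prompt shows up there, then fetch its images."""
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:  # the entry appears once execution finished
            break
        time.sleep(poll_seconds)
    images = []
    for node_output in history[prompt_id]["outputs"].values():
        for img in node_output.get("images", []):
            query = urllib.parse.urlencode(img)  # filename, subfolder, type
            with urllib.request.urlopen(f"{SERVER}/view?{query}") as resp:
                images.append(resp.read())
    return images
```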
During my img2img experiments with 3072 x 3072 images, I noticed a quality drop using Hypertile with standard settings (tile size 256, swap size = 2, max depth = 0). Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions. Is there a way to pull this off within ComfyUI? Remember that the image is decoded from the latent, so if you want to change the size of the image, you change the size of the latent image.

With Masquerade nodes (install them using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image. Done this way, you won't get obvious seams or strange lines.

The only way I can think of is to run Upscale Image (using Model) with 4xUltraSharp to get my image to 4096, and then downscale with nearest-exact back to 1500.
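That model-upscale-then-downscale trick is easy to reproduce outside the graph as well. Below is a small sketch assuming the 4x model pass (e.g. 4xUltraSharp) has already produced upscaled.png; PyTorch's interpolate exposes the same "nearest-exact" mode name, so only the downscale step is shown, and the file names are placeholders:

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

# Assumes the 4x model pass already saved "upscaled.png" at 4096 x 4096;
# only the nearest-exact downscale is reproduced here.
img = Image.open("upscaled.png").convert("RGB")
x = torch.from_numpy(np.asarray(img).copy()).permute(2, 0, 1).unsqueeze(0).float()

# "nearest-exact" is the same resampling mode name ComfyUI's resize offers.
y = F.interpolate(x, size=(1500, 1500), mode="nearest-exact")

out = y.squeeze(0).permute(1, 2, 0).clamp(0, 255).byte().numpy()
Image.fromarray(out).save("downscaled.png")
```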
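And since the geometry in the Masquerade-style crop-inpaint-paste route above is easy to get wrong, here is a rough NumPy/Pillow sketch of the round trip. The inpaint step is a stub, and every function and file name here is hypothetical, not an actual Masquerade node:

```python
import numpy as np
from PIL import Image

def mask_bbox(mask: np.ndarray, pad: int = 32) -> tuple[int, int, int, int]:
    """Padded bounding box (left, top, right, bottom) of the white mask area.
    Assumes the mask is non-empty."""
    ys, xs = np.nonzero(mask > 127)
    h, w = mask.shape
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + pad, w), min(int(ys.max()) + pad, h))

def run_inpaint(crop: Image.Image, crop_mask: Image.Image) -> Image.Image:
    """Stub: plug your actual inpainting sampler in here."""
    return crop

image = Image.open("big_image.png").convert("RGB")  # placeholder file names
mask = np.array(Image.open("mask.png").convert("L"))

box = mask_bbox(mask)                    # small region instead of the full image
crop = image.crop(box)                   # much faster to sample at this size
crop_mask = Image.fromarray(mask).crop(box)

inpainted = run_inpaint(crop, crop_mask)

# Paste only the masked pixels back, so the rest of the image is untouched.
image.paste(inpainted, box, mask=crop_mask)
image.save("result.png")
```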