ComfyUI SDXL Refiner

Loading an SD 1.5 checkpoint takes only a few seconds, and loading SDXL models always stays below 9 seconds on this setup. One caveat up front: please do not use the refiner as an img2img pass on top of the base model's output; it is designed to take over the final denoising steps of the same generation.
I've successfully run the subpack/install.py script, and the readme file of the tutorial has been updated for SDXL 1.0. A typical SDXL workflow in ComfyUI bundles the SDXL 1.0 Base and Refiner models, an automatic calculation of the steps required for both the Base and the Refiner models, a quick selector for the right image width/height combinations based on the SDXL training set, and text2img with fine-tuned SDXL models. The workflow should generate images first with the base and then pass them to the refiner for further refinement; that way you can create and refine the image without having to constantly swap back and forth between models. ComfyUI officially supports the refiner model, and the joint-swap system for the refiner now also supports img2img and upscaling in a seamless way.

For a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option. By contrast, a chain like Refiner > SDXL base > Refiner > RevAnimated would require switching models four times for every picture in Automatic1111, at about 30 seconds per switch. For me, the fix was to apply the change to both the base prompt and the refiner prompt. SDXL 0.9 ran fine, but I hit trouble when I tried to add the stable-diffusion-xl-refiner-0.9 checkpoint. SD 1.5 models remain useful for refining and upscaling, and I just wrote an article on inpainting with the SDXL base model and refiner. I've been working with connectors in 3D programs for shader creation, so I know the sheer (unnecessary) complexity of the networks you can mistakenly create for marginal benefit. An example workflow can be loaded by downloading the image and drag-dropping it onto the ComfyUI home page; Searge-SDXL: EVOLVED v4.x is one such workflow suite for ComfyUI.
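The base-then-refiner handoff described above comes down to splitting one sampling schedule between two samplers: the base stops early and returns leftover noise, and the refiner picks up from the same step. A minimal sketch of that step math; the field names mirror ComfyUI's KSamplerAdvanced inputs, and the 0.8 switch point is an assumption, not a canonical value:

```python
def split_steps(total_steps: int, switch_at: float = 0.8):
    """Return KSamplerAdvanced-style step ranges for the base and refiner.

    The base sampler runs steps [0, boundary) and must return the image
    with its leftover noise; the refiner continues from `boundary` without
    adding fresh noise.
    """
    boundary = round(total_steps * switch_at)
    base = {"start_at_step": 0, "end_at_step": boundary,
            "return_with_leftover_noise": True}
    refiner = {"add_noise": False, "start_at_step": boundary,
               "end_at_step": total_steps}
    return base, refiner

base, refiner = split_steps(30, 0.8)
print(base["end_at_step"], refiner["start_at_step"])  # 24 24
```

With 30 total steps and a 0.8 switch point, the base handles steps 0-24 and the refiner finishes steps 24-30.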
Your results may vary depending on your workflow. This workflow requires sd_xl_base_1.0.safetensors and sd_xl_refiner_1.0.safetensors; set the base ratio in the Parameters section (0.99 keeps nearly all steps on the base model). My research organization received access to SDXL during the 0.9 testing phase. In the ComfyUI SDXL workflow example, the refiner is an integral part of the generation process, with an EmptyLatentImage node specifying an image size consistent with the previous CLIP nodes. You will need a powerful Nvidia GPU or Google Colab to generate pictures with ComfyUI comfortably, though modest hardware works too; I have a GTX 1060 with 6 GB VRAM and 16 GB RAM. After 4-6 minutes, once both checkpoints are loaded (SDXL 1.0 base and refiner), I can generate images. Today I upgraded my system to 32 GB of RAM and noticed peaks close to 20 GB of RAM usage, which could cause memory faults and rendering slowdowns on a 16 GB system.

The refiner does add detail, but it also smooths out the image; a refiner denoise of roughly 0.25-0.35 is a common range. But if SDXL wants an 11-fingered hand, the refiner gives up. With SDXL I often get the most accurate results with ancestral samplers. Now in Comfy, starting from the img2img workflow, let's duplicate the Load Image and Upscale Image nodes. To try it, save an example image and drop it into ComfyUI, or drag & drop the .json workflow file. Here are some examples I generated using ComfyUI + SDXL 1.0, comparing the strengths and weaknesses of SDXL against SD 1.5. Here's what I've found: when I pair the SDXL base with my LoRA in ComfyUI, things seem to click and work pretty well.
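Choosing "the right image width/height combinations based on the SDXL training set" for the EmptyLatentImage node can be automated with a small helper. The resolution list below is the set of roughly one-megapixel buckets commonly cited for SDXL; treat it as an assumption rather than an official enumeration:

```python
# Commonly cited SDXL training resolutions (all ~1 megapixel).
SDXL_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def closest_sdxl_resolution(width: int, height: int) -> tuple:
    """Snap an arbitrary target size to the SDXL bucket with the
    nearest aspect ratio."""
    target = width / height
    return min(SDXL_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(closest_sdxl_resolution(1920, 1080))  # (1344, 768)
```

For a 16:9 target like 1920x1080 this picks the 1344x768 bucket, which is much closer to what SDXL saw during training than generating at 1920x1080 directly.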
I feel like we are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve. Fine-tuned SDXL (or just the SDXL base): all of these images are generated with the SDXL base model, or with a fine-tuned SDXL model that requires no refiner. Does that mean 8 GB VRAM is too little in A1111? Is anybody able to run SDXL on an 8 GB GPU in A1111? It is highly recommended to use a 2x upscaler in the refiner stage, as 4x will slow the refiner to a crawl on most systems for no significant benefit (in my opinion). I suspect most people coming from A1111 are accustomed to switching models frequently, and many SDXL-based models are going to come out with no refiner at all. For comparison: the second picture is base SDXL, then SDXL + refiner at 5 steps, then at 10 steps, then at 20 steps.

Sometimes I will update the workflow; all changes will be on the same link, and you can run it on Google Colab. If you use ComfyUI and the example workflow that is floating around for SDXL, you need to do two things in subpack_nodes.py to resolve the issue. I'm also trying to connect a LoRA stacker to a workflow that includes a normal SDXL checkpoint plus a refiner. ComfyUI now supports SSD-1B as well. An updated ComfyUI workflow is available: SDXL (Base + Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. Keep the refiner in the same folder as the base model, although with the refiner I can't go higher than 1024x1024 in img2img. Per v1.5 of the report on SDXL: although SDXL works fine without the refiner (as demonstrated above), you really do need the refiner model to get full use out of it.
Yes, it's normal: don't use the refiner with a LoRA. SDXL accepts natural-language prompts. An SDXL Base + SD 1.5 fine-tuned model combination also works: you could use the standard image resize node (with lanczos or whatever it is called) and pipe that latent into SDXL and then into the refiner, which handles roughly the last 35% of the noise in the image generation. Drag the image onto the ComfyUI workspace and you will see the SDXL Base + Refiner workflow; there is also an "install models" button. The workflow requires sd_xl_base_0.9.safetensors. I tried SDXL 0.9 base & refiner along with the recommended workflows, but ran into trouble.

A detailed look at a stable SDXL ComfyUI workflow, the internal AI-art tool I use at Stability: next, we need to load our SDXL base model (and give the node a new color). Once the base model is loaded, we also need to load a refiner, but we will deal with that later, no rush. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5. This is the best balance I could find between image size (1024x720), models, steps (10 base + 5 refiner), and samplers/schedulers, so we can use SDXL on our laptops without those expensive, bulky desktop GPUs. I wanted to share my configuration for ComfyUI, since many of us use our laptops most of the time; it was also thanks to this experiment that I discovered one of my RAM sticks had died, leaving me with only 16 GB. You really want to follow a guy named Scott Detweiler. Continuing with the car analogy, ComfyUI vs Auto1111 is like driving manual shift vs automatic (no pun intended). For the refiner model, 35-40 total steps work well. The refiner is entirely optional and could be used equally well to refine images from sources other than the SDXL base model. Also note: the noise-offset model is a LoRA for noise offset, not quite contrast.
You can also use the SDXL refiner as img2img and feed it your own pictures; I wanted to see the difference with those alongside the refiner pipeline. On my local machine, the A1111 WebUI and ComfyUI share the same environment and models, so I can switch between them freely. Embeddings/Textual Inversion are supported. To go the other way, do the opposite: disable the nodes for the base model and enable the refiner model nodes. Install your SD 1.5 model (directory: models/checkpoints) and your LoRAs (directory: models/loras), then restart. Always use the latest version of the workflow JSON file with the latest version of the custom nodes! Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now, anything that uses the ComfyUI API doesn't have that, though).

ComfyUI is a powerful modular graphic interface for Stable Diffusion models that allows you to create complex workflows using nodes. It also has faster startup and is better at handling VRAM, so you can generate at larger sizes. A sample workflow for ComfyUI is below, picking up pixels from SD 1.5. Start with something simple where it will be obvious that it's working. For me the refiner makes a huge difference: since I only have a laptop with 4 GB VRAM to run SDXL, I keep it as fast as possible by using very few steps, 10 base + 5 refiner. All images were created using ComfyUI + SDXL 0.9 (sd_xl_base_0.9.safetensors + sdxl_refiner_pruned_no-ema.safetensors). The ComfyUI API prompt format starts from imports such as json, urllib.request, and random. ComfyUI also has a mask editor, accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". A second upscaler has been added.
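ComfyUI's HTTP API, hinted at by the json / urllib / random imports above, lets you queue prompts programmatically. A minimal sketch, assuming a ComfyUI server at the default 127.0.0.1:8188 and an exported API-format workflow; the node id "3" in the usage comment is hypothetical and depends on your own workflow:

```python
import json
import random
from urllib import request

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188"):
    """POST an API-format workflow dict to a running ComfyUI server."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(f"{server}/prompt", data=payload,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

def randomize_seed(workflow: dict, node_id: str) -> dict:
    """Give the named sampler node a fresh random seed before queueing."""
    workflow[node_id]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    return workflow

# Usage (node id "3" is hypothetical):
# wf = json.load(open("workflow_api.json"))
# queue_prompt(randomize_seed(wf, "3"))
```

Because the API path skips the frontend, images queued this way do not get the workflow embedded in their metadata, as noted above.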
If ComfyUI or the A1111 sd-webui can't read the image metadata, open the last image in a text editor to read the details. Base checkpoint: sd_xl_base_1.0_0.9vae; refiner checkpoint: sd_xl_refiner_1.0; also download the SDXL VAE. If you find this helpful, consider becoming a member on Patreon or subscribing to my YouTube channel for AI application guides. There is an initial learning curve, but once mastered you will drive with more control, and also save fuel (VRAM) to boot. Usually on the first run, just after the model is loaded, the refiner takes noticeably longer. I wonder if I have been doing it wrong: right now, when I do latent upscaling with SDXL, I add an Upscale Latent node after the refiner's KSampler node and pass the result of the latent upscaler to another KSampler. Make sure ComfyUI is up to date, and restart it after updating.

To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders: two samplers (base and refiner) and two Save Image nodes (one for the base output and one for the refiner output). I trained a LoRA model of myself using the SDXL 1.0 base, and I used the refiner model for all the tests, even though some SDXL models don't require a refiner. I'm probably messing something up, I'm still new to this, but you connect the model and CLIP output nodes of the checkpoint loader to the corresponding inputs. I don't want it to get to the point where people are just making models that are designed around looking good at displaying faces.
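Rather than opening the image in a text editor, the embedded details can be recovered with a few lines of standard-library code: ComfyUI writes its graph into PNG tEXt chunks (under keys such as "prompt" and "workflow"; the key names are an assumption here, check your own files):

```python
import struct

def png_text_chunks(path: str) -> dict:
    """Extract tEXt chunks from a PNG without any imaging library."""
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                out[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return out

# meta = png_text_chunks("ComfyUI_00001_.png")
# workflow_json = meta.get("workflow")  # drag-and-droppable graph, if present
```

This is also a quick way to check whether an image was produced through the API (no embedded workflow) or the frontend (workflow present).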
Here are the configuration settings for my SDXL models test; I've been having a blast experimenting with SDXL lately. The workflow ZIP file allows you to choose the output resolution for all of the starter groups. SDXL 1.0's download has been announced, with a local deployment tutorial for A1111 + ComfyUI sharing one set of models you can switch between freely, plus an SDXL vs SD 1.5 comparison. In this tutorial, join me as we dive into the fascinating world of Stable Diffusion XL 1.0 with a ComfyUI workflow that uses both the SDXL base and refiner models. In Part 4 we intend to add ControlNets, upscaling, LoRAs, and other custom additions. It fully supports the latest Stable Diffusion models, including SDXL 1.0 and the refiner. The following images can be loaded in ComfyUI to get the full workflow; install or update the required custom nodes first. Otherwise, I would say make sure everything is updated: if you have custom nodes, they may be out of sync with the base ComfyUI version.

Now that you have been lured into the trap by the synthography on the cover, welcome to my alchemy workshop! To download and install ComfyUI using Pinokio, download the Pinokio browser, then inside the browser click "Discover" to find the ComfyUI script. AP Workflow 3.0 adds a Hand/Face Refiner. If you look for the missing model you need and download it from within ComfyUI, it will automatically be put in the right folder. In AUTOMATIC1111 you'll need to activate the SDXL Refiner extension. I have updated the workflow submitted last week (on Pastebin), cleaning up the layout a bit and adding many functions I wanted to learn better. Control-LoRA is the official release of ControlNet-style models, along with a few others. This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 and the refiner. The question is: how can this style be specified when using ComfyUI? Yesterday I came across a very interesting workflow that uses the SDXL base model together with any SD 1.5 model.
With Tiled VAE on (I'm using the one that comes with the multidiffusion-upscaler extension), you should be able to generate 1920x1080 with the base model, both in txt2img and img2img. Basic setup for SDXL 1.0: you can load these images in ComfyUI to get the full workflow, or download the workflow's JSON file and load it into ComfyUI to start your SDXL image-generation journey. It works best for realistic generations; as a test, I used a prompt to turn the subject into a K-pop star. In Part 2 we added an SDXL-specific conditioning implementation and tested the impact of the conditioning parameters on the generated images. For reference, I'm appending all available styles to this question. Today I also want to compare the performance of four different open diffusion models at generating photographic content, including SDXL 1.0.

If the refiner doesn't know the LoRA concept, any changes it makes might just degrade the results. Sytan's SDXL ComfyUI workflow is a very nice example of how to connect the base model with the refiner and include an upscaler. The creator of ComfyUI and I are working on releasing an officially endorsed SDXL workflow that uses far fewer steps and gives amazing results, such as the ones I am posting below. Also, note that using the normal text encoders rather than the specialty text encoders for the base and the refiner can hinder results. ComfyUI with SDXL (Base + Refiner) + ControlNet XL OpenPose + FaceDefiner (2x): ComfyUI is hard. To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders. Refiners should have at most half the steps that the generation has. With SD 1.5 models I don't get good results with the upscalers either. Try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive.
This approach uses more steps, has less coherence, and also skips several important factors in between; I recommend you do not use the same text encoders as SD 1.5. This is a custom-nodes extension for ComfyUI, including a workflow to use SDXL 1.0. In researching inpainting with SDXL 1.0, I also looked at how to make the refiner and upscaler passes optional. ComfyUI is a powerful and modular GUI for Stable Diffusion that lets you create advanced workflows using a node/graph interface. With Vlad releasing hopefully tomorrow, I'll just wait on SD.Next. At a 0.2 noise value the refiner changed quite a bit of the face. Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools, using the SDXL 0.9 base and refiner models. Google Colab has been updated as well for ComfyUI and SDXL 1.0, including VRAM settings. A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial; all the art here is made with ComfyUI.

The refiner improves hands, but it does NOT remake bad hands. SEGSPaste pastes the results of SEGS onto the original image. I cannot use SDXL plus the SDXL refiner, as I run out of system RAM; the reason is that ComfyUI loads the entire SD XL 0.9 refiner model. Hi all: as per this thread, it was identified that the VAE at release had an issue that could cause artifacts in the fine details of images. In summary, it's crucial to make valid comparisons when evaluating SDXL with and without the refiner in ComfyUI, with separate prompts for the text encoders.
In this episode we are opening a new series on another way to use Stable Diffusion: the node-based ComfyUI. Longtime viewers of the channel know I've always used the WebUI for demos and explanations. The refiner refines the image, making an existing image better. We are releasing two new diffusion models for research purposes: SDXL-base-0.9 and SDXL-refiner-0.9. What I have done is recreate the parts for one specific area. With both checkpoints loaded (SDXL 1.0 base and refiner) I can generate images in roughly 2.5-38 seconds. The sample prompt as a test shows a really great result. SDXL 1.0 was released on 26 July 2023: time to test it out using a no-code GUI called ComfyUI! Voldy still has to implement that properly, last I checked; ComfyUI runs fast. FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), and Edit DetailerPipe (SDXL) are pipe functions used in the Detailer for utilizing the refiner model of SDXL. Technically, both could be SDXL, or both could be SD 1.5. Be patient, as the initial run may take a bit of time, but I don't know what you are doing wrong to wait 90 seconds. I found that many novice users don't like the ComfyUI node frontend, so I decided to convert the original SDXL workflow for ComfyBox. Note that in ComfyUI, txt2img and img2img are the same node. Given the imminent release of SDXL 1.0, update ComfyUI. Alternatively, you can use SD.Next and set the diffusers backend to sequential CPU offloading; it loads only the part of the model it is using while it generates the image, so you end up using around 1-2 GB of VRAM.

The inpainting workflows include SDXL_LoRA_InPAINT, SDXL_With_LoRA, SDXL_Inpaint, and SDXL_Refiner_Inpaint. In ComfyUI, click "Queue Prompt" to run the graph. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment; for my SDXL model comparison test I used the same configuration with the same prompts throughout. SDXL works "fine" with just the base model, taking around 2m30s to create a 1024x1024 image. I will provide workflows for models you find on CivitAI and also for SDXL 0.9/1.0 with the refiner. After an entire weekend reviewing the material, I think (I hope!) I got the implementation right: as the title says, I included the ControlNet XL OpenPose and FaceDefiner models. If you get a 403 error, it's your Firefox settings or an extension that's messing things up. Running the refiner over a LoRA output will destroy the likeness, because the LoRA isn't influencing the latent space anymore. I created this ComfyUI workflow to use the new SDXL refiner with old models: basically it just creates a 512x512 image as usual, then upscales it, then feeds it to the refiner. After the 1.0 release I got curious and followed guides using ComfyUI and SDXL 0.9. You'll need to download both the base and the refiner models: SDXL-base-1.0 and SDXL-refiner-1.0. NOTE: with AnimateDiff-SDXL you will need to use the linear (AnimateDiff-SDXL) beta_schedule. On how to use the refiner: I compared this way (from one of the similar workflows I found) against the img2img type, and in my opinion the quality is very similar; your way is slightly faster, but you can't save the image without the refiner pass (well, of course you can, but it'll be slower and more spaghettified). I use 0.75 before the refiner KSampler.
The base model doesn't use aesthetic-score conditioning: score conditioning tends to break prompt following a bit (the LAION aesthetic score values are not the most accurate, and alternative aesthetic scoring methods have limitations of their own), so the base wasn't trained on it, to enable it to follow prompts as accurately as possible. Per the announcement, SDXL is “built on an innovative new architecture composed of a 3.5B parameter base model and a 6.6B parameter refiner”. GTM ComfyUI workflows cover both SDXL and SD 1.5, and the result can be a hybrid SDXL + SD 1.5 pipeline; the WAS Node Suite is another popular custom-node pack. An automatic mechanism to choose which image to upscale, based on priorities, has been added. Model description: this is a model that can be used to generate and modify images based on text prompts. Finally, remember that SDXL requires SDXL-specific LoRAs; you can't use LoRAs made for SD 1.5 with the 0.9 or 1.0 base and refiner models.
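The aesthetic-score conditioning discussed above is exposed on the refiner side through ComfyUI's CLIPTextEncodeSDXLRefiner node. A sketch of how the positive and negative conditioning typically differ; the 6.0/2.5 values and exact field names are assumptions drawn from commonly shared SDXL workflows, not official defaults:

```python
def refiner_conditioning(text: str, positive: bool = True) -> dict:
    """Build the refiner's extra conditioning inputs.

    The positive prompt targets a high aesthetic score, the negative
    prompt a low one, steering the refiner toward 'good-looking' detail.
    """
    return {
        "text": text,
        "ascore": 6.0 if positive else 2.5,  # assumed aesthetic-score targets
        "width": 1024,
        "height": 1024,
    }

pos = refiner_conditioning("a photo of a cat, sharp focus")
neg = refiner_conditioning("blurry, low quality", positive=False)
```

Because the base model was not trained with this score, only the refiner's text encoder takes an ascore input; the base uses its own size/crop conditioning instead.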