Using the SDXL Refiner in ComfyUI

SDXL splits generation between a base model and a refiner, and ComfyUI has a node explicitly designed to make working with the refiner easier. These notes cover setting up the SDXL 1.0 base + refiner workflow in ComfyUI, how the results compare with SDXL 0.9 and Stable Diffusion 1.5, and ways to automate and extend the pipeline.
Basic setup for SDXL 1.0. You can find SDXL on both HuggingFace and CivitAI: download the 1.0 base and refiner checkpoints and place them in ComfyUI/models/checkpoints. The base checkpoint recommends a VAE; download it and place it in the VAE folder. A separate SDXL VAE is optional, since a VAE is baked into both the base and refiner models, but keeping it separate in the workflow means it can be updated or changed without needing a new model. Also, do not use the same text encoders as SD 1.5; SDXL has its own.

These notes are part of a series. In part 1 (link), we implemented the simplest SDXL base workflow and generated our first images. Part 2 (link) added the SDXL-specific conditioning implementation and tested the impact of conditioning parameters on the generated images. Part 3 (link) added the refiner for the full SDXL process. In part 4 (this post) we install custom nodes, such as the WAS Node Suite, and build out workflows with img2img, ControlNets, and LoRAs.

In the basic workflow, txt2img is achieved by passing an empty latent image to the sampler node with maximum denoise. The refiner sampler node lets you specify the start and stop step, which makes it possible to use the refiner as intended: the base model handles the early steps, and the refiner finishes the remaining noise. If execution fails because of a missing "sd_xl_refiner_0.9.safetensors" file, that is the refiner checkpoint itself; download it and place it with the other models.

ComfyUI's disadvantage is that it looks much more complicated than its alternatives, but SDXL generations work much better in it than in Automatic1111 because it supports using the base and refiner models together in the initial generation. If you want the power of SDXL in ComfyUI with a friendlier interface, ComfyBox is a UI frontend for ComfyUI that hides the node graph and provides a ready-made SDXL (base + refiner) workflow. You can also install ComfyUI through Pinokio: download the Pinokio browser, then inside the browser click "Discover" to browse to the ComfyUI script.

Two side notes: for AnimateDiff-SDXL you will need the linear (AnimateDiff-SDXL) beta_schedule, and finished images can be sharpened further by running them through an upscaler such as 4x_NMKD-Siax_200k.

The video tutorial referenced throughout includes these chapters: 10:05 comparing the Automatic1111 web UI with ComfyUI for SDXL; 12:53 using SDXL LoRA models with the Automatic1111 web UI; 15:22 SDXL base image vs. refiner-improved image comparison; 16:30 where to find ComfyUI shorts; 17:38 how to use inpainting with SDXL in ComfyUI; 20:43 how to use the SDXL refiner as the base model; 20:57 how to use LoRAs with SDXL; 23:06 how to see which part of the workflow ComfyUI is processing; 24:47 where the ComfyUI support channel is.

SDXL works "fine" with just the base model, taking around 2m30s to create a 1024x1024 image on modest hardware. Generation speed is about 10 s/it at 1024x1024 (batch size 1), and the refiner runs faster, up to 1+ s/it when refining at the same 1024x1024 resolution.
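For orientation, this is what that base-only generation looks like outside ComfyUI, as a minimal diffusers sketch. The model ID is the official Stability AI repo; the prompt, step count, and output file name are placeholders, not values from the workflows above.

```python
# Minimal sketch: SDXL base-only txt2img with the diffusers library.
# Assumes a CUDA GPU with enough VRAM for fp16 SDXL.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# 1024x1024 is the resolution SDXL was trained at; straying far from
# ~1 megapixel hurts composition, as noted later in these notes.
image = pipe(
    prompt="a closeup photograph of a korean k-pop idol",
    num_inference_steps=25,
    width=1024,
    height=1024,
).images[0]
image.save("sdxl_base.png")
```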
The tutorial covers: a one-click auto-installer script for ComfyUI (latest) and the ComfyUI Manager on RunPod; Step 1, downloading the SDXL v1.0 base and refiner models; and installing or updating the required custom nodes. For context on the models themselves: SDXL is a latent diffusion model that uses a pretrained text encoder (OpenCLIP-ViT/G), SDXL-refiner-1.0 is an improved version over SDXL-refiner-0.9, and ComfyUI now supports SSD-1B as well. As for frontends, stable-diffusion-webui is the old favorite, but its development has almost halted and it has only partial SDXL support, so it is not recommended; ComfyUI is recommended by Stability AI and is highly customizable with custom workflows.

After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, a basic (no upscaling) two-stage (base + refiner) workflow works well: dimensions, prompts, and sampler parameters change from image to image, but the flow itself stays as it is. A useful comparison at 1024x1024: a single image with 25 base steps and no refiner, versus 20 base steps + 5 refiner steps. With the refiner, everything is better except the lapels. The refiner pass is light, so it only improves resolution and details a bit and doesn't change the overall composition. (Image metadata is saved either way, including when running Vlad's SD.Next.) One technical detail worth knowing: the refiner is conditioned on an aesthetic score, while the base is not. Aesthetic score conditioning tends to break prompt following a bit (the LAION aesthetic score values are not the most accurate, and alternative aesthetic scoring methods have limitations of their own), so the base wasn't trained on it, to let it follow prompts as accurately as possible. Also, the SDXL refiner obviously doesn't work on SD 1.5 latents, though an SD 1.5 model can itself act as a refiner for SDXL output.

Hardware-wise, an 8 GB card is enough for a ComfyUI workflow that loads both the SDXL base and refiner models, a separate XL VAE, three XL LoRAs, plus a Face Detailer with its SAM model and bbox detector, and Ultimate SD Upscale with its ESRGAN model, all working together. The best laptop balance found so far is 1024x720 images with 10 base + 5 refiner steps and carefully chosen samplers/schedulers, so SDXL is usable without an expensive, bulky desktop GPU. There are other upscalers out there, like 4x UltraSharp, but NMKD works best for this workflow. AP Workflow 3.0 adds an automatic mechanism to choose which image to upscale based on priorities.

For more advanced SDXL node-flow logic in ComfyUI, the topics are: first, style control; second, how to connect the base and refiner models; third, regional prompt control; and fourth, regional control of multi-pass sampling. Node graphs are one-idea-fits-all: as long as the logic is correct, you can wire them however you like. We are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve.

Finally, ComfyUI exposes an HTTP API that accepts prompts in its own JSON format, so generation can be scripted; if an image has been generated at the end of the run, everything is working.
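A minimal sketch of that API usage, completing the truncated `import json` fragment quoted above. The server address is ComfyUI's default; the workflow file name and the KSampler node id ("3", as in the default exported graph) are assumptions that must match your own API-format export.

```python
import json
import random
from urllib import request

# This is the ComfyUI API prompt format: a dict of node-id -> node spec,
# exported from the UI via "Save (API Format)" (enable dev mode options).
with open("workflow_api.json") as f:
    prompt = json.load(f)

# Randomize the KSampler seed so every run produces a new image.
# "3" is the KSampler's node id in the default workflow; adjust to yours.
prompt["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)

data = json.dumps({"prompt": prompt}).encode("utf-8")
request.urlopen(request.Request("http://127.0.0.1:8188/prompt", data=data))
```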
For me, it has been tough, but I see the absolute power of node-based generation (and its efficiency). On an RTX 2060 6 GB VRAM laptop, a 1080x1080 image with 20 base steps and 15 refiner steps takes about 6-8 minutes (using Olivio's first setup, no upscaler); after the first run, a 1080x1080 image including refining completes in about 240 seconds. For an SDXL model comparison test, use the same configuration with the same prompts across models; Fooocus and ComfyUI also used the v1.0 models. Note that SDXL requires SDXL-specific LoRAs, so you can't use LoRAs made for SD 1.5, and it prefers resolutions near one megapixel; for example, 896x1152 or 1536x640 are good resolutions. Two common questions: how can a given style be specified in ComfyUI (for example via a prompt-styler node), and, coming from A1111, how do you use the refiner with img2img?

A few workflow building blocks. A KSampler node designed to handle SDXL provides an enhanced level of control over image details. A "prediffusion" stage creates a very basic image from a simple prompt and sends it on as a source. Images can be loaded in two ways: direct load from disk, or load from a folder (picking the next image as each one is generated). Some people use the SDXL refiner as the base model, or use SDXL for composition and an SD 1.5 model for final work; for instance, running a 10-step DDIM KSampler on the SDXL base, converting to an image, and finishing with an SD 1.5 tiled render. The SDXL CLIP encode nodes matter most if you do the whole process in SDXL, since they make use of SDXL's extra conditioning. Note that hires fix isn't a refiner stage, and for a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option.

To set up the refiner, create a Load Checkpoint node and select the sd_xl_refiner_0.9 checkpoint in it. If you have the SDXL 1.0 base and refiner models downloaded and saved in the right place, you can drag an example image onto the ComfyUI workspace and you will see the SDXL Base + Refiner workflow appear. A Google Colab notebook works on free Colab and auto-downloads SDXL 1.0. If ComfyUI or A1111 can't read an image's metadata, open the image in a text editor to read the generation details. One caution on quality: the refiner improves hands, and may occasionally fix small defects, but it does NOT remake bad hands; the hands in the original image must be in good shape.

As for the refiner with img2img, the truncated diffusers snippet in the original (`import torch; from diffusers import StableDiffusionXLImg2ImgPipeline ...`) points at the answer: the refiner is itself an img2img model.
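Completed as a runnable sketch, that fragment becomes the refiner used as a low-denoise img2img pass over an already generated image. The model ID is the official repo; the strength value and file names are illustrative. The notes below suggest keeping denoise low, since even 0.2 visibly changes a face.

```python
# Reconstruction of the truncated snippet: SDXL refiner as img2img.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

init_image = load_image("sdxl_base.png").convert("RGB")

# Low strength keeps the composition and only refines detail; raise it
# and the refiner starts repainting faces and hands.
image = pipe(
    prompt="a closeup photograph of a korean k-pop idol",
    image=init_image,
    strength=0.25,
).images[0]
image.save("sdxl_refined.png")
```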
You want to use Stable Diffusion and image-generative AI models for free, but you can't pay for online services and don't have a strong computer: the Google Colab route covers that. A custom-nodes extension for ComfyUI includes a workflow to use SDXL 1.0 (note that on the free notebook, outputs will not be saved, so copy them out yourself). A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial VN, where all the art is made with ComfyUI, and the stable-diffusion-xl-0.9-usage repo is a tutorial intended to help beginners use the newly released stable-diffusion-xl-0.9 model. You can load the example images in ComfyUI to get the full workflow used to create them.

Under the hood, img2img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; an EmptyLatentImage node specifies the image size, kept consistent with the CLIP encode nodes. Pairing the SDXL base with a LoRA in ComfyUI clicks and works pretty well, but it's normal for a refiner pass to wash out LoRA-specific detail, so don't use the refiner with a LoRA. Searge-SDXL: EVOLVED v4.x has a refiner stage built in for retouches. Having switched from A1111 to ComfyUI for SDXL, a 1024x1024 base + refiner generation takes around 2 minutes; for comparison, models based on SD 1.5 at 512px manage around 5 seconds on A1111. One driver caveat: NVIDIA drivers after a certain version introduced RAM + VRAM sharing, which creates a massive slowdown once you go above roughly 80% VRAM usage. (This kind of experimenting also surfaces hardware problems; one tester discovered a dead RAM stick, leaving only 16 GB.)

Some practical tips. Use the refiner as a checkpoint in img2img with low denoise; plus, it's more efficient if you don't bother refining images that missed your prompt. A hybrid is also possible: SDXL base plus an SD 1.5 model acting as the refiner. Click "Manager" in ComfyUI, then "Install missing custom nodes", to pull in anything a downloaded workflow needs; if you haven't installed the Manager yet, you can find it on GitHub. Download the refiner_v1.0 file from its official page, and be cautious with .ckpt files, which can execute malicious code when loaded; prefer safetensors from trusted sources, which is why people broadcast warnings about impostor "leaked" files rather than letting others get duped. SDXL 1.0 with both the base and refiner checkpoints works best for realistic generations, and since ComfyUI saves the full graph in each image, it's easy to regenerate with a small tweak or just to check how something was made. Just wait until SDXL-retrained models start arriving: the base model alone isn't feasible for accurately generating specific subjects such as people or animals, so fine-tunes, an SDXL Offset Noise LoRA, and an upscaler round things out. At least 8 GB of VRAM is recommended. The overall workflow should generate images first with the base and then pass them to the refiner for further refinement.

If you want a fully latent upscale instead, there's a custom node that basically acts as Ultimate SD Upscale, and one rule matters: make sure the second sampler after your latent upscale runs above roughly 0.5 denoise.
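In diffusers terms, a latent upscale pass can be sketched like this. It assumes the SDXL img2img pipeline accepts a raw 4-channel latent tensor as its image input (recent diffusers versions treat a 4-channel tensor as latents); the scale factor, prompt, and strength are illustrative.

```python
# Hedged sketch of a fully latent upscale: base latents -> interpolate
# in latent space -> re-sample with denoise (strength) above ~0.5.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Reuse the same weights for the img2img pass instead of loading twice.
img2img = StableDiffusionXLImg2ImgPipeline(**base.components)

prompt = "a lone castle on a hill, dark and stormy night"
latents = base(prompt, output_type="latent").images  # shape [1, 4, 128, 128]

# 1.5x upscale in latent space, as a Latent Upscale node would do.
latents = F.interpolate(latents, scale_factor=1.5, mode="nearest-exact")

# Below ~0.5 strength the interpolation artifacts survive into the image.
image = img2img(prompt, image=latents, strength=0.55).images[0]
image.save("latent_upscale.png")
```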
Stepping back to the architecture: SDXL consists of a two-step pipeline for latent diffusion. First, a base model generates latents of the desired output size; then the refiner takes over and makes the existing image better, in some setups with roughly 35% of the generation's noise left for it to finish. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. ComfyUI got attention recently because the developer works for Stability AI and was able to be the first to get SDXL running, and it renders 1024x1024 in SDXL at faster speeds than A1111 does with hires fix 2x (for SD 1.5). On low VRAM (around 5 GB, with refiner swapping), use the --medvram-sdxl flag when starting A1111.

In practice, the checkpoint files are placed in the folder ComfyUI/models/checkpoints, as requested. For ControlNet, install or update the ControlNet nodes, download the SDXL canny model, name the file canny-sdxl-1.0.safetensors, and move it to the ComfyUI/models/controlnet folder. To load a shared workflow, save the image and drop it into ComfyUI, or use the "Load" button on the menu; all the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create them. A simple preset uses the SDXL base with the SDXL refiner model and the correct SDXL text encoders, and ComfyUI allows setting up the entire workflow in one go, saving a lot of configuration time compared to running base and refiner as separate passes. For those not familiar with ComfyUI, a typical shared workflow is: generate a text2image ("Picture of a futuristic Shiba Inu", with a negative prompt beginning "text, ..."), then refine. An example prompt in the same spirit: "A dark and stormy night, a lone castle on a hill, and a mysterious figure lurking in the shadows."

There are examples demonstrating how to do img2img as well: in Comfy, from the img2img workflow, duplicate the Load Image and Upscale Image nodes, and you can use the Impact Pack nodes in the same graph. One workflow even uses the new SDXL refiner with old models: it creates a 512x512 image as usual, then upscales it, then feeds it to the refiner. Opinions differ here; some say please do not use the refiner as an img2img pass on top of a finished base image, since the refiner's intended role is to consume leftover noise. Changelog notes for the Comfyroll nodes: CR Aspect Ratio SDXL is replaced by CR SDXL Aspect Ratio, CR SDXL Prompt Mixer is replaced by CR SDXL Prompt Mix Presets, and a multi-ControlNet methodology has been added; there is also SDXL Prompt Styler Advanced, a new node for more elaborate workflows with linguistic and supportive terms. The refiner files are on HuggingFace under stabilityai/stable-diffusion-xl-refiner-1.0.

AP Workflow 6.0 automates the split of the diffusion steps between the base and the refiner.
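That split is exactly what the diffusers "ensemble of experts" pattern expresses, sketched below along the lines of the official usage. The 80/20 handoff fraction matches the 20 + 5 step split quoted earlier; the prompt is a placeholder.

```python
# Base handles the first 80% of denoising, then hands its *latents*
# (not a decoded image) to the refiner, which finishes the leftover noise.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share weights to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "A dark and stormy night, a lone castle on a hill"
n_steps, high_noise_frac = 25, 0.8  # i.e. 20 base steps + 5 refiner steps

latents = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images
image = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=latents,
).images[0]
image.save("sdxl_base_plus_refiner.png")
```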
Updates keep widening what this covers: the LCM update brings SDXL and SSD-1B into the game, and the examples repo shows what is achievable with ComfyUI. The preference chart it references evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5. Prompts carry over unchanged from other UIs; a typical heavy one: "photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, medieval armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail". Other than that, the same rules of thumb apply to AnimateDiff-SDXL as to AnimateDiff.

To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders: an SDXL base model in the upper Load Checkpoint node and an SDXL refiner model in the lower one. The 0.9 workflows require sd_xl_base_0.9.safetensors and sd_xl_refiner_0.9.safetensors. More elaborate graphs combine ComfyUI with SDXL (base + refiner) plus ControlNet XL OpenPose and a FaceDefiner (2x); ComfyUI is hard, but you will need only it and some custom nodes (from here and here). Version 4.1 adds support for fine-tuned SDXL models that don't require the refiner, along with options to make the refiner/upscaler passes optional, a BNK_CLIPTextEncodeSDXLAdvanced node, wildcard support (for instance, a wildcard file whose entries are substituted into the prompt), and an "install models" button in the Manager. Drag and drop a workflow .json file or image to load it; the result of the hybrid graphs is a combined SDXL + SD 1.5 pipeline. On Colab, you can also run ComfyUI with the iframe option (use it only in case the previous way with localtunnel doesn't work); you should see the UI appear in an iframe.

A few cautions from testing. If generations come out wrong, the issue might be the CLIPTextEncode node: using the normal SD 1.5 encoder instead of the SDXL one. The SDXL refiner (a new feature of SDXL) and the SDXL VAE are separate downloads; if the VAE misbehaves, re-download the latest version and put it in your models/vae folder. In one complete test, the refiner was deliberately not used as img2img inside ComfyUI at all: the refiner is really only good at refining the noise still left over from the base generation, and it will give you a blurry result if you try to use it another way. The test image was then upscaled to 10240x6144 px to examine the results. To run: click Queue Prompt to start the workflow, wait 4-6 minutes on the first run while both checkpoints (SDXL 1.0 base and refiner) load, and experiment with various prompts to see how SDXL 1.0 responds.

In ComfyUI, the base-to-refiner handoff can be accomplished with the output of one KSampler node (using the SDXL base) leading directly into the input of another KSampler node (using the SDXL refiner).
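In ComfyUI's API format, that chain looks roughly like the sketch below. It is a trimmed illustration, not a loadable graph: node names such as base_ckpt stand in for the checkpoint loaders and text encoders that live elsewhere in the graph, and the 20/5 split is a sample value (the seed is the one quoted later in these notes).

```python
# Two KSamplerAdvanced nodes wired base -> refiner, in API (dict) form.
two_sampler_graph = {
    "base_sampler": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["base_ckpt", 0],
            "positive": ["base_pos", 0],
            "negative": ["base_neg", 0],
            "latent_image": ["empty_latent", 0],
            "add_noise": "enable",
            "noise_seed": 640271075062843,
            "steps": 25, "cfg": 8.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 0, "end_at_step": 20,
            # Crucial: stop early and pass the still-noisy latent onward.
            "return_with_leftover_noise": "enable",
        },
    },
    "refiner_sampler": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["refiner_ckpt", 0],
            "positive": ["refiner_pos", 0],
            "negative": ["refiner_neg", 0],
            # The base sampler's latent output feeds the refiner directly.
            "latent_image": ["base_sampler", 0],
            "add_noise": "disable",  # the leftover noise is already there
            "noise_seed": 0,
            "steps": 25, "cfg": 8.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 20, "end_at_step": 10000,
            "return_with_leftover_noise": "disable",
        },
    },
}
```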
On sharing configurations: the Searge-SDXL: EVOLVED v4.x workflow for ComfyUI just got a new version uploaded (with a table of contents and Pastebin mirrors on its page), and a hub is dedicated to the development and upkeep of the Sytan SDXL workflow for ComfyUI. That workflow is provided as a .json file, and the sample images here used it with a few changes. You can also observe the workflow you can download from comfyanonymous and implement it by simply dragging the image into your ComfyUI window; restart ComfyUI after installing anything new. ComfyUI's own description fits: a powerful and modular Stable Diffusion GUI with a graph/nodes interface, full support for SD 1.x, SD 2.x, and SDXL, and an asynchronous queue system. One interesting thing about ComfyUI is that it shows exactly what is happening; misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment. There are settings and scenarios that take masses of manual clicking in other UIs but are a one-time graph edit here, and a migration helper takes an SD 1.5 Comfy JSON and imports it (sd_1-5_to_sdxl_1-0.json). The Google Colab has been updated for ComfyUI and SDXL 1.0 as well, and local installs can go through Pinokio as described earlier. All the test images here were created using ComfyUI + SDXL 0.9 with both the base and refiner models together, to see the difference with the refiner pipeline added; there is also a Gradio web UI demo for SDXL 1.0.

Performance comparisons are stark. On a GTX 1660S, A1111 took 55 seconds for a 512x512, while SDXL + refiner took nearly 7 minutes for one picture; Fooocus managed a "quick" 30-step generation in 42+ seconds. In Automatic1111 you'll need to activate the SDXL Refiner extension to get the handoff behavior (and to run base-only, do the opposite: disable the refiner model nodes and enable only the base model nodes). SD.Next supports SDXL too; it's a cool opportunity to learn a different UI anyway.

Odds and ends: a VAE selector needs a VAE file (download the SDXL BF16 VAE, plus a separate VAE file for SD 1.5); connecting a LoRA stacker to a workflow that includes a normal SDXL checkpoint plus a refiner is a common question; a switchable face detailer detects hands and faces and improves what is already there (though some worry this leads to models designed mainly around looking good at displaying faces); and it helps to decide how to organize folders before they fill up with SDXL LoRAs, since thumbnails and metadata aren't shown in the UI. Models increasingly include additional metadata that makes it easy to tell which version they are, whether they're a LoRA, which keywords to use, and whether the LoRA is compatible with SDXL 1.0.

Finally, the truncated Colab snippet in the original (`source_folder_path = '/content/ComfyUI/output' ...`) was meant to copy generated images out of the runtime into Google Drive.
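A completed version of that snippet, assuming a standard Colab runtime; the destination folder name is illustrative.

```python
# Copy ComfyUI outputs from the Colab runtime into Google Drive.
import os
import shutil

from google.colab import drive

drive.mount("/content/drive")

output_folder_name = "comfyui_outputs"
source_folder_path = "/content/ComfyUI/output"  # ComfyUI's default output dir
destination_folder_path = f"/content/drive/MyDrive/{output_folder_name}"

# Create the destination folder in Google Drive if it doesn't exist.
os.makedirs(destination_folder_path, exist_ok=True)

for name in os.listdir(source_folder_path):
    src = os.path.join(source_folder_path, name)
    if os.path.isfile(src):
        shutil.copy2(src, os.path.join(destination_folder_path, name))
```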
Experiences vary with versions and hardware. The difference between the early SD 1.5 base model and the latest checkpoints is night and day; SD 1.5 renders are faster, and SD 1.5 works with 4 GB even on A1111, but the quality achievable on SDXL 1.0 makes the switch worth trying. With the 0.9 base + refiner, one system would freeze and render times stretched to 5 minutes for a single render, while another report (commit dated 2023-08-11) describes very poor SDXL performance in ComfyUI to the point of it being basically unusable; misconfigured nodes are the usual culprit. Even so, after testing for several days it's easy to justify switching, at least temporarily, to ComfyUI, and a lot of people simply prefer Comfy. The advice from before release held up: have a go and try it out with ComfyUI; it was unsupported, but it was the first UI that worked with SDXL when the model fully dropped. Links and instructions in the GitHub README files have been updated accordingly.

Workflow-specific notes: one workflow is meticulously fine-tuned to accommodate LoRA and ControlNet inputs and demonstrates interactions with embeddings as well. Set its refiner switch to 0 and it will only use the base; right now the refiner still needs to be connected, but it will be ignored. There is also a selector to change the split behavior of the negative prompt. The Impact Pack doesn't seem to have some of the referenced nodes in subpack_nodes, so activate your environment and reinstall it if imports fail. SDXL-retrained community models are arriving, with new workflows and download links updated regularly.

Finally, the settings. The denoise value controls the amount of noise added to the image: generate an image as you normally would with the SDXL v1.0 base, then refine, keeping in mind that even at a 0.2 noise value the refiner changed a face quite a bit. For upscales, keeping denoise around 0.5 should stop the result being distorted, and you can also switch the upscale method to bilinear, as that may work a bit better. Remember that the base and refiner are two different models, and the idea is to use each model at the resolution it was trained on: as soon as you go outside the roughly one-megapixel range, the model is unable to understand the composition. To make full use of SDXL, you'll need to load in both models, run the base model starting from an empty latent image, and then run the refiner on the base model's output to improve detail. On step counts, 20 base steps should surprise no one, and for the refiner you should use at most half the number of steps used to generate the picture, so 10 would be the maximum here. Little has been said publicly about how the refiner was trained, but it works, as the name suggests, as a method of refining your images for better quality.
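To make those step-count rules concrete, a toy helper; the function name and defaults are invented for illustration, not from any library.

```python
# Split a step budget between base and refiner: the refiner finishes the
# last fraction of the schedule and gets at most half the total steps.
def split_steps(total_steps: int, refiner_frac: float = 0.2) -> tuple[int, int]:
    """Return (base_steps, refiner_steps) for a given handoff fraction."""
    refiner_steps = round(total_steps * refiner_frac)
    refiner_steps = min(refiner_steps, total_steps // 2)  # "half at most" rule
    return total_steps - refiner_steps, refiner_steps

print(split_steps(25))         # (20, 5) - the 20 + 5 split used earlier
print(split_steps(30, 0.35))   # (20, 10) - roughly 35% of the noise left
```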