Stable Diffusion face refiner: tips collected from Reddit

The order that works is "Upscaling > Hand Fix > Face Fix". If you upscale last, you partially destroy your fixes again. I've tried changing the samplers, CFG, and the number of steps, but the results still aren't coming out correctly. You can do 768x512 or 512x768 to get specific orientations, but don't stray too far from those three resolutions (512x512, 768x512, 512x768) or you'll start getting very weird results; people tend to come out horribly deformed, for example. Stick close to those sizes with SD1.X-based models, since that's what the dataset is trained on.

I've been having some good success with anime characters, so I wanted to share how I was doing things. Among the detection models for faces I found face_yolov8n, face_yolov8s and face_yolov8n_v2, plus similar models for hands. Depending on the degree of refinement, I use a denoise strength of about 0.3 and roughly 1.3 to 1.7 in the Refiner Upscale to give a little room in the image to add details. This simple thing made me a fan of Stable Diffusion.

I assume you would have generated the preview for maybe every 100 steps. Is anyone else experiencing this? What am I missing to make the refiner extension work? I am trying to find a solution. Honestly, I'm currently trying to fix bad hands using the face refiner, but it seems to be doing something bad. It works OK with ADetailer, since it has an option to run Restore Faces after ADetailer has done its detailing, but many times that does more damage to the face than it repairs. Restore Faces makes the face look caked and washed out in most cases; it's more of a band-aid fix. Well, the faces here are mostly the same, but you're right, that is the way to go if you don't want to mess with ethnicity LoRAs.

People using utilities like Textual Inversion and DreamBooth have been able to solve the consistency problem in narrow use cases, but to the best of my knowledge there isn't yet a reliable solution for making on-model characters without just straight up hand-holding the AI. Keep in mind that Stable Diffusion is a model architecture (or a class of model architectures: there is SD1, SDXL and others), and there are many applications that support it as well as many different finetuned model checkpoints.

To fix a face, inpaint the face, set ONLY MASKED and generate; make sure to select the inpaint area as "Only Masked". This brings back memories of the first time I used Stable Diffusion myself. As a tip: I use this process (excluding the refiner comparison) to get an overview of which sampler is best suited for my prompt, and also to refine the prompt itself. Getting a single sample from a lackluster prompt will almost always give a terrible result, even with a lot of steps.

You can also take your two models and do a weighted sum merge in the Checkpoint Merger tab, creating a checkpoint at a chosen multiplier.
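If you want to do that weighted-sum merge outside the WebUI, it is just a linear interpolation of the two checkpoints' weights. Below is a minimal sketch with torch and safetensors; the file names and the 0.75 multiplier are placeholders, not values recommended by anyone quoted above.

```python
import torch
from safetensors.torch import load_file, save_file

alpha = 0.75  # weight of model B; try a few values and compare results

a = load_file("modelA.safetensors")  # placeholder paths
b = load_file("modelB.safetensors")

merged = {}
for key, ta in a.items():
    tb = b.get(key)
    if tb is not None and tb.shape == ta.shape and ta.is_floating_point():
        # Weighted sum: (1 - alpha) * A + alpha * B, computed in fp32 for stability.
        merged[key] = ((1.0 - alpha) * ta.float() + alpha * tb.float()).to(ta.dtype)
    else:
        merged[key] = ta  # keep A's tensor where the checkpoints don't line up

save_file(merged, "merged_075.safetensors")
```

Merging at a few different multipliers and comparing them side by side (see the X/Y grid tip further down this page) is the usual way to pick a ratio.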
Stable Diffusion 1.5 excels in texture and lighting realism compared to later Stable Diffusion models, although it struggles with hands. If the problem still persists I will do the refiner retraining. Far from perfect, but I got a couple of generations that looked right. The two keys to getting what you want out of Stable Diffusion are to find the right seed and to find the right prompt.

Are there online Stable Diffusion sites that do img2img? (As he said, he did change other things.) A few options, no need to install anything, all online; just craft your prompt:
(Added Oct. 1, 2022) Web app: Stable Diffusion Multi Inpainting (Hugging Face) by multimodalart.
*PICK* (Added Oct. 1, 2022) Web app: StableDiffusion-Img2Img (Hugging Face).
(Added Oct. 1, 2022) Web app: stable-diffusion (Replicate) by cjwbw.

Is there a way to train Stable Diffusion on a particular person's face and then produce images with the trained face? I'm not really a fan of that checkpoint, but a tip for creating a consistent face is to describe it and name the "character" in the prompt.

In summary, it's crucial to make valid comparisons when evaluating SDXL with and without the refiner. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair comparison.

Transformers are the major building block that lets LLMs work, and visual transformers (for images, etc.) have proven their worth over the last year or so, but Stable Diffusion right now doesn't use a transformer backbone; Stable Diffusion 3 will use this new architecture.

A list of helpful things to know: pay attention to what model you are using for the refiner (hint: you don't HAVE to use Stability's refiner model, you can use any model that is in the same family as the base generation model, so for example an SD1.5 model as the "refiner" for an SD1.5 base). I use a 1.5 model for the upscaling pass and it seems to make a decent difference; I think I must be using Stable Diffusion too much. Try the SD.Next fork of the A1111 WebUI, by Vladmandic. And try reducing the number of steps for the refiner.
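For anyone scripting this outside a UI, the base-to-refiner hand-off can be sketched with the diffusers library. This is only one possible wiring (the stock SDXL refiner here, though per the hint above any same-family model can stand in); the model IDs, step count, prompt and the 0.8 hand-off fraction are assumptions, not settings recommended by anyone quoted above.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

prompt = "portrait photo of a woman, detailed face, natural light"

# Base handles the first ~80% of denoising and hands latents to the refiner.
latents = base(prompt=prompt, num_inference_steps=30,
               denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, image=latents,
                num_inference_steps=30, denoising_start=0.8).images[0]
image.save("refined.png")
```

Pushing denoising_start later is the script-level equivalent of "reducing the number of steps for the refiner."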
Same with my LoRA: when the face is facing the camera it turns out well, but when I try a pose like that, the face is ruined. You don't actually need to use the refiner. I was planning to do the same as you have already done 👍. The hand color doesn't look very healthy either; I think the seeding took pixels from the outfit. My workflow and visuals of this behaviour are in the attached image. The issue with the refiner lies in its tendency to occasionally imbue the image with an overly "AI look", achieved by adding an excessive amount of detail.

AP Workflow v5.0 includes the following experimental functions: Free Lunch (v1 and v2), an optimization AI researchers have discovered for Stable Diffusion models that improves the quality of the generated images.

A LoRA trained on 1.5 of my wife's face works much better than the ones I've made with SDXL, so I enabled independent prompting (for highres fix and refiner) and use the 1.5 model in highres fix with the denoise set in the 0.30-ish range, and it fits her face LoRA to the image nicely.

The original prompt was supplied by sersun. Prompt: Ultra realistic photo, (queen elizabeth), young, stunning model, beautiful face. I'm already using all the prompt words I can find to avoid this. For photorealistic NSFW, the gold standard is BigAsp, with Juggernaut v8 as the refiner and ADetailer on the face, lips, eyes, hands and other exposed parts, plus upscaling; preferably with a person LoRA and a photography LoRA on top of BigAsp.

What model are you using and what resolution are you generating at? If you have decent amounts of VRAM, before you go to an img2img-based upscale like Ultimate SD Upscale, you can do a txt2img-based upscale by using ControlNet tile (or ControlNet inpaint) and regenerating your image at a higher resolution; a sketch of the tile approach follows below. Model: Anything v4.5.

I haven't had any of the issues you guys are talking about, but I always use Restore Faces on renders of people and they come out great, even without the refiner step. Wait a minute, this lady is real, she is right here, and her hand is still messed up. I have a built-in tiling upscaler and face restore in my workflow: https://civitai.com/models/119257/gtm-comfyui-workflows-including-sdxl-and-sd15

An example negative prompt: bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), fused fingers, messy drawing, broken legs, censor.
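Here is a rough diffusers sketch of that ControlNet-tile style regeneration at higher resolution, assuming an SD1.5 base; the checkpoint IDs, file names, target size and 0.4 strength are illustrative placeholders rather than anyone's recommended settings.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD1.5 checkpoint works here
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

lowres = Image.open("gen_512.png")                       # placeholder input
hires = lowres.resize((1024, 1024), Image.LANCZOS)       # naive upscale first

# The tile ControlNet keeps the result anchored to the original content
# while img2img regenerates detail at the higher resolution.
image = pipe(prompt="photo, detailed face, sharp focus",
             image=hires, control_image=hires, strength=0.4).images[0]
image.save("gen_1024.png")
```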
This isn't just a picky point; it's to underline that larding prompts with "photorealistic, ultrarealistic" and so on tends to make a generative AI image look less like a photograph. Even the slightest bit of fantasy in there and even photo prompts start pushing a CGI-like finish. True for Midjourney, also true for Stable Diffusion.

Stable Diffusion is a text-to-image generative AI model. Similar to online services like DALL·E, Midjourney, and Bing, users can input text prompts and the model will generate images based on those prompts, but it can be used entirely offline. A1111 and ComfyUI are the two most popular web interfaces for it. Where do you use Stable Diffusion online for free? Not having a powerful PC, I just rely on online services (see the list of web apps earlier on this page).

My process is to get the face first, then the body. Ultimately you want to get to about 20 to 30 images of the face and a mix of body shots; it works perfectly with only face images or half-body images. I have very little experience here, but I trained a face with 12 photos using Textual Inversion and I'm floored with the results. I'll do my second post on the face refinement and then apply that face to a matching body style. Generate the image using the main LoRA (the face will be somewhat similar but weird), then inpaint the face using the face LoRA.

I want to refine an image that has already been generated. For example, I generate an image with a cat standing on a couch; the result is good but not what I wanted, so next I want to tell the AI something like "make the cat more hairy". When I inpaint a face, it gives me slight variations on the same face. What would be great is if I could generate 10 images and have each one inpaint a different face altogether while keeping the pose, perspective, hair and so on the same. Taking a good image with a poor face, cropping into the face at an enlarged resolution of its own, generating a new face with more detail, then using an image editor to layer the new face onto the old photo and running img2img again to combine them is a very common and powerful practice. For faces you can use FaceDetailer; hands work too with it, but I prefer the MeshGraphormer Hand Refiner ControlNet.

The base model is perfectly capable of generating an image on its own; it just doesn't automatically refine the picture, and the refiner isn't hidden in the Hires.fix tab or anything. Same with SDXL: you can use any two SDXL models as the base model and refiner pair. You can do a model merge for sure.
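The crop-the-face, regenerate, and composite practice described above can also be scripted instead of done in an image editor. A minimal sketch with PIL and a diffusers inpainting pipeline; the file name, face box, prompt, checkpoint and 0.4 strength are placeholders (the box would normally come from a detector such as the face_yolov8 models mentioned earlier, and any SD inpainting checkpoint can stand in for the one named here).

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("render.png").convert("RGB")   # placeholder input
box = (380, 120, 560, 300)                        # face bounding box (x1, y1, x2, y2)

# Crop the face, upscale the crop to the model's native resolution,
# regenerate it with partial denoise, then scale it back and paste it over
# the original image -- the "only masked" idea done by hand.
face = image.crop(box).resize((512, 512), Image.LANCZOS)
mask = Image.new("L", (512, 512), 255)            # repaint the whole crop

new_face = pipe(prompt="detailed photo of a woman's face, sharp focus",
                image=face, mask_image=mask, strength=0.4).images[0]

new_face = new_face.resize((box[2] - box[0], box[3] - box[1]), Image.LANCZOS)
image.paste(new_face, box[:2])
image.save("render_facefixed.png")
```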
Should I train the refiner exactly as I trained the base model? I had the same idea of retraining it with the refiner model and then loading the LoRA for the refiner with the refiner-trained LoRA.

HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting. And don't forget the power of img2img. In your case you could just as easily refine with SDXL instead of 1.5. My own chain: after the Refiner is done I feed the image to a 1.5 model for the upscaling, so base -> refiner -> 1.5, with the 1.5 model doing the upscaling. Use around 0.7 in the denoise for best results. The diffusion is a random seeded process and wants to do its own thing; inpainting can fix this.

Just like Juggernaut started with Stable Diffusion 1.5, we're starting small and I'll take you along the entire journey. However, this also means that the beginning might be a bit rough ;) NSFW (nude, for example) is possible, but it's not yet recommended and can be prone to errors.

I'm running SDXL (basically the same as Fooocus minus all the magic) and I'm wondering if I should use a refiner for it, and if so, which one. Downloaded the SDXL 1.0 base, VAE, and refiner models. Simply ran the prompt in txt2img with SDXL 1.0 Base, moved it to img2img, removed the LoRA and changed the checkpoint to SDXL 1.0 Refiner. Dear Stability AI, thank you so much for making the weights auto-approved. One question that keeps coming up: how do you download the SDXL base and refiner models from Hugging Face to Google Colab using an access token? (A small sketch follows at the end of this section.) When I try to inpaint a face using the Pony Diffusion model, the image generates with glitches, as if it wasn't completely denoised. I'm on Automatic1111 1.5 with all extensions updated.

So far, whenever I use my character LoRA and want to apply the refiner, I first mask the face and then have the model inpaint the rest. A friend of mine kept saying "Stable Diffusion looks too complicated", and this simple thing also made that friend a fan of Stable Diffusion.

The problem is I'm using a face from ArtBreeder, and img2img ends up changing the face too much when applying a different style (e.g. impasto, oil painting, swirling brush strokes). I made custom faces in a game, then fed them to ArtBreeder to make them look realistic, then bred them and bred them until they looked unique. Then I fed them to Stable Diffusion and kind of figured out what it sees when it studies a photo to learn a face, and went to Photoshop to take out anything it learned that I didn't like.

Tutorial links that keep getting shared (Automatic1111 Web UI, PC, free): "How To Do Stable Diffusion LoRA Training By Using Web UI On Different Models" and "How To Inject Your Trained Subject (e.g. Your Face) Into Any Custom Stable Diffusion Model By Web UI".
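On the Hugging Face download question flagged above: a minimal sketch using huggingface_hub, where the token string is a placeholder for your own access token and the filenames are the ones published in the official SDXL repositories.

```python
from huggingface_hub import hf_hub_download

token = "hf_xxxxxxxxxxxxxxxx"  # placeholder: your Hugging Face access token

base_path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",
    filename="sd_xl_base_1.0.safetensors",
    token=token,
)
refiner_path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-refiner-1.0",
    filename="sd_xl_refiner_1.0.safetensors",
    token=token,
)

# The files land in the local Hugging Face cache; copy or symlink them into
# your UI's models folder (e.g. stable-diffusion-webui\models\Stable-diffusion).
print(base_path, refiner_path)
```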
Having this problem as well. Inpaint prompt: chubby male, (action hero:1.2), well lit, illustration, beard, colored glasses.

Use at least 512x512, make several generations and choose the best. Do face restoration if needed (GFP-GAN, but it overdoes the correction most of the time, so it is best to use layers in GIMP/Photoshop and blend the result with the original; a one-line version of that blend is sketched at the end of this section). I think some samplers from k-diffusion are also better than others at faces, but that might be a placebo/nocebo effect. Wait, does that mean that Stable Diffusion makes good hands but I don't know what good hands look like? Am I asking too much of Stable Diffusion?

It seems pretty clear: prototype and experiment with Turbo to quickly explore a large number of compositions, then refine with 1.5 to achieve the final look. Put the base and refiner models in stable-diffusion-webui\models\Stable-diffusion.

Example prompt: a regal queen of the stars, wearing a gown engulfed in vibrant flames, emanating both heat and light (the prompt continues in a later comment below).

Bit of a dumb issue, but I was hoping one of you might be able to help me. I have my Stable Diffusion UI set to look for updates whenever I boot it up. It hasn't caused me any problems so far, but after not using it for a while I booted it up and my "Restore Faces" addon isn't there anymore. I'm having to disable the refiner for anything with a human face as a result, but then I lose out on other improvements it makes. Been learning the ropes with Stable Diffusion, and I'm realizing faces are really hard.

For video: I wonder if there is an opportunity to refine the faces and lip syncing in this clip, possibly by splitting the WarpDiffusion clip back into frames, running the frames through your method, then recompiling into video.
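The GIMP/Photoshop layer trick for toning down an over-corrected GFP-GAN face can be approximated in a few lines of PIL; the file names and the 0.5 opacity are placeholders.

```python
from PIL import Image

original = Image.open("render.png").convert("RGB")
restored = Image.open("render_gfpgan.png").convert("RGB")  # GFP-GAN/CodeFormer output

# Blend the over-corrected restoration back toward the original render,
# mimicking lowering the restored layer's opacity in GIMP or Photoshop.
blended = Image.blend(original, restored, alpha=0.5)  # alpha = restoration strength
blended.save("render_blended.png")
```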
Prompt editing can go even further with [start:end:switch]; I was expecting more. Consistent character faces, designs, outfits and the like are very difficult for Stable Diffusion, and those are open problems. Can anybody give me tips on the best way to do it, or what tools can help me refine the end result? I came across the "Refiner extension" in the comments here, described as "the correct way to use refiner with SDXL", but I am getting the exact same image whether I check it on or off, generating the same seed a few times as a test. Restarted, did another pull and update.

A 1.5 model in img2img, like Realistic Vision, can increase details, but it can also destroy faces, remove details and produce a doll-face/plastic-face look. I will try that, as the FaceDetailer nodes never worked for me. After running the face refiner, I think ComfyUI should use the SDXL refiner on the face and hands, but how do you encode an image to feed it in as a latent?

So far, LoRAs only work for me if you run them on the base and not the refiner; the networks seem to have architectures different enough that they would require a LoRA trained just for the refiner. I may be mistaken though, so take this with a grain of salt.

Do not use the Hires.fix section (you can select "none" and 0 steps there); go to the refiner section instead, which will appear alongside your other extensions (ControlNet or whatever else you have installed), and enable it there (sd_xl_refiner_1.0.safetensors) while using SDXL. Turn it off and use Hires.fix for SD1.5-based models. In my experiments, I've discovered that imperfections can be added manually in Photoshop using tools like Liquify and painted texture, and then run through img2img. This speed factor is one reason I've mostly stuck with 1.5.

A useful 1.5 embedding: Bad Prompt (make sure to rename it to "bad_prompt.pt" and place it in the "embeddings" folder). I experimented a lot with the "normal quality", "worst quality" stuff people often use; "normal quality" in the negative certainly won't have the effect people expect. If you're using the Automatic WebUI, try ComfyUI instead. Wait till 1.0, where hopefully it will be more optimized.

The Refiner very neatly follows the prompt and fixes that up. The Refiner also seems to follow positioning and placement prompts, without Region controls, far better. When I prompt "person sitting on a chair" or "riding a horse" or whatever non-portrait, I receive nightmare fuel instead of a face; other details seem to be okay.

It used the source face for the target face I designated (0 or 1), which is what it's supposed to do, but it was also replacing the other face in the target with a random face. To recap what I deleted above: with one face in the source and two in the target, ReActor was changing both faces.
(Queen prompt, continued:) Her golden locks cascade in large waves, adding an element of mesmerizing allure to her appearance; the atmosphere is enveloped in darkness, accentuating the intensity of the flames. Behind her lies a sprawling landscape of ruins, evoking a sense of desolation and mystery. Full body.

You want to stay as close to 512x512 as you can for generation with SD1.X-based models. That said, Stable Diffusion usually struggles with full-body images of people; if you do above-the-hips portraits, it performs just fine. One of the weaknesses of Stable Diffusion is that it does not do faces well from a distance, and small faces look bad, so upscaling does help.

What does the "refiner" do? I noticed a new functionality, "refiner", next to "highres fix"; what does it do and how does it work? From "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis": the refiner is a separate model specialized for denoising the final, low-noise steps, and the authors note that this step is optional but improves sample quality. Go much above 0.6 denoise, or give it too many steps, when using an SD1.5 model as the "refiner", and the result becomes a more fully SD1.5 version of the image, losing most of what the base model gave you.

"Inpaint Stable Diffusion by either drawing a mask or typing what to replace." What most people do is generate an image until it looks great and then proclaim that this was what they intended to do. I like any Stable-Diffusion-related project that's open source, but InvokeAI seems to be disconnected from the community and from how people are actually using SD. It's too bad, because there's an audience for an interface like theirs.

I can make a decent basic workflow with the refiner alone, and one with face detail, but when I try to combine them I can't figure it out. You can just use someone else's 0.9 workflow (search YouTube for "SDXL 0.9 workflow"; the one from Olivio Sarikas' video works just fine) and replace the models with 1.0. I need to regenerate or make a refinement. I started using a workflow like you suggest, based on the Streamlit one from Joe Penna: 40 steps total, the first 35 on the base, with the remaining noise going to the refiner. I can confirm that using ComfyUI with 6GB of VRAM is not a problem for my friend's RTX 3060 laptop; the problem is the RAM usage. 24GB (16+8) of RAM is not enough: base plus refiner can only get to 1024x1024 (edit: upscaling with a KSampler is where it runs out).

To encode the image, you need to use the "VAE Encode (for inpainting)" node, which is under latent -> inpaint. If you're using ComfyUI, you can also right-click on a Load Image node and select "Open in MaskEditor" to draw an inpainting mask. It depends on the program you use, but with Automatic1111, on the inpainting tab, use inpaint with "only masked" selected. This option zooms into the area and creates a really good face as a result, due to the high correlation between the canvas and the dataset.

Within this kind of workflow, you define a combination of components: a "Face Detector" for identifying faces within an image, a "Face Processor" for adjusting the detected faces, and a final step that composites the processed faces back into the image. Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. I do this to create the sources for my MXAI embeddings, and I probably only have to delete about 10% of my generations.

Finally: go to Settings > Stable Diffusion, set "Maximum number of checkpoints loaded at the same time" to 2, and make sure "Only keep one model on device" is UNCHECKED. These settings keep both the refiner and the base model in VRAM, increasing image generation speed drastically.
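To make the "Face Detector" plus only-masked inpainting idea concrete outside ADetailer, here is a rough sketch that uses one of the face_yolov8 detectors mentioned earlier to build an inpainting mask. The weight path, image name and padding are placeholders, and the resulting mask is meant to feed whatever inpaint pass you already use (A1111 "only masked", ComfyUI, or a diffusers pipeline).

```python
from PIL import Image, ImageDraw
from ultralytics import YOLO

# face_yolov8n.pt is one of the ADetailer-style detectors mentioned above;
# the path is a placeholder for wherever you downloaded the weights.
detector = YOLO("face_yolov8n.pt")
image = Image.open("render.png").convert("RGB")

results = detector(image)
mask = Image.new("L", image.size, 0)
draw = ImageDraw.Draw(mask)

pad = 32  # grow each box a little so the inpaint blends into its surroundings
for x1, y1, x2, y2 in results[0].boxes.xyxy.tolist():
    draw.rectangle([x1 - pad, y1 - pad, x2 + pad, y2 + pad], fill=255)

mask.save("face_mask.png")  # white = the face regions to repaint
```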
To compare merged checkpoints (or any checkpoints), test by prompting the image you are looking for, e.g. "Dog with lake in the background", and run an X/Y script over "Checkpoint name" with your checkpoints listed; it should print out a nice grid showing each one's result. I use Automatic1111, so that is the UI I'm familiar with when interacting with Stable Diffusion models.

I mainly use img2img to generate full-body portraits (think Magic: The Gathering cards), and targeting specific areas (inpainting) works great for clothing, details, and even hands if I specify the number of fingers. So the trick here is adding expressions to the prompt (with weighting between them); I also found that it's better to specify the denoise as a 0.X fraction instead of a number of steps (I don't know why, but from several tests it works better). Hi everybody, I have generated this image with the following parameters: horror-themed, eerie, unsettling, dark, spooky, suspenseful, grim, highly detailed.

So I installed Stable Diffusion yesterday and added SD 1.5 and Protogen 2 as models. Everything works fine, I can access SD and generate, but what I generate looks extremely bad, usually a blurry mess. What happens is that SD has problems with faces. Step one, prompt: 80s / early 90s aesthetic anime, closeup of the face of a beautiful woman exploding into magical plants and colors, living plants, Moebius, highly detailed, sharp attention to detail, extremely detailed, dynamic. If you have a very small face or multiple small faces in the image, you can get better results by fixing faces after the upscaler; it takes a few seconds more, but gives much better results (v2.0 faces fix QUALITY), recommended if you have a good GPU. I think the ideal workflow is a bit debatable. Another example prompt: an old lady posing in a bra for a picture, making a fist, bodybuilder, (angry:1.2), low angle, looking at the camera.

It seems that the refiner doesn't work outside the mask; it's clearly visible when the "return with leftover noise" flag is enabled, with everything outside the mask filled with noise and artifacts from the base sampler. I can't figure out how to properly use the refiner in an inpainting workflow. The idea was to get some initial depth/latent image but end with another model. I also have an issue with inpainting and ControlNet: if I inpaint the background or something like the body, the unpainted part, the part I don't want to change much (for example the face), ends up looking way different, and if I add the face to the inpaint mask I lose the face I wanted. I had some mixed results putting the embedding name in parentheses with the 1girl token, and another run with the other celebrity name.

I'm using Roop, but the face turns out very bad (the attached photo is after my face-swap attempt). I'm not sure what I'm doing wrong: in the ControlNet area I can find the hand depth model and use it, and I would also like to use it in ADetailer (as described on the Git page), but I can't find or select the depth model (control_v11f1p_sd15_depth) there. For the AnimateDiff refiner pass: set the Refiner Upscale value and Denoise value; ControlNet SoftEdge is used to preserve the elements and shape (you can also use Lineart); then set up the AnimateDiff refiner.

If you want to make a high-quality LoRA, I would recommend using Kohya and following this video. It will allow you to make them for SDXL and SD1.5, but the parameters will need to be adjusted based on the version of Stable Diffusion you want to use. Using a workflow of txt2img prompt/negative without the TI, and then adding the TI into ADetailer (with the same negative prompt), I get better results.

If you are running Stable Diffusion on your local machine, your images are not going anywhere.
If you're using some web service, then very obviously that web host has access to the pictures you generate and the prompts you enter, and may well be keeping them.

In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the Base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images. From what I've seen, the Refiner only helps under certain conditions anyway.

This was already answered on Discord earlier, but I'll answer here as well so others passing through can know: select "None" in the install process when it asks what backend to install, then configure it once the main interface is open. This is the best technique for getting consistent faces so far! Input image: John Wick 4; output images; input image: The Equalizer 3.

I didn't really try it (long story, was sick, etc.), but I have been able to generate back views for the same character. It's likely that for a 360-degree view, once it's trying to show the other side of the character, you'll need to change the prompt to force the back, with keywords like "lateral view" and "((((back view))))". In my experience this is not super consistent; you have to experiment.

I'm trying to figure out a workflow to use Stable Diffusion for style transfer from a single reference image. Use a denoise of about 0.2 or less on high-quality, high-resolution images. Example prompt: cinematic photo, majestic and regal full-body profile portrait, sexy photo of a beautiful (curvy) woman with short light brown hair in (lolita outfit:1.2), (light ...).

The difference in titles, "SwarmUI is a new UI for Stable Diffusion" versus "Stable Diffusion releases new official UI with amazing features", is HUGE; it's like the difference between a local notice board and a major newspaper publication.

Just made this using EpicPhotoGasm with the negative embedding EpicPhotoGasm-colorfulPhoto-neg and the LoRA more_details, with these settings. Prompt: a man looks close into the camera, detailed, detailed skin, mall in background, photo, epic, artistic, complex background, detailed, realistic, <lora:more_details:1.7>. Negative: EpicPhotoGasm-colorfulPhoto-neg. With experimentation and experience, you'll learn what each thing does.