ComfyUI LoRA strength (Reddit)
A strength of 0.75 is used for a new txt2img generation of the same prompt at a standard 512 x 640 pixel size, using a CFG of 5 and 25 steps with the uni_pc_bh2 sampler, but this time adding the character LoRA for the woman featured (which I trained myself), and here I switch to Wyvern v8.

In ComfyUI you use LoraLoader to load a LoRA, and it contains the strength parameters as well: the strength of the LoRA applied to the CLIP model versus the main MODEL. The reason you can tune both in ComfyUI is that the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately is useful. As with lots of things in ComfyUI, there are multiple ways to do this.

Using only the trigger word in the prompt, you cannot control the LoRA. To my knowledge, Combine and Average work almost the same, but Combine merges the weights based on the prompts, while Average interpolates between the two conditionings.

StabilityAI just released new ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one. To use them in ComfyUI, load them like you would any other LoRA and change the strength to somewhere between 0.5 and 0.75.

I don't know if it is done like this, but what I would do is generate a few images, let's say six, with the same prompt and LoRA intensity for each methodology, and ask five random people to give scores to each group of six. Simply adding detail to existing crude structures is the easiest, and I mostly use only the LoRA.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a LoRA stacker node. It would clutter the workflow less.

People have been extremely spoiled and think the internet is here to give away free shit for them to barf on, instead of seeing it as a collaboration between human minds from different economic and cultural spheres binding together to create a global culture that elevates people.

Start with a full 1.0 LoRA strength and adjust down if you need. Specifically, changing the motion_scale or lora_strength values during the video can make the video move in time with the music. So my thought is that you set the batch count to 3, for example, and then use a node that changes the weight for the LoRA on each batch. Or just skip the LoRA-download Python code and upload the LoRA manually to the loras folder.

It worked normally with the regular 1.5 version of Stable Diffusion; however, when I tried using it with other models, not all of them worked.

The negative prompt has a LoRA loader too. So just add five or six loaders, however many LoRAs you'll ever use at once, then turn them on and off as needed. And a few LoRAs require a positive weight in the negative text encode.

If I train the LoRA with a 1.5 model, I can then use it with many different other checkpoints within the WebUI to create many different styles of the face. I cannot find settings that work well for SDXL with the LCM LoRA.

Even though it's a slight annoyance having to wire them up, especially more than one, that does come with some UI validation and cleaner prompts. Also, I heard at some point that the prompt weights are calculated differently in ComfyUI, so it may be that the non-LoRA parts of the prompt are applied more strongly in Comfy than in A1111.
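To make the two sliders concrete, here is a minimal sketch of how a LoRA patch typically lands on a weight matrix. This is illustrative, not ComfyUI's actual implementation; the `down`/`up`/`alpha` names and the `alpha / rank` scaling follow common kohya-style file conventions and are assumptions. strength_model scales patches on UNet matrices, strength_clip scales patches on text-encoder matrices; the math is identical, only the target weights differ.

```python
# Minimal sketch of applying a LoRA patch: W' = W + s * (alpha/rank) * (up @ down)
import numpy as np

def apply_lora(weight: np.ndarray, down: np.ndarray, up: np.ndarray,
               alpha: float, strength: float) -> np.ndarray:
    """Return the weight matrix with a scaled low-rank delta added."""
    rank = down.shape[0]
    return weight + strength * (alpha / rank) * (up @ down)

# strength_model patches UNet weights, strength_clip patches CLIP weights;
# the sizes below are arbitrary stand-ins.
unet_w = np.random.randn(320, 320).astype(np.float32)
clip_w = np.random.randn(768, 768).astype(np.float32)
u_down, u_up = np.random.randn(16, 320), np.random.randn(320, 16)
c_down, c_up = np.random.randn(16, 768), np.random.randn(768, 16)

patched_unet = apply_lora(unet_w, u_down, u_up, alpha=16, strength=0.8)  # strength_model
patched_clip = apply_lora(clip_w, c_down, c_up, alpha=16, strength=0.5)  # strength_clip
```

This is also why the two strengths can sensibly differ: the two sets of matrices were trained on different parts of the pipeline.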
The output from the latter is a model with all the LoRAs included, which can then route into your KSampler.

I'm quite new to ComfyUI. Download this extension: stable-diffusion-webui-composable-lora. A quick step by step for installing extensions: click the Extensions tab within the Automatic1111 web app > click the Available sub-tab > Load from > search "composable LORA" > Install > then restart the web app and reload the UI. Until then, I've lit a candle to the gods of Copy & Paste and created the LoRA-vs-LoRA plot in a workflow.

Save some of the information (for example, the name of each LoRA with its associated activation word) into a text file, which I can search easily.

If you set a ControlNet strength to 0.000, it is disabled and will be bypassed.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion (based on 御月望未's tutorial), overview below.

ComfyUI only allows stacking LoRA nodes, as far as I know. In A1111, my SDXL LoRA is perfect at :1.0.

What does the LoRA strength_clip function do? If the clip is the text or trigger word, isn't it the same as putting (loratriggerword:1.2) or something? I'm new to ComfyUI and to Stable Diffusion in general. Is there a node that lets me decide the strength schedule for a LoRA? Or can I simply turn a LoRA off by putting it in the negative prompt? I have a node called "Lora Scheduler" that lets you adjust weights throughout the steps, but unfortunately I'm not sure which node pack it's in.

Just inpaint her face with the LoRA plus a standard prompt.

There are official examples at https://comfyanonymous.github.io/ComfyUI_examples/lora/. For a slightly better UX, try a node called CR Load LoRA from Comfyroll Custom Nodes.

Final version of the prompt: "<lora:skatirFace:0.7>, <lora:transformerstyle:0.7>, scared, looking down, panic, screaming, a portrait of a ginger teen, blue eyes, short bob cut, ginger, black winter dress, fantasy art, 4K resolution, unreal engine, high resolution wallpaper, sharp focus".

It's as if anything this LoRA is included in gets corrupted, regardless of strength?

Then split out the images into separate PNGs and use them to create a LoRA in Kohya_SS (optionally upscaling each image first with a low denoise strength for extra detail). Once the LoRA was trained on the first 10 images, I went back into Stable Diffusion and created 24 new images using the LoRA, at various angles and higher resolution.

Hello, I am new to Stable Diffusion and I tried fine-tuning using LoRA. Checkpoints --> Lora. Base model "Model" and "Clip" outputs go to the respective "Model" and "Clip" inputs of the first Load Lora node. The only way I've found to not use a LoRA, other than disconnecting the nodes each time, is to set the model strength to 0.

From a user perspective, the delta (which I'm calling a ConDelta, for Conditioning Delta, or Concept Delta if you prefer) can be used the same way a LoRA can, by loading it with a node and setting a strength (positive or negative).
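Wired up, that chain looks like this in API format (the JSON you get from "Save (API Format)"). The node class names and input keys follow ComfyUI's built-in LoraLoader and CheckpointLoaderSimple; the file names and node ids here are placeholders.

```python
# Illustrative ComfyUI API-format graph chaining two LoraLoader nodes.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_base.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "style_a.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.6,
                     "model": ["1", 0], "clip": ["1", 1]}},
    "3": {"class_type": "LoraLoader",   # second LoRA chained off the first
          "inputs": {"lora_name": "character_b.safetensors",
                     "strength_model": 0.5, "strength_clip": 0.5,
                     "model": ["2", 0], "clip": ["2", 1]}},
    # node "3"'s MODEL output (index 0) then feeds the KSampler's model
    # input, and its CLIP output (index 1) feeds the CLIPTextEncode nodes.
}
```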
I've trained a LoRA with two different photo sets/modes, with different (uniquely trained) trigger words to distinguish them, but I was using A1111 (or Vlad) at the time and have never tried it in ComfyUI yet.

When you have a LoRA that accepts float strength values between -1 and 1, how can you randomize this for every generation? There is the randomized primitive INT, and there are math nodes that convert integers to floats. I have tried sending the float output values from scheduler nodes into the input values for motion_scale or lora_strength, but I get errors when I run the workflow.

You often need to reduce the CFG to let the system make the image "nice", at the cost of potentially losing the LoRA "model" side of things. When you mix LoRAs this can get compounded, though it depends on the type of LoRAs.

Assuming both LoRAs have trigger words, the easiest thing to try is to use the BREAK keyword to separate the character descriptions, with each sub-prompt containing a different trigger word (it doesn't matter where in the prompt the LoRAs are called, though).

So if you have different LoRAs applied to the base model, each pipeline will have a different model configuration. A KSampler takes only one model. If we've got LoRA loader nodes with actual sliders to set the strength value, I've not come across them yet.

Where do I want to change the number to make it stronger or weaker? In the Loader, or in the prompt? Both? Thanks.

I tested all of them, and they are now accompanied by a ComfyUI workflow that will get you started in no time.

For the LCM LoRA, I feel like it works better if I put it in the prompt with <lora:name-of-LCM-lora-file:0.7>, which would use the LCM at 70% strength.

All I see anymore is "(masterpiece) Blonde, 1girl Brunette, <lora:Redhead:0.8> Red head". I don't even see the prompts; I'm starting to dream in prompts.

So on X type, select Prompt S/R; on X values, type the name of your 1st lora, 2nd lora, 3rd lora, and so on. Try Model + LoRA 100%, Model + LoRA 75%, Model + LoRA 50%, and then tweak as necessary. PS: this also works for ControlNet with the ConditioningAverage node; a high-strength ControlNet in low resolution will sometimes look jagged in higher-res output, so lowering the effect in the hires-fix steps can mitigate the issue.

Before clicking Queue Prompt, be sure that the LoRA in the LoRA Stack is switched ON and you have selected your desired LoRA.

Does <lora:foobar:0.5> generate the same image as <lora:foobar:1>? I want to test some basic LoRA weight comparisons, like the XYZ plot in the WebUI. BTW, SDXL LoRAs do not work in non-SDXL models, and the opposite also happens.
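If the node math gets awkward, one workaround for per-generation randomization is to do it outside the graph. A sketch, assuming a local ComfyUI server on the default port 8188 and that node id "2" is the LoraLoader in your exported workflow_api.json (both assumptions):

```python
# Randomize LoRA strength per queued generation via ComfyUI's /prompt endpoint.
import json, random, urllib.request

with open("workflow_api.json") as f:
    wf = json.load(f)

s = random.uniform(-1.0, 1.0)                 # float strength in [-1, 1]
wf["2"]["inputs"]["strength_model"] = s
wf["2"]["inputs"]["strength_clip"] = s

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": wf}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(f"queueing with strength {s:+.2f}")
urllib.request.urlopen(req)
```

Run it in a loop and every queued prompt gets a fresh strength, which also covers the XYZ-plot-style comparison if you substitute a fixed list of values for the random draw.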
But I've seen it enhance features with some LoRAs.

I attached two images, only inpainting and using the same LoRA: the white-haired one is when I used A1111, the other is ComfyUI (Searge). However, the image generated with Forge is quite different from the original A1111 webUI. After some investigation, I found that Forge seems to ignore the LoRA strength. Do you experience the same? Has the syntax for LoRA strength changed in Forge?

Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. Choose a weight between 0.5 and 0.75.

Here's mine: I use a couple of custom nodes, a LoRA Stacker (from the Efficiency Nodes set) feeding into the CR Apply LoRA Stack node (from the Comfyroll set).

Some LoRAs may work from -1.0 to 1.0, and some may support values outside that range. I use it in the 2nd step of my workflow, where I create the realistic image with the ControlNet inputs.

In your case, I think it would be better to use ControlNet and a face LoRA. For a 1.5 model you need a 1.5 LoRA, and for an SDXL model you need an SDXL LoRA.

Rescale the LoRA strength, then test the LoRA again and consider that it might need a higher strength now.

If you have a set model + LoRA stack you want to save and reuse, you can use the Save Checkpoint node at the output of the model + LoRA stack merge to reuse it as a base model in the future.

I've developed a ComfyUI extension that offers a wide range of LoRA merge techniques (including DARE). The extension also provides XY plot components to better evaluate merge settings. I tried a few combinations but, you know, RAM is scarce while testing.

Is this workflow at all possible in ComfyUI? I want to automate the adjustment of the LoRA weight; I would like to generate multiple images for every 0.2 change in weight, so I can compare them and choose the best one. I use Efficiency Nodes for ComfyUI Version 2.0+ for stacked LoRAs; a change in the weight of a LoRA can make a huge difference in the image, but with stacked LoRAs it becomes a time-consuming and tiring process.

This may need to be adjusted on a drawing-to-drawing basis. At the latest, the golden CFG must be used in the second step. Never set Shuffle or Normal BAE too high, or it acts like inpainting. Meanwhile, a single wildcard prompt can range from 0 LoRAs to 10.

I stopped linking my models here for that very reason.

This prompt was 'woman, blonde hair, leather jacket, blue jeans, white t-shirt'.
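One way to make that rescale permanent is to bake a factor into the file so that strength 1.0 behaves like the old 0.8. A hedged sketch: it assumes kohya-style tensor keys containing "lora_up" (inspect your file's keys first), and scaling only the up matrices scales the whole delta linearly.

```python
# Bake a strength rescale into a LoRA file (kohya-style key naming assumed).
from safetensors.torch import load_file, save_file

factor = 0.8
tensors = load_file("my_lora.safetensors")
rescaled = {
    k: (v * factor if "lora_up" in k else v)  # scale only the up matrices
    for k, v in tensors.items()
}
save_file(rescaled, "my_lora_rescaled.safetensors")
```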
Not to mention ComfyUI just straight up crashes when there are too many options included. The LoRA modifies the base model. To facilitate the listing, you could start to type "<lora:" and have a list of LoRAs appear to choose from, filtering the list further the more of the name you type.

I tried decreasing the LoRA strength, removing negative prompts, decreasing and increasing steps, and messing with clip skip. None of it worked, and the outcome is always full of digital artifacts and completely unusable. So far the only LoRAs I had used were either in A1111 or the LCM LoRA; now I made my own, but it doesn't seem to work. Some prompts which work great without the LoRA produce terrible results. When I use this LoRA, it always messes up my image.

Most of the time when you inpaint or use ADetailer, you will want to reduce the CFG and LoRA weight, and sometimes the prompt weight, because they will overcook the image at lower values than in txt2img. I usually txt2img at CFG 5-7 and inpaint around 3-5. There you have it! I hope this helps.

The intended way to use SDXL is that you use the Base model to make a "draft" image and then you use the Refiner to make it better.

CLIP strength: most LoRAs don't contain any text-token training (classification labels for image concepts in the LoRA data set). A high clip strength makes your prompt activate the features in the training data that were captioned, and also the trigger word.

Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL). I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter.

It does work if connected with the LCM LoRA, but the images are too sharp where they shouldn't be (burnt) and not sharp enough where they should be. It then applies ControlNet (1.1) using a Lineart model at strength 0.75 and an end percent of 0.25. Also, the IPAdapter strength sweet spot seems to be between 0.4 and 0.8.

I tried IPAdapter, but if I set the strength too high, it tries to be too close to the original image. For example, I have a portrait of someone and want to put them into different scenes, like playing basketball or driving a car. Option a) txt2img with low denoising strength plus ControlNet tile resample; option b) img2img inpaint plus ControlNet tile resample (if you want to maintain all the text). And wait for ControlNet Reference 👀. You can do img2img at 1.0 denoising strength and get amazing results.

A little late to this post, but I have the solution for Automatic1111 users. LoRA weights I typically divide in half and tweak from that starting point.

LoRA usage is confusing in ComfyUI. Not sure how to configure the LoRA strengths. When you use a LoRA stacker, the LoRA weight and clip weight are the same; when you load a LoRA in the LoRA loader, you can use two different values. Used the same as other LoRA loaders (chaining a bunch of nodes), but unlike the others it has an on/off switch. I have yet to see any switches allowing more than two options, which is the major limitation here.

Even just SDXL with celebs doesn't seem to work that well, but then I don't generate boring portrait photos; it's all more "involved" and complex, and celeb LoRAs often ruin the whole result. I'll have an otherwise perfect image: right position, right composition, details, all of it, then I add the celeb LoRA. I had some success using things like position/concept LoRAs from SDXL in Pony, but celebs? Characters? Nope.

Adding the LoRA stack node in ComfyUI.
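The "<lora:" autocomplete idea is easy to prototype outside the UI. A toy sketch; the models/loras path and the .safetensors-only filter are assumptions:

```python
# Filter LoRA file names by a typed "<lora:" fragment.
from pathlib import Path

def lora_completions(typed: str, lora_dir: str = "models/loras") -> list[str]:
    """Return LoRA file stems that start with the typed fragment."""
    typed = typed.removeprefix("<lora:").lower()
    return sorted(
        p.stem for p in Path(lora_dir).glob("*.safetensors")
        if p.stem.lower().startswith(typed)
    )

print(lora_completions("<lora:det"))  # e.g. ['detail_tweaker', ...]
```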
The process was: create a 4000 x 4000 grid with pose positions (from OpenPose or Mixamo, etc.), then use img2img in ComfyUI with your prompt, e.g. <lora:LORANAMEHERE:0.9>, where the number is the strength.

The classipeint LoRA actually does a really great job of overlapping with that style if you throw in artist names and aesthetic terms with a slightly lower LoRA strength. Appreciated, thanks! And certainly, though I need to put more thought into that for SDXL and how I'll differentiate it from what's already out there and from ClassipeintXL.

And also, value the generations with the same LoRA strength from 1 to 5 according to how well the concept is represented.

For AnimateDiff with LCM I used: LoRA strength_model 1.0 (I should probably have put the clip_strength to 0, but I did not), sampler Euler, scheduler Normal, 16 steps. My favorite recipe was with the Restart KSampler, though, at 64 steps, but it had its own limitations (no SGM_Uniform scheduler for AnimateDiff). As usual, AnimateDiff has trouble keeping consistency, so I tried making my first LoRA. You can also decrease the length by reducing the batch size (number of frames), regardless of what the prompt schedule says (useful for quick tests).

If I have a LoRA at 0.3 weight and a trigger word, it doesn't mean that the trigger word is only applied to the LoRA. This is because the model's patch for the LoRA is applied regardless of the presence of the trigger word. Lowering the strength of the trigger word doesn't fix this problem. To prevent the application of a LoRA that is not used in the prompt, you need to directly connect the model that does not have the LoRA applied.

What is your LoRA strength in Comfy SDXL? My LoRA doesn't appear in the images at 1.0, but it kind of works at 2.0.

Right-click on your LoRA loader node, then Convert Widget to Input > lora_name; add a primitive node and plug it into the lora_name input; then under "control after generate" choose randomize. It will pick a random LoRA each time you queue a prompt to generate. (I don't need the plot, just individual images, so I can compare them myself.)

I recommend the DPM samplers, but use your favorite. For the ControlNet, I use t2i-adapter_xl_sketch, initially set to a reduced strength. Once it's working, then start fiddling with the resolution.

In the Lora Loader I set the strength to "1", essentially turning it "on"; in the prompt I'm calling the LoRA with <lora:whatever:1.0>, which calls it for that particular image with its standard strength applied.

I've made a few LoRAs now of a person (ballpark about 70 photos each).

Oh, another LoRA tip to throw on the bonfire: since everything is a mix of a mix of a mix, watch out for LoRA "overfitting" that makes your images look like deep-fried memes.

Hi everyone, I am looking for a way to train LoRAs using ComfyUI. Previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI. Any advice or resource regarding the topic would be greatly appreciated! I'm still experimenting and figuring out a good workflow. When I start a training session and I don't see the downtrend in loss that I'm hoping for, I abort the process to save time and retry with new values. My best LoRA tensor file hit a loss rate of around 0.01 at around 10k iterations.

The only reason I'm needing to get into actual LoRA training at this pretty nascent stage of its usability is that Kohya's DreamBooth LoRA extractor has been broken since Diffusers moved things around a month back, and the dev team are more interested in working on SDXL than fixing Kohya's ability to extract LoRAs from V1.5 DreamBooths.

So, using the same type of prompts as he is doing for pw_a, pw_b, etc., set your LoRA loader to allow a strength input and direct that type of scheduling prompt to the strength of the LoRA; it works with the adjusted code in the node, as sketched below.
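Here is a sketch of that keyframed-strength idea. The "frame:(value)" string format loosely mimics common prompt-schedule syntax (FizzNodes-style value schedules), but this parser is only illustrative:

```python
# Interpolate a per-frame LoRA strength from keyframes like "0:(0.2), 12:(1.0)".
def parse_schedule(text: str) -> list[tuple[int, float]]:
    keys = []
    for part in text.split(","):
        frame, value = part.strip().split(":")
        keys.append((int(frame), float(value.strip(" ()"))))
    return sorted(keys)

def strength_at(frame: int, keys: list[tuple[int, float]]) -> float:
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:                      # linear interpolation
            t = (frame - f0) / max(f1 - f0, 1)
            return v0 + t * (v1 - v0)
    return keys[-1][1] if frame > keys[-1][0] else keys[0][1]

keys = parse_schedule("0:(0.2), 12:(1.0), 24:(0.4)")
print([round(strength_at(f, keys), 2) for f in range(0, 25, 6)])
# -> [0.2, 0.6, 1.0, 0.7, 0.4]
```

Feeding values like these into a strength input is what lets the LoRA pulse in time with music in AnimateDiff-style videos.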
Or do something even simpler: just paste the links of the LoRAs into the model-download link and then move the files to the different folders. Seems like it's busted.

This will then be replaced by the next one on your list when you run the script. If I have a chain of LoRAs and I…

This slider is the only setting you have access to in A1111. For the LoRA, I prefer to use one that focuses on lineart and sketches, set to near full strength. This is unnecessary, but hardcoding 0.8 might be beneficial if sharing on Civitai, as users often default to 1.0 without reading the settings.

I'm starting to believe it isn't on my end and the LoRAs are just completely broken, but if anyone else could test them, that would be awesome. What am I doing wrong here in ComfyUI? The LoRA is an Asian woman. Now I want to use a video game character LoRA. If I lower the strength, it loses the characteristics of the original character.

Maybe try putting everything except the LoRA trigger word in (prompt here:0.75) to weaken it in relation to the trigger word.

Take the outputs of that Load Lora node and connect them to the inputs of the next Lora node if you are using more than one LoRA model. The "Model" output of the last Load Lora node goes to the "Model" input of the sampler node. A LoRA strength_model of 0.000 means it is disabled and will be bypassed.

Generate your images! Hope you liked this guide! Edit: added more example images.
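That strength-0 bypass trick can also be applied programmatically to an exported API workflow, which avoids rewiring the chain. Purely illustrative; the LoraLoader class name is ComfyUI's built-in, the file name is a placeholder:

```python
# Zero out every LoraLoader matching a file name instead of disconnecting it.
import json

def mute_lora(workflow: dict, lora_name: str) -> None:
    for node in workflow.values():
        if node.get("class_type") == "LoraLoader" \
           and node["inputs"].get("lora_name") == lora_name:
            node["inputs"]["strength_model"] = 0.0
            node["inputs"]["strength_clip"] = 0.0

with open("workflow_api.json") as f:
    wf = json.load(f)
mute_lora(wf, "style_a.safetensors")  # node stays wired, but is now inert
```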
[Image grid: the leftmost column is only the LoRA; down: increased LoRA strength; right: increased smooth-step strength; the no-LoRA reference is scaled down 50%.] As you can see, it's not simply scaling strength: the concept can change as you increase the smooth step.

Showing the LoRA stack connected to other nodes.

Does "<lora:easynegative:1.0>", if written in the negative prompt without any other LoRA loading, do its job? In Efficiency Nodes, if I load easynegative and give it a -1 weight, does it work like a negative-prompt embed? Do I have to use the trigger word for LoRAs I embed like this? Is there a ComfyUI Discord server?

Put in the same information as you would with conditioning 1 and conditioning 2, but you can control the weight of the first conditioning (conditioning_to) with the conditioning_to_strength variable.

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in Stable Diffusion.

Reddit user _roblaughter_ discovered a severe security issue in the ComfyUI_LLMVISION node created by user u/AppleBotzz. If you have installed and used this node, your sensitive data, including browser passwords, credit card information, and browsing history, may have been compromised and sent to a Discord server via webhook.

In A1111 they are placed in models/Lora and called like this: <lora:loraname:0.8>. In Automatic1111, for example, you load a LoRA and control its strength by simply typing something like <lora:Dragon_Ball_Backgrounds_XL:0.8> in the prompt.

On A1111 the positive "clip skip" value indicates where to stop the CLIP before its last layers; Comfy does the same, just denoting it as negative (I think it's referring to the Python idea of using negative array indices to denote the last elements; let's say ComfyUI is more programmer-friendly). So 1 (A1111) = -1 (ComfyUI), and so on (I mean the clip skip values).

LoRA has no concept of precedence (where it appears in the prompt order makes no difference), so the standard ComfyUI workflow of not injecting LoRAs into prompts at all actually makes sense. In ComfyUI you don't need to use the trigger word (especially if it's only one for the entire LoRA); mess with the strength_model setting in the LoRA loader instead. Most LoRAs also need one or more keywords to trigger.

In practice, both are usually highly correlated, but there are situations where you want a high model strength to capture a style but a low clip strength to avoid a certain keyword in the captions.

Generate a set of sample images for commonly used models, LoRAs, etc., so that I can either cut and paste their metadata into Automatic1111 or open the PNG in ComfyUI to recover the workflow.

Beneath the main part there are three modules: LoRA, IP-Adapter, and ControlNet.

LoRA: Hyper SD 1.5, 8 steps; LoRA strength 1.0. Scheduler settings: CFG Scale 1.5, Steps 4, Scheduler LCM.

I don't find ComfyUI faster: I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.
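The conditioning_to_strength knob is just a linear blend. In numpy terms, a sketch of the idea (not the node's literal code):

```python
# Linear blend of two conditioning tensors, as ConditioningAverage does.
import numpy as np

def conditioning_average(cond_to: np.ndarray, cond_from: np.ndarray,
                         conditioning_to_strength: float) -> np.ndarray:
    return cond_to * conditioning_to_strength + \
           cond_from * (1.0 - conditioning_to_strength)

a = np.ones((77, 768), dtype=np.float32)        # stand-in text embeddings
b = np.zeros((77, 768), dtype=np.float32)
print(conditioning_average(a, b, 0.75).mean())  # 0.75
```

This is also why it works for softening a ControlNet or LoRA-heavy conditioning in hires-fix steps: at 0.75 you keep three quarters of the first conditioning and a quarter of the second.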
Tested a bunch of others by that author, now also in ComfyUI, and they all produce the same image, no matter the strength, too. I'm pretty sure the LoRA file has to go under models/lora to work in a prompt, instead of the Additional Networks LoRA folder. Also, I've had bad luck with using the LCM LoRA from the Additional Networks plug-in.

Eventually add some more parameters for the clip strength, like lora:full_lora_name:X.X:X.X or something. It works for all Checkpoints, LoRAs, Textual Inversions, Hypernetworks, and VAEs.

You adjust the weights in the prompt, like <lora:catpics_lora:0.5>, and play around with the weight numbers until it looks how you want. Don't know how Comfy behaves if that's not the case, but you have to have a LoRA that's compatible with your checkpoint.

I can select the LoRA I want to use and then select AnythingV3 or Protogen or whatever, because a LoRA places a layer in the currently selected checkpoint.

Adding a LoRA that was trained on anime and simple 2D drawing … with an "add detail" LoRA … But it's not really predictable how it's changing. If I set the strength high, and the start step at a higher value like 0.4, it renders the co… At least for me it was like that, but I can't say for you, since we don't have the workflow you use.

It has a clear effect in a minimal workflow (like the default one), but only if you set the strength to relatively high values; the hand-fix had little to no effect below 4.0. Try changing that, or use a LoRA stacker that allows separate lora/clip weights.

In most UIs, adjusting the LoRA strength is only one number, and setting the LoRA strength to 0.8, for example, is the same as setting both strength_model and strength_clip to 0.8.

I find it starts to look weird if you have more than three LoRAs at the same time. I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other WebUIs' behavior.