The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface. I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow. If you are not interested in having an upscaled image completely faithful to the original, you can create a draft with the base model in just a handful of steps, then upscale the latent and apply a second pass with the base model and a third pass with the refiner. Note that when inpainting it is better to use checkpoints trained for the purpose. It provides workflows for SDXL (base + refiner). There is a ton of stuff here and it may be a bit overwhelming, but it is worth exploring. It was hard to get a quick view of the workflow to get a sense of what was used. This works on all images generated by ComfyUI, unless the image was converted to a different format like JPG or WebP. Portable ComfyUI users might need to install the dependencies differently; see here. You should submit this to comfyanon as a pull request. Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting! Welcome to the unofficial ComfyUI subreddit. It has backwards compatibility with running existing workflows. For legacy purposes the old main branch has been moved to the legacy branch. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. The sample prompt as a test shows a really great result. ComfyUI could have workflow screenshots like the example repo has, to demonstrate possible usage and also the variety of extensions.
Saving/loading workflows as JSON files. Hi guys, I wrote a ComfyUI extension to manage outputs and workflows. For your all-in-one workflow, use the Generate tab. It prevents your workflows from suddenly breaking when updating custom nodes, ComfyUI, etc. However, you can also run any workflow online; the GPUs are abstracted so you don't have to rent any GPU manually, and since the site is in beta right now, running workflows online is free. And, unlike simply running ComfyUI on some arbitrary cloud GPU, our cloud sets up everything automatically so that there are no missing files/custom nodes. The workflows are meant as a learning exercise; they are by no means "the best" or the most optimized, but they should give you a good understanding of how ComfyUI works. ControlNet and T2I-Adapter. For demanding projects that require top-notch results, this workflow is your go-to option. Deploy ComfyUI and ComfyFlowApp to cloud services like RunPod/Vast.ai/AWS, and map the server ports for public access, such as https://{POD_ID}-{INTERNAL_PORT}.proxy.runpod.net. Upcoming tutorial: SDXL LoRA + using an SD 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and clipseg awesomeness, and many more. Heads up: Batch Prompt Schedule does not work with the Python API templates provided by the ComfyUI GitHub. Thanks also to u/tom83_be on Reddit, who posted his installation and basic settings tips. Each workflow runs in its own isolated environment. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. I want a ComfyUI workflow that's compatible with SDXL with base model, refiner model, hi-res fix, and one LoRA all in one go. It is highly recommended that you feed it images straight out of SD (prior to any saving), unlike the example above, which shows some of the common artifacts introduced on compressed images. It includes literally everything possible with AI image generation.
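For the Python API templates mentioned in the heads-up above, the round trip is small enough to sketch with the stdlib. ComfyUI exposes a POST /prompt endpoint that accepts the API-format workflow JSON (exported in the UI via "Save (API Format)" after enabling dev-mode options). The sketch below assumes a local server on the default port 8188:

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "example") -> bytes:
    """Wrap an API-format workflow the way POST /prompt expects it."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    """Send the workflow to a running ComfyUI instance; the response dict
    contains the prompt_id you later use to look up results."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (requires a running ComfyUI server):
#   with open("workflow_api.json") as f:
#       print(queue_prompt(json.load(f)))
```

The Batch Prompt Schedule caveat above fits here: if a custom node's text field breaks the JSON you assemble, the POST fails before the graph ever runs.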
There comes a time when you need to change a detail in an image, or maybe you want to expand it on one side. The same concepts we explored so far are valid for SDXL. The heading links directly to the JSON workflow. A good place to start, if you have no idea how any of this works, is: An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load. Two workflows included. Something like this would really put a huge dent in the patreon virus that's occurring in the custom workflow space. Note: this site has a lot of NSFW content. Potential use cases include streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. Hi Antique_Juggernaut_7, this could help me massively. The process of building and rebuilding my own workflows with the new things I've learned has taught me a lot. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. Yay or nay? Current limitations: it works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. For more details on using the workflow, check out the full guide. AuraSR v1 (model) is ultra-sensitive to ANY kind of image compression, and when given such an image the output will probably be terrible. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. Civitai has a ton of examples, including many ComfyUI workflows that you can download and explore.
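The "programmatic experiments for various prompt/parameter values" use case above is mostly dictionary surgery: the API-format workflow is a flat dict keyed by node id, so a sweep just deep-copies it and rewrites one input per run. A minimal sketch; the node id and input name are illustrative (node "3" is the KSampler in ComfyUI's default text-to-image template, but look yours up in your own exported JSON):

```python
import copy

def sweep(workflow: dict, node_id: str, input_name: str, values):
    """Yield one workflow copy per value, with
    workflow[node_id]["inputs"][input_name] replaced."""
    for v in values:
        wf = copy.deepcopy(workflow)
        wf[node_id]["inputs"][input_name] = v
        yield wf

# Example: vary the sampler seed across three runs.
base = {"3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}}}
variants = list(sweep(base, "3", "seed", [1, 2, 3]))
```

Each variant can then be queued against the server in turn; deep-copying keeps the base workflow untouched between runs.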
It'll add nodes as needed if you enable LoRAs or ControlNet, want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. Jul 28, 2024 · Over the last few months I have been working on a project with the goal of allowing users to run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to run on devices with smaller screens. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core. Flux Schnell is a distilled 4-step model. Work on multiple ComfyUI workflows at the same time. You can then load or drag the following image into ComfyUI to get the workflow: This section contains the workflows for basic text-to-image generation in ComfyUI. If you install custom nodes, keep an eye on ComfyUI PRs. I am very interested in shifting from automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on civitAI; can anyone recommend the best source for ComfyUI templates? Is there a good set for doing standard tasks from automatic1111? Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? Noticed everyone was getting on the ComfyUI train lately, but sharing the workflows was kind of a hassle; most posted them on pastebin. A GitHub Workflow will have a trigger, e.g. on: push, and define a list of Jobs. It'll create the workflow for you. This is an example of an image that I generated with the advanced workflow. It uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface. AnimateDiff workflows will often make use of these helpful nodes. 👏 Welcome to my ComfyUI workflow collection! To give everyone something back, I roughly put together a platform; if you have feedback or suggestions, or want me to help implement some features, you can submit an issue or email me at theboylzh@163.com. ComfyUI Academy - a series of courses designed to help you master ComfyUI and build your own workflows. Share, discover, & run thousands of ComfyUI workflows.
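Regarding the built-in ComfyUI API mentioned above: unless you use the websocket, results are pulled rather than pushed. GET /history/{prompt_id} returns each node's outputs for a finished prompt. A sketch of extracting image records from that response, assuming the usual response shape ({prompt_id: {"outputs": {node_id: {"images": [...]}}}}):

```python
import json
import urllib.request

def output_images(history_entry: dict) -> list:
    """Collect image records from one /history entry's node outputs."""
    images = []
    for node_output in history_entry.get("outputs", {}).values():
        images.extend(node_output.get("images", []))
    return images

def fetch_results(prompt_id: str, server: str = "127.0.0.1:8188") -> list:
    """Poll a running ComfyUI server for the images a finished prompt produced."""
    with urllib.request.urlopen(f"http://{server}/history/{prompt_id}") as resp:
        history = json.loads(resp.read())
    return output_images(history[prompt_id])
```

Each image record carries a filename/subfolder/type triple that can then be retrieved from the server's output directory.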
The first one is very similar to the old workflow and is just called "simple". Save one of the images and drag and drop it onto the ComfyUI interface. Try to restart ComfyUI and run only the CUDA workflow. Some tasks never change and don't need complicated all-in-one workflows with a dozen different custom nodes each. Basic workflows should be stock and available for all users. I had some time to burn this weekend and the domain was available for $3, lol. I just reworked the workflow and wrote a user guide. Please share your tips, tricks, and workflows for using this software to create your AI art. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. So I quickly cooked this up. Area composition; inpainting with both regular and inpainting models. Here are the models that you will need to run this workflow: the Loosecontrol Model, the ControlNet_Checkpoint, the v3_sd15_adapter.ckpt model, and the v3_sd15_mm.ckpt model. For ease, you can download these models from here. ComfyUI Workspace Manager v1.5: manage workflows, a generated-images gallery, saved version history, tags, and subworkflow insertion. Adjustable parameters: face_sorting_direction sets the face sorting direction; valid values are "left-right" (left to right) or "large-small" (largest to smallest). A ComfyUI workflows-and-models management extension to organize and manage all your workflows and models in one place. Building your own is the best advice there is when starting out with ComfyUI, imo. Hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows. Adjust the face_detect_batch size if needed. Workflows exported by this tool can be run by anyone with ZERO setup. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far showing the difference between the preliminary, base, and refiner setups.
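The face_sorting_direction parameter above is easy to reason about if you treat detected faces as bounding boxes. A sketch of the two orderings, assuming (x, y, w, h) boxes; the real node's internal representation may differ, so this is only an illustration of the semantics:

```python
def sort_faces(boxes, direction="left-right"):
    """Order face bounding boxes (x, y, w, h) for downstream processing.

    "left-right": ascending by the box's left edge.
    "large-small": descending by box area.
    """
    if direction == "left-right":
        return sorted(boxes, key=lambda b: b[0])
    if direction == "large-small":
        return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    raise ValueError(f"unknown direction: {direction}")

faces = [(200, 40, 120, 120), (10, 50, 60, 60)]
assert sort_faces(faces)[0] == (10, 50, 60, 60)                    # leftmost first
assert sort_faces(faces, "large-small")[0] == (200, 40, 120, 120)  # biggest first
```

Deterministic ordering matters here because it decides which detected face each downstream swap or mask is applied to.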
A lot of people are just discovering this technology and want to show off what they created. It works with .json files saved via ComfyUI, but the launcher itself lets you export any project in a new type of file format called "launcher.json". With this workflow you can train LoRAs for FLUX on ComfyUI. Note: this workflow uses LCM. I think the perfect place for them is the Wiki on GitHub. Creators develop workflows in ComfyUI and productize these workflows into web applications using ComfyFlowApp. Loading full workflows (with seeds) from generated PNG, WebP, and FLAC files. Seamlessly switch between workflows, as well as import and export workflows, reuse subworkflows, install models, and browse your models in a single workspace - 11cafe/comfyui-workspace-manager. Grab the ComfyUI workflow JSON here. Not enough VRAM/RAM? Using these nodes you should be able to run CRM on GPUs with 8GB of VRAM and above, and at least 16GB of RAM. The workflows are designed for readability; the execution flows from left to right and from top to bottom, and you should be able to easily follow the "spaghetti" without moving things around. Automatically installs custom nodes, missing model files, etc. A1111 has great categories like Features and Extensions that simply show what the repo can do, what addons are out there, and all that stuff. But it separates the LoRA into another workflow (and it's not based on SDXL either). The second workflow is called "advanced" and it uses an experimental way to combine prompts for the sampler. Many end up in the UI. The rework of almost the whole thing that's been in develop is now merged into main; this means old workflows will not work, but everything should be faster and there are lots of new features. All credits go to them. Each job can be composed of one or multiple Steps, and each step is, generally speaking, a GitHub Action.
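The trigger/Jobs/Steps structure described in the last sentence maps directly onto the workflow YAML file. A minimal illustrative fragment; the file name, job id, and step contents here are placeholders, not taken from the source:

```yaml
# .github/workflows/ci.yml
on: push              # the trigger
jobs:
  test:               # one job...
    runs-on: ubuntu-latest
    steps:            # ...composed of one or more steps
      - uses: actions/checkout@v4   # a reusable action from the marketplace
      - run: echo "run your checks here"
```

Each `uses:` step pulls in one of the predefined marketplace actions, while `run:` steps execute shell commands directly.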
This usually happens if you tried to run the CPU workflow but have a CUDA GPU. I also use the ComfyUI Manager to take a look at the various custom nodes available, to see what interests me. The "launcher.json" format is designed to have 100% reproducibility. Improved AnimateDiff integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling, usable outside of AnimateDiff. Some more information on installing custom nodes and extensions is in the basics. Most have instructions in their repositories or on civitai. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Adjust the node settings according to your requirements: set the mode to "sequential" or "repetitive" based on your video-processing needs. That's the one I'm referring to. I found it very helpful. In a base+refiner workflow, though, upscaling might not look straightforward. I will keep updating the workflow here too. Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back. These are the scaffolding for all your future node designs. Join the largest ComfyUI community. Inpainting is very effective in Stable Diffusion, and the workflow in ComfyUI is really simple. It combines advanced face swapping and generation techniques to deliver high-quality outcomes, ensuring a comprehensive solution for your needs.
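In an exported API-format JSON, the Wav2Lip step from the instructions above would appear as a single node entry. The sketch below is illustrative only: the class_type and input names merely echo the wording of the guide ("Wav2Lip", "mode", "face_detect_batch"), and the upstream node ids are placeholders; the actual custom node may name things differently, so treat this as a shape illustration of ComfyUI's API format, where each input is either a literal value or a [node_id, output_slot] reference.

```python
import json

# Hypothetical API-format entry; nodes "10" (video frames) and
# "11" (audio) are assumed upstream loaders, not real ids.
wav2lip_node = {
    "12": {
        "class_type": "Wav2Lip",
        "inputs": {
            "images": ["10", 0],      # frames from node 10, output slot 0
            "audio": ["11", 0],       # audio from node 11, output slot 0
            "mode": "sequential",     # or "repetitive"
            "face_detect_batch": 8,
        },
    }
}
print(json.dumps(wav2lip_node, indent=2))
```

The [node_id, slot] convention is how ComfyUI's API format wires node outputs to inputs, which is why connecting the frames and audio in the UI shows up as those two list-valued entries.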
The term GitHub Action, however, is more frequently used to refer to the reusable, predefined actions you can get from the marketplace. Connect the input video frames and audio file to the corresponding inputs of the Wav2Lip node. You can then load or drag the following image into ComfyUI to get the workflow: Flux Schnell. The nodes interface can be used to create complex workflows, like one for hires fix, or much more advanced ones. TL;DR: THE LAB EVOLVED is an intuitive, all-in-one workflow. You could sync your workflows with your team via Git… Add the Wav2Lip node to your ComfyUI workflow.