Thanks for posting this, @hihihi! It’s really helpful to see with pics.
You’re right, this type of grammar has no meaning in the standard diffusers pipelines. However, there is a diffusers community pipeline that supports it. I have a flight coming up next week, but let’s see if I can integrate it before then. I’ve been wanting to do the same for https://kiri.art/ for some time now, so it’s nice to have a push.
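For anyone curious, the grammar in question is the webui’s emphasis syntax: “(word)” boosts a chunk’s attention weight and “(word:1.3)” sets it explicitly. Here’s a rough, illustrative sketch (my own naming) of splitting such a prompt into weighted chunks; real implementations like the lpw_stable_diffusion community pipeline also handle nesting, “[...]” de-emphasis, and escaping, which this does not:

```python
import re

# Illustrative sketch only: "(text)" multiplies a chunk's weight by 1.1,
# "(text:1.3)" sets it explicitly, everything else gets weight 1.0.
TOKEN_RE = re.compile(r"\(([^():]+)(?::([\d.]+))?\)|([^()]+)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) chunks."""
    chunks = []
    for emphasized, weight, plain in TOKEN_RE.findall(prompt):
        if plain:
            chunks.append((plain, 1.0))
        else:
            chunks.append((emphasized, float(weight) if weight else 1.1))
    return chunks

# e.g. parse_weights("a (red:1.3) car") -> [("a ", 1.0), ("red", 1.3), (" car", 1.0)]
```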
P.S., there’s no good way to run the AUTOMATIC1111 webui as a serverless process. However, it would be possible to extract parts of it (and/or of other SD projects), and it’s indeed a path I’ve considered numerous times before. But in the end, diffusers always catches up, and there is much wider development happening there. So I’ve stopped looking into those other solutions and am focusing all my efforts here, and so far my patience has paid off every time.
The lpw_stable_diffusion pipeline works well!
If I get the chance, I’ll try other community pipelines too.
It returns slightly different results compared to the webui, but that’s not a big deal.
Maybe the webui uses latent diffusion?
This is the message shown when the webui loads:
Also, the webui has the DPM++ 2M Karras scheduler, which performs very well, but Hugging Face diffusers doesn’t have it.
Is there a way to add this scheduler to the repo?
It’s not that important, though, because I can use other schedulers.
Great news!! Thanks for reporting back (and so quickly!). It’s fun to squeeze in a new feature before bed and wake up to usage feedback already.
Not possible yet without modifying the code (but all you have to do is add the name of the pipeline here in app.py). This is going to change so that, instead of initting all pipelines at load time, they’re only initted (and cached) when they’re first used. I’ll also have better error handling for misspelled / unavailable pipelines, but that’s a little further down the line.
Looks like it does indeed, but I’m not sure where or for what. I see diffusers has latent diffusion support, but it’s not specific to the stable diffusion pipelines. Maybe you can look into this more and report back.
Unfortunately, adding schedulers is quite difficult… but if you manage, I’m sure the whole diffusers community will love you. I don’t really understand the differences between all the schedulers, but there’s a nice comparison here:
And also, did you see the DPMSolverMultistepScheduler that’s been in diffusers for about two weeks (and works just fine in docker-diffusers-api)? I’m not sure if or how exactly it’s related to DPM++ 2M Karras, but you get excellent results in just 20 steps!! (Same quality as 50 steps on the older schedulers.)
Not yet, but I do indeed have some stuff planned here! I just wish Banana had on-prem S3-compatible storage. I’m looking forward to seeing how this compares to their current optimization stuff… the only thing is, there’s no GPU in Banana’s build stage (their optimization step transfers the built docker image to different machines to do the optimization), so we’ll have to get creative here… but I’m up to the challenge.
Thank you very much! I will test DPMSolverMultistepScheduler!
By the way, I’m building images on Banana that work well on the GPU server, but optimization hasn’t finished after 6 hours. It seems there’s some problem on Banana’s side right now.
Haven’t tried recently, but optimization has been a big and constant pain point for me. I plan to experiment with some homegrown alternatives and, if they work, hope we’ll get a way to opt out of Banana’s optimization completely for faster builds. But do make sure you report it on the Discord if you haven’t already, even if others have too… Also, it can be worth pushing a dummy commit to trigger a new rebuild; sometimes (but not always) it will just start working again on its own (or after they fix something that didn’t affect existing stuck builds).
My pleasure. If you’ve done any speed tests, let us know; I haven’t had a chance yet (but I do have this planned… just working on a few other related fun things).
Also, I missed it before but in latest dev commit I’ve set TENSORS_FAST_GPU=1 which should result in even faster loads.
It seems this is now on main, so you can remove the reference to the dev branch.
Can you copy this post into your docs folder? It’s a bit hard to find.
Is it now required to use stabilityai/stable-diffusion-2? When I use 1.5, I get an error that the container only contains v2.
If you’re still searching for a solution to Banana’s awful log handling: how about offering to send logs to a log service? E.g., I’m using https://cloud.axiom.co/; it’s just a simple POST.
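To illustrate the idea, here’s a stdlib-only sketch of shipping a batch of log events with a single POST. The URL path, dataset name, and token handling are placeholder assumptions, not Axiom’s actual API; check their ingest docs for the real shape:

```python
import json
import urllib.request

# Placeholder endpoint: DATASET and the path are assumptions, not Axiom's real API.
AXIOM_URL = "https://cloud.axiom.co/api/v1/datasets/DATASET/ingest"

def build_payload(events: list[dict]) -> bytes:
    """Serialize a batch of log events into the JSON body for the POST."""
    return json.dumps(events).encode("utf-8")

def ship_logs(events: list[dict], token: str):
    """Fire the 'simple POST' mentioned above (untested placeholder)."""
    req = urllib.request.Request(
        AXIOM_URL,
        data=build_payload(events),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```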
There’s an initial early release in main from when I last merged dev, but a lot of work was / is still happening in dev, so I haven’t advertised it on main yet. I would still only use the dev release for dreambooth, but the next merge is planned soon (as soon as I can debug a Banana optimization issue).
That’s the plan… right now it’s purposefully only here, as it was being updated very frequently through user input… happy to say that things do seem to have stabilised now, and yeah, it’s totally going to be moved to its own doc. If anything is still unclear though, please let me know. It’s improved a loooot through feedback in this thread, as intended.
You can build it with any model (just set the MODEL_ID build arg). The container will always assert that it’s running the requested model id; however, in the latest dev, you can now leave out the MODEL_ID call_input and it will just run with whatever you built it with (and return a $meta object in the result showing which defaults it used, in case you want to assert on your side).
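As a sketch of what that looks like (field names from memory, so double-check against the repo’s test.py), a call that leaves out MODEL_ID and relies on the build-time default might be:

```json
{
  "modelInputs": { "prompt": "a photo of an astronaut" },
  "callInputs": { "PIPELINE": "StableDiffusionPipeline" }
}
```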
Oh, and actually there was an issue at some point where the MODEL_ID in the Dockerfile and test.py didn’t match… maybe I only fixed that in dev. It will be merged to main soon!
Thanks, that’s probably a great solution. I do all my dev locally, but this is mostly an issue for people who can’t dev locally and are trying for the first time on Banana, and then things fail and they have no idea why. Any chance you’d like to create a post about it? No pressure, and thanks for raising it either way!
Also, in dev, we now try...except EVERYTHING, so those unexplained 500s are a thing of the past.
In any event, thanks for the kind words and all the feedback… I agree on your points, and this is very close to being merged to main with docs. Wishing you some happy diffusing!
I got some very nightmarish results and was wondering if the v1.5 vs v2.0 thing might be the culprit. But I guess I’ll just try the dev branch then. It’s really a shame you can’t select a branch in Banana…
I can definitely write something up once I’ve added Axiom to my container (currently it’s only in my other code).
Trying to think back to “nightmarish” results lol, what comes to mind is:
It’s possible in main we’re still using the diffusers release from right around when SDv2 came out, where some schedulers would produce really bad results.
Be careful of resolutions… asking a 768 native model for 512 results, and vice versa, can produce very poor results.
That’s all that comes to mind but you might find stuff I don’t recall in the thread above. But yeah first just try dev and see how it compares (and let us know, I’m interested too).
In all my Banana deploys I just have the docker-diffusers-api repo set as upstream, and git merge upstream/dev etc. as needed. Not sure if you’re familiar with this flow, but I made a post about it at [HOWTO] Keeping your fork up-to-date. But yeah, choosing branches in Banana would be a big help too (as would the ability to redeploy after changing build vars without needing to push another commit), and a bunch of other things.
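For anyone unfamiliar, that flow is roughly the following (the upstream URL is a placeholder; use the real repo’s, and see the HOWTO post for details):

```shell
# one-time: add the original repo as a remote called "upstream"
git remote add upstream https://github.com/OWNER/docker-diffusers-api.git

# whenever you want the latest changes in your fork:
git fetch upstream
git merge upstream/dev   # or upstream/main for the stable branch
git push                 # pushing your fork triggers a new Banana build
```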
That would be amazing… thanks so much. It’s definitely a… sticky point… for a lot of Banana users.