DreamShaper - V∞! Please check out my other base models, including SDXL ones!

Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.

Do you like what I do? Consider supporting me on Patreon to get exclusive tips and tutorials, or feel free to buy me a coffee ☕

Available on the following websites with GPU acceleration:

Live demo available on HuggingFace (CPU is slow but free).

New Negative Embedding for this: Bad Dream.

Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

DreamShaper started as a model to have an alternative to MidJourney in the open source world. I didn't like how MJ was handled back when I started and how closed it was and still is, as well as the lack of freedom it gives to users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.

With SDXL (and, of course, DreamShaper XL) just released, I think the "swiss knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users that don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper. And here it is, I hope you enjoy.

And thank you for all the support you've given me in recent months.

PS: the primary goal is still towards art and illustrations.

LCM: being a distilled model, it has lower quality compared to the base one. However, it's MUCH faster and perfect for video and real-time applications. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).

Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be hard to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough.

Version 7 improves LoRA support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.

Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it).

Version 5 is the best at photorealism and has noise offset.

Version 4 is much better with anime (can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption style, so you might still want to use version 3.31. V4 is also better with eyes at lower resolutions.

Version 3.32 "clip fix": overall it's like a "fix" of V3 and shouldn't be too different. Results of 3.32 will vary from the examples (produced on 3.31, which I personally prefer). You should use 3.32 for mixing, so the clip error doesn't spread.

Versions >4 require no LoRA for anime style. For version 3 I suggest using one of these LoRA networks at 0.35 weight: (the girls with glasses, or if it says wanostyle) (not used for any example, but works great). Careful with that, though, as it tends to make all faces look the same.

I had CLIP skip 2 on some pics; the model works with that too. I have ENSD set to 31337 in case you need to reproduce some results, but it doesn't guarantee it. All of the examples had highres.fix or img2img at higher resolution.

After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that don't look like CG or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters.

Inpainting models are only for inpainting and outpainting, not txt2img or mixing.

I get no money from any generative service, but you can buy me a coffee.
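For the curious: "noise offset" (mentioned for Version 5) is commonly described as a training-time trick that adds a small constant shift to the Gaussian noise, which helps the model produce very dark and very bright images instead of everything averaging to mid-grey. Here is a minimal illustrative sketch of that idea, assuming the common formulation (a per-sample, per-channel constant broadcast over the spatial dimensions); the function name and the `offset` value are illustrative, not something taken from this model's training code.

```python
import numpy as np

def offset_noise(shape, offset=0.1, seed=None):
    """Gaussian noise plus a small per-(sample, channel) constant shift.

    shape is (batch, channels, height, width), as in latent-space
    diffusion training. The shift broadcasts over the spatial dims,
    which is what lets the model learn overall-brightness changes.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    # One scalar per (sample, channel), broadcast across height/width.
    shift = rng.standard_normal(shape[:2] + (1, 1))
    return noise + offset * shift

# Example: latent-sized noise for a batch of 2
n = offset_noise((2, 4, 64, 64), offset=0.1, seed=0)
print(n.shape)  # (2, 4, 64, 64)
```

As an end user you don't apply this yourself; it only matters at training time, which is why it shows up as a per-version property of the model.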