This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/centuryglass on 2024-10-23 03:39:39+00:00.


I’ve been working on upgrading my GLID-3-XL-based inpainting software ever since Stable Diffusion first came out, and it has gradually evolved into a full-featured image editor in its own right. I’ve been using it for quite a while, but it has only just reached the level of stability, documentation, and polish that would justify a public release.

As a demonstration, I’ve uploaded a short narrated time-lapse video of my artistic process on YouTube here: . Download links, instructions, and tutorials are on the GitHub page.

AI features:

  • Easy and fully configurable inpainting, text-to-image, and image-to-image within a movable image area, making it as simple as possible to produce arbitrarily large images.
  • Integrated ControlNet panel, LoRA selection, and access to prompt styles you’ve saved in the WebUI.
  • AI upscaling support, including support for ControlNet tiled upscaling combined with the Ultimate SD upscaling script.
  • All AI features are powered by the API mode of Automatic1111 or Forge. If you’ve already installed either of those, you won’t need to deal with any additional tedious Python dependency management.
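For anyone curious what “API mode” means in practice: the WebUI exposes REST endpoints such as `/sdapi/v1/txt2img` when launched with the `--api` flag, and a client just POSTs a JSON payload to them. Below is a minimal sketch (not from this project’s actual code; the helper names and the localhost address are assumptions, and the payload shows only a few of the many supported fields):

```python
# Sketch: talking to an Automatic1111/Forge instance started with `--api`.
# Assumes the default local address http://127.0.0.1:7860.
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # assumption: default WebUI address


def build_txt2img_payload(prompt: str, width: int = 512, height: int = 512,
                          steps: int = 20) -> dict:
    """Build a minimal payload for the /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
    }


def txt2img(payload: dict) -> dict:
    """POST the payload; the response JSON holds base64-encoded images."""
    req = urllib.request.Request(
        A1111_URL + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the editor only speaks to the WebUI over HTTP like this, it inherits whatever models, LoRAs, and extensions you already have installed there, with no second Python environment to maintain.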

Digital art and image editing features:

  • A full layer stack implementation, with support for transformations, groups, compositing and blending modes, and more.
  • An advanced and versatile brush engine with drawing tablet support, thanks to libmypaint.
  • All the usual tools you’d expect from an image editor: Text, shape creation, smudge, blur, filters, etc., all extensively documented.