This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/jmellin on 2024-09-25 10:51:45+00:00.


I just completed my custom node for ComfyUI. It's a GLM-4 prompt-enhancement and inference tool.

I was inspired by the prompt enhancer in THUDM's CogVideoX-5b HF space.

The prompt enhancer is based on THUDM's convert_demo.py, but since that example only works through the OpenAI API, I felt there was a need for a local option.
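The core idea of a local enhancer is simple: wrap the user's short prompt in a chat-style request and run it through a locally loaded GLM-4 model instead of calling the OpenAI API. A minimal sketch of the request-building step is below; the system prompt wording and the `build_enhancer_messages` helper are my own illustrative assumptions, not code from the actual node.

```python
# Sketch of the message-building step for a local prompt enhancer.
# The system prompt text here is a hypothetical example, not the one
# used by the node or by convert_demo.py.

SYSTEM_PROMPT = (
    "You are a prompt engineer. Rewrite the user's short idea into a "
    "single detailed, vivid prompt for a text-to-video model."
)

def build_enhancer_messages(user_prompt: str) -> list[dict]:
    """Return a chat-style message list for the enhancer model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt.strip()},
    ]

# A local node would then feed these messages to a GLM-4 chat model,
# e.g. loading "THUDM/glm-4-9b-chat" with transformers'
# AutoModelForCausalLM (trust_remote_code=True) and applying the
# tokenizer's chat template before generation.
messages = build_enhancer_messages("a cat surfing a wave at sunset")
```

The point is that everything after message construction stays on your own GPU, so no API key is needed.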

The vision model glm-4v-9b has completely blown my mind, and the fact that it is runnable on consumer-grade GPUs is incredible.

Example workflows included in the repo.

Link to repo in comments.

Also available in ComfyUI-Manager.