Texture Diffusion leverages pre-trained diffusion models to bring texture generation capabilities to Blender, allowing users to integrate custom textures into their 3D models. This add-on enhances the creative workflow by putting control in the user's hands, with support for features such as inpainting, LoRAs and IP-Adapters. This is a project I have worked on in my spare time after discovering Stable ProjectorZ, and I am excited to share it with the community. Feel free to provide feedback and suggestions for improvements.
If you like the project, you can star the project on GitHub or share it with your friends. I would greatly appreciate it!
This add-on uses ComfyUI to run diffusion models (as it's a popular installation), so you need to have ComfyUI installed and running on your machine for the add-on to work properly. Follow the Installation instructions first if that is not the case.
You also need to have the ComfyUI IPAdapter Plus extension installed in your ComfyUI installation. Make sure to do so beforehand as well (even if you are not using IP-Adapters).
> [!NOTE]
> The add-on uses a depth ControlNet under the hood, so you also need to make sure you have the right name for it. The default name is 'diffusers_depth_controlnet_pytorch_model.fp16.safetensors' for the SDXL model and 'flux-depth-controlnet-v3.safetensors' for Flux models.
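Once ComfyUI is running, you can quickly check that it is reachable from your machine. A minimal sketch, assuming the default local address `127.0.0.1:8188` (adjust it if your ComfyUI instance is remote):

```python
# Minimal reachability check for a ComfyUI server (default local address assumed).
import json
from urllib.request import urlopen

COMFYUI_URL = "http://127.0.0.1:8188"  # change this if ComfyUI runs on another machine or port

with urlopen(f"{COMFYUI_URL}/system_stats", timeout=5) as response:
    stats = json.load(response)

print("ComfyUI is reachable, reported sections:", list(stats.keys()))
```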
- Download the zipfile
- Ensure you have Blender version 4.2 or above installed
- Import the add-on into Blender via the `Edit > Preferences > Add-ons` menu
Pillow is bundled with the add-on, so no additional installation is required.
- Clone the repository: `git clone https://github.com/Shaamallow/texture-diffusion.git`
- Make sure you have Blender version 4.2 or above installed
- Import the add-on into Blender by copying the repository into the user addon directory of your Blender install
If you run into issues, try installing Pillow in the Python environment Blender has access to.
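A minimal sketch of doing that from Blender's Python console, assuming a recent Blender where `sys.executable` points to the bundled interpreter:

```python
# Run inside Blender's Python console: installs Pillow into the Python interpreter Blender uses.
import sys
import subprocess

subprocess.check_call([sys.executable, "-m", "ensurepip", "--user"])                 # make sure pip exists
subprocess.check_call([sys.executable, "-m", "pip", "install", "--user", "Pillow"])  # install Pillow
```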
It should work similarly to Linux, but this has not been tested.
To see Texture Diffusion in action, follow the instructions below. Video and image examples are provided for each major feature to guide you through different capabilities.
> [!CAUTION]
> Make sure you have specified an Output Path before running the add-on on Windows: `Properties > Output > Output Path`.
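If you prefer to set it from the Python console instead of the UI, a minimal equivalent (the `//renders/` folder is just an example, relative to the current .blend file):

```python
import bpy

# "//" makes the path relative to the saved .blend file.
bpy.context.scene.render.filepath = "//renders/"
```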
You can generate textures from the current view by entering a prompt and selecting a mesh. The texture will take some time to be created depending on the power of your ComfyUI machine (either local or remote). In this video I'm using the Flux-dev model from Black Forest Labs.
generation_demo.mp4
As the texture comes from a single view, the projection doesn't look great from another point of view. You can render a new texture from a different angle using the Image-to-Image workflow. Make sure to use the different toggles. Moreover, I'm using the IP-Adapter capabilities to make sure the new texture looks like the first generation. Try to experiment a bit with the different parameters.
inpainting_demo.mp4
If you want to edit small details by hand, edit the different masks to add feathering, blending, etc. You can do so with vertex painting and texture painting. Make sure to select the right attributes!
painting_demo.mp4
To render a texture that follows a specific 3D shape, I'm using different depth ControlNets to generate images that respect the geometry of the 3D model.
The add-on is compatible with all SDXL and Flux type models. For the right workflow.json to be used, make sure your model name contains FLUX or SDXL.
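The detection is purely name-based. A sketch of the kind of check involved (the helper name and the `workflow_*.json` file names below are illustrative, not the add-on's actual code):

```python
from pathlib import Path

def get_workflow_for_model(model_name: str, workflow_dir: Path) -> Path:
    """Pick a workflow file based on the checkpoint file name (illustrative helper)."""
    name = model_name.upper()
    if "FLUX" in name:
        return workflow_dir / "workflow_flux.json"
    if "SDXL" in name:
        return workflow_dir / "workflow_sdxl.json"
    raise ValueError(f"Cannot infer model type from '{model_name}', add FLUX or SDXL to the name")

# Example: "sdxl_base_1.0.safetensors" resolves to workflow_sdxl.json
print(get_workflow_for_model("sdxl_base_1.0.safetensors", Path("workflows")))
```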
The add-on lets you use LoRAs as long as they are placed in the right ComfyUI folder (`models/loras`). You have to make sure that the selected LoRA matches your model type.
To use a ControlNet, the add-on renders a depth map from the current view, which you can also access in the Image data in Blender. The depth map is rendered using Blender's Z depth pass, but post-processing is done to make sure the value distribution is normalized to look good. This can be useful if you just want to generate some depth renders and use ComfyUI separately.
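A sketch of the kind of post-processing involved, assuming the Z pass has already been read into a NumPy array; the clipping distance and the inversion (closer surfaces become brighter) are assumptions for illustration, not the add-on's exact code:

```python
import numpy as np

def normalize_depth(z_pass: np.ndarray, far_clip: float = 100.0) -> np.ndarray:
    """Turn a raw Z pass into a normalized [0, 1] depth map for a depth ControlNet."""
    depth = np.clip(z_pass, 0.0, far_clip)                               # cut off background values at infinity
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)   # rescale to [0, 1]
    return 1.0 - depth                                                   # invert so closer geometry is brighter

# Example with a fake 4x4 Z pass
fake_z = np.random.uniform(1.0, 10.0, size=(4, 4))
print(normalize_depth(fake_z))
```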
For the Image-to-Image workflow, you need a starting image. It will be rendered from the given viewpoint with Blender's OpenGL render, which excludes the Blender UI from the image. Make sure the viewport is set up the way you want it (shading mode, overlays), since what you see is what gets rendered.
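A minimal sketch of what such an OpenGL render looks like on the Python side (the output file name is just an example):

```python
import bpy

# Render the scene with Blender's OpenGL/viewport renderer (the UI is not part of the result).
# view_context=False renders from the active camera; True would capture the current 3D viewport.
bpy.context.scene.render.filepath = "//img2img_input.png"
bpy.ops.render.opengl(write_still=True, view_context=False)
```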
The UV mesh is projected onto the texture (following Ian Hubert's method) instead of the texture being projected onto the UV map like in the popular Dream Textures add-on. If you want to do the opposite and have a nice solution, feel free to submit a PR.
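This kind of camera-based projection can be reproduced manually with Blender's `project_from_view` operator. A sketch, assuming the mesh to unwrap is the active object and a 3D Viewport is open (the operator needs that context):

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# project_from_view must run with a 3D Viewport context so it knows which view to project from.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        region = next(r for r in area.regions if r.type == 'WINDOW')
        with bpy.context.temp_override(area=area, region=region):
            bpy.ops.uv.project_from_view(camera_bounds=True, correct_aspect=True)
        break

bpy.ops.object.mode_set(mode='OBJECT')
```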
The vertices selected in Edit Mode will have a new vertex attribute set to 0 so that a different texture is used. You can check how it's done in the next section, and also edit the mask to blend the edges or edit the texture for small details.
I'm applying the idea from this video but automatically.
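A sketch of how vertices selected in Edit Mode can be written into such a mask attribute (the attribute name `projection_mask` is illustrative; the add-on may use a different one):

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='OBJECT')  # selection and attribute data are read/written in Object Mode
mesh = obj.data

# Create (or reuse) a float, per-vertex attribute used as a projection mask.
attr = mesh.attributes.get("projection_mask") or mesh.attributes.new(
    name="projection_mask", type='FLOAT', domain='POINT'
)

# Default to 1.0 everywhere, then zero out the vertices that were selected in Edit Mode.
for vert, value in zip(mesh.vertices, attr.data):
    value.value = 0.0 if vert.select else 1.0
```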
The following features are planned for future development:
- Flux IP-Adapter – Improved integration with IP-Adapter for enhanced image blending. (see Issue)
- Multi-Object Projection
- Extend single-object replacement to multi-object selection.
- Toggle option for rendering entire scenes.
- Multi-Projection Generation (search grid trick for Flux generation, it's very powerful)
- Rework the current Camera workflow (no camera collections, single camera to move, generate without getting out of camera view)
- Allow for multiple views to be rendered at once to generate a grid for a given object (like sprites)
- Texture Projection – project the texture onto the UV instead of the UV onto the texture
- ComfyUI API Integration – Create a standalone API to eliminate the need for Comfy installation (planned as a premium feature).
- Default ControlNet – allow the use of different depth ControlNets (no hard-coded default name, different providers...)
Feel free to submit PRs or open issues for bugs or feature requests. I'm always looking for ways to improve the add-on and make it more user-friendly. I've attached an Excalidraw diagram that I made to better architect the project. You can check the code and the diagram in docs.
If you want to update the code, make sure to install the requirements with the following commands:

I recommend using mamba:

```bash
conda create -n texture-diffusion python=3.10
```

Then:

```bash
pip install '.[dev]'
```

- Blender – Core software for 3D modeling and rendering.
- BlenderAPI – Documentation for Blender scripting.
- ComfyUI – Diffusion model integration.
- ComfyUI IPAdapter Plus – IPAdapter functionalities.





