Asset Creation Pipeline Design


Pablo Dobarro shares an outline of the design for a new modern asset creation pipeline to be developed during the following years.

During the past months I’ve been working on the design of what I call the “Asset Creation Pipeline”. This project will tackle all Blender functionality related to how you create characters, props or environments with Blender. Here, “asset” refers to any object that is going to be rendered in a final scene, not the asset definition of a datablock used by the Asset Browser project. The goal is to have a design and a technical implementation plan for tackling long-standing limitations of Blender, like painting and retopology, with a modern design ready for the years to come.

This post outlines the more general ideas of the design, without going into detail on how the implementation or the final product will look. More detailed documents about the whole design and implementation are in progress and will be published soon.

Blender is software that was always designed for interactivity, and most of its technical innovations were made in this field. This includes features like Cycles rendering directly in the viewport with the full scene contents as they are being edited, EEVEE, modifiers and geometry nodes, and the redo panel. We can also mention planned projects that have not yet happened, such as the full interactive mode or the real-time viewport compositor.

There is another way of approaching software design, which is prioritizing the handling of arbitrarily large amounts of data. The main selling point of such software is that it can actually render and edit the data, leaving interactivity as a “nice to have” feature when it is technically possible to implement. When you design for handling any data size, you can’t assume that the tools will simply scale in performance to handle the data. Software designed like this will try to make sure that no matter how many textures, video files or polygons you want to edit, you will always be able to do it in some (usually not interactive) way.

The core concept of the new asset creation pipeline design is embracing Blender’s interactive design instead of trying to fit a different workflow inside it. 

So, development and new technical innovations regarding the asset creation workflow will focus on having the most advanced real-time interaction possible instead of handling large amounts of data. This means that performance will still improve (new features will need performance in order to keep real-time interaction working), but the features and code design won’t target handling the highest possible poly count or the largest possible UDIM data set. The focus of performance work won’t be on how high the vertex count in Sculpt Mode can be, but on how fast Blender can deform a mesh, evaluate a geometry node network on top of it and render it with PBR shading and lighting.

Clay brushes working with EEVEE enabled (performance prototype). Supporting sculpting tools while using a fully featured render engine is one of Blender’s strengths. Improving that workflow is one of the goals of this project.

Focusing on this new design will allow Blender to have the best possible version of features that will properly take advantage of Blender’s strengths as an entire software package. This means things like:

  • The most advanced deformation tools to control the shapes of base meshes, which can be used in combination with procedural shaders and geometry node networks for further non-destructive detailing of assets. Current tools like the Pose, Boundary and Cloth brushes are poorly implemented in master because they have to handle the legacy sculpt data types. Addressing these limitations will make them work as they should.
  • The best possible version of Keymesh in order to combine fully rigged and stop motion animation in the same scene. 
  • A fully customizable painting brush engine, allowing the support for procedural painting brushes for textures, concept art and illustration. 
  • The ability to use advanced painting tools to control how procedural PBR materials or procedural geometry effects are applied to the surface of an asset, manipulating surface information stored in mesh elements and textures that can control both masks and general attributes.
  • Multi-data-type tools: the same brush will be able to deform meshes, volume level sets, curves, Grease Pencil strokes or displacement vectors stored in a texture, without the need to bake them from a mesh.
Blender allows tweaking the shape, details and surface of objects using different systems that interact with each other, providing real-time feedback and non-destructive editing.

Handling High-poly

We also know that handling large amounts of data is important for some studio pipelines. For those tasks, the plan is to handle the data from a separate editor that does not interfere with the rest of Blender’s interactive workflow. For meshes, this could be a separate editor with its own viewport, optimized only for rendering as many polygons as possible and for non-real-time mesh processing operations. This will keep the high-poly mesh data isolated from the rest of the scene, letting the tools, real-time viewports and the rest of the features perform as they should. Having this kind of data in its own container will also help with features like streaming directly to render engines without affecting the performance of the scene.


In order to fit all planned features of the new design, some bigger changes have to be made to Blender to properly organize the new workflow. Among other changes, this means that the modes and their roles have to be redefined. Modes will contain all features that share a common purpose, regardless of the target data type or workflow stage. Workspaces will be responsible for configuring the modes so they can be used for a particular task. This will allow handling a much higher level of tool complexity and flexibility when defining custom workflows.

These are the proposed modes for all object types, describing their functional purpose in the pipeline. Note that the naming of the modes is not final, but their design and intended purpose on the workflow are:

  • Object: Manages objects in the scene and their properties.
  • Freeform: Controls the base shape of organic objects.
  • CAD: Controls the base shape of hard surface and mechanical objects.
  • Paint: Controls the base surface information of objects.
  • Layout/Topology: Prepares the data for procedural tools and animation.
  • Attribute Edit: Controls the source data for the procedural systems.
  • Edit: Does low level data layout editing when needed, allowing direct manipulation of the individual elements of the data type.

Other modes belonging to different parts of the pipeline, like Weight Paint, Grease Pencil Draw and Pose, are not directly related to the asset creation pipeline, so they won’t be affected by this project.

The idea is not just that workspaces, tool presets and editors can be reorganized for various tasks, but that the user has control over this customization for their own tasks. As an example, let’s define workspaces for different use cases of painting, all based on the Paint Mode:

  • A hand painting workspace uses the Paint Mode. It contains a viewport with white Workbench studio lighting. The UI shows blend brush presets and color gradient presets, with open UI panels for color palettes and color wheels.
  • A concept art workspace uses the Paint Mode. It is similar to the hand painting workspace, but it contains a 2D viewport and 2D selection tool action presets.
  • A PBR texture workspace uses the Paint Mode. It contains a viewport with EEVEE enabled in material preview mode. The UI also shows a texture node editor and an asset browser with material presets. 
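The relationship described above (one mode, several workspaces that configure it for a task) can be sketched as a plain-Python data model. This is purely illustrative and is not the Blender API; the class and field names are hypothetical, chosen only to mirror the proposal:

```python
# Illustrative sketch only, NOT the Blender API: models the proposed idea
# that a workspace is a task-specific configuration of a single mode.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    mode: str                                   # e.g. "Paint"
    viewport: str                               # shading/engine in the viewport
    ui_panels: list = field(default_factory=list)

# Two of the painting workspaces described above, both built on Paint Mode:
hand_painting = Workspace(
    "Hand Painting", mode="Paint",
    viewport="Workbench (white studio light)",
    ui_panels=["blend brush presets", "color palettes", "color wheel"])

pbr_texture = Workspace(
    "PBR Texturing", mode="Paint",
    viewport="EEVEE (material preview)",
    ui_panels=["texture node editor", "asset browser (materials)"])

# Both workspaces reuse the same mode, configured differently:
assert hand_painting.mode == pbr_texture.mode == "Paint"
```

The point of the sketch is that the mode supplies the feature set while the workspace supplies the configuration, so adding a new painting task means adding a workspace, not a mode.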


Despite the amount of change this design introduces, most of the development required to achieve the proposed product has already happened (some of it is in master with a different UI, some in separate branches or disabled as experimental). The first step would be to gather and merge all that development into an MVP version. This initial version will have mostly the same features as the current Blender master branch (no functionality or performance will be lost), just organized in a different way. Hopefully, this new organization and naming will make it clearer how the workflow and tools were intentionally designed, so they can be used to their full potential. For example, after the reorganization, the same sculpting functionality will still be available as a subset of the features of the Freeform Mode, which now has a much broader scope.

After that initial step, more technical development can happen. This includes things like redesigning the texture projection painting code, refactoring the tool system for tool preset management, or building better overlays and snapping for retopology. With the design clear, those tasks can now happen faster, as they fit into a well-defined big-picture design.

It is also important to note that this design includes some tasks that require technical research and innovation, like painting displacement surface details. These tasks have a high risk of taking much longer to develop, but they are not crucial for having a functional version of the asset creation pipeline.

This project also depends on other ongoing development, like the asset browser and storage or the upcoming UI workshop. More detailed designs for the final features that involve the asset creation pipeline will be discussed and worked on together with those projects.

Source: Blender