When I join a project, it’s often the system diagrams that help answer the complex structural questions that inevitably come up. Using these in onboarding can quickly give you a tour of the system without the nitty-gritty details.
As a result, many projects I have been on attempt to maintain one source of truth, one master system diagram. However, this type of system documentation has a flaw. It’s extremely hard to keep it both up to date and correct. For this reason, I will try to convince you that low-fidelity diagrams are more helpful and easier to maintain than a master source.
Too often, perfect is the enemy of done. This is just as true in software as anywhere else, but it’s often the truest when it comes to documentation.
I have a love-hate relationship with documentation. I often feel the pressure of having to choose between a new feature and adding/updating documentation. This is often a nuanced decision. I must weigh the complexity of the solution against the added benefit of documentation (as well as the future cost of updating). When making this decision, I look to answer two questions: “Why did I do that?” and “What’s it supposed to look like?”
To be clear, I avoid “What did I do?” documentation. Whether you attribute it to self-documenting code or just good coding practice, the “what” is usually evident from the code itself, so documenting it adds little.
The “Why did I do that?” question, however, is great since it’s a chance to explain why I made those choices. Every application you could create is just a set of choices between different implementations. That means answering “Why did I do that?” will give insight into that process.
The “Why did I do that?” question can get you pretty far, but eventually, you’re going to need to put it all together. For this, we have the “What’s it supposed to look like?” question. This is the chance to set out the design thinking that went into your application.
At this point, your application is complex and often needs a diagram to pull it all together. The only problem is that creating a good system diagram is hard.
For this reason, I often find that system diagrams suffer the most: they must accurately portray a constantly changing system. The ideal of a perfect diagram leads to one of two outcomes: an overly complex system diagram that is inevitably wrong, or no diagram at all.
It can be wrong in obvious ways that are easy to pick out, or in subtler ways that lead to disaster. I have seen this show up as poorly ordered processes, or as component scales that are off by orders of magnitude.
To make matters worse, complexity conveys trust. When I see a complex, meticulously detailed diagram, I assume that time and care went into creating (and maintaining) it. The problem is, who’s to say it was actually maintained? Worse still, it may have been wrong to begin with.
For these reasons, I advocate for low-fidelity diagrams. Break out of the habit of trying to create an all-encompassing system diagram. Rather, focus on clearly communicating relationships.
In complex systems, it is easy to have tens, if not hundreds, of components, but seldom do these components truly interact. I have found far more success sketching with big blocks to answer “What’s it supposed to look like?” using the major players.
From here, if there is a relationship to model, break it out. Make it a new diagram, and add an arrow, an annotation, or whatever helps make the connection back to the original. This way you can organically go into more depth where it’s needed, without feeling obliged to maintain that level of fidelity across the whole system.
The beautiful thing about a low-fidelity diagram is that it is as quick to update as it was to create. By isolating the levels of detail, you can quickly rearrange or rescale one diagram without having to update every other part. That inherent simplicity and cheapness lend themselves to keeping the diagrams current. And there’s no need to worry about losing knowledge when discarding or updating low-fidelity diagrams, since they were never meant to be dense information sources.
Finally, once you have created a catalog of low-fidelity diagrams, you now have the building blocks of your system. As you need to dig into complexity, you can compose these diagrams to look at different interactions and generate more. Constant iteration and building continue to clarify crucial relationships and reduce unnecessary complexity.
I hope I have been able to explain not only why I tend to avoid complex master diagrams, but also why I favor low-fidelity ones. A low barrier to creation, quick maintenance, and easy composability add up to more accurately documented systems. Give low-fidelity system diagrams a try on your next project, and see what you think.
The post Better Documentation Through Low-Fidelity Diagrams appeared first on Atomic Spin.
Source: Atomic Object