Observability and monitoring: two overloaded, often misunderstood terms that get used interchangeably. However, in order to implement either, we must understand what both mean and what they entail.


Observability describes the maturity of a workload or system in terms of one's ability to monitor its execution. In other words, when you consider your application or environment, can it be monitored? This may sound like a strange question. However, I have witnessed countless applications and environments that cannot be monitored. In other words, they are essentially black boxes. No one knows what's going on inside of them. Additionally, while this typically involves legacy workloads, that's not always the case. When it comes to applications, either developers assume the application is solid, or the workflow is so complex that any monitoring output would be little more than gibberish, requiring a substantial amount of time to investigate and interpret. Systems, on the other hand, may be architected in such a way that, though logs are generated, very little good is accomplished by doing so. For example, consider an environment built around an application that isn't thread-safe and, therefore, cannot scale. One may be able to capture performance logs, but besides adding more compute or memory resources, very little can be done to enhance performance.

Observability is the ability to monitor in order to effect change.

Observability is more than just having the capacity to create logs. Observability is about generating and capturing the necessary telemetry to effect change. To be clear, even though an environment may generate logs, that doesn't mean it is observable. In the previous example, the environment would not be considered "observable" because the application is constructed in a way that inhibits performance. Therefore, the ability to generate performance logs is a moot point.
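To make the distinction concrete, here is a minimal sketch of what "generating the necessary telemetry" can look like in practice: emitting structured, machine-parseable events rather than free-form text. The field names (service, operation, duration_ms, outcome) are illustrative assumptions, not a standard schema.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)

def log_event(service, operation, duration_ms, outcome):
    """Emit one structured telemetry event as a JSON line.

    Field names here are illustrative; any consistent schema works.
    """
    record = {
        "timestamp": time.time(),
        "service": service,
        "operation": operation,
        "duration_ms": duration_ms,
        "outcome": outcome,  # e.g. "ok" or "error"
    }
    # Structured JSON lines can be parsed, aggregated, and acted upon;
    # a wall of unstructured text usually cannot.
    logging.getLogger("telemetry").info(json.dumps(record))
    return record
```

The point is not the logging library; it is that each event carries enough context to drive a decision later, which is what separates telemetry you can act on from logs that merely exist.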


Monitoring is the practice of capturing and reporting telemetry essential for gaining actionable insights to efficiently respond to incidents and to effectively improve design. Monitoring, a practice executed against an observable workload or environment, is conducted for a single purpose. That is to gain actionable insights—insights that provide direction.

If observability effects change, monitoring directs where the change should occur.

Before we continue further, let’s briefly address what monitoring is not. Monitoring is not merely capturing a bunch of data—logs, traces, telemetry, etc.—for the sole purpose of doing so. It’s not a competition to see who can generate the most data. Additionally, monitoring is not simply turning on a collection of dashboards so that we “feel good” that our environment is being monitored. I’ve asked many customers what they have learned from their dashboards. What insights have they uncovered? More often than not, those questions go unanswered. Yet, customers remain pleased with the beautiful dashboard designs and the blinking lights.

Proper monitoring produces actionable insights. It tells us how our systems and processes are operating and gives us quality feedback on our SLOs. It shows us our gaps and the areas where we need to improve. As the definition says, correct monitoring practices should improve efficiency in responding to incidents, and they should effectively challenge our application designs and architectures. It should be understood that, without SLOs, monitoring is simply log harvesting without fruit. However, once SLOs have been defined, monitoring is intended to enhance our SLIs. The result of monitoring is constant improvement.
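The SLO/SLI relationship above can be sketched in a few lines: an SLI is a measurement derived from telemetry, and the SLO is the target that measurement is judged against. The 99.5% availability target and the sample events below are illustrative assumptions, not prescribed values.

```python
def availability_sli(events):
    """Compute an availability SLI: the fraction of requests that succeeded."""
    if not events:
        return 1.0  # no traffic, nothing violated
    ok = sum(1 for e in events if e["outcome"] == "ok")
    return ok / len(events)

def slo_met(events, target=0.995):
    """Return True when the measured SLI meets the SLO target."""
    return availability_sli(events) >= target

# Illustrative telemetry: 997 successes and 3 errors out of 1,000 requests.
events = [{"outcome": "ok"}] * 997 + [{"outcome": "error"}] * 3
print(availability_sli(events))  # 0.997
print(slo_met(events))           # True: 0.997 >= 0.995
```

Without the target, the 0.997 figure is just harvested data; with the SLO defined, the same number becomes direction, telling us whether the service is within budget or where improvement must occur.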

Without SLOs, monitoring is simply log harvesting without fruit.

If you aren’t currently monitoring your environments or applications, or if you are unsatisfied with your workload performance, let us know. Missional can improve your workload’s performance and resiliency.
