Every organization knows what quality should look like for the products it builds. But what’s the best way to know how the quality you achieve compares to the quality you expect?
Quality can be both quantitative and qualitative. Many questions could be asked to determine quality. For example, "How do you feel about the product you use? How easy is it to use? Does it behave as expected? Do customers keep coming back to use our product? Do customers like how the product looks? Does the product behave as it was intended to by design?"
One way of determining quantitative aspects of quality is by using metrics. Metrics can help us understand whether the quality is improving or deteriorating. Let’s dive in and understand how to determine key metrics for product quality.
Differentiating between metrics
When selecting metrics, we can easily lose sight of the value the metrics are adding. We can also gravitate toward metrics that are easy to extract rather than metrics that actually tell a story. In addition, it is extremely important to differentiate between product and productivity metrics.
Organizations also often start measuring whatever is easy to count, which leads to comparisons between teams and their output rather than the quality of the product as a whole. Examples of such metrics include velocity per team, number of commits per person, number of defects found per team, and features delivered per team. To avoid these vanity metrics, we must define the metrics that best measure product quality.
What metrics to use for Product Quality
Although there are many metrics that can be used to determine product quality, a few have proved to be very useful in my experience. The inspiration for some of these metrics comes from the DORA project, a research program that uses behavioral science to identify the most effective and efficient ways to develop and deliver software.
Let’s take a look at 6 of the most important metrics to determine product quality.
Number of Defects
Teams can easily determine the number of defects found per sprint or per release. This metric really adds value when each defect is linked to the feature area in which it was found. It’s also possible to track the rate at which defects are raised during a release window.
Defects found earlier in the development cycle minimize context switches and provide instantaneous feedback to the team. Defects found closer to release dates can sometimes impact the delivery date based on how severe they are and subsequently delay the release.
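As a minimal sketch of linking defects to feature areas, suppose defect records are exported with a feature-area tag (the record shape and field names here are hypothetical, not from any particular tracker’s API):

```python
from collections import Counter

# Hypothetical defect records: each tagged with the feature area
# it was found in and the sprint or release window it surfaced in.
defects = [
    {"id": "D-101", "feature_area": "checkout", "found_in": "sprint-14"},
    {"id": "D-102", "feature_area": "checkout", "found_in": "sprint-14"},
    {"id": "D-103", "feature_area": "search",   "found_in": "sprint-15"},
    {"id": "D-104", "feature_area": "checkout", "found_in": "release-2.3"},
]

# Count defects per feature area; the hot spots surface first.
by_area = Counter(d["feature_area"] for d in defects)
print(by_area.most_common())
```

Grouping the same records by the `found_in` field instead gives the rate at which defects are raised across a release window.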
Functional Test Coverage
Automated tests can generate quite a few metrics. We can often count the number of tests written per feature, but the number that adds value is whether these tests helped the organization achieve higher functional test coverage rather than just code coverage. Tests may be written at multiple levels, e.g., unit, integration, functional, and end-to-end.
While it’s important to have code coverage, the real value from a product quality perspective is whether automated tests help achieve higher levels of functional coverage. Higher functional coverage can help reduce time spent on manual regression testing.
Mean Time to Green
This metric helps you determine how long it takes for a red build to go green. It gauges team efficiency: how quickly a failed build is picked up, the issue diagnosed, and a fix applied. The same approach can be used to determine whether an organization can actually adhere to its SLAs for fixing production incidents.
Speed of Development
Speed of development helps you understand productivity rather than product quality. However, this metric indirectly impacts the product’s quality. How teams write production code has a huge role to play in how the quality of the product shapes up. Code reviews, feedback from running automated tests and executing exploratory tests, and how the team responds to this type of feedback and incorporates it into the codebase, all contribute to the state of code in production.
Defect Rate in relation to Automated Tests
While writing automated tests helps speed up feature delivery, it’s still important to look into what tests are being automated. Teams should always keep an eye on the defects being discovered during development or after deployment and how they are discovered.
There are two main aims of test automation: fast feedback when changing code and the ability to catch defects when the product changes. If automated tests are not helping detect defects, it’s important to review the value they add.
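A rough sketch of this review, assuming each defect record carries a field saying how it was discovered (the field names and categories below are hypothetical):

```python
# Hypothetical defect log with how each defect was discovered.
defects = [
    {"id": "D-201", "discovered_by": "automated-test"},
    {"id": "D-202", "discovered_by": "exploratory"},
    {"id": "D-203", "discovered_by": "production"},
    {"id": "D-204", "discovered_by": "automated-test"},
]

# Share of defects the automated suite caught before anyone else did.
caught = sum(d["discovered_by"] == "automated-test" for d in defects)
detection_rate = caught / len(defects)
print(f"{detection_rate:.0%}")
```

A persistently low rate, especially with many defects surfacing in production, is a signal to revisit which tests are being automated.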
Quality of Acceptance Criteria
Finally, one of the metrics that can greatly reduce waste for any team is the quality of acceptance criteria. If the acceptance criteria are incomplete or unclear, the team ends up engineering a solution to a problem it has not correctly understood, which ultimately degrades the customer’s experience.
I have numbers. What do I do with them?
Often we have many metrics we can extract from our organization. The problems arise when we just start publishing metrics without any narrative.
The narrative around metrics helps us understand how the numbers support decisions and improvements in an organization. For example, a raw defect count is not useful until we know which components of the product those defects belong to, because that is where the product’s quality can be improved.
In the end, the goal is to enable teams to use the metrics to help improve the quality of the product they are building.