Outcome Metrics

Andreas Soller


Why is it important to measure outcomes and not only output or impact? How can outcomes be measured?

EARLY ACCESS VERSION:

Happy to receive feedback on how to improve this article further :-)


Feb 13, 2023 – Updated Aug 4, 2023, 2:26 AM

[Figure: output, impact, and outcome metrics]
Outcomes

To measure success, it is important to understand those user behaviors that drive the desired business results:

“(…) an outcome is a change in human behavior that drives business results. (…) outcomes are the changes in customer, user, employee behavior that lead to good things for your company, your organization, or whomever is the focus of your work.” – Joshua Seiden 2019:12

Defining the right metrics

When we think about agile software development, we deal in essence with three types of metrics:

  1. Output metrics
  2. Impact metrics
  3. Outcome metrics

Output metrics

Output metrics are important to measure the performance of our activities.

Examples:

  • To measure delivery speed and whether stories were implemented in an efficient order, we use story breakdown charts.
  • To improve the work process we do team retros and agree on concrete action points.

But those metrics cannot tell us anything about the usage of the working software, i.e. whether the changes we made had an impact on user behavior.


Impact metrics

Impact is about understanding what will drive business results. Think about: increasing revenue, decreasing costs, increasing market share / customer adoption, increasing revenue from existing customers, increasing shareholder revenue, increasing service delivery / productivity, strengthening the brand, increasing lifetime value, decreasing the cost of acquisition (marketing, sales people), increasing monthly recurring revenue (MRR), etc.

An impact metric is a generic way to measure business success and provides us with comparable, quantifiable data (increase / decrease).

“(…) at the highest level of a business, leaders are concerned with the overall performance of the organization, and the performance numbers they watch tend to come down to these factors – which, in our language are high-level or “impact” metrics.” – Joshua Seiden 2019:24

Usually, impact metrics are connected with business value: an indicative calculation of what business results will be generated.

Examples:

  • Cost/benefit ratio: compare the investment (effort spent) to the expected value. To give a high-level example, building a certain feature might cost 80 person days plus 5 person days per year for maintenance. Then you can compare those efforts to the expected return.
  • Business impact potential: additionally, you can calculate the potential return and how likely it is to fall in the lower / middle / high range.
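The cost/benefit example above can be sketched as a small calculation. This is a minimal illustration; the day rate, evaluation horizon, and expected return are invented assumptions, not figures from the article:

```python
# Illustrative cost/benefit sketch; all numbers except the person days
# from the example above are assumptions.
DAY_RATE = 800             # assumed cost per person day
BUILD_DAYS = 80            # one-off build effort (from the example)
MAINTENANCE_DAYS = 5       # yearly maintenance effort (from the example)
YEARS = 3                  # assumed evaluation horizon
EXPECTED_RETURN = 120_000  # assumed value generated over the horizon

# Total effort over the horizon, converted to money.
cost = (BUILD_DAYS + MAINTENANCE_DAYS * YEARS) * DAY_RATE
ratio = EXPECTED_RETURN / cost

print(f"cost: {cost}, benefit/cost ratio: {ratio:.2f}")
# → cost: 76000, benefit/cost ratio: 1.58
```

A ratio above 1 suggests the expected value outweighs the effort; with real data you would compare ranges (lower / middle / high) rather than a single point estimate.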

To utilize impact metrics it is important that they are concrete. Impact metrics used as success metrics are often too abstract: they cannot easily be translated into concrete features. Think about increasing revenue: how should this be translated into something particular?

Therefore, impact metrics serve only as a basis to define specific outcome metrics that can then be translated into output (working software). Impact metrics are more like the north star, but to travel there, we must first understand what we can do in our own solar system. It is all about making things concrete and applicable.


Outcome metrics

Outcome or user metrics bridge impact and output metrics:

  • Understand what customer or user behaviors drive business results.
  • Understand how we can increase or decrease those behaviors.
  • Measure the outcomes to make necessary course corrections.

Example (cf. Seiden 2019:32):

  • Impact: decreasing costs
  • Outcome: fewer people calling tech support for product A
  • Output: improved usability of confusing features

As not everything that brings value to users also brings value to your organization, we can be misled by looking at outcomes alone. Therefore, it is important to understand how output delivers outcomes that in turn impact your company or organization.

Types per objective

Distinguish between explorative metrics and reporting / monitoring metrics. Explorative metrics help us find better solutions and give us clues as to whether our tests are successful. Reporting or monitoring metrics help us continuously monitor our product or service.

Based on the target objective we can further differentiate:

  • Growth and activation (outcome)
  • Engagement (outcome)
  • Retention (outcome)
  • User Happiness (outcome)
  • Revenue (impact > transfer to measurable outcome)

Growth and activation

Question: “How is the product / company growing?”

  • monthly new users
  • monthly active users (…)

Engagement

Question: “How do users engage with our product / service?”

  • multiple logins per month
  • messages sent per month
  • likes / quotes per month
  • views (example: YouTube) per month
  • feedback (example: App store) per month (…)
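Engagement metrics like those above can be counted from a raw event log. A minimal sketch, assuming events arrive as (user, event type) pairs for one month; the log data is made up:

```python
from collections import Counter

# Toy event log for one month: (user_id, event_type) pairs.
# The users and events are illustrative.
events = [
    ("alice", "login"), ("alice", "message"), ("alice", "login"),
    ("bob", "login"), ("bob", "like"),
    ("carol", "message"), ("carol", "message"),
]

# Count each engagement signal for the month.
monthly = Counter(event_type for _, event_type in events)

print(monthly)
# → Counter({'login': 3, 'message': 3, 'like': 1})
```

In practice these counts would come from an analytics pipeline, but the principle is the same: each bullet above is one event type you tally per period.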

Retention

Question: “Do our users come back and continue using our product / service?”

  • retained users
  • resurrected users
  • channels through which they were resurrected (example: notification, email…) (…)
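The retained / resurrected split can be derived from monthly active user sets. A hedged sketch with invented user names, assuming we track who was active in each period:

```python
# Monthly active user sets; the names are illustrative.
active_last_month = {"alice", "bob", "carol"}
active_this_month = {"alice", "dave", "erin"}
active_ever_before = {"alice", "bob", "carol", "dave"}  # active in any earlier month

# Retained: active last month AND this month.
retained = active_this_month & active_last_month
# New: never seen before this month.
new_users = active_this_month - active_ever_before
# Resurrected: active before, skipped last month, back now.
resurrected = active_this_month - active_last_month - new_users

print(sorted(retained), sorted(new_users), sorted(resurrected))
# → ['alice'] ['erin'] ['dave']
```

The channel question from the last bullet would then be answered by joining the resurrected set against the notification or email logs.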


User happiness

Question: “How satisfied are users with our product / service?”

  • Number of complaints in support per month
  • NPS score: how likely would users be to recommend the service to others?
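NPS is conventionally computed as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch with made-up survey scores:

```python
# NPS = % promoters (9-10) minus % detractors (0-6).
# The survey scores below are illustrative.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(s >= 9 for s in scores)   # 5 respondents
detractors = sum(s <= 6 for s in scores)  # 3 respondents
# Passives (7-8) count in the denominator but not in either group.
nps = (promoters - detractors) / len(scores) * 100

print(f"NPS: {nps:.0f}")
# → NPS: 20
```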

Lagging indicator

A lagging indicator helps to measure the status quo after something has been used.


Example: On average we had 15 support calls per month last year.


Leading indicator

A leading indicator focuses on activities that result in a future change. It is about making predictions and measuring change.

Question: What does something lead to?


Example: We learn from research that opening our newsletter increases sales.

Opening our newsletter is our leading indicator. We can test this assumption by adjusting our newsletter, offering additional vouchers or making it easier for our users to enter the voucher. With each identified opportunity we want to change the behavior of our users to open the newsletter more frequently.

Let's say we believe that adjusting the newsletter will have the biggest effect on user behavior. Therefore, we run a test to check whether this assumption is correct. We measure how many adjusted newsletters are opened and whether an increased number of opened newsletters increases sales.
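The newsletter test above can be sketched as a simple comparison of two variants. All the counts below are invented; a real test would also need a significance check before drawing conclusions:

```python
# Hedged sketch of the newsletter test described above; all counts are made up.
def rate(part, whole):
    return part / whole

# Variant A: current newsletter, variant B: adjusted newsletter.
a_sent, a_opened, a_sales = 1000, 180, 18
b_sent, b_opened, b_sales = 1000, 260, 31

# Leading indicator: did the adjustment change the open rate?
open_lift = rate(b_opened, b_sent) - rate(a_opened, a_sent)

# Chain check: do more opens actually go along with more sales per open?
a_conv = rate(a_sales, a_opened)
b_conv = rate(b_sales, b_opened)

print(f"open rate lift: {open_lift:.1%}, conversion A: {a_conv:.1%}, B: {b_conv:.1%}")
```

If the open rate rises but sales per open stays flat or drops, the leading indicator did not predict the business result and the assumption needs revisiting.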


OKRs to draft metrics

Objectives and Key Results (OKRs) have become popular in recent years. The focus is not on a feature but rather on a change in user behavior as a result of your initiative.

You can define target outcomes for any initiative:

  • Product metrics: adoption, used as intended, does it improve the usage of…
  • Business metrics: growth, reputation, revenue, operational…
  • Learning metrics
  • Process metrics
  • (…)

Structure of OKRs

  1. What customer problem are we focusing on?
  2. What objective do we want to achieve?
  3. How can we measure that this objective is achieved?

(1) Problem: I see a product on … but don't know where to buy it.

(2) Objective: I can figure out how to find / buy any product I see on … .

(3) Key Result examples:

  • Purchases from … videos
  • Sharing / Viewing / Reviewing of videos with product …


Frequency

Frequency of measurement depends on the objective. Often OKRs are measured on a quarterly basis.

Risk in evaluating metrics

Stay open minded

Even though we have set up our metrics in the best way we could foresee, we might not be tracking the right data. Therefore, it is important to stay open-minded (sceptical) about your data. It might be that we didn't target the correct user group, as the changes in the working software have a greater impact on another group we didn't expect. Open-minded means that you don't just blindly trust your metrics but check them once more against your data.

Be aware of biases

“Bias: a strong feeling in favour of or against one group of people, or one side in an argument, often not based on fair judgement.” – Oxford Dictionary

Think about idea bias: we perceive our own ideas as better than those of others. Statistics show that up to 80% of all ideas are not awesome for users. Sometimes what we consider great ideas even makes things worse for our users. This means that 8 out of 10 ideas will statistically not bring the expected value for users and might even result in sunk cost.

There are many types of biases such as this one. (…)

References and recommended readings
