I always recommend Product Management to people who are easily bored, as you get to solve new problems every few weeks. These problems don’t have a one-size-fits-all solution and, since there’s no formal academic training for PMs, you can find people of different backgrounds coming up with radically distinct approaches to the same issue. However, we do have to solve the same type of problems, and this is why we use frameworks.
In this article I’d like to share a personal framework (as if Product Management didn’t already have hundreds) I’ve developed over the last few years to solve one of the most recurring yet important problems of this role: defining feature success and how to measure it.
You’ll probably be familiar with most of the concepts here, but I’ve found that most KPI-definition literature circles around entire products or business units rather than features (AKA, smaller portions of a bigger product). Even though the two are closely related, I found some nuances worth considering. This is my take on the latter.
This framework proposes a methodology divided into three parts: context, the meaning of feature success, and KPI definition.
- Context is about identifying the underlying purpose of what you are building, with a few suggested tools to know where you stand and what to expect from your new feature.
- The second part is my point of view on what I define as feature success, which combines user adoption and generated value.
- Finally, the actual KPI definition uses all the above and some guidelines to determine what you should use to measure your feature performance.
Let’s get to it.
Start with why
“Why am I building this?” is the most important question you must address before creating the first specs document, drawing a wireframe or writing a line of code.
Luckily, if your company objectives and product strategy are properly defined, this is a multiple choice question. The possible answers are basically contained in a list of your company’s core business metrics. If you can’t map what you’re building to one of these, you should probably consider not building it at all. If you’re new to this type of exercise, the five whys technique is the simplest way to answer the question.
Understanding why you’re building something is not only a great sanity check, but is also the cornerstone of the narrative you’ll use to align your team and stakeholders on a common goal. Quoting the great Simon Sinek: people don’t buy WHAT you do, they buy WHY you do it. Your feature must be the most logical consequence for anyone aiming to impact that core business metric. If it’s not the case, you should revisit prioritization.
Know what to expect from your feature
I’ve learned over the years that it’s impossible to predict feature performance before actually shipping it, and believe me, I’ve tried. There are simply too many variables and effects that are nearly impossible to capture in a deterministic model. You can analyze historical data and distributions, but even then it’s difficult to get comparable information, and probably not worth the effort.
Having said that, I do find it quite useful to calculate best- and worst-case scenarios to have something to compare against. Take the best and the worst possible conversion rates, average sale price, DAU or whatever you’re measuring, and calculate both edge cases using benchmarks (ex: if the industry benchmark for an e-commerce conversion rate is 1%, taking 2% is an extremely optimistic case) and simple models (like funnels). Your actual feature performance should fall between these two limits. If the best-case scenario doesn’t look appealing, you should probably revisit the whole feature concept, as it may not be effective.
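The envelope above can be sketched with a simple funnel model. All numbers below are hypothetical placeholders I chose for illustration, not benchmarks from this article:

```python
# Rough best/worst-case envelope for a new e-commerce feature,
# using a visitors -> buyers funnel. Every figure here is made up.

def funnel_outcome(visitors: int, conversion_rate: float, avg_sale: float) -> float:
    """Expected monthly revenue from a simple funnel model."""
    return visitors * conversion_rate * avg_sale

monthly_visitors = 100_000
avg_sale = 40.0  # average sale price, in your currency

# Industry benchmark ~1% conversion: take 2% as extremely optimistic,
# 0.5% as pessimistic. Actual performance should land in between.
best = funnel_outcome(monthly_visitors, 0.02, avg_sale)
worst = funnel_outcome(monthly_visitors, 0.005, avg_sale)

print(f"best case:  {best:,.0f} / month")   # 80,000
print(f"worst case: {worst:,.0f} / month")  # 20,000
```

If even the optimistic 80,000 figure doesn’t justify the team’s time, that’s the signal to revisit the concept before writing specs.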
There are two main variables worth considering for context: product maturity and the type of feature.
Product maturity
For a mature product, PMF happened a long time ago and the product itself feels quite complete. A month of the team’s efforts, for example, will yield smaller results than the same amount of work at an earlier stage.
Know which stage of the product life cycle you’re working in, so you can properly manage your expectations and adjust feature strategy if necessary.
Type of feature
Use the Kano Model to compare the feature’s value proposition to other solutions in the market, and develop an idea of how users should react. Ex: If you’re working on a streaming service, an improvement on loading speed would probably have a smaller impact on user satisfaction than a delighter, such as a “Skip intro” button.
Hands on the framework!
Having understood the main business objective we’re aiming to impact, and with some context on what we’re trying to build, it’s time for some KPI definition.
The best moment to start working on this is right after you’ve done your homework and have a rough idea of how your feature will look (I consider the sweet spot to be while you’re shaping the feature, as described in Basecamp’s “Shape Up”). At this point the product’s specs are still malleable, but you’re already slightly committed and investing resources in the idea.
What is a successful feature?
Simply put, feature success is user adoption multiplied by generated value. Let’s break this down.
User adoption
It’s the number of people who see your value prop and choose to interact with your product. User adoption is about distribution, or how to get the new feature to these users. It rests on the following assumptions:
- Your feature gets to an audience that may be interested in using it. The size of this base audience is key to determining whether the feature is worth the time and effort (ex: it doesn’t make much sense to have a 10-dev team working on a problem only 2 people have).
- After seeing it, the user understands it, perceives its value and chooses to interact with it. This implies you were good at communicating (or selling) what you shipped.
Generated value
This is about how the user interaction brings value both to the user and to your company’s business, particularly in terms of the core business metrics. A measure of value could be revenue, profit, NPS, etc. I’ll explore this in detail later.
Why the multiplication?
Defining feature success as the product of these two variables makes the final result proportional to both of them. Put simply, solving a big problem for a few people is comparable to solving a small problem for a big crowd.
Analyzing edge cases (user adoption = 0, or value = 0), we get that…
- The most complete product isn’t successful if no one uses it.
- A feature that brings no value to the user or the business (in terms of your company’s main KPIs), no matter how many people see it, won’t be considered successful.
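The multiplication and its edge cases can be sketched as a toy score. The scale (adoption as a 0–1 share, value in arbitrary units) is my own assumption for illustration:

```python
# Toy illustration of feature success as adoption x value.

def feature_success(adoption: float, value: float) -> float:
    """Success is proportional to BOTH adoption and generated value."""
    return adoption * value

# Big problem for few people vs. small problem for a big crowd:
# comparable scores.
assert feature_success(0.05, 100) == feature_success(0.50, 10)

# Edge cases: if either factor is zero, there is no success at all.
assert feature_success(0.0, 100) == 0   # complete product, no users
assert feature_success(0.9, 0) == 0     # widely seen, no value
```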
Proxy Performance Metrics
Let’s now define what we’ll actually use to measure feature value (performance). In an ideal world, a performance metric…
- Can help me validate hypotheses
Meaning the number can confirm (or refute) the assumptions I made when launching the new feature.
- Provides a fast feedback loop
I need to know ASAP if what I built had an effect on this metric, in order to iterate or adjust if necessary.
- Is easily attributable
I should be able to tell to which extent what I built affected this metric.
- Provides actionable insights
I should be able to quickly determine if something’s wrong and know what to do about it.
With this logic, main business metrics are lousy performance indicators. Their behavior is usually explained by too many variables (ex: many people trying to improve them, external factors, etc.). They can also be quite laggy (ex: NPS can take a whole customer journey to show results). And, being high-level, they don’t provide much detail when something goes wrong.
This is why I use what I call “proxy KPIs” to measure value. Proxy KPIs are short-term, easily attributable metrics that relate to a main business performance indicator. The feature you’re building should have a direct impact on them.
I’ve found KPI trees to be the most powerful tool for defining proxy KPIs. The key is to identify metrics whose behavior is easily linkable to a main business indicator, but which are closer (in terms of feedback loop speed and attribution) to the feature you’re shipping.
Example 1: A higher conversion rate leads to higher margins (as it optimizes CAC). Profit is something that will be affected by multiple variables, and it could also be laggy if the BI stack doesn’t process these calculations fast enough.
Example 2: Fewer customer complaints will lead to a better NPS. NPS measures user satisfaction regarding the whole user experience, and it will take an entire customer journey to be measured. Customer complaints can be measured daily.
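A KPI tree can be as simple as a mapping from each main business metric to the faster, more attributable drivers beneath it. The structure below is a minimal sketch echoing the two examples above; the field names are my own invention:

```python
# Minimal KPI tree: each main business metric maps to candidate
# proxy KPIs (its drivers) plus a note on why the top metric is slow.

kpi_tree = {
    "profit": {
        "drivers": ["conversion_rate", "CAC"],
        "why_slow": "many variables; BI stack may lag",
    },
    "NPS": {
        "drivers": ["customer_complaints"],
        "why_slow": "takes an entire customer journey to measure",
    },
}

def proxy_kpis(business_metric: str) -> list:
    """Candidate proxy KPIs: the drivers closest to the feature."""
    return kpi_tree[business_metric]["drivers"]

print(proxy_kpis("NPS"))  # ['customer_complaints']
```

Customer complaints can be measured daily, which is exactly the fast feedback loop and easy attribution the checklist above asks for.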
Health metrics
Most growth comes at the expense of something else. Health metrics aim to measure the impact your feature may have on the product that contains it.
Health metrics could be…
- Resource consumption rate.
Ex: You’re giving a bonus to users who bring their friends to your platform. Are you spending according to budget?
- Feature cannibalization
Ex. A new payment method replaces an existing one, damaging the company’s payment processing costs.
- Unwanted user behaviors
Ex: A change in your website’s navigation bar causes a decline in conversion rates.
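Tracking a health metric can be as simple as a guardrail check against its pre-launch baseline. The 5% tolerance below is an arbitrary example, not a recommendation:

```python
# Guardrail sketch: flag a health metric (e.g. conversion rate after a
# navbar change) that degrades beyond a tolerated drop from baseline.

def health_alert(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """True when the metric fell below baseline by more than `tolerance`."""
    return current < baseline * (1 - tolerance)

# Conversion falls from 1.0% to 0.9% (a 10% relative drop) -> alert.
assert health_alert(baseline=0.010, current=0.009)

# A drop within tolerance (1.0% -> 0.97%) does not fire.
assert not health_alert(baseline=0.010, current=0.0097)
```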
Imagining what could go wrong is an exercise of holistic thinking, and it takes time and knowledge of how your own company works. Performance indicators without health KPIs are usually vanity metrics.
When it comes to actually using this in your day to day, the framework would look something like this:
- Understand why you’re working on that feature, and identify the main business metric you’re aiming to impact.
- Determine the proxy KPIs for your feature. They should be easily linkable to the main business indicator, but with a fast feedback loop and easy feature to KPI attribution.
- Identify possible unwanted effects and make sure to keep track of them.
And remember to do the sanity checks along the process…
- What you are building is aligned with the company’s objectives and product strategy. It serves a bigger purpose.
- What you’re building is (at least) among the top candidate features for solving that specific problem / moving the core business indicator.
- Analyze and compare your feature’s value prop in the market you’re in, and understand the product maturity to have an idea of how users might react.
- Calculate worst and best case scenarios, and determine if it makes sense for you to aim for a value in between. If the best possible outcome isn’t attractive, revisit the feature itself.
- The potential user base for that feature is worth the effort.
I hope you found this helpful, and please feel free to start a conversation in the comments below :)