
# Progress Metrics Applied

## A Blueprint to use Progress Metrics for Pre-Market Product Portfolios

### A Unified View of a Product

With a unified view of a ‘product’ as a business model, we then needed an effective way to evaluate the progression of that business model and produce a stable metric. This evaluation had to balance accelerating product development cycle time against derisking critical assumptions.

### Measuring Confidence

To measure confidence objectively, we employ four dimensions: Customer Do vs. Customer Say, Real World vs. Lab Context, Data vs. Opinion, and High Fidelity vs. Low Fidelity.

Customer Do vs Customer Say

"Customer say" refers to the preferences, concerns, or satisfaction levels customers express verbally or in writing. Think: customer surveys, focus groups, and “market research.” "Customer do," on the other hand, refers to the actual actions customers take, such as making purchases (monetary or otherwise), making choices, or engaging with a product or service; these actions often reflect true behavior beyond stated preferences.

Real World vs Lab Context

"Real World" refers to evidence generated in environments and situations outside controlled settings, where products are used in diverse and dynamic conditions, providing authentic insight into user behavior and performance. In contrast, "Lab Context" refers to controlled experimental setups where products are tested under specific conditions, typically slanted toward the business's advantage.

Data vs Opinion

"Data" refers to factual information collected and analyzed systematically, providing objective insight into trends, patterns, and behaviors. "Opinion" represents subjective viewpoints or beliefs based on personal experiences, perspectives, or biases, which may not align with empirical evidence or data-driven conclusions.

High Fidelity vs Low Fidelity

"High fidelity" refers to experiments that closely resemble the final product or product feature, providing richer opportunities for customers to interact with critical behavioral assumptions. Conversely, "low fidelity" refers to simplified or rough drafts of product features or prototypes, focusing on basic behaviors and interactions.

The organization can then use a Confidence Score to evaluate the evidence generated for a particular product in development. One of the keys to more accurate confidence scoring is to frame confidence in terms of concrete decision making: resource allocation, funnel advancement, and/or investment.

The question becomes: Are we adequately or inadequately confident to make a well-informed decision to advance this product to the next phase in the funnel and/or put further resources into it? To what extent? And why?

The answer to this question is expressed as a Confidence Score, which provides feedback to the product team and scales into an overall Progress Metric.

Confidence Score Scale

• 0 = Not addressed yet; not within the scope of this evaluation

• 1 = The team has provided little evidence; there is low confidence in the evidence

• 2 = The team has provided some evidence; there is medium-to-low confidence in the evidence

• 3 = The team has provided partial evidence, not adequate to make a decision; neutral/moderate confidence

• 4 = The team has provided evidence sufficient only for a premature decision; there is medium-to-medium-high confidence in the evidence

• 5 = The team has provided adequate evidence to make a decision; there is high confidence in the evidence

### Calculating the Progress Metric: Aggregate Confidence in the Business Model

A Progress Metric is the aggregate result of confidence in a developing business model, utilized over the lifecycle of a new product as it moves through a structured product development model. In the simplest terms, the Progress Metric measures early-stage products through objective confidence, as measured by the Confidence Score, in the evidence-based market opportunity of that product.

To calculate the Progress Metric, we aggregate the Confidence Scores across the nine building blocks of the business model, weight each component according to risk factors (industry, market, and/or others), and produce a single ‘metric that matters’ to measure the progression of a pre-market product. The aggregate score maps to phases in the development funnel:

• Customer–Need Fit = 0–21

• Need–Solution Fit = 22–39

• Solution–Market Fit = 40–49

• Micropilot (ready for pilot) = 50+
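As a rough sketch, the aggregation might look like the following. The building-block names, the default weights, and the exact scaling are my assumptions for illustration; the article does not specify an implementation.

```python
# Hypothetical sketch of aggregating Confidence Scores into a Progress
# Metric. Block names, weights, and scaling are illustrative assumptions.

BLOCKS = [
    "customer_segments", "value_propositions", "channels",
    "customer_relationships", "revenue_streams", "key_resources",
    "key_activities", "key_partnerships", "cost_structure",
]


def progress_metric(scores: dict, weights: dict) -> float:
    """Weighted sum of 0-5 Confidence Scores across the nine blocks.

    With unit weights this is simply the 0-45 total; risk weights above
    1.0 let higher-risk blocks move the metric more.
    """
    for block in BLOCKS:
        if not 0 <= scores[block] <= 5:
            raise ValueError(f"Confidence Score for {block} must be 0-5")
    return sum(scores[b] * weights.get(b, 1.0) for b in BLOCKS)


def funnel_phase(metric: float) -> str:
    """Map an aggregate score onto the article's funnel-phase thresholds."""
    if metric >= 50:
        return "Micropilot (ready for pilot)"
    if metric >= 40:
        return "Solution-Market Fit"
    if metric >= 22:
        return "Need-Solution Fit"
    return "Customer-Need Fit"
```

Note that under this sketch's scaling, unit weights cap the total at 45 (nine blocks × 5 points), so reaching the 50+ Micropilot threshold would require risk weights above 1.0 on some blocks — one illustration of why the weighting scheme matters.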

Successful product leaders and teams have utilized the Progress Metric to:

• Shape sprint planning and backlogs

• Drive progress between scores

• Inform key decisions for their development and for the portfolio

• Generate conversations and deeper dives into accelerating and/or bolstering progress

With consistent evaluations, product portfolio leaders can begin to see the trends, swings, tendencies, and ultimately the movement of not just a single product but the entire product development portfolio.

### The Progress Metric Framework: A Case Example

(Note: For the sake of anonymity, I will generalize the team, organization, and numbers outlined; however, the case is based on a real application of the Progress Metric Framework.)

In this case, the score was lagging behind the timeline portfolio leadership had anticipated. This triggered a deeper exploration into the score.

We also see the Confidence Score in the Revenue component of their business model receive a dramatic increase since the last evaluation. Is this due to a new competitor product successfully establishing a pricing baseline? Has a new development on the supply side drastically reduced the cost of manufacturing? Did an experiment into the potential of a different revenue model produce strong evidence?

In this case, an unanticipated competitor product had indeed entered the market successfully. Although the competitor product was intended to solve a different need, its usage effectively addressed the customer need the product team had focused on.

In the end, four results occurred.

• The team spent the next few sprints focusing on the Customer Need they were addressing, generating deeper evidence for the line-by-line of the ‘Jobs to be Done’ and the ‘Pain Points’ associated with the ‘Unmet Need.’ This led to a better understanding of how the market was shifting and why the new competitor product was effective.

• Team Green became a cultural case study for the organization in the celebration of ‘failing fast.’

In the next article in the ‘Introducing Progress Metrics Framework’ series, I will take a closer look at the application of the framework within an organization and address some key questions about the practical use of Progress Metrics. How do you actually run a Progress Metric session? Who should provide scoring, and how many scorers? How often should you score? How do these scores inform leadership and key stakeholders upstream in the organization?


About the Author

Jeremiah Gardner

Award-winning keynote speaker, bestselling author, and elite cave diver. Jeremiah helps leaders find clarity in the dark.
