Prioritization & Scoring Framework

Background / Context

To drive predictable growth and strategic alignment, we must adopt a data‑driven, outcome‑focused roadmap. This requires a lightweight but rigorous prioritization framework that balances value, effort, and impact, while still accommodating time‑critical needs (e.g. custom work, churn‑prevention, contract renewals, or other deadlines).

WSJF-RICE Hybrid Framework

To achieve a more strategic and predictable planning process, we require a system that:

  1. Links every backlog item to measurable revenue impact or risk reduction (e.g., addressing technical debt).

  2. Is inherently difficult to manipulate or "game."

  3. Clearly surfaces the Cost of Delay in understandable terms.

While WSJF is a solid foundation for prioritizing items that deliver high value with low effort (a current need for our customers), the B2B SaaS context also requires consideration of market reach and of our confidence in the estimates. Therefore, this proposal outlines a hybrid framework blending elements of WSJF and RICE, structured around the following components:

| Component | Description & Scoring | Why It Matters |
| --- | --- | --- |
| Impact: MRR12 (40%) | Mid-case incremental or retained MRR over 12 months, mapped to a 1–5 scale: 1 = < $25k; 2 = $25k–$100k; 3 = $100k–$250k; 4 = $250k–$500k; 5 = > $500k. See "How to Estimate MRR". | Primary driver of cash and ARR growth. |
| Risk Reduction & Enablement (20%) | Severity and likelihood of technical debt, compliance gaps, or platform reusability, scored from 1 (negligible) to 5 (existential). | Prevents catastrophic failures and unplanned rewrites. |
| Time Criticality (15%) | Days until deadline or renewal cliff: 1 = > 180 days; 2 = 90–180 days (1–2 quarters); 3 = < 90 days; 4 = < 60 days; 5 = < 30 days. | Ensures we meet hard dates for contracts, CDRs, renewals, etc. |
| Reach (15%) | Percentage of customers or users impacted in the first 90 days: 1 = < 5%; 2 = 5%–30%; 3 = 30%–55%; 4 = 55%–80%; 5 = > 80%. | Prioritizes features with broad customer impact. |
| Confidence (10%) | Evidence quality: 1 = gut feel; 2 = preliminary data and customer validation; 3 = lo-fi design and feasibility assessed; 4 = hi-fi functional prototype tested; 5 = multiple design iterations, signed LOI, etc. | Guards against "HiPPO" bias; enforces validation. |
| Effort (denominator) | Estimated sprints × a complexity multiplier (e.g. API, mobile, infra). Used as a raw value and as a capacity gate. | Normalizes for scope; smaller effort raises priority. |

References: WSJF (Scaled Agile Framework), RICE (ProductPlan)

Score = Σ (Weights · Components) ÷ Effort

The prioritization score is calculated by summing the weighted scores of the numerator components and dividing by the Effort. The higher the resulting score, the higher the priority.
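
For example, a hypothetical initiative scored MRR12 = 4, Risk Reduction & Enablement = 2, Time Criticality = 3, Reach = 3, and Confidence = 2, with an Effort of 3 sprints, would score (0.40 × 4 + 0.20 × 2 + 0.15 × 3 + 0.15 × 3 + 0.10 × 2) ÷ 3 = 3.10 ÷ 3 ≈ 1.03.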

In this framework, all numerator components are quantitative, using a 1–5 scale. For Impact (MRR12), the estimated dollar impact is determined first and then mapped to the corresponding 1–5 band. Similarly, Risk Reduction & Enablement, Time Criticality, Reach, and Confidence are assigned a 1–5 score directly from their defined criteria. Every initiative must have estimates for at least Impact (or its equivalent in strategic importance) and Risk Reduction & Enablement; no item can proceed to prioritization without these foundational estimates. Mapping to fixed bands ensures a standardized scale, promoting consistency and preventing subjective, free-form inputs that obscure true comparative impact.
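
As a minimal sketch of the banding step, the helper below (the function name is illustrative; the boundaries simply restate the MRR12 bands from the table) maps a mid-case dollar estimate to the 1–5 scale:

```python
def mrr12_band(mrr12_dollars: float) -> int:
    """Map a mid-case 12-month MRR estimate (USD) to the 1-5 band."""
    if mrr12_dollars < 25_000:
        return 1
    if mrr12_dollars < 100_000:
        return 2
    if mrr12_dollars < 250_000:
        return 3
    if mrr12_dollars < 500_000:
        return 4
    return 5
```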

To make the framework difficult to game, we use a weighted composite score for the numerator. An inflated score in one dimension (e.g. MRR12) raises the overall score, but it is counterbalanced by low scores in other dimensions, such as Confidence (if evidence is weak), or by a high Effort estimate. Because Confidence explicitly ties priority to the quality of supporting data (e.g. user tests, signed CDRs or LOIs) rather than to opinions, any stakeholder attempting to unfairly influence prioritization must either produce convincing data or accept a lower Confidence score, which tempers the initiative's overall priority. This transparent formula, clear weighting, and defined rating system build trust and make attempts at manipulation more visible.

This framework also surfaces the Cost of Delay. The MRR12 component directly represents the revenue anticipated to be gained or retained over a 12-month period if an item ships, and therefore forgone if it is delayed. While the formula uses a scaled score for MRR12, the underlying dollar value remains the basis for its strategic importance regarding Cost of Delay.

Additionally, Time Criticality quantifies urgency. By scoring deadlines on a 1–5 scale tied to specific timeframes (e.g. days until a critical event such as a renewal), we translate calendar dates into a numeric factor that contributes to the overall urgency reflected in the score. A higher Time Criticality score significantly boosts an item's priority as its deadline approaches, reflecting the increasing cost of not acting.
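
A companion sketch for this mapping (again, the function name is illustrative and the thresholds restate the Time Criticality bands from the table):

```python
def time_criticality(days_until_deadline: int) -> int:
    """Map days until a hard date (renewal, contract, CDR) to the 1-5 band."""
    if days_until_deadline < 30:
        return 5
    if days_until_deadline < 60:
        return 4
    if days_until_deadline < 90:
        return 3
    if days_until_deadline <= 180:
        return 2
    return 1
```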

The resulting score provides a single, comparable "weighted value per effort unit" (e.g. points per Sprint). This allows for more objective ranking and decision-making when slotting work, ensuring that development resources are allocated to initiatives that promise the optimal balance of strategic value, risk mitigation, urgency, reach, and achievability.
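
Putting the pieces together, the following end-to-end sketch computes and ranks scores for a backlog. The weights mirror the table above, while the function names and backlog items are purely illustrative:

```python
# Weights from the framework table above.
WEIGHTS = {
    "mrr12": 0.40,
    "risk": 0.20,
    "time_criticality": 0.15,
    "reach": 0.15,
    "confidence": 0.10,
}

def priority_score(component_scores: dict[str, int], effort: float) -> float:
    """Weighted composite of 1-5 component scores, divided by Effort (sprints)."""
    numerator = sum(WEIGHTS[name] * component_scores[name] for name in WEIGHTS)
    return numerator / effort

# Hypothetical backlog items, for illustration only.
backlog = [
    ("Initiative A", {"mrr12": 4, "risk": 2, "time_criticality": 3,
                      "reach": 3, "confidence": 2}, 3.0),
    ("Initiative B", {"mrr12": 2, "risk": 5, "time_criticality": 2,
                      "reach": 4, "confidence": 4}, 5.0),
]

# Rank highest score first; higher "weighted value per effort unit" wins.
for name, scores, effort in sorted(
    backlog, key=lambda item: priority_score(item[1], item[2]), reverse=True
):
    print(f"{name}: {priority_score(scores, effort):.2f}")
```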