Our Wildly Important Goals (Product OKRs)
Level 1 Goal: This is the most important goal for CXT Software, known as the Wildly Important Goal (WIG). All initiatives in the company should contribute to this goal. It appears first in the goal hierarchy below.
Level 2 Goal: This represents the more specific strategic objective for each department (e.g., Product) that has been identified as having the greatest possible impact on the WIG.
Level 3 Goals: These are the individual Product Manager and Team (Squad) Goals, which consist of Lead measures connected to our existing workflows. Lead measures are both predictive of the WIG’s success and influenceable by the team. They will be shown on a live scoreboard, so that PMs can see, at a glance, if they are winning or losing the game.
| Goal Level | Objective |
|---|---|
| Level 1 (Company WIG) | Grow from $8M to $10M ARR by Dec 31, 2025 |
| Level 2 (Product) | Elevate Net Promoter Score from 6 to ≥ 20 by Dec 31, 2025 |
| Level 3 (PM / Squad) | Achieve an average DDQS of ≥ 6 on features shipped in Q3 and Q4 |
| Level 3 (PM / Squad) | Achieve a PLOAR of ≥ 75% on features shipped in Q3 and Q4 |
| Level 3 (PM / Squad) | Maintain CIRCT of ≤ 45 days for ≥ 90% of pain/detractor-tagged insights in Q3 and Q4 |
WIG: Driving Customer Loyalty and Perceived Value
Increase Net Promoter Score (NPS) from 6 to ≥ 20 by Dec 31, 2025
Lag Measure: Net Promoter Score (NPS). This is the ultimate measure of customer loyalty and the success of this WIG. It will be calculated quarterly from a random, representative sample of active users to ensure statistical validity and control for biases related to tenure or engagement levels. As a classic lag measure, the NPS score tells a story about the past; it is the result of countless interactions and experiences with the product. To influence the future score, we must focus on the predictive activities that create satisfied, loyal customers.
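For reference, the standard NPS arithmetic (share of promoters scoring 9–10 minus share of detractors scoring 0–6, on a 0–10 survey scale) can be sketched as follows. The sample responses are illustrative only, not real survey data.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    computed over a sample of 0-10 survey responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative quarterly sample: 5 promoters, 3 passives, 2 detractors
sample = [10, 9, 9, 10, 9, 8, 7, 8, 4, 6]
print(nps(sample))  # (5 - 2) / 10 -> 30
```

In practice the input would be the quarterly random sample of active users described above; the rounding convention is an assumption, as NPS is conventionally reported as a whole number.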
Lead Measure 1.1: Discovery & Definition Quality Score (DDQS)
The Discovery & Definition Quality Score (DDQS) is a composite score, calculated for each feature upon passing the 'Develop' stage gate, that quantifies the rigor of the upfront problem validation and solution definition process. The score is calculated using data points already mandated by our workflows:

DDQS = (Confidence Score at Develop Gate) + (Number of Customer Interviews Linked in Discovery Brief)
The target is for PMs to achieve an average DDQS of 6 or higher (3 for Confidence + 3 for interviews) on all shipped features per quarter.
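Taking the formula above as a straight sum, the per-feature score and the quarterly average could be computed as in this sketch; the feature values are illustrative, not real data.

```python
def ddqs(confidence_score, interview_count):
    """DDQS = Confidence Score at the Develop gate + number of customer
    interviews linked in the Discovery Brief (per the formula above)."""
    return confidence_score + interview_count

def quarterly_average_ddqs(features):
    """Average DDQS across a quarter's shipped features.
    `features` is a list of (confidence_score, interview_count) pairs."""
    scores = [ddqs(c, n) for c, n in features]
    return sum(scores) / len(scores)

# Illustrative quarter: three shipped features
shipped = [(3, 5), (3, 4), (2, 3)]
avg = quarterly_average_ddqs(shipped)
print(avg, avg >= 6)  # average of 8, 7, 5 is about 6.67, above the target
```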
Connection to Workflow: This measure is directly and entirely influenced by the PM's work in the 'Discover' and 'Define' phases of our product lifecycle. To achieve a high DDQS, a PM must execute the prescribed process diligently. They must conduct the required "5+ customer interviews" and link the notes in the Discovery Brief artifact. This activity, combined with creating lo-fi wireframes and assessing feasibility, provides the necessary evidence to justify raising the 'Confidence' score in the prioritization framework from a "gut feel" (score of 1) to "lo-fi design and feasibility assessed" (score of 3) or higher. The PM's daily discovery time blocks and weekly discovery/strategy sessions are the dedicated opportunities to perform the work that drives this score.
Predictive Impact: The predictive power of the DDQS lies in its ability to serve as a proxy for building the right thing. A high NPS is the result of delivering products that customers find valuable and intuitive, which stems from deeply understanding and solving their actual problems.
The 'Confidence' score is not merely an input for prioritization; it is a direct, quantifiable measure of process adherence and problem-solution validation. A higher confidence score is a leading indicator that the PM has successfully de-risked the problem space. When a PM consistently produces features with a high DDQS, they are demonstrating a repeatable, disciplined process for validating that a problem is real and that the proposed solution is viable. This discipline is highly predictive of building features that customers will love and recommend. The causal chain is clear:
- NPS, the lag measure, is driven by customer satisfaction and loyalty.
- Customer satisfaction is driven by effectively solving real, significant user problems.
- The 'Discover' and 'Define' stages are explicitly designed to "prove the problem is real" and "select the winning approach".
- The DDQS, composed of the 'Confidence' score and customer interview count, is a mandated, non-gameable metric that quantifies the quality of the evidence gathered during these critical, front-loaded stages.
- Therefore, a team that consistently achieves a high DDQS on its initiatives is systematically increasing the probability of shipping valuable features, which, in turn, is highly predictive of a future increase in the NPS lag measure. This directly connects the PM's daily process discipline to the strategic outcome.
Bottleneck Identification: Tracking the DDQS will immediately highlight bottlenecks in the discovery pipeline. If a PM's average DDQS is consistently low, it points to a specific process failure. Are they struggling to schedule customer interviews? Is their Discovery Brief failing to pass the IPF review? It allows for targeted coaching on discovery habits rather than waiting for the lagging indicator of a failed feature or low NPS score months later.
Lead Measure 1.2: Post-Launch Outcome Achievement Rate (PLOAR)
The Post-Launch Outcome Achievement Rate is the percentage of a PM's shipped features that meet or exceed their pre-defined success KPIs within the 14/30/60/90-day 'Measure & Learn' windows. For example, if a PM ships 4 features in a quarter and 3 of them achieve their outcome targets, their achievement rate is 75%.
PLOAR = (# shipped features that hit their success KPIs within 14/30/60/90 days) ÷ (# features shipped in the quarter)
The H2 2025 target Post-Launch Outcome Achievement Rate (PLOAR) for a PM is 75%.
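The ratio is simple enough to sketch directly; this illustrative version takes one boolean per shipped feature (True if the feature hit its success KPIs within its review windows).

```python
def ploar(outcomes):
    """Post-Launch Outcome Achievement Rate, as a percentage.
    `outcomes` holds one boolean per feature shipped in the quarter:
    True if the feature met its success KPIs within 14/30/60/90 days."""
    if not outcomes:
        raise ValueError("no features shipped this quarter")
    return 100 * sum(outcomes) / len(outcomes)

# The worked example from the text: 4 features shipped, 3 hit their targets
rate = ploar([True, True, True, False])
print(f"{rate:.0f}%", rate >= 75)  # 75% meets the H2 2025 target
```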
Connection to Workflow: This measure holds the PM accountable for the entire end-to-end lifecycle, perfectly aligning with our cultural commitment to "Own the Outcome". The process begins in the 'Define' stage, where the PM is required to document "success KPIs & guardrails" in the Discovery Brief v2. It continues in the 'Deliver' stage, where they must ensure "KPI instrumentation" is complete and that analytics and telemetry are firing correctly before launch. The loop is closed in the 'Measure & Learn' stage, where the PM is explicitly accountable for conducting the 14/30/60/90-day outcome review, analyzing the Pendo dashboard, and documenting the "KPI deltas versus initial targets" in Confluence.
Predictive Impact: This measure transforms the PM's role from a feature shipper to a true outcome owner. It creates a powerful and rapid feedback loop that is essential for learning and improvement. A PM who knows they will be measured on the actual results of their work is intensely motivated to get the upfront discovery and definition right.
Case studies demonstrate that companies that use customer feedback to prioritize their roadmap and address user problems see their NPS scores improve. This lead measure operationalizes that principle. When a PM's features consistently achieve their intended outcomes—be it reducing user task time, increasing engagement with a key workflow, or reducing errors—they are demonstrably delivering tangible value to users. The consistent delivery of value is the most reliable way to improve overall user satisfaction and, consequently, the NPS lag measure.
A PM with a high Outcome Achievement Rate is a PM who has mastered the entire cycle from hypothesis to validation. Their ability to repeatedly deliver on the promised value of a feature is a strong predictor of their ability to positively impact the broader measure of customer loyalty.
Bottleneck Identification: A low Outcome Achievement Rate is a powerful diagnostic tool. It forces a retrospective analysis: Was the initial problem misunderstood (a failure in 'Discover')? Were the success KPIs defined poorly (a failure in 'Define')? Was the user experience flawed (a failure in 'Develop')? Or was the feature launched without proper enablement (a failure in 'Deliver')? It points directly to the weakest link in the PM's execution of the end-to-end process, allowing for precise coaching and improvement.
Lead Measure 1.3: Customer Insight-to-Resolution Cycle Time (CIRCT)
Definition & Target
Customer Insight-to-Resolution Cycle Time (CIRCT) captures how rapidly CXT turns a validated customer pain point into tangible product value. It is measured as the calendar days between the moment a Productboard note is marked Processed (meaning the insight has been reviewed, tagged as a pain/detractor, and linked to a roadmap feature) and the moment that card’s Portal status changes to Launched. We will track the 90th-percentile of those cycle times over a rolling ninety-day window, with the goal that 90% of pain-tagged insights are fully resolved within 45 days. This threshold balances urgency with realism and is short enough that even slow-moving items still feel prompt to customers.
CIRCT = 90th_percentile(launched_occurred_at – processed_at) over rolling 90-day window
The target CIRCT in any rolling 90-day window is that 90% of pain/detractor-tagged insights are resolved in ≤ 45 days.
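One way to compute the window statistic above is a nearest-rank 90th percentile over the cycle times (in days) of insights resolved in the rolling 90-day window. The percentile method and the sample values are assumptions for illustration; the document does not mandate a specific interpolation method.

```python
import math

def circt_p90(cycle_days):
    """CIRCT: nearest-rank 90th percentile of insight cycle times, in days.
    Each value is (launched_occurred_at - processed_at) for one
    pain/detractor-tagged insight in the rolling 90-day window."""
    if not cycle_days:
        raise ValueError("no resolved insights in window")
    ordered = sorted(cycle_days)
    rank = math.ceil(0.90 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

# Illustrative window: ten insights, nine resolved within 45 days
window = [5, 12, 18, 22, 25, 30, 33, 38, 44, 61]
p90 = circt_p90(window)
print(p90, p90 <= 45)  # P90 of 44 days is inside the 45-day target
```

With nearest-rank, P90 ≤ 45 days is equivalent to the stated target that 90% of insights resolve in ≤ 45 days.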
Connection to Workflow: The clock starts in daily feedback triage when a note is tagged Pain / Detractor and linked to a roadmap item. The note is validated in the weekly discovery block, then flows through Develop and Deliver gates under normal WIP limits. The clock stops when the fix ships and the Portal card flips to Launched.
Predictive Impact: Responsiveness turns detractors into promoters. Industry data shows that customers whose reported pain is fixed within six weeks convert to Promoter at more than double the rate of those who wait longer. By relentlessly collapsing the insight-to-resolution loop, PMs:
- Demonstrate that "CXT listens," driving positive sentiment and loyalty.
- Prevent small frustrations from compounding into systemic dissatisfaction.
- Create frequent "wow" moments that lift NPS faster than large, slow initiatives.
Bottleneck Identification: CIRCT naturally surfaces where friction lives:
| Segment | Typical Root Cause | Coaching / Intervention |
|---|---|---|
| Triage Delay (Note → Idea) | Feedback inbox backlog, unclear tagging norms | Daily triage habit; tighten Productboard taxonomy |
| Validation Delay (Idea → Planned) | Difficulty scheduling interviews, decision paralysis | Pair-interview support; enforce 2-week Discover time-box |
| Delivery Delay (Planned → Launched) | Overloaded WIP, QA bottleneck, unclear ACs | WIP-limit audit; swarm Friday Flow Review; refine AC templates |
Whichever segment expands first tells the PM exactly where to focus process improvements well before a slip shows up in the NPS lag measure.
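Splitting one insight's cycle into the three segments above is a date subtraction per stage transition. The timestamp names below (note processed, idea created, planned, launched) are illustrative stand-ins for the Productboard/Portal status-change dates, and the sample dates are invented.

```python
from datetime import date

def segment_delays(note_processed, idea_created, planned_at, launched_at):
    """Break one insight's cycle into the three table segments:
    triage (Note -> Idea), validation (Idea -> Planned),
    delivery (Planned -> Launched). Returns days spent in each."""
    return {
        "triage": (idea_created - note_processed).days,
        "validation": (planned_at - idea_created).days,
        "delivery": (launched_at - planned_at).days,
    }

# Illustrative insight: delivery is clearly the expanding segment
delays = segment_delays(date(2025, 7, 1), date(2025, 7, 3),
                        date(2025, 7, 14), date(2025, 8, 20))
print(max(delays, key=delays.get), delays)  # 'delivery' dominates at 37 days
```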
At-A-Glance Overview of the Lead-Measure Set
| Lead Measure | Key Question Answered |
|---|---|
| Discovery & Definition Quality Score (DDQS) | Are we choosing the right problems and defining solutions rigorously? |
| Post-Launch Outcome Achievement Rate (PLOAR) | Did the shipped solution deliver the intended outcome? |
| Customer Insight-to-Resolution Cycle Time (CIRCT) | How fast do customers experience the fix or value? |