
How to Estimate Complexity

Complexity Assessment Rubric – How to Score Effort on a 1‑5 Scale

Before you can compare initiatives apples‑to‑apples, convert the raw effort estimate from Engineering (story points, hours, or “sprints”) into a normalized “Effort” score (1 = smallest, 5 = largest).
That conversion has two parts:

  1. Size – How long will it take? (Engineering estimate)

  2. Complexity – How risky or uncertain is the work?

Key idea: Ten straightforward days of coding (low complexity) is not equal to ten days spent pioneering a new architecture (high complexity). The latter carries far more delivery risk and therefore should score higher.
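The Size half of the conversion can be sketched as a simple bucketing of the engineering estimate. The day thresholds below are illustrative assumptions, not part of the rubric; calibrate them to your own team's sprint length and estimation units.

```python
# Hypothetical mapping from an engineering estimate (in days) to a
# normalized 1-5 Size score. The thresholds are assumptions for
# illustration only -- tune them to your team's cadence.
def size_score(days: float) -> int:
    thresholds = [(2, 1), (5, 2), (10, 3), (20, 4)]
    for limit, score in thresholds:
        if days <= limit:
            return score
    return 5  # anything beyond the largest bucket is the maximum size
```

For example, under these assumed thresholds a 10-day estimate maps to a Size of 3, while a 30-day estimate maps to 5.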

Rate complexity across the five dimensions in the rubric below, then take the highest single dimension as the Complexity score.

| Dimension | Low (1–2 pts) | Medium (3–4 pts) | High (5 pts) | Illustrative Examples |
|---|---|---|---|---|
| Technical implementation | Re‑use existing patterns; isolated code changes; minimal dependencies | Some new patterns or light refactor; affects several services | New architecture or unproven tech; refactors core components | Low: add a field to an existing screen. High: first release of AI‑driven autonomous dispatch. |
| Integration impact | No new integrations; tweak existing internal API | 1–2 new standard REST integrations or major changes to several internal modules | Multiple complex integrations (legacy EDI, streaming); overhaul of integration layer | Low: expose an existing data point via REST. High: real‑time, bi‑directional link to a 3PL's proprietary system. |
| Requirement stability | Well‑defined, stable, agreed by stakeholders | Mostly clear but some ambiguity; minor scope shifts likely | Vague, evolving, or contentious; high scope‑creep risk | Low: clearly reproduced bug fix. High: "next‑gen customer portal" based on loose, shifting ideas. |
| Data‑model change | No schema change; existing structures used | Minor schema tweak; simple migration | New core entities or major table redesign; heavy migration; performance risk | Low: show an existing field in a new report. High: introduce "Multi‑leg Shipment" entity affecting ordering, tracking, billing. |
| New‑tech risk | Familiar stack; team has deep experience | Moderately new lib/framework; some learning curve | First use of a new tech stack or ML technique; many unknowns | Low: add feature with current Java/.NET libs. High: deploy first ML model with a new MLOps pipeline. |
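The rubric's scoring rule (rate each dimension 1–5, then take the highest single dimension) can be expressed directly. The dimension names below follow the table above; the example ratings are made up.

```python
# Complexity score per the rubric: the highest single dimension wins,
# because one high-risk dimension is enough to endanger delivery.
def complexity_score(ratings: dict[str, int]) -> int:
    for dim, pts in ratings.items():
        if not 1 <= pts <= 5:
            raise ValueError(f"{dim} must be rated 1-5, got {pts}")
    return max(ratings.values())

# Example: four easy dimensions don't offset one hard integration.
score = complexity_score({
    "technical_implementation": 2,
    "integration_impact": 4,
    "requirement_stability": 1,
    "data_model_change": 3,
    "new_tech_risk": 2,
})  # -> 4
```

Taking the maximum rather than the average reflects the key idea above: a single risky dimension dominates the delivery risk of the whole initiative.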

This Effort score is used as the denominator in WSJF‑RICE. It rewards lean, well‑understood work and properly penalizes risky, uncertain initiatives.
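To illustrate Effort's role as the denominator, here is one common form of the RICE calculation. The exact WSJF‑RICE formula your team uses may weight or combine factors differently; this sketch only shows how a higher Effort score lowers priority.

```python
# One common form of the RICE priority calculation, using the normalized
# 1-5 Effort score from this rubric as the denominator. Illustrative
# only -- your WSJF-RICE variant may differ.
def rice_priority(reach: float, impact: float, confidence: float,
                  effort: int) -> float:
    return (reach * impact * confidence) / effort

# Same initiative, two Effort scores: riskier work ranks lower.
lean = rice_priority(reach=100, impact=2, confidence=0.8, effort=2)   # 80.0
risky = rice_priority(reach=100, impact=2, confidence=0.8, effort=5)  # 32.0
```

Holding the numerator constant, the lean initiative (Effort 2) scores 80 while the risky one (Effort 5) scores 32, which is exactly the penalty on uncertain work described above.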