Category: Benefit Tracking, Thought Leadership.

Why Post-Project Benefit Reviews Are Too Late — And What to Do Instead

The case for continuous benefit measurement during execution, not retrospective assessment after closure

The Retrospective Illusion

The post-project benefit review is one of the most widely recommended and least effective practices in enterprise portfolio management. The idea is intuitive: once a project is complete, convene a review to assess whether the benefits that justified the investment were actually delivered. It sounds responsible. It sounds rigorous. And it is, almost without exception, too late to matter.

The retrospective review suffers from a fundamental timing problem. By the time it is conducted, typically six to twelve months after project closure, every lever that could have influenced the outcome has been released. The project team has disbanded. The budget has been consumed. Organisational attention has moved on. The review produces a finding, sometimes an interesting one, but never a fix.

Worse, the review creates an illusion of accountability. The organisation can point to a documented assessment and claim that benefits are being measured. But measurement after the fact is not management. It is an autopsy. And autopsies, however thorough, do not save the patient.


The Data Problem With Retrospective Reviews

Even setting aside the timing problem, retrospective benefit reviews face a data quality challenge that makes their findings unreliable. Without continuous check-ins during execution, the review team must reconstruct benefit delivery from whatever evidence is available after the fact. This typically means financial reports that may or may not isolate the project’s impact, anecdotal feedback from stakeholders whose memories have faded, and backward-looking estimates that are coloured by hindsight bias.

The attribution problem is particularly severe. If a project was funded to deliver a ten percent improvement in customer retention, and customer retention improved by seven percent during the project’s execution period, the retrospective review must determine how much of that improvement was caused by the project and how much was caused by other factors: market conditions, competitor actions, unrelated process changes, seasonal effects. Without a structured measurement baseline and periodic data points, this attribution is essentially guesswork dressed in analytical language.
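To make the baseline point concrete, here is a minimal sketch using the retention figures above. The numbers and the deliberately simplified baseline-subtraction step are assumptions for illustration, not a recommended attribution methodology:

```python
# Hypothetical numbers; baseline subtraction here is an illustration of
# why a contemporaneous baseline matters, not an attribution method.
target_improvement = 10.0    # points of retention promised in the fund request
observed_improvement = 7.0   # retention change during the execution period

# With a baseline captured from the start of execution (e.g. a comparable
# segment the project did not touch), external effects can be estimated
# and removed instead of guessed at after the fact:
baseline_drift = 3.0         # improvement observed without the project

attributed = observed_improvement - baseline_drift
print(attributed)                       # 4.0 points attributable to the project
print(attributed / target_improvement)  # 0.4, i.e. 40% of the promised benefit
```

Without the periodic data points, `baseline_drift` is unknown, and the seven-percent figure can be claimed in full or dismissed entirely, depending on who is telling the story.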

Contrast this with continuous tracking. When a benefit is measured through periodic check-ins from the start of execution, each data point is captured in context. The benefit owner records the actual value, explains the contributing factors, notes any external variables, and provides a narrative that connects the data to the delivery activity. Over the course of a twelve-month project, this creates a rich, contemporaneous record that the retrospective review can never replicate.

The difference is the difference between a medical chart maintained throughout treatment and a diagnosis attempted from memory after the patient has been discharged. The latter may arrive at the right conclusion. But it is far more likely to miss critical details, misattribute causes, and produce recommendations that do not reflect what actually happened.


The Intervention Window That Closes

The most damaging consequence of waiting until after project closure to assess benefits is the loss of the intervention window. During execution, an at-risk benefit is actionable. The project scope can be adjusted. Additional resources can be deployed. Dependencies can be escalated. The tollgate review can impose a hold condition pending corrective action. In extreme cases, the project can be stopped and the remaining capital redirected to a higher-performing investment.

None of these interventions are available after the project closes. The retrospective review identifies that a benefit was not delivered, records the finding, and files it. The capital has been spent. The opportunity to redirect it has passed. The organisation has learned something, perhaps, but it has not saved anything.

This is not a marginal issue. In a portfolio of fifty investments, even a modest rate of benefit underdelivery represents millions of dollars in unrealised value. If continuous tracking could have identified at-risk benefits during execution and triggered intervention in even a fraction of those cases, the value recovered would dwarf the cost of implementing the tracking system.

The retrospective review cannot recover value. It can only document its absence. Continuous tracking operates in the window where recovery is still possible. That window opens at the first check-in and closes when the project closes. Every month without benefit tracking is a month of lost intervention opportunity.


Why Organisations Default to Retrospective Reviews

If continuous tracking is superior in every dimension (timing, data quality, and intervention potential), why do organisations default to retrospective reviews? The answer lies in a combination of structural inertia, misplaced pragmatism, and a fundamental misunderstanding of what benefit tracking requires.

The structural inertia comes from the separation of project management and value management. Project management is a mature discipline with established processes, tools, and career paths. Value management, specifically the measurement of whether a project delivered its intended outcomes, is treated as a finance function or a strategy function that sits outside the project lifecycle. By the time it engages, the project is over.

The misplaced pragmatism comes from the assumption that tracking benefits during execution is burdensome. Project managers are already managing scope, schedule, budget, risk, and stakeholder expectations. Adding benefit check-ins feels like one more administrative task on an overloaded plate. This perception is understandable but incorrect. A structured check-in takes minutes and produces data that is far more valuable than anything a retrospective review can generate.

The misunderstanding comes from confusing benefit measurement with benefit realisation. Organisations assume that benefits cannot be measured until they are fully realised, which may take months or years after the project closes. This is true for the final realisation number. But the trajectory toward that number (the pace of delivery, the status of contributing activities, the early indicators of underperformance) can and should be measured from the moment execution begins.


What Continuous Tracking Looks Like

Continuous benefit tracking is not a parallel project. It is a lightweight measurement cadence embedded into the existing project governance rhythm. At its core, it requires three things: a defined benefit with a measurable target, a periodic check-in that records actual progress, and a status classification that triggers the appropriate governance response.

The check-in cadence aligns with the organisation’s reporting cycle, typically monthly or quarterly. At each interval, the benefit owner records the current actual value, selects a status (on track, at risk, or exceeding), and provides a brief narrative that explains the number. This data feeds into the planned-versus-actual chart, which plots the benefit’s delivery trajectory against the target trajectory defined at the fund request stage.

The governance response is tiered. On-track benefits require no action beyond continued monitoring. At-risk benefits trigger a review by the value realisation officer, who assesses the trajectory and determines whether escalation is needed. Exceeding benefits are noted as positive signals that may indicate an opportunity to accelerate or expand the investment.
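The three elements above (a benefit with a target trajectory, a periodic check-in, and a tiered status response) can be sketched as a small data model. This is a hypothetical illustration; all class names, field names, and the ten-percent tolerance band are assumptions, not any specific tool's API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    ON_TRACK = "on track"
    AT_RISK = "at risk"
    EXCEEDING = "exceeding"

@dataclass
class CheckIn:
    period: str       # reporting interval, e.g. "2025-03"
    actual: float     # measured value at this check-in
    narrative: str    # context: contributing factors, external variables

@dataclass
class Benefit:
    name: str
    target: float                  # value expected at full realisation
    planned: dict                  # period -> planned cumulative value
    check_ins: list = field(default_factory=list)

    def status(self, tolerance: float = 0.10) -> Status:
        """Classify the latest check-in against the planned trajectory.

        The 10% tolerance band is an assumed threshold for illustration.
        """
        latest = self.check_ins[-1]
        plan = self.planned[latest.period]
        if latest.actual > plan:
            return Status.EXCEEDING
        if latest.actual >= plan * (1 - tolerance):
            return Status.ON_TRACK
        return Status.AT_RISK

def governance_response(status: Status) -> str:
    """Tiered response: monitor, escalate, or flag the upside."""
    return {
        Status.ON_TRACK: "continue monitoring",
        Status.AT_RISK: "review by value realisation officer",
        Status.EXCEEDING: "flag opportunity to accelerate or expand",
    }[status]

# A five-minute check-in: one value, one status, one narrative.
benefit = Benefit(
    name="Customer retention uplift",
    target=10.0,
    planned={"2025-01": 2.0, "2025-02": 4.0, "2025-03": 6.0},
)
benefit.check_ins.append(
    CheckIn("2025-03", actual=4.5, narrative="Rollout delayed in one region")
)
print(benefit.status().value)                 # at risk: 4.5 < 6.0 * (1 - 0.10)
print(governance_response(benefit.status()))
```

Each check-in appends one record; the accumulated list is the contemporaneous delivery history that a retrospective review would otherwise have to reconstruct from memory.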

This is not an administrative burden. It is a five-minute data entry that creates a permanent record, feeds an automated comparison, and activates a governance response when needed. The cumulative effect of these five-minute entries, over the life of a twelve-month project, is a comprehensive delivery record that no retrospective review could ever reconstruct.


Replacing the Autopsy With a Vital Signs Monitor

The shift from retrospective review to continuous tracking is a shift from autopsy to vital signs monitoring. One diagnoses after the outcome is determined. The other monitors while the outcome is still being shaped.

This does not mean that post-project reviews have no place. A structured review after project closure can validate the final realisation number, capture lessons learned, and assess whether the investment thesis proved correct. But this review is far more valuable when it is built on twelve months of check-in data rather than conducted from scratch. The review becomes a synthesis of existing evidence, not a forensic investigation.

The organisations that make this shift do not just improve their benefit data. They change the fundamental relationship between investment and accountability. Benefits are not something you check on after the fact. They are something you manage in real time. And the difference between those two approaches is the difference between hoping your investments deliver and knowing whether they will.

