You can't manage what you can't measure

February 28, 2025

In “A dog’s dinner”, I returned to a theme I have written about before, which, although a little dry for some tastes, goes to the heart of the debate about what has been going on in the UK economy and how vital Establishment bodies, like the Bank of England, seem incapable of measuring it accurately and consistently. For example, a week after the Bank of England told us that the economy had shrunk by 0.1% in Q4 last year, the ONS told us it had grown by 0.1%. This may seem trivial to some, but I come back to the old adage, which is especially pertinent to economics: if you can’t measure something accurately, you can’t manage it. In other words, poor data and the failure to interpret it appropriately will inevitably lead to damaging policy mistakes.

And if you are tempted to think of me as some kind of rabid iconoclast, don’t take my word for it. Just under a year ago, the globally respected former Fed chairman Ben Bernanke, in a report commissioned by the Court of the Bank of England (essentially the Bank’s board), concluded that the Bank’s forecasting infrastructure, especially its main macroeconomic model, was no longer fit for purpose. I think this is quite important, yet the report seems to have had no impact on the way we all accept, without question, the Bank’s prognostications about the economy and what the appropriate monetary policy settings should be.

I believe the Bank’s woeful forecasting record combined with Bernanke’s diagnosis should have prompted wider, more fundamental questions not just about the Bank’s discredited model but also about the very unusual way in which the UK’s statistical bodies go about measuring the UK economy. I am not sufficiently qualified to know precisely what is going on at the ONS, the OBR and the Bank of England, nor to identify exactly what the differences are between the various statistical bodies that compile data for the G7, but I do know, for example, that the UK is alone in estimating GDP by reference to output on a monthly basis.

Without getting too much into the weeds, I just wanted to outline what I see as one of the major measurement problems with macro statistics. Here’s an example I know quite a bit about to help explain it.

Measuring output at Woodford IM

As some of you know, I set up Woodford Investment Management in 2014. At its peak in 2017, it employed over 110 people and had assets under management of about £17 billion. It had one activity, fund management, and our clients paid us to provide that service for them. Although I focused exclusively on fund management, I was part of the organisation's leadership team. Nevertheless, I had absolutely no idea how WIM’s output changed from month to month. Nor did anyone else in the business, not least because we never tried to measure it. If we had, I have no idea what metric we would have used to calculate that output.

Of course, we knew exactly how much money we had spent (along with other things like how many client meetings we'd completed, sheets of paper printed and even cups of coffee drunk), but the “output” of WIM, in my opinion, was unknowable and unmeasurable. And I can equally imagine that the same goes for every bank, law firm, insurance company, estate agent, IFA, fund manager, retailer, school, police force, etc.

In other words, measuring the UK economy by reference to output measures seems to me inherently difficult, even impossible. And yet that is exactly what the ONS does each month (service industries in the UK account for 81% of total UK output and 83% of employment).

Measuring service-based output is hard

Some of the difficulties alluded to here were highlighted in an ONS publication I read last week about some of the very weird data that emerged from the UK during the pandemic and its associated lockdowns. The paper was written to highlight the measurement problems the ONS encountered during the pandemic as they related specifically to what it calls non-market services output and, importantly, how its methodology differed from those employed by other G7 countries. (“Non-market services may include goods and services purchased at market price (by governments) on behalf of households who then receive them free or below economically significant prices” – like healthcare, education and other public services.)

As the ONS says, there is “ambiguity in quantifying what constitutes production (output)”. The examples they use are the following but could be extended across the private sector and across many service industries:

“Are firefighters producing more output if they attend more fires, or is their state of readiness production in itself? And, what constitutes one unit of education output? Is it based on inputs, student numbers, or the quantity of learning conveyed?”

These are interesting questions, but the reality is that whatever methodology the ONS deployed during COVID, it produced a dramatically different economic picture from that experienced by other developed economies that were doing fundamentally exactly the same things.

If you believe this data, aside from France (which apparently experienced a similar collapse and then recovery in public sector output), what happened in the UK was extreme in the context of what was happening across the rest of the developed world.

But this is just not credible. What the UK and possibly France experienced was not extraordinary. It was very much in line with what was happening elsewhere. What this data tells me is that there are important measurement differences here, and I believe they persist to this day.

Furthermore, my guess is that the “differences” or asymmetries, as they are sometimes referred to, are errors in the ONS’s methodology, not just differences in interpretation. After all, and as I said at the time, how can the UK have been experiencing a collapse in public sector output, which is what the data required us to believe, when government spending was expanding at a rate not seen since the Second World War?

Something else I learned yesterday shines an even brighter light on what I have described as very noisy, volatile, and fundamentally unreliable UK trade data. If you have read “A dog’s dinner” you will know that the Bank of England’s latest downgrade to the outlook for the UK economy is largely attributable to its mysterious downgrade of the UK’s expected export performance in 2025, something that received no attention at the press conference or in subsequent media commentary. I commented in that article that Andrew Bailey didn’t refer to this in the meeting because he probably hasn’t got a clue why the Bank’s model spewed out this odd number.

What I learned yesterday reinforces that view. Official UK trade data shows that the UK enjoys a significant trade surplus with the US. Indeed, it is our largest surplus by a considerable margin, at £71 billion. The extraordinary thing, however, is that according to trade data compiled by the US Bureau of Economic Analysis, the US enjoys a trade surplus with the UK of $14.5 billion. Quite clearly, both cannot be right.
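To put a number on how far apart these two official views sit, here is a small illustrative calculation. The £71 billion and $14.5 billion figures are the ones cited above; the exchange rate is an assumed round number for the sake of the example, not official data.

```python
# Illustrative comparison of "mirror" trade statistics for the same bilateral
# balance. The GBP/USD rate is an assumption for the example, not official data.
uk_reported_surplus_gbp = 71e9     # UK data: UK surplus with the US, in pounds
us_reported_surplus_usd = 14.5e9   # US BEA data: US surplus with the UK, in dollars
gbp_usd = 1.25                     # assumed exchange rate

uk_view_usd = uk_reported_surplus_gbp * gbp_usd   # the UK's claim, in dollars
us_view_usd = -us_reported_surplus_usd            # the same balance seen from the US side
asymmetry_usd = uk_view_usd - us_view_usd         # gap between the two measurements

print(f"UK view of the bilateral balance: +${uk_view_usd / 1e9:.2f}bn")
print(f"US view of the same balance:      -${us_reported_surplus_usd / 1e9:.2f}bn")
print(f"Total measurement asymmetry:       ${asymmetry_usd / 1e9:.2f}bn")
```

On these assumptions the two statistical agencies disagree about the same flow of goods and services by roughly $100 billion, which is the scale of the asymmetry the paragraph above is pointing at.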

I am not about to engage in a debate about who is right and who is wrong. I just don’t know. But yet again, it’s clear that there are serious measurement errors here, which receive no attention from the Establishment or from the analysts, economists and commentators, all of whom seem happy to accept that the data are reliable and paint an accurate picture of UK economic reality. I am not so sure. In fact, I believe that UK economic data measurement is fundamentally flawed, and that the ONS could and should do better, as should the Bank of England and the OBR. If they did, maybe their forecasts would start to become more accurate and reliable, and by implication, fiscal and monetary policy judgements would improve. In the meantime, I would advise both institutions to take a long hard look at their forecasting record and embrace the Fed’s and the ECB’s approach, which is to keep their detailed forecasts to themselves.

Comparing the US and UK

This leads me to another issue linked to this measurement problem. In this section, I want to highlight some important similarities between the US and the UK, as well as some significant differences. These differences are widely accepted as reflective of the UK’s fundamentally poor economic performance. This is accepted wisdom. However, without wishing to appear deluded or in denial, if you accept, as I do, that the recording of UK economic data is flawed, then there may be more to this apparent divergence in performance than the consensus understands. In fact, I hope to show that to believe the data accurately represents poorer outcomes, you have to believe that there is something inherently inferior about the UK labour force compared to the US.

Let’s start with the data, and a comparison between the US and the UK.

These two charts show that the increase in hours worked in both economies since the pandemic is roughly equal, at about 3%. However, that’s where the similarity ends. In the US, output and output per hour have both grown strongly since the pandemic, but in the UK, they have flatlined. In the US, output per hour has grown by 9% since 2019; in the UK, it hasn’t risen at all.

Before tackling what this implies, I should add that hours worked are very accurately measured in both economies, but as you already know, output certainly isn’t in the UK.

So, what does this mean? Well, the average US worker has become 9% more productive than the average UK worker since the pandemic struck in early 2020. That’s what the data says, but is that intuitively right? I don’t know, but I am struggling to believe it. For this to be credible, you have to believe that in the space of five years, US workers have become an additional 9% better in terms of output per hour (the result of physical and mental effort) than their UK peers. This conclusion hardly chimes, for example, with the UK’s higher ranking than the US in terms of education outcomes, nor with obesity rates that are significantly higher in the US than in the UK.
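The arithmetic behind that comparison is simple: productivity is output divided by hours worked, so if hours rose by roughly the same 3% in both economies, the entire measured gap comes from the output series — the very series argued above to be unreliable in the UK. A minimal sketch, using illustrative index levels (2019 = 100) chosen to match the approximate figures quoted, not official ONS or BLS series:

```python
# Output per hour = output / hours worked. With hours up ~3% in both economies,
# the measured productivity gap is driven entirely by the output numbers.
# Index levels (2019 = 100) are illustrative round numbers, not official data.
def output_per_hour_growth(output_index: float, hours_index: float) -> float:
    """Percentage growth in output per hour, from a base of 100/100."""
    return ((output_index / hours_index) - 1.0) * 100.0

us_growth = output_per_hour_growth(output_index=112.3, hours_index=103.0)  # ~+9%
uk_growth = output_per_hour_growth(output_index=103.0, hours_index=103.0)  # 0%

print(f"US output per hour since 2019: {us_growth:+.1f}%")
print(f"UK output per hour since 2019: {uk_growth:+.1f}%")
```

The point of the sketch is that because hours are well measured, any error in the UK output index feeds one-for-one into the headline productivity comparison.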

The UK Public Sector

A more detailed look at the UK’s productivity data also raises some important questions, especially about what’s happening in the public sector. If you believe the data, we all have to draw some very worrying conclusions about the sustainability of public services. If you don’t, it implies that, as a nation, we are failing to record properly what’s actually going on in the economy, which has equally profound implications for policymakers. Maybe the reality is that there is an element of both at work: deteriorating productivity in public services (and there are a number of obvious causes there), combined with the mis-measurement of output across an increasingly service-focused economy.

First, to the data. If you believe this very poor productivity data for the UK economy, you need look no further than the public sector for an explanation. According to the data, its record on output per hour since the pandemic has been utterly woeful, as you can see in the two charts below.

The healthcare sector is arguably the most shocking. It shows a near 20% fall in the sector’s productivity since the pandemic. If this really is an accurate reflection of what’s happening in the NHS, it is deeply alarming and totally unsustainable.

This speaks to the lazy political consensus view that the answer to the NHS’s problems is always to provide it with more money. I am sure more money could be usefully deployed in improved front-line services, more doctors, nurses, and midwives, and better access to life-saving cancer medications, for example. But surely, in tandem with that must come a root-and-branch examination of what’s going wrong in the healthcare sector and the political determination to address this glaring betrayal of patients and taxpayers.

Someone I have known very well since my first year in school has had a very long career in the NHS, in clinical practice at one of the country’s top teaching hospitals, and is now in administration at a very senior level. We often talk about the problems he and the organisation confront. What surprises me is that, alongside some very complex issues, many of the solutions to the NHS’s biggest challenges are very simple, such as giving the organisation the ability to fire persistently poorly performing staff who, I am told, often end up being promoted out of harm’s way instead of being removed, as one would expect in any properly functioning organisation.

Conclusion

As you may have guessed, I am not confident that the UK statistical authorities accurately record what is happening in the economy. Fundamentally, this appears to be a function of the ONS’s flawed approach, which I am told is not shared by our economic peers, of attempting to calculate what’s going on in the economy by reference to output measures. I can only imagine how challenging that is at the level of a whole economy when I know it would have been nigh on impossible in my own business.

At best, the ONS's data is consistently unreliable, and at worst, it is totally misleading. This cannot be good for policymakers in the UK, who need an accurate picture of economic reality to set fiscal and monetary levers at the right level. The muddle is further complicated by the insistence of the Bank of England and the OBR on drowning their audience in hugely detailed forecasts that lack credibility and accuracy. My advice to both would be to adopt a Fed-like minimalist approach and to employ more judgement and pragmatism, rather than drowning in detail and in unknowable, unmeasurable academic concepts such as the output gap and potential supply capacity.

Basically, let’s just keep it simple, please.

Just the highlights ✨

The Bank of England says the UK economy is shrinking. The ONS says it’s growing. So which is it? The truth is, we don’t actually know, because the UK’s economic data is fundamentally flawed.

Unlike the US, which publishes stable, quarterly GDP figures, the UK relies on highly volatile monthly GDP estimates that are frequently revised—sometimes by tens of billions of pounds. Yet policymakers, investors, and the media react to these numbers as if they are definitive. They are not.
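The volatility point is partly mechanical: a small error in the estimated level of monthly GDP translates into a much larger error in the month-on-month growth rate, and can even flip its sign. A hypothetical illustration, with invented round numbers rather than ONS data, echoing the −0.1% versus +0.1% disagreement mentioned above:

```python
# A small level-measurement error becomes a large growth-rate error.
# All numbers here are hypothetical, chosen only to illustrate the mechanics.
true_level_prev, true_level_now = 100.0, 100.1   # true month-on-month growth: +0.1%
measurement_error = -0.2                         # level mismeasured by just 0.2%

measured_now = true_level_now + measurement_error
true_growth = (true_level_now / true_level_prev - 1) * 100
measured_growth = (measured_now / true_level_prev - 1) * 100

print(f"True monthly growth:     {true_growth:+.1f}%")     # +0.1%
print(f"Measured monthly growth: {measured_growth:+.1f}%") # -0.1%: the sign flips
```

A 0.2% error in the level — well within plausible measurement noise for a services-heavy economy — is enough to turn a growing economy into a shrinking one on paper.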

What’s Wrong With the Data?

  • The UK is a services-driven economy (80%) but GDP is measured using models designed for manufacturing-heavy economies.
  • Public sector productivity is severely underestimated, distorting the real picture of economic output.
  • Short-term GDP figures are highly unreliable, yet they drive interest rate decisions and market sentiment.

Why Measuring Output in a Services Economy is Flawed

Manufacturing output is easy to track—you count the number of cars, machines, or goods produced. But how do you measure the “output” of a lawyer, consultant, or software engineer? Many high-value services don’t produce physical goods, making traditional GDP calculations misleading. Worse still, ONS methods systematically understate the productivity of public services like healthcare and education, meaning the UK’s real economic output is likely far stronger than official figures suggest.

Yet these flawed numbers drive critical decisions—leading to higher interest rates, lower confidence, and misinformed economic narratives.

Why This Matters

  • Interest rate policy is being made on bad data. If GDP figures are wrong, the Bank of England may be keeping rates too high for too long, hurting growth.
  • Investors are flying blind. Market movements are being driven by economic reports that later get revised beyond recognition.
  • Public spending is being misrepresented. The way the ONS accounts for government services systematically understates their contribution to GDP, reinforcing misleading economic narratives.

What Needs to Change?

  1. Fix the methodology. The UK’s GDP calculations must reflect a modern, services-led economy.
  2. Stop overreacting to monthly figures. Short-term GDP swings are mostly noise—long-term trends matter far more.
  3. Recognise the risks. Until these flaws are fixed, GDP reports should be treated with extreme caution.

Bad data leads to bad decisions. Right now, the UK is making major policy calls based on unreliable numbers. That needs to change.

Disclaimer: These articles are provided for informational purposes only and should not be construed as financial advice, a recommendation, or an offer to buy or sell any securities or adopt any particular investment strategy. They are not intended to be a personal recommendation and are not based on your specific knowledge or circumstances. Readers should seek professional financial advice tailored to their individual situations before making any investment decisions. All investments involve risk, and past performance is not a reliable indicator of future results. The value of your investments and the income derived from them may go down as well as up, and you may not get back the money you invest.
