There are situations where measurement is rejected with the claim that it isn’t accurate enough. There may be underlying motivations for this resistance – a fear of judgement perhaps – but challenging accuracy could mean that the concept of measurement itself is misunderstood.
Information Value is Purposeful
Measurement is an observation which reduces uncertainty about some true quantity. Accuracy is how close a measurement gets to that true quantity. However, we may never be able to know the 'true' quantity of something, because all measurement has error. Even if we could achieve zero measurement error, the cost of obtaining this perfect information (or waiting for certainty to come along) would outweigh its value.
What really matters is whether the knowledge made possible by the measurement is adequate for some purpose which, in business, is usually a decision with positive or negative financial consequences. Until we know the purpose, we can’t know how ‘good’ a measurement needs to be or whether it is needed at all.
A lot of measurement in business happens without deeper thinking about its purpose and economic value.
Let's say I'm uncertain about the length of a plank of wood. Before any measurement attempt is made, information about some property (the length) of an object (the plank) is absent. After a measurement is made, new information is acquired. The new information yielded by the measurement has brought my knowledge of the plank's length closer to some true value.
I may attempt a measurement with my eyes, stand it up next to me or walk alongside it in UK size 10 boots. Are these measurements good enough? We still can’t say. It all depends on the use I will make of this information:
- If I need to decide whether to transport the plank inside the car or strap it to the roof rack then I’ve reduced uncertainty enough and done it quickly at low cost.
- If I need to decide where to cut the plank to fit a 2350mm hole in a floor then I need to invest in reducing uncertainty further. If I make a cut without more information I will either have to waste a lot of time cutting and testing the fit or waste a lot of money buying more planks.
In the second case, investing in a better measurement instrument – a tape measure and the ability to use it – will help me to avoid economic waste.
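The economic logic of the two cases can be sketched as an expected-cost comparison. All figures here are hypothetical, invented purely for illustration – the source gives no actual costs or probabilities:

```python
def expected_cost(p_bad_cut: float, cost_of_waste: float, cost_of_measuring: float) -> float:
    """Expected total cost of the cutting decision: chance of a bad cut
    times the cost of wasted planks, plus the cost of measuring at all."""
    return p_bad_cut * cost_of_waste + cost_of_measuring

# Hypothetical numbers: a wasted plank costs £15.
# With the boot: free, but a high chance the cut misses the 2350mm target.
boot = expected_cost(p_bad_cut=0.8, cost_of_waste=15.0, cost_of_measuring=0.0)

# With the tape measure: £5 of time and effort, small residual risk.
tape = expected_cost(p_bad_cut=0.05, cost_of_waste=15.0, cost_of_measuring=5.0)

print(f"boot: £{boot:.2f}, tape measure: £{tape:.2f}")
# The better instrument is worth buying only when it lowers expected cost.
```

Under these assumed numbers the tape measure wins; for the roof-rack decision, where a wrong guess costs almost nothing, the boot already minimises expected cost.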
The Cost of Uncertainty
This simple example illustrates that the decision being made and the cost of making the wrong choice – where to cut a plank – are what give a measurement its value. Whether or not a measurement needs to be more accurate depends on the use of the information, and this use is usually economic even for the most intangible of things.
A lower accuracy, lower cost measurement instrument – like my boot – can reduce uncertainty a lot. Seeking yet more measurement accuracy at higher cost eventually produces diminishing gains. Imagine pumping up a bike tyre: easy to begin with, but progressively harder as the pressure builds.
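One standard way to see these diminishing gains is through repeated readings: if each reading carries independent random error, averaging n of them leaves an uncertainty proportional to 1/sqrt(n). A minimal sketch, with an assumed 30mm of random error per reading:

```python
NOISE_SD = 30.0  # mm of random error per reading (an assumed figure)

def uncertainty_after(n: int) -> float:
    """Standard error of the mean of n independent readings: SD / sqrt(n)."""
    return NOISE_SD / n ** 0.5

for n in (1, 4, 16, 64):
    print(f"{n:>2} readings -> +/- {uncertainty_after(n):.1f} mm")
# Quadrupling the number of readings only halves the uncertainty:
# like the bike tyre, each extra pump achieves less than the last.
```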
Controlling for Accuracy & Precision
All measurement instruments have both accuracy and precision errors. Accuracy error comes from a systematic bias which shifts readings some consistent distance from the true value. Precision error comes from the randomness between one measurement and the next. The two types of error are often confused and the terms used interchangeably.
In performance improvement we are more interested in relative change over time. In this case accuracy matters less, because the same systematic bias appears in each sample and cancels out when samples are compared. Random error can be controlled for statistically, so that we know whether a change between samples was a true signal or just noise.
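This cancellation of bias can be shown with a small simulation (all numbers hypothetical). The instrument below always reads high, yet the *difference* between two samples still recovers the true change, and comparing that difference to its standard error separates signal from noise:

```python
import random
import statistics

random.seed(1)

BIAS = 40.0     # systematic error: this instrument always reads ~40 units high
NOISE_SD = 5.0  # random error from one reading to the next

def measure(true_value: float, n: int) -> list[float]:
    """Simulate n readings from a biased, noisy instrument."""
    return [true_value + BIAS + random.gauss(0, NOISE_SD) for _ in range(n)]

before = measure(100.0, n=30)  # true performance before a change
after = measure(110.0, n=30)   # true performance improved by 10

# Every absolute reading is wrong, but the bias cancels in the difference:
shift = statistics.mean(after) - statistics.mean(before)
print(f"estimated change: {shift:.1f}")  # close to the true +10

# A rough signal test: is the shift large relative to its standard error?
se = (statistics.variance(before) / 30 + statistics.variance(after) / 30) ** 0.5
print(f"t-like statistic: {shift / se:.1f}")  # well above ~2 suggests a real signal
```

The same idea underpins formal tools like control charts and two-sample t-tests; this is only a sketch of the reasoning, not a substitute for them.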
Seeking Integrity or Insight?
Much of the ‘Best Practice’ KPI guidance out there conveniently ignores real world uncertainty and error, especially in relation to detecting signals and setting targets. By being aware of error and adjusting for it we can still make better decisions with imperfect information than we can without it.
More accuracy only matters if it changes a decision to one which produces better consequences.
Similarly, having too many metrics which don't contribute to improving something is, quite simply, waste. Pages of performance reports which never get read are testament to this.
People with a technical background seem to find it especially hard to shake beliefs about data integrity and this diverts attention from seeking greater meaning and insight. It can be counter-intuitive that lots of accurate metrics can have lower collective decision value than a handful of more-meaningful but less-accurate ones. I suspect this is one reason why technical and operational units struggle to evidence business value using off-the-shelf metrics instead of designing their own strategic improvement KPIs.
I’m off to cut that plank …