A Question of Claims

We are bombarded daily by claims about the world: politics, health, activism, advertising on every media channel. Some claims are dramatic and attention-grabbing. Some are deliberately adversarial and inflammatory. Others have serious research behind them. Which should we pay attention to?

Extreme claims are the fuel of opinionated confrontation. On Twitter this is a race to the bottom: 140-character shouting-matches with little hope of an enlightened discourse. The competing, polarised beliefs about claims and counter-claims lead to escalating distrust, outrage and conflict.

Even a word, poorly or cleverly chosen, can trigger our associative machinery to produce a response. Journalists and politicians know this well. Words which are socially or politically loaded are good examples; think ‘capitalism’ or ‘gun control’. The source of a claim – a news channel or a venerated leader – can override our ability to question its validity, a version of the halo effect.

Part of the problem is that claims are frequently over-simplified. Our brains crave simple stories, which the media is happy to supply in bite-sized chunks. We easily accept explanations which are congruent with our beliefs; they fit neatly into our mental model of how the world works. Anything else would feel dissonant and force us to invest in thinking harder about it.

If we find it easy to accept a claim whilst others easily reject it, then the rationality warning bells should ring. If something in the real world is entirely black, it can’t also be entirely white. There is probably something wrong with our respective maps of the world, not with the world itself. There’s a good chance we’ve both jumped to a black or white conclusion when the world is almost certainly some shade of grey.

The most rational response to a claim is to use Bayesian reasoning to update our existing beliefs. Even if our beliefs start at one end of the spectrum, we should shift them when we receive new evidence. People tend to do the opposite: they dig in their heels and select their own confirming evidence. This can be exaggerated by group affinities and norms; a tribal ‘us vs. them’ response. A group’s version of the truth stops the threat of a different truth in its tracks.
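The Bayesian update described above can be sketched with a few lines of arithmetic. The numbers here are illustrative assumptions, not data from any study: a sceptical prior and a piece of evidence that is three times more likely if the claim is true than if it is false.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start sceptical of a claim (prior 0.2), then see evidence that is
# three times more likely if the claim is true than if it is false.
posterior = bayes_update(0.2, 0.6, 0.2)
print(round(posterior, 2))  # 0.43 -- belief shifts, but not to certainty
```

The point of the sketch is that rational belief moves gradually with evidence; it neither digs in at 0.2 nor leaps to certainty.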

Claims about the future are pernicious, however confident commentators seem in their own foresight. Expert meteorologists can predict the weather a week or so in advance and have very direct feedback about the validity of their models. Forecasters in many other domains have been shown to be no better than dart-throwing monkeys. Can anyone make a highly confident claim about a highly uncertain future?

We should be cautious, too, about claims which originate from ‘out there’, beyond our system. This is particularly true for organisations trying to change something. If we accept claims at face value, without local causal validation, then there’s a danger of decision errors. We’ll invest time, money and attention in one thing when we could have improved something else.

However instantly plausible a claim may seem we should expect to do some critical thinking, especially for the big important stuff.

We may only be hearing a claim because it’s more salient, newsworthy, or dramatic than it is accurate.

It can help to be skeptical about simple claims which neglect systemic complexity or chance. An example might be of the ‘eat this super-food’ reductionist variety. Others make a leap of causal inference: ‘if you do this then you’ll be a somebody’.

We may find it easy to believe claims because of a group we’re aligned with and tuned in to. Seeking a counterfactual explanation could nudge us towards a more nuanced understanding.

Any claim which contains the words ‘new research shows…’ should lead us to ask who funded it and what the prior, larger body of research shows. And read Ben Goldacre’s blog.

Claims which are a snapshot number, or a comparison of two numbers, are always a simplification of a richer story about how things are changing. Even things which feel bad can still be getting better.
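A toy illustration of the snapshot problem, using made-up incident counts: the latest figure alone feels bad, but the trend tells the richer story.

```python
# Hypothetical yearly incident counts -- still high, but falling.
incidents = [120, 100, 85, 70, 60]

latest = incidents[-1]
change = (incidents[-1] - incidents[0]) / incidents[0]

print(f"Snapshot: {latest} incidents")          # feels bad in isolation
print(f"Trend: {change:.0%} over five years")   # -50% -- getting better
```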

We can also embrace a claim as being both true AND false, however uncomfortable this might feel. Brexit could be both good and bad for the UK and the EU at the same time, with a multitude of scenarios and probabilities.

When we sense a dissonance, it might not be about the claim itself but a difference in perspective and values. But even those, when clarified, might be surprisingly similar.

Here are some steps for bringing clarity to a claim using a causal model (I’ll get around to doing an example).

  1. Draw the causal relationship on a piece of paper.
  2. Rewrite the language as variables which change over time.
  3. Ask ‘Why?’ and ‘How?’ to uncover new variables.
  4. Add in different perspectives, both congruent and dissonant.
  5. Expand the boundaries of the claim.
  6. Seek ultimate goals and values.
  7. Seek loops.
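The steps above could be sketched as a minimal data structure. The claim and variable names below are invented for illustration, loosely following the ‘eat this super-food’ example, with the claim restated as variables that change over time.

```python
# Hypothetical causal model for the claim 'eat this super-food and
# you'll be healthier', as variables linked by causal arrows.
causal_links = {
    "superfood intake": ["nutrient levels"],     # step 3: ask 'How?'
    "nutrient levels": ["overall health"],
    "exercise": ["overall health"],              # step 4: another perspective
    "overall health": ["energy", "exercise"],    # step 7: a feedback loop
}

def upstream_causes(variable):
    """List variables with a direct causal arrow into `variable`."""
    return [cause for cause, effects in causal_links.items()
            if variable in effects]

print(upstream_causes("overall health"))  # ['nutrient levels', 'exercise']
```

Even this crude sketch exposes what the original claim hides: health has more than one cause, and some causes loop back on themselves.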

It is getting harder than ever to identify trustworthy sources of claims. Which ones have our best interests at heart? Which have their own agendas? Which are exploiting a halo or social group effect? Why are we hearing this claim at all? What if we hadn’t?

Should we accept claims as truths, unchallenged, and allow them to infiltrate and distort our view of the world? I don’t think so.