Introduction
Some followers of this blog will know that my own personal involvement with Smartodds stems from an academic paper I co-authored with Mark Dixon on calculating match outcome probabilities for football matches. Several newspapers picked up on our work, and eventually this led to us being featured on ‘Tomorrow’s World‘, which was a popular science TV programme in those days.
Mark was interviewed on the show and one of the things he said was that he realised the statistical tools he’d been using for his academic research could be equally well applied to football match results, and that was the genesis of our paper. This is indeed one of the great things about Statistics: it provides a set of tools which can be applied in a very similar way across a variety of applications that, on the face of it, have very little in common.
But this homogeneity also carries risks, especially as statistical models are often misinterpreted as statements of fact, with little regard for the context in which they have been developed or the uncertainty in the results themselves. The potential for misinterpretation is often then magnified by the media, who are happy to pick up on headline results and conclusions, without any real regard as to whether those results stand up to serious scrutiny or not.
To be fair to Tomorrow’s World, they did a reasonable job at describing our work in lay terms, though they did conclude their piece with a challenge to our model’s accuracy: could we make a profit by placing a single bet on each of the following weekend’s Premier League fixtures? Obviously, with so few games, this is no way to seriously judge a statistical model. Nonetheless, we did make a small profit, and the bookmaker donated our nominal winnings to charity.
Statistics and the Media
I was reminded of all this a short while ago when almost every media channel carried an article based on statistical research that had – and I’ll quote the Mirror here – “calculated the exact formula for predicting backseat tantrums”. Different outlets put it in slightly different ways:
Sky: Maths expert creates formula to predict children’s backseat tantrums on long car journeys.
The Guardian: T-minus 10: Statistician writes formula to predict kids’ backseat tantrums.
The Week: Mathematical formula can prevent child tantrums in cars.
Scary Mommy: A Scientist Says He’s Figured Out An Exact Equation for Predicting Kids’ Car Tantrums
And so on and on and on…. In fact, a quick Google search led to more than 40 different media outlets carrying the story, and that’s just the English-speaking ones. If you prefer your news in Italian, for example, there’s La Repubblica: Bimbi e capricci da rientro a casa: per evitarli c’è anche una formula (“Kids and tantrums on the journey home: there’s even a formula to avoid them”).
The articles are all based on a statistical study by Dr James Hind of Nottingham Trent University. Dr Hind surveyed 2000 parents and carried out a statistical analysis on the data he collected. This led to the following formula:
T = 70 + 0.5E + 15F – 10S
where
- T is the expected time (in minutes) to a tantrum;
- E is the time (in minutes) by which a child is entertained (books, videos etc.);
- F is 1 if food is provided, and 0 otherwise;
- S is 1 if siblings are present, and 0 otherwise.
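Taking the formula at face value, it can be sketched as a simple function. This is just a toy illustration of the published equation; the function and variable names are mine, not Dr Hind’s:

```python
def tantrum_time(entertainment_mins: float, food: bool, siblings: bool) -> float:
    """Expected minutes until a backseat tantrum, per the reported formula:
    T = 70 + 0.5E + 15F - 10S."""
    return 70 + 0.5 * entertainment_mins + 15 * int(food) - 10 * int(siblings)

# 10 minutes of entertainment, food provided, no siblings:
# 70 + 5 + 15 - 0 = 90 minutes.
print(tantrum_time(10, food=True, siblings=False))  # 90.0
```

Note that the output is a single expected value in minutes, not a probability – a point that matters below.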
According to ITV:
this is the code parents can use to crack the probability of their offspring’s backseat breakdowns.
But let’s untangle things a little. Plainly, this ‘code’ can’t ‘crack the probability’ of anything, since it says nothing about probability or chance. At best it’s a formula that tells you what will happen on an ‘average’ car trip to an ‘average’ family, whatever that might be.
Assuming it to be correct, you can plug some numbers in and find, for example, that if that family provide books and sandwiches on a car journey with their son, and leave his sister at home, they’ll get an average of 90 minutes’ peace before a tantrum starts. But what are their chances of avoiding a tantrum if they have to make a 60 minute journey? There’s no way of knowing from this formula.
It’s likely that Dr Hind’s results are based on a statistical model which would enable such probability calculations, but since there’s no link to the actual research, we can’t be sure. But there are other questions too:
- How accurate is the formula? Were 2000 responses enough to ensure it’s reliable? And how were these 2000 parents selected? Is it reasonable to assume their children are a fair cross-section of the wider population of kids?
- It’s a little disconcerting that the numbers in the formula – 70, 0.5, 15, 10 – are all conveniently round. Some rounding is always sensible, but these numbers just seem too good to be true.
- What if someone’s son has always thrown a tantrum as soon as they’ve turned the first corner? Can they still rely on this formula for their next journey?
- Were the variables in this formula all significant? Would predictions be worse if we dropped any of the terms?
- What other variables were considered and found not to be relevant?
- There are some very obvious factors which one would think should be included in such a formula, but aren’t. For example, ‘Age’. Is it really plausible that a 2-year-old and a 12-year-old would have the same expected time to a tantrum?
- What methodology was used, and based on what assumptions?
- And… what can be said about probabilities rather than averages?
But here’s the thing: not one of the media outlets carrying this story even hinted at any of the questions I raised above. Nor did they question the research in any other way. Many reported that it allowed parents to calculate the probability of a tantrum, even though the formula itself says nothing about probabilities. None questioned the accuracy, applicability or authenticity of the formula they reported. Most commented that it was based on research funded by the car breakdown company LV= Britannia, but none gave a link to any sort of publication, academic or otherwise, that might give more details about the methodology used.
All in all this just seems like lazy journalism to me. Admittedly, this is taking way too seriously a piece of research that was most likely meant to be just tongue-in-cheek. And while one might question the accuracy of the tantrum formula and the method by which it was derived, it’s almost certainly good practice to give your kids food and entertainment – and keep other kids to a minimum – when making long journeys. So whether the formula can be trusted or not, its basic message is undoubtedly valid. Nonetheless, I believe the media should be more critical in its reporting of Statistics.
That said, things are getting better. One side-effect of the Covid pandemic has been a wider public interest in Statistics, and a greater preparedness for the media to handle statistically-based stories. But on the journey to consistently high standards in the media representation of Statistics – are we there yet? Definitely not.
Postscript
Here are a couple more recent examples of ‘magic formulae’ for day-to-day situations that were derived from statistical analyses and widely reported in the media…
The perfect chip butty formula:
You want to make a chip butty using bread that weighs b grams. You intend to apply m grams of butter and add k grams of ketchup. How many chips should you place in the sandwich? Based on the results of a survey of 2000 customers of the food chain Iceland, it turns out that the minimum weight of chips required (in grams) is given by this formula:
C ≥ 3b/4 + k + 3m
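Again taking the reported formula at face value, the minimum chip weight is straightforward to compute (a toy sketch; the function name and example quantities are my own):

```python
def min_chips(bread_g: float, butter_g: float, ketchup_g: float) -> float:
    """Minimum weight of chips (grams) for a butty: C >= 3b/4 + k + 3m."""
    return 0.75 * bread_g + ketchup_g + 3 * butter_g

# 100g of bread, 10g of butter, 10g of ketchup:
# 75 + 10 + 30 = 115g of chips at minimum.
print(min_chips(100, butter_g=10, ketchup_g=10))  # 115.0
```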
The perfect tights formula:
Tights can be uncomfortable to wear if they are too thick or too thin. But this depends on the weather: the cooler it is, the thicker the tights required to stay warm. The following formula, based on a statistical analysis of a statistician’s wife’s preferences, provides the optimal denier (thickness) of tights, depending on wind speed, w, and temperature, t (units unreported):
D = 110 – 110/(1 + A)

where

A = e^((√w – t)/(2π))
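For what it’s worth, the tights formula is also easy to evaluate as written, remembering that the units of w and t were unreported (a sketch only; the function name is mine):

```python
import math

def optimal_denier(wind_speed: float, temperature: float) -> float:
    """Optimal tights thickness: D = 110 - 110/(1 + A),
    where A = exp((sqrt(w) - t) / (2*pi))."""
    a = math.exp((math.sqrt(wind_speed) - temperature) / (2 * math.pi))
    return 110 - 110 / (1 + a)

# With no wind and a temperature of zero, A = e^0 = 1,
# so D = 110 - 55 = 55 denier.
print(optimal_denier(0, 0))  # 55.0
```

One sanity check the formula does pass: D always lies between 0 and 110, and colder or windier conditions push it towards thicker tights.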
What’s the Link?
Notice that the sample size of 2000 in the chip butty study is exactly the same as that of the tantrum study. Was that really the optimal choice in both cases? And regardless of your level of statistical expertise, you’ll be aware that there’s no reason why a formula based on data collected from a single person should be applicable to a wider population.
But just as with the reporting of the tantrum study, neither of these issues – nor indeed any other – was raised in any of the media reporting. And there’s something else which links these two formulae with the tantrum formula. Here are the links again to check:
Draw your own conclusions.