Thanks to those of you who wrote to say you found the previous post on the Coronavirus useful.
A small caveat though: I really am no expert on this subject. Statistics certainly has a role in understanding and predicting the evolution of the epidemic, but as in most areas of application, it’s in combination with other sciences – in this case epidemiology, virology, medicine and behavioural science – that Statistics will be of service. I know nothing about these subjects, and even the Statistics that I know is not especially geared to this type of problem.
I was thinking about this when I read the following tweet this morning:
This is not unique to coronavirus, but it feels like the people who know the *most* about something often express more uncertainty and doubt than people who have some adjacent knowledge but fall short of being subject-matter experts.
— Nate Silver (@NateSilver538) March 13, 2020
Chances are you’ll have heard of Nate Silver, either from his work on baseball analytics or as the founder of FiveThirtyEight, a website providing data analytics for sports and politics.
Anyway, Nate’s tweet seemed quite profound to me. Genuine subject-matter experts are entitled to be precise in their pronouncements about Coronavirus, though even they may express uncertainty about how things will unfold. But you should be wary of comments by people – like me – who have some tangential skill, but not deep knowledge of the subject. If we pretend to know more than we do, we’re misleading you. The best that any commentator can do is to frame their comments within the context of their limited knowledge and expertise.
This is really the nature of Statistics: say as much as you can from what the data tell you, but be open and honest about the limitations of what you can conclude.
So, as I wrote previously, in the interests of sharing knowledge and understanding, I’ll try to write further posts on statistical aspects of the Coronavirus epidemic. But from the outset, please understand that I am really no expert, and that if my posts ever suggest otherwise you should discount them.
And maybe judge articles by other commentators from an equally critical perspective: is the author a subject-matter expert? If not, are they allowing for their limited knowledge by framing their conclusions with appropriate doubt and uncertainty?
It’s a bit of an aside, but reading the replies to Nate’s tweet, it seems that the tendency of non-experts to over-rate their own ability is a well-studied psychological phenomenon known as the Dunning-Kruger effect. I’m no expert (!), but it seems to me that properly done statistical analyses, in which uncertainties are properly accounted for via probabilities, provide a good antidote to this effect.
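To make that last point concrete, here’s a minimal sketch in Python of what it means to account for uncertainty via probabilities. The counts are entirely invented for illustration, and the crude normal approximation is just my choice for a simple example; the point is only the form of the answer, an estimate together with an interval, rather than a bare number.

```python
# A minimal sketch of reporting an estimate together with its uncertainty.
# The counts below are entirely hypothetical, purely for illustration.
import math

cases = 400   # hypothetical number of observed cases
deaths = 8    # hypothetical number of deaths among those cases

# Point estimate of the fatality proportion.
p_hat = deaths / cases

# Approximate 95% confidence interval (normal approximation):
# crude, but enough to show the idea of attaching uncertainty.
se = math.sqrt(p_hat * (1 - p_hat) / cases)
lower = max(0.0, p_hat - 1.96 * se)
upper = min(1.0, p_hat + 1.96 * se)

print(f"Estimate: {p_hat:.1%}")
print(f"95% interval: {lower:.1%} to {upper:.1%}")
```

An analysis reported this way says, in effect, ‘around 2%, give or take’, rather than a bare ‘2%’, and that openness about the limits of what the data can tell you is exactly the kind of honesty that guards against Dunning-Kruger-style overconfidence.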