The dinner conundrum
You’ve been invited to dinner and feel like you should take a bottle of wine. Is it ok just to buy a cheap bottle of wine, or should you go for something more expensive?
One famous study suggests it doesn’t really matter. In outline, the details of the experiment were as follows:
- A sample of 578 people were asked to distinguish between an inexpensive and expensive glass of wine.
- Several different types of wine – red and white – were included in the experiment, ranging from a £3.49 bottle of claret to a £29.99 bottle of champagne. (The study was conducted in 2011 when everything was less expensive).
- Wines costing less than £5 were designated inexpensive; those greater than £10 were classed as expensive. (Presumably there were no bottles costing between £5 and £10).
- The experiment was ‘double blind’: neither the person drinking the wine, nor the person who gave it to them, knew whether the wine being tasted was genuinely cheap or expensive.
For white wines the correct classification rate turned out to be 53%; for red wines it was 47%. So, assuming the proportion of red and white wines used in the experiment was roughly 50/50, the overall correct classification rate was around 50% – the same success rate you’d get, on average, from simply tossing a coin to make the choice.
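As a quick sanity check – not part of the original study – here’s a minimal Python sketch of that coin-tossing baseline. It assumes the 50/50 red/white split mentioned above and the 578 tastings reported for the study; everything else is simulation.

```python
import random

random.seed(1)

n = 578          # number of tastings reported for the study
n_sims = 10_000  # number of simulated "studies"

# Overall rate implied by the reported figures, assuming a 50/50 red/white split.
print(f"reported overall rate: {0.5 * 0.53 + 0.5 * 0.47:.2f}")

# Success rates you'd see if every participant simply guessed
# "cheap" or "expensive" by tossing a fair coin.
rates = sorted(
    sum(random.random() < 0.5 for _ in range(n)) / n
    for _ in range(n_sims)
)
print(f"mean simulated rate: {sum(rates) / n_sims:.3f}")
print(f"central 95% of simulated rates: "
      f"{rates[int(0.025 * n_sims)]:.3f} to {rates[int(0.975 * n_sims)]:.3f}")
```

Observed rates of 53% and 47% sit comfortably inside the range produced by pure guessing.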
Though the study doesn’t seem to have been published in an academic journal, the results are widely used to support the case that people, on average, can’t distinguish between expensive and cheap – or good and bad – wines. Slightly more detail can be found in this Guardian article, while the study is also referenced in a Wikipedia article on blind wine tasting.
What do you make of it? A sample size of 578 is reasonably large in this context. By standard statistical arguments, the true classification rate in the wider population is likely to be somewhere between around 46% and 54%. So even allowing for the fact that these are just sample results, the ability to correctly classify a wine as cheap or expensive is still very close to 50%.
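That interval can be reproduced with the usual normal approximation to a binomial proportion; the sketch below assumes an observed overall rate of roughly 50% across the 578 tastings.

```python
import math

n = 578        # total number of tastings
p_hat = 0.50   # observed overall classification rate (approximately)

# Standard (Wald) 95% confidence interval for a proportion.
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% confidence interval: {lower:.3f} to {upper:.3f}")  # roughly 0.46 to 0.54
```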
Think a little about the experiment and decide, on this basis, whether you’d feel comfortable taking an inexpensive bottle to the dinner on the assumption that the hosts wouldn’t know you had opted not to buy an expensive bottle.
Reasons to be cautious
There are actually several reasons why one might want to be cautious about taking the results of this study at face value.
- You might question where the sample of 578 individuals was drawn from, and whether they are really representative of the wider population. As it turns out, the participants in this experiment were visitors to a science festival in Edinburgh. Such a sample may well be reasonably representative of the wider population, but perhaps there’s something not entirely typical about people who go to science festivals and are happy to grab glasses of free wine.
- The details of the study are very sketchy. It’s clear from the description above that several types of wine were used, but how many tastings were there of each type, and did the results differ across types? For example, the Guardian article explains that for two versions of Pinot Grigio the successful classification rate was 59%, while for two types of Claret it was just 39%. Are these values significantly different from each other, or indeed from 50%? Without knowing how many participants were given each type of wine it’s impossible to say. It therefore remains plausible that the classification rate depends on the type of wine, a subtlety that is lost in the headline results. It’s even possible that something like Simpson’s paradox is lurking beneath the data: if the numbers of tastings for the different wine types were very unbalanced, the pooled rate could sit close to 50% even though most individual wines were classified correctly well over half the time, simply because a few heavily sampled, poorly classified wines dragged the overall figure down (see the sketch after this list).
- Above all else, the design of this experiment leaves a lot to be desired. It’s not completely clear from the description, but in this article the main author of the study explains: “To keep it as realistic as possible, we presented them (the participants) with a single glass of wine and they had to say whether inexpensive or expensive”. So the experiment didn’t ask individuals to try two different wines and decide which was the more expensive; each person was given a single glass and asked whether it was cheap or expensive. This, of course, makes things much harder for participants, since they have no baseline for comparison. A design in which each individual compared a pair of wines, one expensive and one not, would also have been statistically more informative. It’s reasonable to argue that if you take a bottle of wine to the party you only need the host and other guests to believe it’s an expensive choice. But what if one of the other guests takes a genuinely expensive bottle? Dinner might then include a comparison of your wine against theirs, and your tightfistedness might easily be exposed.
- Even taking the results at face value, there are two possible explanations for them, and under one of them it may still be risky to take a cheap bottle to dinner. One reading is that a randomly chosen person will classify a wine correctly only 50% of the time – essentially a random choice. But it’s entirely plausible – likely, even – that some people are better than others at making the classification. In particular, you’d expect a wine connoisseur to do better than 50%. Admittedly, for the overall sample to return a 50% classification rate you’d then need another group of individuals with a rate below 50%. But if your dinner party includes people in the first group, they are again likely to recognise that you brought a cheap bottle.
- Finally, as also discussed in this article, there’s not necessarily a strong correlation between the cost of a wine and its quality. It’s therefore entirely plausible that the participants in the study are making quite good assessments about the quality of a wine, but these aren’t translating accurately to assessments of expensiveness.
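To make the aggregation point in the second bullet concrete, here’s a toy sketch with entirely invented per-wine-type numbers (only the total of 578 tastings matches the study): most wine types are classified correctly well above 50% of the time, yet one heavily sampled, poorly classified wine drags the pooled figure down to exactly 50%.

```python
# Hypothetical per-wine-type results, invented purely for illustration;
# only the total of 578 tastings is taken from the study.
wine_types = {
    # wine type: (number of tastings, number classified correctly)
    "Pinot Grigio": (40, 26),    # 65.0% correct
    "Rioja":        (40, 25),    # 62.5% correct
    "Champagne":    (40, 27),    # 67.5% correct
    "Claret":       (458, 211),  # ~46.1% correct, but dominates the sample
}

total_n = sum(n for n, _ in wine_types.values())
total_correct = sum(c for _, c in wine_types.values())

for name, (n, c) in wine_types.items():
    print(f"{name:>14}: {c / n:5.1%} correct (n = {n})")
print(f"{'Pooled':>14}: {total_correct / total_n:5.1%} correct (n = {total_n})")
```

In other words, a headline figure of 50% is perfectly compatible with people being noticeably better than chance on most of the wines actually served.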
Incidentally, one producer of fine wines used the results of the study as part of their own advertising campaign. They argued that since the results of the study “prove” that people can’t identify a decent wine by its taste, buyers must be relying on other criteria such as production zone, brand and rarity. In other words: don’t rely on your tastebuds to point you to a decent wine; rely instead on their advertising material to point you to their own rare (and expensive) luxury brands. It’s definitely a creative line to take.
Conclusions
Taken together, the various concerns above lead to reasonable doubts about the headline conclusions of the study. And this is really the point: statistical analyses are often presented in the media and elsewhere as having clear-cut conclusions. But when you look at the detail of how studies are designed and executed, there is often cause to doubt the simplicity of the headlines.
In summary: be suspicious of any simple summary of a statistical analysis. Get as much information as you can about how the study was designed and conducted, and think critically about both that design and the way the data have been analysed.
And by all means take a cheap bottle of wine to dinner, but don’t assume that no one will notice.
Postscript
The main author of the wine study referred to above is Richard Wiseman, an academic psychologist at the University of Hertfordshire. While researching for this post I came across this video he produced.
It’s not exactly Statistics, but you might enjoy following the instructions to see how predictable you are: