If you have not yet read Ben Goldacre’s book Bad Science, then I thoroughly recommend that you do. As readers of his regular Guardian column or his website will already know, Goldacre has embarked on a campaign to root out examples of pseudoscience and shoddy science wherever they may be found.
All the usual villains are present – homeopaths, nutritionists, slack journalists, pharmaceutical companies and AIDS dissenters. Some are mentioned by name, but given their alleged predilection for litigation, and since I do not have the time, the money or the inclination to do battle with them in the courts, I shall not repeat their identities here!
It would be wrong, however, to give the impression that Goldacre is merely on a crusade against high-profile exponents of “bad science”. True, the author does sometimes betray a little too much glee as he places a bomb under the throne of a media “health expert” (in a way that I found disturbingly reminiscent of the Physiology lecturer, when I was a first-year undergraduate, recalling his boyhood experiments on frogs). Nevertheless, Goldacre is keen to emphasise that his purpose is to “teach good science by examining the bad” (p165 in my copy), adding that “the aim of this book is that you should be future-proofed against new variants of bullshit” (p87).
It seems to me that Goldacre is correct in his assertion that the public needs help in ‘bullshit-spotting’ and that this book is an extremely valuable tool in achieving that goal. Scientific colleagues will (hopefully!) be familiar with at least some of the pitfalls of poor study design, inappropriate use of statistics and outright spin that lead to dramatic-but-spurious headlines in the newspapers. I am, however, convinced that there is plenty here that will improve the scientific literacy of undergraduates in medicine and bioscience subjects, as well as a more general readership.
For an experiment involving human subjects to have at least some hope of generating objective data, it is important that the research method includes:
- control groups – you need something against which to compare your intervention, whether it be a placebo or sham treatment, or the best treatment currently in use;
- appropriate blinding – i.e. that neither researcher nor participants know during the trial which individual is receiving each intervention;
- randomisation – trial subjects need to be assigned to different regimes in a genuinely unbiased way (some randomisation protocols are actually open to significant abuse, albeit subconscious);
- documentation – when the work is published, the account needs to include suitably transparent and complete details of the methods and the results such that any reader will know how the study was conducted and can therefore have a sporting chance of spotting the glitches.
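The point about randomisation being open to subconscious abuse deserves unpacking: predictable assignment schemes (alternating patients, assigning by day of admission) let a recruiter anticipate which arm the next participant will join. A minimal sketch, not from the book, of what genuinely unbiased assignment looks like:

```python
import random

# Hypothetical participant list for illustration only.
participants = [f"subject_{i}" for i in range(20)]

# Shuffle the whole list, then split it: no one can predict which arm
# any given participant will end up in. Contrast with alternation
# (subject 1 -> treatment, subject 2 -> control, ...), where the next
# assignment is always known in advance and can be gamed.
rng = random.Random(42)  # seeded here only so the demo is reproducible
rng.shuffle(participants)

treatment = participants[:10]
control = participants[10:]
```

In real trials this job is usually done centrally (e.g. by telephone or computer allocation) precisely so that the person recruiting the participant never sees the sequence.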
The case of the widely-reported Durham trial of fish oil tablets containing Omega-3 fats (Chapter 8, ‘Pill solves complex social problem’) is a chastening tale of ways in which poor research methodology can effectively ruin a study before it has even started. Alarm bells ought to have been triggered as soon as the trial (I will call it that for simplicity, although those involved in the research have shied away from this term) was trumpeted in advance as a test to prove the effectiveness of fish oils in boosting academic performance. The fact that the participants knew that they were in a trial has been shown in itself to elicit improvements (the so-called ‘Hawthorne effect’), even without the media scrum that accompanied this particular trial. Add to this the influence of potential ‘confounding factors’ (see below) and this study was never going to give clear and unequivocal results.
Common mistakes involving science literature
Goldacre’s critique of ‘nutritionists’ highlights four frequent errors in the way that science literature is handled. These are:
- extrapolating and overinterpreting data – For example, studies conducted on isolated cells in vitro can provide useful pointers for future studies in humans, but it is wrong to naively take findings from cell-based work and assume the equivalent is true in vivo in a whole organism. To purloin one of Goldacre’s favourite phrases, “I think you’ll find it’s a bit more complicated than that” (p100).
- extrapolating from observational studies to make claims that require an interventional study to be conducted – ‘confounding variables’, that is differences between individuals that may or may not be linked to the factor under investigation, are hard enough to control in a study where the researcher is deliberately intervening in the participants’ lives to measure any apparent effects. If the study is merely observing differences between people reported to have an important lifestyle or dietary factor, there may be a lot more going on. Superficial analyses are prone to come up with erroneous conclusions.
- Cherry-picking only results that fit the hypothesis – it is, as Goldacre points out, a facet of human nature both to see patterns in data and to be more receptive to results that fit your expectation than those that do not. We need therefore to guard against selectively quoting only experiments that give the results that we want, and ignoring data (possibly the majority of findings) that don’t fit our model. This is why ‘systematic review‘ of all of the data on a particular topic is an essential process.
- Referring to studies that are not published in peer-reviewed journals, and frequently not published at all – it is bad enough when conference papers and press releases are reported with the same gravitas and authority as experiments which have been scrutinised by experts in the same field as part of the peer-review process. It is even worse, however, when some interviewees are prone to make specific claims such as “a study published just last week in America has described the same effect we see here” when it later turns out that no such article exists. In written work, some authors have increasingly given their books a spurious air of authority by adopting the trappings of good citation practice, e.g. use of superscript numbers to direct readers to their sources. When you flick on to check the reference, however, it turns out to be a non-scholarly document or something that they themselves have said on a different occasion.
Lies, damned lies and statistics
Statistics are clearly vital in substantiating the findings of any kind of trial and Goldacre attacks abuse of statistics on two fronts. Firstly, there is the deliberate use of an inappropriate statistical test to generate a positive-sounding number. Pharmaceutical companies are said to be guilty of this sleight of hand, and it requires a certain amount of statistical nous in order to detect when this crime is being perpetrated.
Secondly, there is the way that the numbers are presented to the public. Newspapers are prone to report the ‘relative risk increase’, i.e. the percentage increase in condition X when presented with risk Y, because it generates the most attention-grabbing numbers. The shock statistic “reading science-related blogs increases the chance that you’ll have a heart attack by 50%” may alarm you (so just in case, let me say straight away that I made this up). A very different impression is given if we consider the ‘absolute risk increase’, which would state that “reading science-related blogs increases the chance that you’ll have a heart attack by 0.2%”. Goldacre recommends that there ought to be a move towards quoting ‘natural frequencies’, i.e. as intelligible numbers. In this case, therefore, we might say “reading science-related blogs increases the chance that you’ll have a heart attack from 4 in every 1000 people if you don’t, to 6 in every 1000 people if you do”.
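The arithmetic behind these three framings is simple but worth making explicit, since they all describe exactly the same data. A quick Python sketch, using the invented blog-reading figures above:

```python
# The same two numbers underlie all three framings: a baseline risk of
# 4 in 1000 and an exposed-group risk of 6 in 1000 (made-up figures).
baseline = 4 / 1000  # risk for non-readers
exposed = 6 / 1000   # risk for blog readers

# Relative risk increase: the headline-friendly version.
relative = (exposed - baseline) / baseline * 100

# Absolute risk increase: the change in percentage points.
absolute = (exposed - baseline) * 100

print(f"Relative risk increase: {relative:.0f}%")            # 50%
print(f"Absolute risk increase: {absolute:.1f} points")      # 0.2 points
print(f"Natural frequencies: {baseline*1000:.0f} vs "
      f"{exposed*1000:.0f} per 1000 people")                 # 4 vs 6 per 1000
```

The 50% figure and the 0.2% figure are both mathematically true; only the natural-frequency version gives the reader an intuitive sense of scale.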
Putting Bad Science to use in formal education
Are there ways in which Bad Science might be employed as a teaching tool in either secondary or tertiary education? The specifications for GCSE Science in England and Wales were altered in 2006 to place greater emphasis on “How Science Works“, and A levels were similarly altered in 2008 when this cohort passed on to the higher qualification. The reading level required to appreciate Bad Science probably precludes recommending it for the majority of 16 year olds. I believe, however, that the text would make an excellent resource for students of A level biology and/or General Studies. I do not know if the publishers have considered producing a structured guide based on the book or inclusion of end of chapter study questions in future editions, but there is certainly scope for this.
Similarly, the book would be valuable reading for first year undergraduates in Medicine, Bioscience or Journalism. I think there would be more merit in having this as prescribed reading for a Year One skills or introductory module than several of the more ‘academic’ alternatives.
As an admissions tutor, I receive several e-mails each summer from students starting the following term and asking which textbooks to buy. My consistent response this time around has been to recommend that they read Bad Science now and wait until the course has started before they part with money for a chunky Biochemistry text.
This is not to say that Bad Science is without faults. I do have a number of minor quibbles with the book, but I would say for the most part the fault lies with the editorial process rather than with the author per se.
Haven’t I read that before? Understandably, much of the content of the book has already seen the light of day in shorter pieces in the Guardian’s Bad Science column. Repetition and/or poor ordering (by which I mean a point is introduced at length after it has already been previously noted) betray the ‘cut and shut’ nature of some of the present material. As an example of the former, we are told twice in consecutive paragraphs on page 113 about the crusade led by cereal magnate John Harvey Kellogg against a particular personal vice. Similarly, the fact that Durham council altered a press release on their website sometime after its release in order to remove the word ‘trial’ is mentioned on pages 143 and 149.
Examples of the ‘introduction after being stated’ phenomenon include the mention on page 157 that Equazen had been acquired by Galenica, followed on p160 by a fuller account of this transaction in a tone that gave the impression it was ‘new news’. Similarly we are told on page 313 that some researchers did “something called a ‘case-control’ study” despite the fact that case-control studies were amongst the variety of experimental models discussed on page 103 and pages 295-296.
Page numbering: The cover of my edition of the book (Harper Perennial, 2009) trumpets the addition of an extra chapter. This material has not been added at the end of the text, but rather inserted at the appropriate point in the unfolding ‘story’. In consequence, page numbering downstream of the insertion is altered. Although this has been recognised in the index, there are several examples of in-text cross-references where the page numbers are now 17 out. (In case anyone with influence on the next version is reading this review the reference on page 106 to p240 should be p257; page 282 should cite p294 not p277; page 330 should point to p293 not p276).
Referencing: Bad Science is intended to be a popular book, not an academic tome. As such, it would be completely inappropriate for the text to be peppered with citations in a way that would interrupt the flow. I think the solution chosen here works very well – the notes in the back use page numbers and a short quote from the text as the identifiers of the source. It is partly because I know Goldacre makes regular criticism of the lack of referencing in media reports of science that I was disappointed on a couple of occasions to turn eagerly to the back and not find a citation. These tended to be times when a broad statement had been made – for example, on page 75 “A huge amount of research...” does not provide any corroborating references, and on page 144 “there is a lot of history here… the field of essential fatty acid research has seen research fraud, secrecy, court cases, negative findings that have been hushed up, media misreporting on a massive scale…[the list continues]” but no notes are offered. If a new edition is produced, please could these be added.
As I have already said, these are minor (some would say picky) criticisms of an otherwise extremely valuable book. Overall, I believe Ben Goldacre has provided all of us with a toolbox for evaluating sciencey-sounding stories in the media and alerted future scientists to some of the pitfalls they should avoid in the design and reporting of their work. Bad Science would make an excellent resource for post-16 education and I hope to see it adopted as a course text on A level and undergraduate programmes.
Bad Science has a cover price of £8.99. At the time of writing it is available from Amazon for £3.60.