Watching a TED talk by Ben Goldacre recently, my attention was drawn to an excellent Nature article on fundamental flaws in cancer research. The Comment Raise standards for preclinical cancer research (subscription required), by Glenn Begley and Lee Ellis, discusses systematic weaknesses in basic biomedical research and proposes solutions to some of these problems.
Nature 483:531–533 (29 March 2012) doi:10.1038/483531a
As part of their work at the Amgen pharmaceutical company, the authors have tried to replicate the findings in 53 “landmark” papers reported to reveal important advances in understanding about the molecular biology of cancer. Despite their best efforts, including contacting the scientists responsible for the original studies, obtaining resources from them and, in some cases, visiting their labs to repeat the protocol there, Begley and Ellis managed to reproduce the published results in only 6 (11%) of cases. We are not told which experiments were replicable, or perhaps more importantly which were not, since confidentiality agreements had been made with several of the original authors (a point that was made post hoc in a clarification statement). Continue reading
Comparing 'before' and 'after' data needs some identification
When undertaking educational research you often want to know how an intervention has affected a cohort, and ideally to be able to drill down into the data to see the impact on individuals. In order to match pre- and post-activity surveys, some kind of identifier is required. You could ask the students to put their names on the forms, but they may have concerns that this will have ramifications for their coursework. What else could you do?
There is a range of semi-anonymised labels you could use. At various times in my own work I’ve used formal candidate number, email username and date of birth (the latter often throws up more than one student with the same date, but handwriting can then distinguish between them). In each of these cases, however, it remains a relatively trivial step for someone with access to the right databases to decode the label and convert it into a name. Of course there is generally no reason why a researcher would want to do this, and students trust that you are not going to waste your precious time doing so.
What else might you do? You could ask the students to pick a bogus name or their favourite superhero, but these run several risks – including having surveys completed by multiple “lady gaga”s or “dr [insert your name here]”. The students might also forget the random name they picked between the first and the second test. Continue reading
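The decoding risk described above can be reduced if the label is derived rather than looked up. As a minimal sketch (not from the original post – the function name, salt value and candidate-number format are illustrative assumptions), each candidate number could be passed through a one-way hash with a project-specific salt, yielding a label that matches across the two surveys but cannot easily be converted back into a name without the salt:

```python
import hashlib

def pseudonym(identifier: str, salt: str) -> str:
    """Derive a short, stable pseudonymous label from a student identifier.

    The same identifier and salt always produce the same label, so pre- and
    post-activity surveys can be matched without storing names. The identifier
    is normalised first, so minor typing differences do not break the match.
    """
    normalised = identifier.strip().lower()
    digest = hashlib.sha256((salt + normalised).encode("utf-8"))
    return digest.hexdigest()[:8]  # 8 hex characters is plenty for one cohort

# The same candidate number yields the same label on both surveys,
# even with stray whitespace or different capitalisation.
before = pseudonym("c1234567", salt="2012-cohort")
after = pseudonym("C1234567 ", salt="2012-cohort")
assert before == after
```

The salt matters: without it, anyone who knows the list of candidate numbers could hash each one and recover the mapping, so it should be kept by the researcher rather than printed on the forms.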
Adherence to ethical and legal guidelines can be problematic in any research. These difficulties are potentially compounded if the research involves adults who lack the capacity to consent to their participation.
The National Research Ethics Service (NRES) have recently published an online toolkit to help researchers, members of research ethics committees, and institutional research managers to ensure that projects fit with the legal requirements (for example, adherence to the Mental Capacity Act in England and Wales). The toolkit was developed at the University of Leicester and is primarily the brainchild of Emma Angell and Mary Dixon-Woods, with input from Ainsley Newson at the University of Bristol and with a little help from me.
The toolkit is split into Clinical Trials Involving Medicinal Products (CTIMPs) and non-CTIMPs to reflect the fundamental differences in the structuring and administration of each type of activity. There is also a separate section on emergency research.
We would value your feedback on the toolkit – please feel free to post comments here.
I have been doing some reading for a while now on the ethics of research involving model organisms, particularly the potential for studies on lower species to offer insights into human disease (and thereby contribute to the 3Rs). Some of my musings on the topic can be found here.
Aware of this interest, a colleague recommended that I read a 2004 paper published in the journal Cell. I am very grateful that he did, since the study really has the “wow” factor – demonstrating beautifully the potential of comparative genomics, experiments on model organisms and knowledge of human disease to work together to produce new insights that would have been much harder if any one component was missing. The paper is Comparative genomics identifies a flagellar and basal body proteome that includes the BBS5 human disease gene by Li JB et al. The following notes are my attempt to summarise the best bits.
The importance of cilia and basal bodies in disease
The role of cilia in respiration (and the detrimental effects of smoking on their function) was a feature of the school biology curriculum when I was a child. However, research over the last ten years or so has demonstrated that cilia have surprisingly diverse roles in development, from determination of left-right symmetry in the body, through to formation and function of specific organs such as the kidneys (for more detail see the Wikipedia entry on Ciliopathy or, if you have access permissions, Badano et al (2006), The ciliopathies: an emerging class of human genetic disorders Annual Review of Genomics and Human Genetics 7:125-148). Bardet-Biedl syndrome (BBS) is one disorder associated with non-functional or malfunctional cilia. The clinical features can be varied, but include obesity, mental retardation, progressive-onset blindness and polydactylism (i.e. possession of extra digits). Continue reading
From time to time examples of scientific fraud come to light and raise questions about the integrity of scientific endeavour. The best-known example of recent years must surely be South Korean stem cell biologist Hwang Woo-Suk, whose ground-breaking discoveries in the field of therapeutic cloning were exposed as bogus (in addition to his scientific reputation being left in tatters, Hwang was convicted in October 2009 of embezzlement and violation of bioethical laws, although he escaped a custodial sentence).
In physics, the multiple re-use of the same graphs as data for entirely different experiments led to the downfall of a leading young nanoscientist (this was the subject of a 2004 episode of the BBC’s Horizon series The dark secret of Hendrik Schön). Are Hwang and Schön rare examples bringing unwarranted criticism to a body of otherwise exemplary scientists, or are their crimes indicative of much wider malpractice within the scientific community?
University of Edinburgh researcher Daniele Fanelli has shed some light on the extent of scientific fraud in an article How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. Published in the open access journal PLoS ONE in May 2009, the research brought together data from a number of earlier smaller studies on scientific misconduct to generate “the first meta-analysis of these surveys” (p1).
Harper Perennial edition (2009)
If you have not yet read Ben Goldacre’s book Bad Science, then I thoroughly recommend that you do. As readers of his regular Guardian column or his website will already know, Goldacre has embarked on a campaign to root out examples of pseudoscience and shoddy science wherever they may be found.
All the usual villains are present – homeopaths, nutritionists, slack journalists, pharmaceutical companies and AIDS dissenters. Some are mentioned by name, but given their alleged predilection for litigation, and since I have neither the time, the money nor the inclination to do battle with them in the courts, I shall not repeat their identities here!
It would be wrong, however, to give the impression that Goldacre is merely on a crusade against high profile exponents of “bad science”. True, the author does sometimes betray a little too much glee as he places a bomb under the throne of a media “health expert” (in a way that I found disturbingly reminiscent of the Physiology lecturer, when I was a first year undergraduate, recalling his boyhood experiments on frogs). Nevertheless, Goldacre is keen to emphasise that his purpose is to “teach good science by examining the bad” (p165 in my copy), adding that “the aim of this book is that you should be future-proofed against new variants of bullshit” (p87). Continue reading