Fundamental flaws in (cancer) research

A recent TED talk by Ben Goldacre drew my attention to an excellent Nature article on fundamental flaws in cancer research. The Comment, “Raise standards for preclinical cancer research” (subscription required) by Glenn Begley and Lee Ellis, discusses systematic weaknesses in basic biomedical research and proposes solutions that would address some of these problems.

Nature 483:531–533 (29 March 2012) doi:10.1038/483531a

As part of their work at the pharmaceutical company Amgen, the authors tried to replicate the findings of 53 “landmark” papers reported to reveal important advances in our understanding of the molecular biology of cancer. Despite their best efforts, including contacting the scientists responsible for the original studies, obtaining resources from them and, in some cases, visiting their labs to repeat the protocols there, Begley and Ellis managed to reproduce the published results in only 6 of the 53 cases (11%). We are not told which experiments were replicable, or perhaps more importantly which were not, since confidentiality agreements had been made with several of the original authors (a point made post hoc in a clarification statement).

Sidestepping the question of overt fraud, Begley and Ellis instead treat the irreproducibility of nearly 90% of the investigated studies as a warning about the importance of adequate pre-publication verification of findings, and they challenge reviewers, editors and funders to alter their criteria for acceptability: “business as usual is no longer an option”, they conclude (p532).

Amongst their conclusions and recommendations, Begley and Ellis appeal for reviewers and editors to make more exacting demands of authors submitting papers. At the same time, they call on journals and grant reviewers to accept the presentation of imperfect stories. Superficially, these recommendations may appear self-contradictory: how can journals be both more demanding and more willing to accept imperfection? Closer examination shows that they relate to different aspects of the publication process.

A call to researchers: for experiments to be worthy of publication, they must involve more than one cell line. As far as possible, the cells used ought to be well characterised and an appropriate model for the relevant disease in the intended patients. There is an onus on scientists not to fall into the trap of reporting that “a typical result is found in Figure 1” if this is, in reality, the only time the experiment has worked. As the authors note, “the scientific process demands the highest standards of quality, ethics and rigour” (p533).

A call to journals: whilst requiring authors to be as thorough as possible in the conduct of their research, there is a counterbalancing demand for journals to recognise that real science is messy. Experiments may include repeated, and repeatable, findings that are hard to reconcile with the general direction of the data. If that is the reality, then it ought to be acceptable for the account of the experiment to reflect this uncertainty. The pressure to produce a story with all the i’s dotted and the t’s crossed encourages scientists to gloss over and downplay the material that doesn’t ‘fit’, but “there are no perfect stories in biology…journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers” (p533). Above all, there need to be better channels for publishing negative results; “negative data can be just as informative as positive data” (p533).

A call to grant reviewers, funders, employers and anyone else charged with “weighing” the merit of scientists: the pressures on scientists to cherry-pick their own results and to submit papers as soon as they have enough data to scrape past the reviewers are caused, at least in part, by the existing paradigm for evaluating merit. The worth of scientists is far too closely tied to their publication history. For example, the periodic Research Excellence Framework (REF) currently looms over UK academics; under this scheme, the contribution made to the furthering of knowledge will be judged primarily on the number of papers accepted by the ‘right’ journals. Begley and Ellis call for an enhanced emphasis on the value of both teaching and mentoring, observing that “relying solely on publications in top-tier journals as the benchmark for promotion or grant funding can be misleading, and does not recognize the valuable contributions of great mentors, educators and administrators” (p533).

Additionally, Begley and Ellis call for better dialogue between basic research scientists, clinicians, patients and their advocates. A range of fora in which these discussions take place already exists (I know of conferences that bring together all of these parties with an interest in, for example, osteogenesis imperfecta and in Huntington’s disease), but these opportunities ought to be expanded. Finally, they recommend that it be made easier for anyone within an institution, however junior, to raise suspicions regarding unethical, fraudulent or sloppy behaviour.

So, overall, a very interesting and thought-provoking piece. The extent to which these improvements can be put into action remains to be seen.

1 Comment

  1. […] The issues raised by the difficulties replicating basic research are not considered (at least not by the authors of the Nature article) as evidence of research fraud. Instead, they are taken as a warning to the scientific community to ensure that findings are adequately checked and double-checked, and as a call on journals and funders to be more willing to accept negative and imperfect data as legitimate outputs of good science rather than requiring unrealistically clear results (I have discussed their paper more fully in a post over at Journal of the Left-Handed Biochemist). […]

