Pedagogic Journal Club 101

In preparation for a journal club I’m leading shortly, I was reflecting on some generic starter questions that could be applied to reading any paper (and which are equally useful when reading an article on your own).

To start with, you can apply the 5W1H approach. In the context of reading a journal article I tend to take these in a non-typical order:

  • Who? Who conducted the research?
  • Where? Was it one institution only or a multi-centre project? UK, USA or elsewhere?
  • What? What, briefly, was the main point of the work [you will look in finer detail later on]?
  • When? Not only when was the work published, but when was the work actually conducted? This is especially pertinent if the article is describing the impact of technical innovations.
  • Why? What are the reasons the authors give for conducting the work? These may be generic and/or driven by particular local developments.
  • How? This is the nitty-gritty and will take up the bulk of a journal club discussion.

As part of the “how” there are additional key questions to bear in mind as you work step-by-step through the paper. These are:

  • What key information are we being presented in this section of the paper?
  • What key information are we *not* being presented in this section of the paper?

In both pedagogic research articles and scientific papers these two questions are particularly valuable when examining information presented in figures and/or tables. Sometimes the background details necessary to follow the implications of displayed data have to be found elsewhere in the text, and sometimes they are missing entirely (at which point you need to decide for yourself whether this is an accidental or a deliberate omission).

For a journal club specifically you also need to remember that it is intended to be a discussion, not a presentation of what you have found; you are the guide as you lead a band of intrepid explorers below the surface of the paper. If the journal club is working well you will come away from the process with additional insights from others about aspects you missed in the text.


The NSS and Enhancement (Review)

Coverage of the findings from the recent new-style National Student Survey drew my attention to the Making it count report for the Higher Education Academy, coordinated by Alex Buckley (I’m afraid I’ve lost details of who pointed me towards the report, so cannot offer credit where credit is due).

Making it count is not new; it was published by the HEA in 2012, and therefore predates both the new-style NSS and the introduction of the TEF. Nevertheless I found it a fascinating and worthwhile read – hence this reflective summary.

As most readers of this blog will know, the UK National Student Survey was introduced in 2005 and draws inspiration from the Australian Course Experience Questionnaire (CEQ), which had been in use since the early 1990s. From inception until 2016 there was a standard set of 23 questions in the NSS (see this link for the complete list). The questions were all positively phrased and students in their final year were invited to respond using a standard five-point scale from “definitely agree” through “mostly agree”, “neither agree nor disagree”, “mostly disagree” to “definitely disagree” (“not applicable” was also an option). Following extensive consultation, the questions were changed for the first time in 2017. A total of 27 questions were included, with some original questions retained, some rephrased and some brand-new questions added (see this link for the 2017 questions).
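Results for questions on this five-point scale are commonly summarised as a single “% agreement” figure: the proportion of valid respondents choosing either of the two “agree” options. A minimal sketch of that calculation (the response counts below are invented for illustration and are not real NSS data):

```python
# Illustrative only: hypothetical response counts, not real NSS data.
# Summarises one question's five-point responses as "% agreement",
# i.e. the share choosing "definitely agree" or "mostly agree",
# with "not applicable" responses excluded from the denominator.

SCALE = [
    "definitely agree",
    "mostly agree",
    "neither agree nor disagree",
    "mostly disagree",
    "definitely disagree",
]

def percent_agreement(counts):
    """counts: dict mapping each scale point (and optionally
    'not applicable') to the number of respondents choosing it."""
    valid = sum(counts.get(option, 0) for option in SCALE)
    agree = counts.get("definitely agree", 0) + counts.get("mostly agree", 0)
    return round(100 * agree / valid, 1)

responses = {
    "definitely agree": 40, "mostly agree": 35,
    "neither agree nor disagree": 15, "mostly disagree": 7,
    "definitely disagree": 3, "not applicable": 5,
}
print(percent_agreement(responses))  # 75.0
```

Note that decisions like excluding “not applicable” from the denominator materially affect the headline figure, which is one reason the raw response distributions repay closer inspection.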

When is the right time to stop taking antibiotics?

Press coverage has picked up on an interesting paper The antibiotic course has had its day published in the British Medical Journal (online 26th July 2017). The paper was of interest to me as I studied antibiotic resistance for my PhD, and this topic was also the theme of (to date) my only appearance on TV news.


As anyone who has ever been prescribed antibiotics ought to know, current clinical practice from the World Health Organisation and others recommends completion of the course (often 7 days), even if the patient feels better sooner. The justification for this strategy has been concern that premature ending of treatment might allow the disease-causing bacteria to recover and continue to wreak havoc, possibly in a newly-resistant manner.

In the new paper, Martin Llewelyn (Brighton and Sussex Medical School) and colleagues from a number of institutions in South-East England question the basis of this recommendation. Whereas the link between exposure to antibacterials and the development of resistance is well documented, these authors wondered about the origins of the original advice. They suggest that the requirement to “complete the course” probably stands on little more than the anecdotal experience of some of the antibiotic pioneers.

Capturing more than lectures with “lecture capture” technology (paper review)

The July 2017 edition of British Journal of Educational Technology includes a pilot study The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement investigating the potential to exploit lecture capture technologies for the production of teaching resources over and above recording of lectures per se.


I was keen to read this paper because I am already using Panopto (the same software used in the study) as a means to generate short “flipped classroom” videos on aspects of bioethics which, it is hoped, students will watch before participating in a face-to-face session. I have also produced some ad hoc materials (which author Gemma Witton terms “supplementary materials”), for example to clarify a specific point from my lectures about which several students had independently contacted me. Furthermore, I have also written some reflections on the impact lecture capture is already having on our courses (see Reflecting on lecture capture: the good, the bad and the lonely).

Headline Bioethics

I have mentioned the Headline Bioethics project here previously, including links to a poster I presented at the Leicester Teaching and Learning event (January 2013) and again at the Higher Education Academy STEM conference (April 2013).

A paper giving more details about the task was published last week in the journal Bioscience Education. The abstract states:

An exercise is described in which second year undergraduate bioscientists write a reflective commentary on the ethical implications of a recent biological/biomedical news story of their own choosing. As well as being of more real-world relevance than writing in a traditional essay format, the commentaries also have potential utility in helping the broader community understand the issues raised by the reported innovations. By making the best examples available online, the task therefore has the additional benefit of allowing the students to be genuine producers of resources.

This is not, incidentally, to be confused with the other activity I’ve been doing with a different cohort of second year students in which they produce short films about bioethics (the paper on that subject is forthcoming).


Fundamental flaws in (cancer) research

Watching a TED talk by Ben Goldacre recently, my attention was drawn to an excellent Nature article on fundamental flaws in cancer research. The Comment Raise standards for preclinical cancer research (subscription required), by Glenn Begley and Lee Ellis, discusses some systematic weaknesses in basic biomedical research and proposes some solutions that would resolve some of these problems.

Nature 483:531–533 (29 March 2012) doi:10.1038/483531a

As part of their work at the Amgen pharmaceutical company, the authors tried to replicate the findings in 53 “landmark” papers reported to reveal important advances in understanding of the molecular biology of cancer. Despite their best efforts, including contacting the scientists responsible for the original studies, getting resources from them and, in some cases, visiting their labs to repeat the protocol there, Begley and Ellis managed to reproduce the published results in only 6 (11%) of cases. We are not told which experiments were replicable, or perhaps more importantly which were not, since confidentiality agreements had been made with several of the original authors (a point that was made post hoc in a clarification statement).

Oral versus written assessments

The January 2012 meeting of the Bioscience Pedagogic Research group at the University of Leicester included a “journal club” discussion of a paper Oral versus written assessments: a test of student performance and attitudes by Mark Huxham and colleagues from Napier University, Edinburgh. The paper had recently been published online in advance of a paper copy appearing in the February 2012 edition of Assessment and Evaluation in Higher Education.

To kick off the discussion I shared the following slides:

We had a good debate about the paper. For the most part we thought it was an interesting and thought-provoking study, prompting us to consider greater use of oral examinations in the assessment repertoire at Leicester. A few questions were raised. It was felt to be a pity that the authors had not included an evaluation of the overall staff time involved in oral assessment versus written assessment (particularly for the first year cohort that had been randomly assigned one or other task). This would have been a valuable addition.

I don’t claim to be statistically minded, but those with greater expertise in this field felt that the Mann-Whitney U-test might have been better than Student’s t-test for comparison of student scores in the oral and written assessments. The notion that a p-value of 0.079 was “not quite a significant difference” (p. 130) also ruffled some feathers.
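For readers curious about the rank-based alternative, here is a minimal sketch of how the Mann-Whitney U statistic is computed. The marks below are invented for illustration and are not the data from the Huxham study; in practice you would reach for `scipy.stats.mannwhitneyu`, which also supplies the p-value.

```python
# Illustrative only: hypothetical marks, not the data from Huxham et al.
# Computes the Mann-Whitney U statistic by hand, to show why a rank-based
# test can suit skewed, bounded assessment marks better than the t-test:
# it compares ranks, not raw values, so it makes no normality assumption.

def ranks(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j to cover the whole run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            result[order[k]] = avg
        i = j + 1
    return result

def mann_whitney_u(a, b):
    """Return the smaller of the two U statistics for samples a and b."""
    combined_ranks = ranks(list(a) + list(b))
    rank_sum_a = sum(combined_ranks[: len(a)])
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

oral    = [62, 65, 58, 70, 74, 68, 61, 59]   # hypothetical oral marks
written = [55, 60, 52, 64, 58, 57, 50, 66]   # hypothetical written marks
print(mann_whitney_u(oral, written))  # 11.5
```

The resulting U value would then be compared against the critical value for the two sample sizes (or converted to a p-value), exactly as the t-statistic would be, but without assuming the marks are normally distributed.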

Aside from these relatively minor issues, it was felt that the Napier study was a useful addition to the canon on assessment and readers of this short reflection are encouraged to seek out the original paper.

Thanks to Mark Huxham for some e-mail discussion prior to the meeting.
