When is the right time to stop taking antibiotics?

Press coverage has picked up on an interesting paper, The antibiotic course has had its day, published in the British Medical Journal (online 26th July 2017). The paper was of particular interest to me because I studied antibiotic resistance for my PhD, and this topic was also the theme of (to date) my only appearance on TV news.


As anyone who has ever been prescribed antibiotics ought to know, current clinical guidance from the World Health Organisation and others recommends completing the course (often 7 days), even if the patient feels better sooner. The justification for this strategy has been concern that premature ending of treatment might allow the disease-causing bacteria to recover and continue to wreak havoc, possibly in a newly-resistant manner.

In the new paper, Martin Llewelyn (Brighton and Sussex Medical School) and colleagues from a number of institutions in South-East England question the basis of this recommendation. Whereas the link between exposure to antibacterials and the development of resistance is well documented, these authors wondered about the origins of the advice itself. They suggest that the requirement to “complete the course” probably stands on little more than the anecdotal experience of some of the antibiotic pioneers.

Capturing more than lectures with “lecture capture” technology (paper review)

The July 2017 edition of the British Journal of Educational Technology includes a pilot study, The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement, investigating the potential to exploit lecture capture technologies to produce teaching resources over and above recordings of lectures per se.


I was keen to read this paper because I am already using Panopto (the same software used in the study) to generate short “flipped classroom” videos on aspects of bioethics which, it is hoped, students will watch before participating in a face-to-face session. I have also produced some ad hoc materials (which author Gemma Witton terms “supplementary materials”), for example to clarify a specific point from my lectures about which several students had independently contacted me. Furthermore, I have written some reflections on the impact lecture capture is already having on our courses (see Reflecting on lecture capture: the good, the bad and the lonely).

Headline Bioethics

I have mentioned the Headline Bioethics project here previously, including links to a poster I presented at the Leicester Teaching and Learning event (January 2013) and again at the Higher Education Academy STEM conference (April 2013).

A paper giving more details about the task was published last week in the journal Bioscience Education. The abstract states:

An exercise is described in which second year undergraduate bioscientists write a reflective commentary on the ethical implications of a recent biological/biomedical news story of their own choosing. As well as being of more real-world relevance than writing in a traditional essay format, the commentaries also have potential utility in helping the broader community understand the issues raised by the reported innovations. By making the best examples available online, the task therefore has the additional benefit of allowing the students to be genuine producers of resources.

This is not, incidentally, to be confused with the other activity I’ve been doing with a different cohort of second year students in which they produce short films about bioethics (the paper on that subject is forthcoming).


Fundamental flaws in (cancer) research

While watching a TED talk by Ben Goldacre recently, I was drawn to an excellent Nature article on fundamental flaws in cancer research. The Comment, Raise standards for preclinical cancer research (subscription required), by Glenn Begley and Lee Ellis, discusses some systematic weaknesses in basic biomedical research and proposes solutions to some of these problems.

Nature 483:531–533 (29 March 2012) doi:10.1038/483531a

As part of their work at the pharmaceutical company Amgen, the authors tried to replicate the findings of 53 “landmark” papers reported to reveal important advances in understanding of the molecular biology of cancer. Despite their best efforts, including contacting the scientists responsible for the original studies, obtaining resources from them and, in some cases, visiting their labs to repeat the protocol there, Begley and Ellis managed to reproduce the published results in only 6 (11%) of cases. We are not told which experiments were replicable, or perhaps more importantly which were not, since confidentiality agreements had been made with several of the original authors (a point made post hoc in a clarification statement).

Oral versus written assessments

The January 2012 meeting of the Bioscience Pedagogic Research group at the University of Leicester included a “journal club” discussion of a paper, Oral versus written assessments: a test of student performance and attitudes, by Mark Huxham and colleagues from Napier University, Edinburgh. The paper had recently been published online ahead of the print copy appearing in the February 2012 edition of Assessment and Evaluation in Higher Education.

To kick off the discussion I shared a short set of slides.

We had a good debate about the paper. For the most part we thought it was an interesting and thought-provoking study, prompting us to consider greater use of oral examinations in the assessment repertoire at Leicester. A few questions were raised; in particular, it was felt to be a pity that the authors had not evaluated the overall staff time involved in oral versus written assessment (particularly for the first-year cohort that had been randomly assigned one or other task), which would have been a valuable addition.

I don’t claim to be statistically minded, but those with greater expertise in this field felt that the Mann-Whitney U-test might have been more appropriate than Student’s t-test for comparison of student scores in the oral and written assessments, since assessment marks are unlikely to satisfy the t-test’s assumption of normally distributed data. The notion that a p-value of 0.079 was “not quite a significant difference” (p130) also ruffled some feathers.
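As a rough illustration of the point (using invented marks, not the data from the Huxham study), the following Python snippet runs both tests on the same two samples; the t-test assumes normally distributed data, whereas the Mann-Whitney U-test compares ranks and makes no such assumption:

# Illustrative only: invented marks, not the data from Huxham et al.
from scipy import stats

oral    = [62, 68, 71, 55, 74, 66, 59, 70, 63, 77]   # hypothetical oral-assessment marks
written = [58, 61, 65, 52, 69, 60, 57, 64, 55, 66]   # hypothetical written-assessment marks

# Student's t-test: assumes the marks are normally distributed
t_stat, t_p = stats.ttest_ind(oral, written)

# Mann-Whitney U-test: rank-based, no normality assumption
u_stat, u_p = stats.mannwhitneyu(oral, written, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")

With real assessment data the two tests can return meaningfully different p-values, which is precisely why the choice of test matters when a result sits as close to the 0.05 threshold as 0.079 does.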

Aside from these relatively minor issues, it was felt that the Napier study was a useful addition to the canon on assessment and readers of this short reflection are encouraged to seek out the original paper.

Thanks to Mark Huxham for some e-mail discussion prior to the meeting.

Marking (in)consistency – the elephant in the assessment room?

In September 2006 Banksy (briefly) included a painted "Elephant in the Room" in his LA show

In a thought-provoking article, available online ahead of publication in the February 2012 edition of Assessment and Evaluation in Higher Education, Teresa McConlogue looks into the pedagogical benefits of peer assessment. Her paper But is it fair? Developing students’ understanding of grading complex written work through peer assessment focuses on work conducted with engineering students at Queen Mary University of London.

Two distinct cohorts of students were required to peer assess a piece of coursework, leading to the generation of a summative mark: a laboratory report (n=56, worth 10% of the module mark) and a literature review (n=26, worth 25%). Each piece of work was assessed by 4 or 5 peers, who were required to provide both a mark and comments on the work. The students were then awarded the mean mark.

Thus far there is nothing exceptional about this process – peer assessment is an established practice in Higher Education (see, for example, Paul Orsmond’s excellent guide on Self- and Peer-Assessment). The controversial element of McConlogue’s activity lies in the fact that the authors of the peer-assessed work were provided with all of the comments made by their contemporaries AND a full record of the range of marks awarded. This “warts and all” approach exposed the students to the mechanics of marking – showing them both the reasoning that went into a mark (some of which seemed poorly aligned with the mark awarded or based on ‘trivialities’) and the fact that an individual “rogue” mark may have significantly influenced the mean. In some cases the individual marks awarded apparently spanned several grade boundaries.
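To see why a single “rogue” mark matters when the mean is used, consider this small Python sketch (the numbers are invented, not taken from McConlogue’s data):

# Invented marks for one piece of work assessed by five peers -- illustrative only
marks = [62, 65, 64, 66, 30]                   # the 30 is a single "rogue" mark

mean_mark = sum(marks) / len(marks)            # 57.4: dragged below a typical 60% grade boundary
median_mark = sorted(marks)[len(marks) // 2]   # 64: barely affected by the outlier

print(f"mean = {mean_mark}, median = {median_mark}")

Awarding the median rather than the mean is one obvious way to blunt the influence of an outlying marker, although the process described in the paper used the mean.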


An instrument to evaluate Assessment for Learning

A&EinHE now has an impact factor

Assessment for Learning (AfL) has been a key notion in recent curriculum developments in both secondary and tertiary education (see this link for previous left-handed biochemist posts on AfL).

The December 2011 edition of Assessment and Evaluation in Higher Education featured a paper Does assessment for learning make a difference? The development of a questionnaire to explore the student response by Liz McDowell and colleagues from the recently-closed AfL CETL in Northumbria. Quoting AfL guru Paul Black, the authors point out that the definition of Assessment for Learning has become overly flexible, “a free brand name to attach to any practice,” before clarifying that for them AfL must encompass six dimensions:

  • Formal feedback – e.g. from tutor comments or self-assessment
  • Informal feedback – e.g. from peer interaction or dialogue with staff
  • Practice – opportunity to try out skills and rehearse understanding
  • Authenticity – assessment tasks must have real-life relevance
  • Autonomy – activities must help students develop independence
  • Summative/Formative balance – an appropriate mix of tasks that are “for marks” and those that are not

The bulk of the paper describes the development and testing of a questionnaire used for evaluation of students’ experience of a module. The questionnaire, which can be downloaded from the AfL CETL website, could be used to provide evidence to justify curriculum change and/or to support the case for quality enhancement. Each of the questions maps to at least one of the six key dimensions.
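By way of illustration, one might imagine the scoring of such an instrument working along these lines (the item names, mappings and ratings below are invented for the sketch, not taken from the actual AfL CETL questionnaire):

# Hypothetical sketch: aggregating Likert responses per AfL dimension.
# Item texts and dimension mappings are invented, not the real instrument.
from statistics import mean

ITEM_DIMENSIONS = {
    "q1_tutor_comments_useful":  ["formal_feedback"],
    "q2_discussed_work_peers":   ["informal_feedback"],
    "q3_chance_to_practise":     ["practice"],
    "q4_tasks_like_real_life":   ["authenticity", "autonomy"],   # items may map to >1 dimension
    "q5_unmarked_tasks_helpful": ["summative_formative_balance"],
}

def dimension_scores(responses):
    """Average Likert ratings (1-5) within each AfL dimension."""
    scores = {}
    for item, rating in responses.items():
        for dim in ITEM_DIMENSIONS[item]:
            scores.setdefault(dim, []).append(rating)
    return {dim: mean(vals) for dim, vals in scores.items()}

print(dimension_scores({
    "q1_tutor_comments_useful": 4,
    "q2_discussed_work_peers": 5,
    "q3_chance_to_practise": 3,
    "q4_tasks_like_real_life": 4,
    "q5_unmarked_tasks_helpful": 2,
}))

Allowing one item to contribute to more than one dimension mirrors the authors’ point that each question maps to at least one of the six.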

In analysing the use of this research instrument to evaluate modules at their own institution, the authors highlighted three principal factors distinguishing AfL and non-AfL courses: staff support and module design; engagement with subject matter; and the role played by peer support. Overall they suggest that the student experience was more positive in modules where AfL approaches were employed.
