When assessment interferes with the measured

There was a time, not so long ago, when no scientific presentation could afford to omit at least one cartoon from The Far Side. One of my personal favourites (which can be seen here) depicts people of an apparently remote part of the world hiding their luxury Western goods as anthropologists arrive unannounced in the village.

I was reminded of this cartoon recently whilst washing my hands at work. This surprising mental leap was prompted by the temporary addition of a tool for monitoring water consumption in one of our buildings.

Can the method of assessment interfere with the thing it is supposed to be measuring?

As can be seen in the photograph, the equipment being used scores few points for subtlety. I cannot believe that people use their usual amounts of water when confronted by this instrument. This calls into question the value of the readings, given that the method of monitoring is almost certainly interfering with the very thing being measured.

Feedback on exams: how much of an issue is it?

In keeping with other universities, Leicester is thinking through the best way to offer feedback to students on their exam performance. Several different models are under consideration, each of which involves significant logistical difficulties, and no firm decision has been made about the approach to take.

We do not (yet) offer students in bioscience the opportunity to see their marked scripts themselves. At various times in recent years I have acted as “broker”, looking through the exam answers of personal tutees in order to suggest ways that they might improve their future answers.

In truth, the findings of this distinctly unscientific study are not earth-shattering. There are four key explanations as to why students fail to achieve the mark they desire:

1. Not answering the question asked – The lowest marks, and those that come as the biggest surprise to their authors, are for answers that significantly miss the thrust of the question. There is not much that can be done to counsel against this, except the old mantra “read the question carefully”.

2. Not offering enough detail – The most common error in exam answers is a failure to include sufficient detail to be worthy of the mark the author anticipated. This issue can be addressed in practical ways, e.g. by making the expectations overt and by exposing students to real essays written by previous students (as we do in this exercise).

3. Unstructured “brain dump” – This can be a manifestation of error #1, possibly leading to #2 (in the sense that a lot of time and effort has been spent on material which is not strictly relevant). Essays of this type are frequently characterised by flitting back and forth between different issues as the author thinks of them. For the examiner it comes across as either a lack of true understanding about the topic in hand, or as a lazy student essentially saying “I recognise this word in the question, here’s a random collection of things I recall from your lectures, you work out whether any of it is relevant”.

Encouraging students to take a few moments to sit down and plan out their answer before dashing off into the main body of their response will help to reduce this mistake.

4. Illegible handwriting – I speak as someone whose handwriting is not fantastic on a good day and which gets worse under time constraints. Nevertheless, regardless of weaknesses in the marker, the old adage “we can’t award credit for it if we can’t read it” remains true.

Scripts that are completely undecipherable are rare, but they do exist. In the past we were able to offer some glimmer of hope in the form of the opportunity to call in the student and, at their expense, to employ someone to write a readable version as the original author translated their hieroglyphs. This back-up position has now been removed, apparently because a student at a different institution significantly embellished their original version whilst it was being transcribed [note: does anyone have chapter and verse to show that this is not an urban myth? I’d like to know the details].

Given that other mechanisms for capturing an exam answer exist (e.g. use of a laptop) it ought not to be the case that someone gets to the point where they discover after the event that their legibility was inadequate. It is therefore important both that we have the chance to see some hand-written work (preferably early in a student’s time at university) and that a culture is engendered in which it is acceptable to comment on poor handwriting, even if the hand offering the rebuke is not itself ideal.

Student-generated video as a means to teach bioethics

The second phase of my November tour has taken me to Naples, for the UNESCO Chair in Bioethics 9th World Conference on Bioethics, Medical Ethics and Health Law. I hope to find time to reflect more fully on the conference in the next few days.

In the meantime, I’ve provided a link to the slides from my presentation on the work we’ve been doing over the past six years, in which second year Medical Biochemists (and Medics) produce short videos about different aspects of biomedical ethics.

Headline Bioethics

I have mentioned the Headline Bioethics project here previously, including links to a poster I presented at the Leicester Teaching and Learning event (January 2013) and again at the Higher Education Academy STEM conference (April 2013).

A paper giving more details about the task was published last week in the journal Bioscience Education. The abstract states:

An exercise is described in which second year undergraduate bioscientists write a reflective commentary on the ethical implications of a recent biological/biomedical news story of their own choosing. As well as being of more real-world relevance than writing in a traditional essay format, the commentaries also have potential utility in helping the broader community understand the issues raised by the reported innovations. By making the best examples available online, the task therefore has the additional benefit of allowing the students to be genuine producers of resources.

This is not, incidentally, to be confused with the other activity I’ve been doing with a different cohort of second year students in which they produce short films about bioethics (the paper on that subject is forthcoming).


More on “Headline Bioethics Commentaries”

Following on from last week’s post about our Headline Bioethics project, here is a poster about the assignment and the repurposing of the students’ work, which I presented at the University of Leicester Learning and Teaching conference on January 10th. The poster is shown here, with a pdf version available via this link.


Headline Bioethics Commentaries

Headline Bioethics Commentaries are a new series of student-authored articles reflecting on bioethical aspects of news stories

Over at our sister site BioethicsBytes I’ve started to release a new series of articles under the title Headline Bioethics Commentaries. The first couple are already up, and I’ll be adding some more over the next few days.

I mention this here because I wanted to expand a bit on the pedagogic thinking behind this new strand; such a “show your workings” post is more at home on Journal of the Left-handed Biochemist than over at the bioethics site proper.

The Headline Bioethics Commentaries start out as assessed pieces in a second year Research Skills module (a core unit, currently taken by about 150 students). I contribute a sizeable slice of bioethics teaching to the module and we needed an assignment to go with this chunk of the course. Right from the outset (in 2009), I wanted to break the “write an essay on…” mould and to ask the students to carry out a task with real-world applicability: to produce material that would potentially be useful to other people rather than languishing in a lever-arch folder.

JK Rowling and the Assessment Dilemma

JK Rowling’s first post-Potter novel

This is a thought-in-progress rather than a full-blown post. Whilst browsing around on the Amazon website last week I happened to notice that JK Rowling had a new novel “The Casual Vacancy” coming out. What struck me most was the low star-rating the book was apparently scoring… not least because it hadn’t actually been published yet. Curious, I clicked onto the customer feedback to find out what was going on.

It quickly transpired that the panning the book was receiving had nothing to do with the written word. Instead Kindle-owners were venting their wrath about the fact that the ebook was retailing for more than the hardback. “too expensive”, “why did I buy a kindle”, “rip off”, “disgusted” cried the subject lines of the comments*.

Rather than rating the quality of Ms Rowling’s story, the intended focus of the feedback, the potential customers were using the only channel open to them to register a quite different complaint.

This set me thinking about the kind of module assessment feedback that universities gather from students. If we haven’t provided them with appropriate mechanisms to raise issues about which they are dissatisfied, then there is a danger that the numeric module feedback we receive may actually mean something entirely different to the interpretation we later place upon it.

(* as it happens the feedback since the book was published has continued to be pretty rotten, but this doesn’t negate the original observation)