Feedback on exams: how much of an issue is it?

In keeping with other universities, Leicester is thinking through the best way to offer feedback to students on their exam performance. Several different models are under consideration, each of which involves significant logistical difficulties, and no firm decision has been made about the approach to take.

We do not (yet) offer students in bioscience the opportunity to see their marked scripts themselves. At various times in recent years I have acted as “broker”, looking through the exam answers of personal tutees in order to suggest ways that they might improve their future answers.

In truth, the findings of this distinctly unscientific study are not earth-shattering. There are four key explanations as to why students fail to achieve the mark they desire:

1. Not answering the question asked – The lowest marks, and those that come as the biggest surprise to their authors, are for answers in which the student has significantly missed the thrust of the question. There is not much that can be done to counsel against this, except the old mantra “read the question carefully”.

2. Not offering enough detail – The most common error in exam answers is a failure to include sufficient detail to be worthy of the mark the author anticipated. This issue can be addressed in practical ways, e.g. by making the expectations overt and by exposing students to real essays written by previous students (as we do in this exercise).

3. Unstructured “brain dump” – This can be a manifestation of error #1, possibly leading to #2 (in the sense that a lot of time and effort has been spent on material which is not strictly relevant). Essays of this type are frequently characterised by flitting back and forth between different issues as the author thinks of them. For the examiner it comes across as either a lack of true understanding about the topic in hand, or as a lazy student essentially saying “I recognise this word in the question, here’s a random collection of things I recall from your lectures, you work out whether any of it is relevant”.

Encouraging students to take a few moments to sit down and plan out their answer before dashing off into the main body of their response will help to reduce this mistake.

4. Illegible handwriting – I speak as someone whose handwriting is not fantastic on a good day and which gets worse under time constraints. Nevertheless, regardless of weaknesses in the marker, the old adage “we can’t award credit for it if we can’t read it” remains true.

Scripts that are completely undecipherable are rare, but they do exist. In the past we were able to offer some glimmer of hope in the form of the opportunity to call in the student and, at their expense, to employ someone to write a readable version as the original author translated their hieroglyphics. This back-up position has now been removed, apparently because a student at a different institution significantly embellished their original version whilst it was being transcribed [note: does anyone have chapter and verse to show that this is not an urban myth? I’d like to know the details].

Given that other mechanisms for capturing an exam answer exist (e.g. use of a laptop) it ought not to be the case that someone gets to the point where they discover after the event that their legibility was inadequate. It is therefore important both that we have the chance to see some hand-written work (preferably early in a student’s time at university) and that a culture is engendered in which it is acceptable to comment on poor handwriting, even if the hand offering the rebuke is not itself ideal.

Letting students know what we expect in essays

At a recent student-staff committee meeting, a first year student rep noted that it was difficult to know what sort of things markers would be looking for in an essay (especially since many people had no cause to write essays at all during their A level science courses).

I was able to point him to the generic guidance we offer in the Undergraduate Handbook, issued to all new students. However, I also wondered whether we ought really to give more information (particularly when we likely give *markers* of the work quite a thorough checklist). So this year I’ve decided to send an email overtly pointing out the kind of things that gain or lose marks (see below). Critics might argue either (a) that this is undue spoon-feeding or (b) that it will make it harder for us to find criteria on which to comment. I would counter this by saying that ironing out some of these issues should make it easier for us to actually get into the *content* of the essay we are assessing, and not end up so focused on the *production and process* that we barely get into discussion of the substance of the essay proper.

Anyhow, we’ll see how it goes.

Criteria markers may be judging:

  • Has the essay got the correct title (not some vague approximation to it)?
  • Does the essay have a proper introduction and conclusion?
  • Are references cited in the text (using the Harvard system)? Is there a well-organised reference list at the end?
  • Does the essay answer the question posed in the title?
  • Is there a logical flow to what has been written (or is it a random collection of points, albeit valid points)?
  • Is the sentence construction good? Are there issues with paragraphing? Is the story “well told”?
  • Has selective or partial coverage of the topic, inevitable in short essays, been justified in any way?
  • Have other instructions been followed, e.g. is the essay double-spaced? Are pages numbered? Is it within the word limit?
  • Are there diagrams? Do they have: Figure number? Title? Legend (if applicable)? Are they referred to in the text? Are they neat and fit for purpose? If “imported” from a source are they cited?
  • Is the title and/or legend “widowed” (on a different page) from the image?
  • Is there inappropriate use of quotes?
  • Is the essay clearly too long (or too short)?