When technology models poor practice

One of the difficulties in teaching first year students is conveying the importance of appropriate handling of data, both in terms of data display and the number of significant figures reported. I’ve commented previously on this site about times when technology can produce utterly inappropriate graphic representation of results (see A bonus lesson in my data handling tutorial).

At the end of the first semester we conduct an online exam using the Blackboard quiz tool. The assessment is out of 200, marked automatically and scaled to a percentage. When the students submit their answers at the end of the test, they get instant reporting of their result. The screenshot on the right shows a section from the gradebook where the results are recorded in exactly the detail each student gets, i.e. to 5 decimal places! It is unfortunate that this inappropriate “accuracy” gets displayed to the students.
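As an aside, the spurious precision is trivial to avoid in software. A minimal sketch of the point (the function and the raw mark are hypothetical illustrations, not Blackboard’s actual behaviour): a whole-number mark out of 200 can only ever yield a percentage ending in .0 or .5, so one decimal place captures everything there is to know.

```python
def as_percentage(raw: int, total: int = 200, places: int = 1) -> float:
    """Scale a raw mark to a percentage, rounded to a sensible precision."""
    return round(raw / total * 100, places)

raw = 157  # hypothetical raw mark out of 200

# What the gradebook displays: five decimal places of false precision.
print(f"{raw / 200 * 100:.5f}")   # prints "78.50000"

# What the student might more sensibly see.
print(as_percentage(raw))          # prints "78.5"
```

The trailing zeros add no information; they merely suggest a precision the assessment does not possess.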


“Please send a photo”


One recent email exchange related to someone else’s order for running shoes, sent to me in error

I’ve recently had cause to contact three different companies about inadequacies in their service. The reasons for doing so in each case were very different, but there was a common thread to their replies: “Please send a photo of the [relevant item]”. When the third request came in, I started to see a pattern and this set me ruminating on why they were adding this extra step to dealing with my query.

And then it struck me that this was exactly the point – it was an extra step, part of a filtering process. It is easy enough for all and sundry to fire off email requests willy-nilly. As a mechanism to weed out the serious appellant from the time-waster, there needed to be an additional hurdle. [I have vague memories from school history lessons that monasteries used to offer a similar process. Potential novices were never admitted at their first attempt; they were required to return on several occasions before securing entry into the monastic life.]

I mention this here, on my education blog, because I actually operate a similar system when it comes to requests from students. If you are involved in academia I am sure you recognise emails, particularly as exams loom, that go something like:

Feedback on exams: how much of an issue is it?

In keeping with other universities, Leicester is thinking through the best way to offer feedback to students on their exam performance. Several different models are under consideration, each of which involves significant logistical difficulties, and no firm decision has been made about the approach to take.

We do not (yet) offer students in bioscience the opportunity to see their marked scripts themselves. At various times in recent years I have acted as “broker”, looking through the exam answers of personal tutees in order to suggest ways that they might improve their future answers.

In truth, the findings of this distinctly unscientific study are not earth-shattering. There are four key explanations as to why students fail to achieve the mark they desire:

1. Not answering the question asked – The lowest marks, and those that come as the biggest surprise to their authors, are for answers that significantly miss the thrust of the question. There is not much that can be done to counsel against this, except the old mantra “read the question carefully”.

2. Not offering enough detail – The most common error in exam answers is a failure to include sufficient detail to be worthy of the mark the author anticipated. This issue can be addressed in practical ways, e.g. by making the expectations overt and by exposing students to real essays written by previous students (as we do in this exercise).

3. Unstructured “brain dump” – This can be a manifestation of error #1, possibly leading to #2 (in the sense that a lot of time and effort has been spent on material which is not strictly relevant). Essays of this type are frequently characterised by flitting back and forth between different issues as the author thinks of them. For the examiner it comes across as either a lack of true understanding about the topic in hand, or as a lazy student essentially saying “I recognise this word in the question, here’s a random collection of things I recall from your lectures, you work out whether any of it is relevant”.

Encouraging students to take a few moments to sit down and plan out their answer before dashing off into the main body of their response will help to reduce this mistake.

4. Illegible handwriting – I speak as someone whose handwriting is not fantastic on a good day and gets worse under time constraints. Nevertheless, whatever the weaknesses of the marker, the old adage “we can’t award credit for it if we can’t read it” remains true.

Scripts that are completely undecipherable are rare, but they do exist. In the past we were able to offer some glimmer of hope in the form of an opportunity to call in the student and, at their expense, employ someone to write a readable version as the original author deciphered their hieroglyphics. This back-up position has now been removed, apparently because a student at a different institution significantly embellished their original version whilst it was being transcribed [note: does anyone have chapter and verse to show that this is not an urban myth? I’d like to know the details].

Given that other mechanisms for capturing an exam answer exist (e.g. use of a laptop) it ought not to be the case that someone gets to the point where they discover after the event that their legibility was inadequate. It is therefore important both that we have the chance to see some hand-written work (preferably early in a student’s time at university) and that a culture is engendered in which it is acceptable to comment on poor handwriting, even if the hand offering the rebuke is not itself ideal.
