Biosummit 2017

The University of East Anglia (Norwich) was the venue for the annual Biosummit, a gathering of UK bioscientists with an active interest in pedagogic research. As usual there was much to reflect upon, and the event is captured in this Storified summary of tweets. My own formal contribution was limited to reflections on the value of the Royal Society of Biology's CPD framework as a mechanism for capturing evidence of activity, and reflection upon that activity, which is increasingly required for appraisals, accreditation and applications. The slides from my talk are available below (and via this link).

This continues to be a bona fide “Community of Practice”. One of the pleasures is seeing like-minded friends and catching up on what they're doing in their lives as well as in their work. The content of the conference, however, remains central, and this year there were a number of highlights for me.


The NSS and Enhancement (Review)

Coverage of the findings from the recent new-style National Student Survey drew my attention to the Making it count report for the Higher Education Academy, coordinated by Alex Buckley (I'm afraid I've lost details of who pointed me towards the report, so cannot offer credit where credit is due).

Making it count is not new; it was published by the HEA in 2012, and therefore predates both the new NSS and the introduction of the TEF. Nevertheless I found it a fascinating and worthwhile read – hence this reflective summary.

As most readers of this blog will know, the UK National Student Survey was introduced in 2005 and draws inspiration from the Australian Course Experience Questionnaire (CEQ), which had been in use since the early 1990s. From inception until 2016 there was a standard set of 23 questions in the NSS (see this link for the complete list). The questions were all positively phrased and students in their final year were invited to respond using a standard five-point scale from “definitely agree” through “mostly agree”, “neither agree nor disagree” and “mostly disagree” to “definitely disagree” (“not applicable” was also an option). Following extensive consultation, the questions were changed for the first time in 2017. A total of 27 questions were included, with some original questions retained, some rephrased and some brand-new ones added (see this link for the 2017 questions).

Feedback on exams: how much of an issue is it?

In keeping with other universities, Leicester is thinking through the best way to offer feedback to students on their exam performance. Several different models are under consideration, each of which involves significant logistical difficulties, and no firm decision has been made about the approach to take.

We do not (yet) offer students in bioscience the opportunity to see their marked scripts themselves. At various times in recent years I have acted as “broker”, looking through the exam answers of personal tutees in order to suggest ways that they might improve their future answers.

In truth, the findings of this distinctly unscientific study are not earth-shattering. There are three or four key explanations as to why students fail to achieve the mark they desire:

1. Not answering the question asked – The lowest marks, and those that come as the biggest surprise to their authors, are for answers that significantly miss the thrust of the question. There is not much that can be done to counsel against this, except the old mantra “read the question carefully”.

2. Not offering enough detail – The most common error in exam answers is a failure to include sufficient detail to be worthy of the mark the author anticipated. This issue can be addressed in practical ways, e.g. by making the expectations overt and by exposing students to real essays written by previous students (as we do in this exercise).

3. Unstructured “brain dump” – This can be a manifestation of error #1, possibly leading to #2 (in the sense that a lot of time and effort has been spent on material which is not strictly relevant). Essays of this type are frequently characterised by flitting back and forth between different issues as the author thinks of them. For the examiner it comes across as either a lack of true understanding about the topic in hand, or as a lazy student essentially saying “I recognise this word in the question, here’s a random collection of things I recall from your lectures, you work out whether any of it is relevant”.

Encouraging students to take a few moments to sit down and plan out their answer before dashing off into the main body of their response will help to reduce this mistake.

4. Illegible handwriting – I speak as someone whose handwriting is not fantastic on a good day and gets worse under time constraints. Nevertheless, regardless of weaknesses in the marker, the old adage “we can’t award credit for it if we can’t read it” remains true.

Scripts that are completely undecipherable are rare, but they do exist. In the past we were able to offer some glimmer of hope in the form of the opportunity to call in the student and, at their expense, to employ someone to write out a readable version as the original author translated their hieroglyphics. This back-up position has now been removed, apparently because a student at a different institution significantly embellished their original version whilst it was being transcribed [note: does anyone have chapter and verse to show that this is not an urban myth? I’d like to know the details].

Given that other mechanisms for capturing an exam answer exist (e.g. use of a laptop), no one should discover only after the event that their handwriting was too poor to be read. It is therefore important both that we have the chance to see some hand-written work (preferably early in a student’s time at university) and that a culture is engendered in which it is acceptable to comment on poor handwriting, even if the hand offering the rebuke is not itself ideal.

JK Rowling and the Assessment Dilemma

JK Rowling’s first post-Potter novel

This is a thought-in-progress rather than a full-blown post. Whilst browsing around on the Amazon website last week I happened to notice that JK Rowling had a new novel, “The Casual Vacancy”, coming out. What struck me most was the low star-rating the book was apparently scoring… not least because it hadn’t actually been published yet. Curious, I clicked onto the customer feedback to find out what was going on.

It quickly transpired that the panning the book was receiving had nothing to do with the written word. Instead Kindle-owners were venting their wrath about the fact that the ebook was retailing for more than the hardback. “too expensive”, “why did I buy a kindle”, “rip off”, “disgusted” cried the subject lines of the comments*.

Rather than rating the quality of Ms Rowling’s story, the intended focus of the feedback, the potential customers were using the only channel open to them to register an entirely different complaint.

This set me thinking about the kind of Module Assessment feedback Universities gather from students. If we haven’t provided them with appropriate mechanisms to raise issues about which they are dissatisfied, then there is a danger that the numeric module feedback we receive may actually mean something entirely different to the interpretation we later place upon it.

(* as it happens the feedback since the book was published has continued to be pretty rotten, but this doesn’t negate the original observation)
