Marking, remarking and meaningful learning

“Marking, remarking and meaningful learning: an assessment and feedback seminar” was held at the University of Leicester on April 4th 2008. The event was organised by the Assessment and Feedback Working Party of the University’s Student Experience Enhancement Committee and was attended by about 60 members of the academic community. What follows are my personal reflections and the things that I took from the day.


The first presentation was given by Jon Scott, Director of Studies in Biological Sciences at the University. Jon’s cryptic title “How the baby got the Smartie” actually drew analogies between his research work on the development of motor coordination skills and the effective use of feedback. The ability of a baby to pick up a Smartie from a flat surface is apparently a developmental landmark (presumably there are healthy options now available for choco-phobic parents). Research on brain activity whilst learning this task has shown that neurons fire in response to failure to achieve the task, i.e. whilst the infant is self-feedbacking (is that a word?). It knows what it is expecting (bright, interesting-looking object in mouth) and feedback modifies performance until it gets it. Once the task has been mastered, apparently, the relevant neurons go silent.

Jon then applied this observation to the way students use (or ought to use) assessment and feedback. He stressed that feedback is an active process for both assessor and learner, and that feedback can show both what is OK and what is needed for next time. Hence we need to encourage reflective behaviour in students: they need to review their work in the light of the feedback received to work out how to do better next time.

Why did I get 37%?
Next up was Brenda Smith from the Higher Education Academy. Brenda’s theme was Assessment for Learning: why did I get 37%? AfL is a topic we’ve considered previously at the Journal of the Left-handed Biochemist. The title of Brenda’s session reflects a comment made by a student in a research project; their tutor was reported to have refused point-blank to enter into discussion on the matter.

Brenda began by reflecting on the growing evidence from QAA subject reviews, the National Student Survey and elsewhere that all is not well in the world of assessment and feedback. The problems noted are familiar to anyone who has taken an active interest in this process: too much summative feedback, not enough formative; feedback coming too late to be useful; inconsistency in assessment practice both within and between institutions. Referring to data from the 2006 and 2007 National Student Surveys, Brenda highlighted that the lowest scores (51% for both questions, in both years) came in response to the statements “feedback on my work has been prompt” and “feedback on my work has helped me clarify things I did not understand”. These are serious issues if feedback is to give relevant guidance for improvement in a timely fashion.

Tackling a number of ‘myths’ about assessment and feedback, Brenda questioned whether high failure rates on some courses could really be defended as “maintaining high standards”, as some institutions might try to do. The thorny issue of feedback on exam performance also came up: the more students pay for their courses, the more they feel they are entitled to it. I certainly have some sympathy with the need to offer students training and advice on taking exams, including the opportunity to receive both peer and tutor feedback – indeed we have run an exercise with precisely this aim over a number of years (see ‘You have 45 minutes, starting from now’: Helping Students Develop their Exam Essay Skills). I do, however, worry that this genuine need, and entitlement, could descend into the type of routine calls for remarking that are now rife in A-level courses.

Turning to the need for consistency (but stressing that consistency is not the same as conformity), Brenda then showed the discrepancies in the balance between coursework and examinations in a number of departments at a University “not dissimilar to Leicester”. One department highlighted seemed to assess students in their second year entirely on the basis of exams. Given how little feedback students in general receive about exam performance, do these students receive any feedback at all for the whole of that academic year? We were challenged both to be better informed about assessment practice in other Schools and Departments within our own institution, and to actively seek out examples of best practice elsewhere (e.g. via joint awaydays).

Why do we assess? Brenda gave four reasons: certification (i.e. to show someone is fit to practise), quality assurance, learning and sustainability. She argued that we have perhaps over-focussed on the first two and not enough on the others. Regarding sustainability, for example, do we do enough to equip our students to be sustainable learners themselves, and also to be future givers of valuable feedback to others? Do we encourage students to read each other’s material and offer one positive and one negative comment, and so develop a habit of critical assessment?

To make the best use of this opportunity, students will need some training in what to look for. Begin simply, e.g. give them three pieces of work – one good, one average and one weak. Can they tell which is which, and why? What feedback would they offer the authors of each piece of work? Moving on from that, how would they feel if they received the feedback they had just written?

Finally, Brenda pointed us to the Student Enhanced Learning through Effective Feedback (SENLEF) pages on the Higher Education Academy website. She drew our attention particularly to the card-sorting activities and to the seven principles of good feedback practice.

How was it for you?

Next up was Aaron Porter, currently a Sabbatical Officer in the Students’ Union at Leicester, and recently elected to the national leadership of the National Union of Students. Aaron began by showing a series of voxpop interviews featuring about ten students answering a set of questions: What is feedback? What is good feedback? What do you do with the feedback given to you? Does your personal tutor help you with feedback? How has feedback you received helped you to learn? What would be useful feedback to you? The views expressed were interesting, but offered no particular fresh insights. I was starting to get really cross when all of the students talked about how carefully they revisit their work after they’ve received feedback, and was much relieved when Aaron also expressed his scepticism about how representative this was.

In fact I was generally impressed by Aaron’s presentation, and he raised a number of thought-provoking issues. He considered the timing of assessments, the balance of different types of assessment, the usefulness of feedback and the use of appropriate technologies for the 21st century. His thoughts on timing were probably the most interesting: communicate with colleagues more effectively to spread out the load. This allows students to spend more time on each piece of work, spreads the marking burden for staff, and has relevance for the mental health of students. The first of these wasn’t new, and the second I actually disagree with (I LIKE marking coming in at certain times, as it leaves other periods free of marking), but it was the third point that was the fresh insight. Of course, once aired it makes a lot of sense: multiple deadlines crammed into a short period of time, coupled with financial worries and the burden of parental expectation (especially if the parents are paying), can be highly detrimental to a student’s mental well-being. He also put in a general plea for earlier notification of exam timetables to facilitate better planning, which is something staff would wholeheartedly echo.

Commenting on the National Student Survey, and the apparently poor satisfaction scores, Aaron made the point that if there is an approximately even split between coursework and exams, then the fact that most students prefer one or the other means that no-one is ever going to award top marks for assessment satisfaction. Good point. He also stressed the need to make students aware that they receive feedback in a variety of formats; it is not only written comments and advice from their personal tutors.

Whilst recognising that there are logistical problems associated with feedback on exam scripts, he added his voice to concerns that the status quo is unsatisfactory from an educational perspective. Having said that, Aaron did also query the appropriateness of exam essays as an assessment format at all. Students, he suggested, are increasingly calling for ‘real world’ relevance in the tasks that are set for them. When will someone in employment ever be asked to write by hand on a topic (as opposed to word-processing it), in a linear start-to-finish manner, for a period of an hour, without reference to source materials or the internet? Is it time, he wondered, for students to be allowed to use computers in exams?

Personally, I’m not sure this is a runner – unless there were some kind of auto-saving as they went along, I can just see lots of people crying into their keyboards after two and a half hours, complaining that they’ve accidentally pressed delete and lost all of their work. I’m also not taken with Aaron’s final suggestion of online tracking of coursework, so that students can see where in the marking/second-marking process their work has got to, in the same way that you might track the progress of an online delivery. It strikes me this would involve a huge administrative burden for very little net gain. It was funny to hear it so soon after Alan Sugar had mocked the 24-hour hotline for laundry monitoring on this week’s The Apprentice – they seem like two peas in a pod.

More for less?
The last presentation was by Phil Race. This is the second time I’ve been to a session led by Phil, and hearing him again reminded me how many practical tips and tricks I’d taken and applied in my teaching after our previous encounter. He is also very generous with his resources via his website. This time around his topic was How can we get better feedback to more students in less time?

One of the points that came across most strongly from Phil was the need to separate feedback from the return of marks; in the table discussion before the talk he went as far as to say that “it’s unethical to give students a mark at the same time as feedback”. Feedback is most useful when it is given within 24 hours of a submission deadline, since the material will be fresh in the minds of all the students. Clearly we can’t be expected to have marked, or even read, all of the work in that timescale, but there is plenty of generic feedback we can give – the sorts of things that we end up writing time and again on student scripts (which allows the individual feedback we subsequently give to be all the more targeted).

The predictable problem with this approach was raised – how do you cope with late submissions? Phil was clear on this issue too – why should the 97% of students who submitted on time miss out on prompt feedback just because of the 3% who haven’t made the deadline? The problem is sidestepped if an alternative but equivalent Assignment B (albeit deliberately on a slightly less appealing topic) is set at the same time as the original Assignment A. If a student misses the deadline for Assignment A then they do B; that way they get fair treatment if the deadline was missed for a genuine reason.

Giving feedback before you give the mark also allows for an additional carrot to ensure students engage fully with the comments and advice that we’ve offered: if, on the basis of the feedback given, they come up with a mark within 5% of the mark you awarded, they can have whichever is the higher mark. After reading your comments, more than 90% of students will be within that range of your score – you can then target discussions with those who have significantly misread the merit of their work and examine where the false perception has arisen.

The other main point I took away from Phil’s contribution – and it was from the round-table discussion rather than the talk – was the merit of offering students a proforma for recording the feedback they have received and the use to which they have put it. The form can be stored in their portfolio of PDP evidence, it can be a tool for aiding genuine interaction with their feedback to make it feedforward, and it may allow them to identify repeat themes coming up from several different markers. Of course some may not choose to use the proforma, and others may simply ‘play the game’ without really entering into the spirit of the exercise, but for those who do make the most of the opportunity it may be a valuable addition to their learning experience.

To summarise
Overall, this was a useful day – nothing ground-breaking, but some helpful reminders and some new food for thought. As is so often the case, the big frustration is one of ‘preaching to the converted’ – I suspect that the assembled staff are probably amongst those who are already the most conscientious about the feedback they give. It was also interesting to participate in Alan Cann’s experiment in liveblogging using Twitter (with hashtagging) – a valuable way to share and capture insights in 140-character chunks.