Adjusting “exams” as they move online

Universities across the world are having to adjust to the fact that a room full of students sitting an exam is not an appropriate assessment format for May and June this year. As a consequence, teaching teams need to think laterally about how to interrogate students about the learning they have gained from their modules.


Online meetings have started to look like episodes of the old “Celebrity Squares” gameshow

I am sure many places are way ahead of us on this one, but a few reflections on a recent teaching and learning committee meeting (held via Zoom) may be of benefit to those who are just getting going in their thinking about this.


  1. Duration of tests and the length of time they are accessible. The amount of time that a test is “live” and the time for which an individual student can respond are not necessarily the same thing. A defined two-hour period is not workable for a number of reasons – including potential timezone differences and connectivity problems. My institution has mandated a “24 hour window” for assessments, and our understanding at the chalkface (as it was) is that this means they are live throughout that time.
  2. Factual recall questions aren’t going to work. There has, of course, been a long-standing debate about the educational merits of an over-reliance on questions that reward regurgitation of factoids rather than probing higher learning skills. However, a move to remote (i.e. unsupervised) assessment of students who have ready access to Google* makes this format of question entirely redundant (*other search engines are available).
  3. Students need examples of any new style of questions. A decision that MCQs are not going to be appropriate is only half of the story. Introduction of radically different types of questions is going to require not only production of the actual paper but also additional specimen questions for students who will not be able to draw on past papers for guidance.
  4. Clear and timely instruction. Students are going to need clear guidance before the day of an exam, reiterated in the “instructions” section of the assessment itself. They will need advice not only on the questions themselves but also on all sorts of practicalities about submission.
  5. Essay questions will need word limits. For all manner of reasons the standard “three essays in two hours” format is not going to work. More time is inevitably going to mean more words. We all hope that essays represent carefully constructed and reasoned arguments in response to a specific question. Sadly the reality can sometimes be “brain dumps”, in which any material matching identifiable keywords in the title is served up for academics to sift through. A longer time will just allow for more of this unstructured stream of consciousness.
    Even taking a less jaundiced view, a good student is going to be tempted to offer far more material in support of their answer than they would realistically have managed in the typical exam scenario. If we cannot restrict the available time, then another option is to impose a word limit. Having looked at past answers, a suggestion of 1200 words for a 40 minute question (i.e. 30 words per minute) has been floated. “Quality over quantity” needs to be emphasised – more is not necessarily better. Of course there may be a minority of students who would have written a longer essay than this, but even they will benefit from tailoring their material as a response to the specifics of the question.
  6. Plagiarism detection, and other “course essay” regulations, are back in play. The kind of measures being considered as “reasonable adjustments” in this unprecedented scenario are much more akin to coursework essays. We aspire to have novel synthesis presented in exam essays, but in the past we would not have penalised faithful regurgitation of material from lectures and other sources. Now, however, there is the very real danger of copy and paste plagiarism from lecture notes, from books and articles, or indeed of collusion between students. The requirement to use plagiarism detection tools is therefore going to be essential. Similarly, students will be able to drop in images taken from sources. Whereas in a constrained exam format we might not have worried about their origins, appropriate citation will need to be factored into marking criteria.
  7. Practicalities about the format of the paper and submission requirements also need to be clear. It is not just the content of the questions that needs addressing, but also aspects of the delivery of the paper and the safeguards students need to put in place regarding submission. For example, it is likely that a paper will be distributed as a Word document, which is actually more accessible than many other potential formats. We know, however, that some elements of layout can be altered during the submission of Word documents (e.g. positioning of images), and so we would probably recommend saving as a PDF before submission (much as we would usually for coursework).

This is not an exhaustive list, and you may instantly spot the flaws in the observations made – if so then do please let me know. I am very conscious that this is prepared in the context of a science program and that other disciplines may see things differently. But I hope these notes will be helpful for at least one or two of you.


What is marking for?

Alongside novel challenges in the delivery and assessment of higher education, the current health crisis is causing some older issues in pedagogy to bubble back to the surface. One of these is the tension between marking and feedback.


Most academics, I suspect, have had the demoralising experience of finding boxes of carefully annotated work sitting uncollected in the administration office long after any interested parties would have picked up their work. Even with the switch of many assignments online (over several years, not just this week) you can see that the feedback feature hasn’t even been opened by many students in a given cohort (and you cannot tell the extent to which those who have clicked on it actually engaged with the comments).

I have been reminded of this by the impact of COVID-19 on existing plans. My first year students have recently written an essay under exam conditions. This is their first taste of an assessment format they will encounter much more frequently over the next couple of years. Yes, I know this is anachronistic, and yes, we have made significant strides towards diversification of assessment, but the fact remains that at present essay-writing in a time-limited setting is a skill they will need to develop. My belief in the importance of this task as part of the students’ training was a significant factor in my heavy-hearted decision not to participate in the recent strike (but that is a conversation for a different day). Continue reading

Does attendance at lectures matter? An accidental case study

There is a lot of discussion in the University sector at the moment about student engagement and attendance at lectures. I know that several institutions (including my own) have ongoing pedagogic projects trying to ascertain why there has been a decline in the number of people turning up for face-to-face teaching sessions.

I was faced in March with the dispiriting spectacle of turning up to give one of my second year lectures and finding the room considerably under-populated. The attendance monitoring system suggests that there were 66 out of 185 students present – about 36%, a smidge over a third (and this is before we get into the rising phenomenon of “swipe-n-go” students who log their attendance… then don’t!). Yes, it was the last lecture in the entire module; yes, there were probably looming deadlines in other modules; but part of my frustration at the level of absenteeism was born of the fact that I knew that my 15 mark Short Answer Question for the summer exam was based on the content of this session. I was therefore intrigued to see how the students would get on – would there be reams of blank pages (the outcome that leaves academics with mixed feelings – disappointment at missed learning, offset by a guilty acknowledgement that their marking burden is reduced)? Continue reading

When technology models poor practice

One of the difficulties in teaching first year students is to convey the importance of appropriate handling of data, both in terms of data display and degrees of significance. I’ve commented previously on this site about times when technology can produce utterly inappropriate graphic representation of results (see A bonus lesson in my data handling tutorial).

At the end of the first semester we conduct an online exam using the Blackboard quiz tool. The assessment is out of 200, marked automatically and scaled to a percentage. When the students submit their answers at the end of the test, they get instant reporting of their result. The screenshot on the right shows a section from the gradebook where the results are recorded in exactly the detail each student gets, i.e. to up to 5 decimal places! It is unfortunate that this inappropriate “accuracy” gets displayed to the students.
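The fix is trivial to sketch. A minimal illustration (assuming, as in our test, a raw mark out of 200 scaled to a percentage; the function names here are hypothetical, not Blackboard’s) shows the difference between the stored value and a sensibly rounded one:

```python
def scaled_percentage(raw_mark, max_mark=200):
    """Scale a raw mark to a percentage (unrounded)."""
    return raw_mark * 100 / max_mark

def display_mark(raw_mark, max_mark=200):
    """Format a mark for display at a defensible precision.

    With whole marks out of 200, percentages can only move in 0.5%
    steps, so even one decimal place is already generous.
    """
    return f"{scaled_percentage(raw_mark, max_mark):.1f}%"

score = scaled_percentage(137)
print(f"{score:.5f}%")    # five decimal places, as the gradebook shows it: 68.50000%
print(display_mark(137))  # what students arguably should see: 68.5%
```

The point of the sketch is that the spurious precision is purely a display choice: the underlying mark is unchanged, and a single format specifier in the reporting layer would model good practice instead of bad.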

“Please send a photo”


One recent email exchange related to someone else’s order for running shoes, sent to me in error

I’ve recently had cause to contact three different companies about inadequacies in their service. The reasons for doing so in each case were very different, but there was a common thread to their replies: “Please send a photo of the [relevant item]”. When the third request came in, I started to see a pattern and this set me ruminating on why they were adding this extra step to dealing with my query.

And then it struck me, that this was exactly the reason – it was an extra step. It is part of a filtering process. It is easy enough for all and sundry to fire off email requests willy-nilly. As a mechanism to weed out the serious appellant from the time-waster there needed to be an additional hurdle. [I have vague memories from school history lessons that monasteries used to offer a similar process. Potential novices were never admitted at their first attempt, they were required to return on several occasions before securing entry into the monastic life.]

I mention this here, on my education blog, because I actually operate a similar system when it comes to requests from students. If you are involved in academia I am sure you recognise emails, particularly as exams loom, that go something like: Continue reading

The NSS and Enhancement (Review)

Coverage of the findings from the recent, new-style National Student Survey drew my attention to the Making it count report for the Higher Education Academy, coordinated by Alex Buckley (I’m afraid I’ve lost details of who pointed me towards the report, so cannot offer credit where credit is due).

Making it count is not new; it was published by the HEA in 2012, and therefore predates both the new-NSS and the introduction of the TEF. Nevertheless I found it a fascinating and worthwhile read – hence this reflective summary.

As most readers of this blog will know, the UK National Student Survey was introduced in 2005 and draws inspiration from the Australian Course Experience Questionnaire (CEQ), which had been in use since the early 1990s. From inception until 2016 there was a standard set of 23 questions in the NSS (see this link for complete list). The questions were all positively phrased and students in their final year were invited to respond using a standard five-point scale from “definitely agree” through “mostly agree”, “neither agree nor disagree”, “mostly disagree” to “definitely disagree” (“not applicable” was also an option). Following extensive consultation, the questions were changed for the first time in 2017. A total of 27 questions were included, with some original questions retained, some rephrased and some brand new ones added (see this link for 2017 questions). Continue reading

When assessment interferes with the measured

There was a time, not so long ago, when no scientific presentation could afford to omit at least one cartoon from The Far Side. One of my personal favourites (which can be seen here) depicts people in an apparently remote part of the world hiding their luxury Western goods as anthropologists arrive unannounced in the village.

I was reminded of this cartoon recently whilst washing my hands at work. This surprising mental leap was prompted by the temporary addition of a tool for monitoring water consumption in one of our buildings.

Can the method of assessment interfere with the thing it is supposed to be measuring?


As can be seen in the photograph, the equipment being used scores few points for subtlety. I cannot believe that people use their usual amounts of water when confronted by this instrument. This raises questions about the value of the readings, given that the method of monitoring is almost certainly interfering with the thing that is being measured. Continue reading
