What characterises “quality” in ethics education?

I recently read Ercan Avci’s 2017 paper Learning from experiences to determine quality in ethics education (International Journal of Ethics Education 2:3-16). Avci, from Duquesne University, conducted a literature review looking for shared characteristics in peer-reviewed, full-text articles with “ethics education”, “ethics teaching” or “ethics learning” in the title and “ethics” or “ethics education” in the keywords during the period 2010-2015 (which the author describes as “the last five years”, though it looks like six years to me). A total of 34 papers were examined, drawn from 11 academic disciplines and 10 countries (plus 3 international studies). As one might anticipate, the USA was the most represented geographical context, and healthcare (Nursing, Medicine, etc.) was the discipline with the highest number of studies. I was a little surprised to see that none of the reports were from the UK.

As the author himself points out, this is a rather eclectic mix of settings. This might be spun either as an advantage (e.g. capturing diversity) or as a limitation (when it comes to drawing universal lessons). Notwithstanding these issues, Avci makes a number of important observations, some of which resonate with my own experience (e.g. see the Notes for the Tutor section, p16 onwards, in my contribution to the 2011 book Effective Learning in the Life Sciences).


Taking a step back, there is an initial question to ask before examining the quality of any ethics programme, namely: is ethics being taught at all? It is apparent that many courses – even in Medicine, even in the States – do not include a formal ethics component. However, a broad range of subjects now include some ethics in their teaching.


Some tips for developing online educational repositories

As part of my work enthusing about the use of broadcast media in teaching, I am in the process of writing a guide to the use of Learning on Screen’s Box of Broadcasts resource. However, my reflections on this project, coupled with the development of other blog-based resources such as Careers After Biological Science, set me thinking about some more generic recommendations for anyone considering setting up an online collection of educational resources. These crystallised quite naturally into a series of questions to ask oneself about the purpose, scope and authorship of the materials.

On the advice of a couple of colleagues, I submitted this to the Association for Learning Technology blog. I was delighted when they accepted it, since members of that community are likely to be developing similar resources. My self-check questions can be found via this link.



When technology models poor practice

One of the difficulties in teaching first year students is to convey the importance of appropriate handling of data, both in terms of data display and the number of significant figures reported. I’ve commented previously on this site about times when technology can produce utterly inappropriate graphic representation of results (see A bonus lesson in my data handling tutorial).

At the end of the first semester we conduct an online exam using the Blackboard quiz tool. The assessment is out of 200, marked automatically and scaled to a percentage. When the students submit their answers at the end of the test, they get instant reporting of their result. The screenshot on the right shows a section from the gradebook where the results are recorded in exactly the detail each student gets, i.e. up to 5 decimal places! It is unfortunate that this inappropriate “accuracy” gets displayed to the students.
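By way of illustration, here is a minimal sketch of my own (certainly not how Blackboard actually computes or displays grades) of the kind of rounding step that would avoid parading spurious precision; the function name and the choice of one decimal place are simply assumptions for the example:

```python
# A minimal sketch, not Blackboard's actual code: scale a raw quiz score
# out of 200 to a percentage and round it before it reaches the student.

def display_percentage(raw_score: float, max_score: float = 200.0, decimals: int = 1) -> str:
    """Convert a raw score to a percentage, reported to a sensible precision."""
    percentage = (raw_score / max_score) * 100
    return f"{percentage:.{decimals}f}%"

print(display_percentage(123))  # prints "61.5%" rather than "61.50000%"
```

Rounding at the point of display, rather than in the stored mark, would keep the gradebook exact while modelling sensible data handling for the students.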

“Please send a photo”


One recent email exchange related to someone else’s order for running shoes, sent to me in error

I’ve recently had cause to contact three different companies about inadequacies in their service. The reasons for doing so in each case were very different, but there was a common thread to their replies: “Please send a photo of the [relevant item]”. When the third request came in, I started to see a pattern and this set me ruminating on why they were adding this extra step to dealing with my query.

And then it struck me that this was exactly the reason: it was an extra step. It is part of a filtering process. It is easy enough for all and sundry to fire off email requests willy-nilly. As a mechanism to weed out the serious appellant from the time-waster, there needed to be an additional hurdle. [I have vague memories from school history lessons that monasteries used to operate a similar process: potential novices were never admitted at their first attempt, but were required to return on several occasions before securing entry into the monastic life.]

I mention this here, on my education blog, because I actually operate a similar system when it comes to requests from students. If you are involved in academia, I am sure you recognise emails, particularly as exams loom, that go something like…

Pedagogic Journal Club 101

In preparation for a journal club I’m leading shortly, I was reflecting on some generic starter questions that could be applied to reading any paper (these are just as useful when reading an article on your own).

To start with, you can apply the 5W1H approach. In the context of reading a journal article I tend to take these in a non-typical order:

  • Who? Who conducted the research?
  • Where? Was it one institution only or a multi-centre project? UK, USA or elsewhere?
  • What? What, briefly, was the main point of the work [you will look at this in finer detail later on]?
  • When? Not only when was the work published, but when was the work actually conducted? This is especially pertinent if the article is describing the impact of technical innovations.
  • Why? What are the reasons the authors give for conducting the work? These may be generic and/or driven by particular local developments.
  • How? This is the nitty-gritty and will take up the bulk of a journal club discussion.

As part of the “how” there are additional key questions to bear in mind as you work step-by-step through the paper. These are:

  • What key information are we being presented with in this section of the paper?
  • What key information are we *not* being presented with in this section of the paper?

In both pedagogic research articles and scientific papers these two questions are particularly valuable when examining information that has been presented in figures and/or tables. Sometimes the background details necessary to follow the implications of displayed data have to be found elsewhere in the text, and sometimes they are missing entirely (at which point you need to decide for yourself whether this is an accidental or a deliberate omission).

For a journal club specifically, you also need to remember that it is intended to be a discussion, not a presentation of what you have found; you are the guide as you lead a band of intrepid explorers below the surface of the paper. If the journal club is working well, you will come away from the process with additional insights the other participants have made about aspects of the text you had missed.

Biosummit 2017

The University of East Anglia (Norwich) was the venue for this year’s Biosummit, an annual gathering of UK bioscientists with an active interest in pedagogic research. As usual there was much to reflect upon. The event is captured in this Storified summary of tweets. My own formal contribution was limited to reflections on the value of using the Royal Society of Biology’s CPD framework as a mechanism for capturing the evidence of activity, and reflection upon that activity, which is increasingly required for appraisals, accreditation and applications. The slides from my talk are available below (and via this link).

This continues to be a bona fide “Community of Practice”. One of the pleasures is seeing like-minded friends and catching up on what they’re doing in their lives as well as in their work. The content of the conference, however, remains central. This year there were a number of highlights for me.

The NSS and Enhancement (Review)

Coverage of the findings from the recent, new-style National Student Survey drew my attention to the Making it count report for the Higher Education Academy, coordinated by Alex Buckley (I’m afraid I’ve lost the details of who pointed me towards the report, so cannot offer credit where credit is due).

Making it count is not new; it was published by the HEA in 2012, and therefore predates both the new NSS and the introduction of the TEF. Nevertheless I found it a fascinating and worthwhile read – hence this reflective summary.

As most readers of this blog will know, the UK National Student Survey was introduced in 2005 and draws inspiration from the Australian Course Experience Questionnaire (CEQ), which had been in use since the early 1990s. From inception until 2016 there was a standard set of 23 questions in the NSS (see this link for the complete list). The questions were all positively phrased, and students in their final year were invited to respond using a standard five-point scale from “definitely agree” through “mostly agree”, “neither agree nor disagree” and “mostly disagree” to “definitely disagree” (“not applicable” was also an option). Following extensive consultation, the questions were changed for the first time in 2017. A total of 27 questions were included, with some original questions retained, some rephrased and some brand-new ones added (see this link for the 2017 questions).
