Learning and Teaching in the Sciences (conference report, part 1)

The annual Learning and Teaching in the Sciences event at the University of Leicester was held on May 23rd 2007. Three invited speakers brought very different insights into the effective communication of science. This entry focuses specifically on the first of the presentations. The other talks, by Melanie Cooper (Clemson University, USA) and Alan Cann (University of Leicester), will follow in subsequent posts.


Norman Reid (Professor of Science Education, University of Glasgow) addressed the subject of the ways we can maximise the impact of our teaching by taking into account scientific studies into the factors that influence learning.  I had heard Norman speak previously on the subject of pedagogic research methodology (he has written a very useful booklet on the subject on behalf of the Physical Sciences Centre, Higher Education Academy).  I had high expectations, and I wasn’t disappointed. 

Early on in his talk, Norman emphasised the importance of Working Memory Capacity (WMC), in other words how many ideas we are capable of holding in our short-term memory at any one time. In an exercise reminiscent of the 1980s gameshow The Krypton Factor, we were asked to convert a date into single digits and put them in numerical order (without writing them down). So, for example, 7th April 96 would be 4-6-7-9. As the number of digits involved increased, the capability to solve the puzzle diminished. If, therefore, we are presenting students with more distinct pieces of information than they can cope with (in other words, if the information load of our teaching exceeds their working memory capacity), then this is going to have a detrimental impact on their learning. Rather than a linear decline in success as information load increases, there is a sudden collapse in performance. For most people, the WMC seems to be about 7 items. This number varies from person to person and, it seems, we can do little to change it. Norman mentioned grouping strategies and pattern recognition as ways in which we can carry more bits of information than our WMC would otherwise allow, but this is making the best of what we've got, not stretching the underlying capacity. He didn't specifically discuss mnemonics, but I guess these are an example of a grouping strategy.
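For the curious, the digit exercise itself is simple to state precisely. Here is my own illustrative sketch in Python (not anything Prof Reid presented); the point of the exercise, of course, is that a computer finds this trivial at any length, whereas our heads do not:

```python
def krypton_digits(day, month, year_two_digit):
    """Convert a date into its individual digits, sorted ascending,
    as in the working-memory exercise (e.g. 7th April 96 -> 4-6-7-9)."""
    digits = [int(d) for d in f"{day}{month}{year_two_digit}"]
    return "-".join(str(d) for d in sorted(digits))

print(krypton_digits(7, 4, 96))  # prints "4-6-7-9"
```

Doing the same sort mentally means holding every digit in working memory while reordering them, which is why performance collapses once the count exceeds our capacity.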

The place of WMC in an information processing model was then fleshed out. In addition to Working Memory and Long-Term Memory, an important role is also played by a Perception Filter. I took the latter to be a subconscious self-recognition of the number of bits of information you can cope with. To draw an analogy (my own, apologies to Prof Reid if I've got this wrong!) – if you were the captain of a ship, you would know how much cargo you could carry on board. You would decline extra items, even if they were on offer. In a similar vein, a perception filter allows you to 'know your limits' – there may be extra information on offer, but when you know you are in danger of overload you engage mechanisms that stop you taking too much on board, lest the 'ship' sink. I guess, by extension of my image, there is benefit in being able to distinguish valuable cargo from junk, which is probably one reason why our previous experience and our long-term memory influence the effective working of our perception filter. Norman used the term field dependency for the ability to see what is important, to distinguish the 'message' from the 'noise'.

Pushing my analogy to its conclusion, I suppose our role as educators would equate to the port authorities or harbour master. We need to be aware of the number of fresh bits of cargo we are offering to our students, and ration their delivery so that we reduce the risk that anyone tries to set sail with too much on board (I suspect I pushed that too far – Ed).

In the next phase of his talk, Prof Reid moved on to consider the idea of pre-learning. At its most simple, this might be starting a lesson or a lecture with a couple of minutes of reflection ("OK, who can remember what we discussed last time?"). This is all about making connections between different nuggets of information. Having a list of review questions up on the screen at the start of the lecture and asking students to work through them in pairs was a recommended model. This might be extended to a formal short activity or exercise taking place before a major lecture or laboratory practical to draw attention to what are going to be the main points, thus equipping the students more effectively to distinguish the message from the noise.

Once again, these ideas rang true for me. I know I'm not alone in seeing that one of the downsides of modularisation has been the compartmentalisation of knowledge. Students do not necessarily see the connections between the different teaching sessions within a module, and less so between units. It is one of the roles of the educator to make explicit the links to previous and future teaching, since they (hopefully!) have a better grasp of how the bits fit together.

Prof Reid emphasised that reducing the working memory load was emphatically not a call for 'dumbing-down'. The challenge is not to throw out the hard topics, but rather to give conscious consideration to the order in which material is covered, to make the connections between material more overt, and to break down complex items into more comprehensible sizes.

As the session moved towards questions, much of the discussion focussed on the research methodologies employed to produce the scientific data undergirding these views.  In particular, delegates and speaker alike expressed a frustration that the demands for ‘fairness’ meant that it was becoming very difficult to conduct proper comparisons between groups experiencing different teaching.  True, crossover studies (where group A is taught using method X and group B is taught using method Y, and then the two groups are swapped over for a second phase of teaching using the other method) can partially fulfil this need, but there are plenty of occasions when this is not truly feasible.  In consequence, many of the most informative studies have been performed outside of the UK.  Food for thought.


Why is the site called Journal of the left-handed biochemist?

The name of the blog has its origins in lectures I give on referencing for university assignments. Not surprisingly, undergraduates sometimes struggle to recognise that papers published in certain journals are deemed more worthy than articles appearing in 'lesser' publications. To avoid potential offence to any particular periodical I would stress that "An article appearing in Nature is generally considered more authoritative than one published in the Journal of the Left-handed Biochemist". As of now, the latter is no longer fictional; the sentiment, however, is probably still true.

What is the “Journal of the left-handed biochemist?”

I’ve been running the Bioethicsbytes site for a while now and am very pleased with the way it has been received by colleagues involved in education.  That site, however, has a very specific niche focus on multimedia resources for teaching about bioethics.  Several times recently it has occurred to me that something I was mulling over would be of general interest, but doesn’t fit with the tight aims of Bioethicsbytes.  Hence the birth of the Journal of the left-handed biochemist.  Whether or not the posts do turn out to be of interest (or indeed whether there are going to be any posts) remains to be seen.
