Are you good at multi-tasking? Are you sure?

I was intrigued by a recent paper Cognitive control in media multitaskers in the highly-regarded journal Proceedings of the National Academy of Sciences. The study looked at the information processing styles of self-reported media multitaskers, defined as users of two or more content streams simultaneously, compared with those who do not multitask in this way. (I will skirt over the irony both that I was supposed to be doing something else at the time I spotted the BBC report of the research, and that my son has just switched on the TV as I type this review on my laptop.)

An electronic copy of the paper was posted online in August 2009, ahead of publication in the print journal (doi: 10.1073/pnas.0903620106).

In their research, Ophir et al. asked 262 university students to complete an online self-assessment questionnaire regarding both the total number of hours spent using different media (they specified 12 formats including TV, online video use, music, print media, e-mail and text messaging) and how likely they might be to use some of these concurrently alongside a primary task. The authors then generated a numerical Media Multitasking Index (MMI) and ranked the students. Those with a score one or more standard deviations below the mean (light media multitaskers, LMMs) or one or more SDs above the mean (heavy media multitaskers, HMMs) were then invited to participate in a series of cognitive ability tests. In total there were between 30 and 41 students taking the various tests, evenly split between LMMs and HMMs.
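The LMM/HMM split is easy to sketch in code (my own illustration of the cut-off described above, not the authors' analysis script; I'm assuming the sample standard deviation is the one intended):

```python
import statistics

def classify_multitaskers(mmi_scores):
    """Split Media Multitasking Index (MMI) scores into light (LMM) and
    heavy (HMM) media multitaskers: one or more sample standard
    deviations below or above the mean, respectively."""
    mean = statistics.mean(mmi_scores)
    sd = statistics.stdev(mmi_scores)  # sample SD -- an assumption on my part
    lmm = [s for s in mmi_scores if s <= mean - sd]
    hmm = [s for s in mmi_scores if s >= mean + sd]
    return lmm, hmm
```

Everyone within one SD of the mean simply isn't invited back, which is why only 30-odd of the original 262 students took the cognitive tests.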

Learning and Teaching in the Sciences (conference report, part 1)

The annual Learning and Teaching in the Sciences event at the University of Leicester was held on May 23rd 2007.  Three invited speakers brought very different insights into the effective communication of science. This entry focuses specifically on the first of the presentations.  Other talks, by Melanie Cooper (Clemson University, USA) and Alan Cann (University of Leicester) will follow in subsequent posts.


Norman Reid (Professor of Science Education, University of Glasgow) addressed the ways we can maximise the impact of our teaching by taking into account scientific studies into the factors that influence learning.  I had heard Norman speak previously on the subject of pedagogic research methodology (he has written a very useful booklet on it on behalf of the Physical Sciences Centre, Higher Education Academy).  I had high expectations, and I wasn't disappointed.

Early on in his talk, Norman emphasised the importance of Working Memory Capacity (WMC), in other words how many ideas we are capable of holding in our short-term memory at any one time.  In an exercise reminiscent of the 1980s gameshow The Krypton Factor, we were asked to convert a date into single digits and put them in numerical order (without writing them down).  So, for example, 7th April 96 would be 4-6-7-9.  As the number of digits involved increased, the capability to solve the puzzle diminished.  If, therefore, we are presenting students with more distinct pieces of information than they can cope with (in other words, if the information load of our teaching exceeds their working memory capacity), then this is going to have a detrimental impact on their learning. Rather than a linear decline in success as information load increases, there is a sudden collapse in performance.  For most people, the WMC seems to be about 7 items.  This number varies from person to person and, it seems, we can do little to change it. Norman mentioned grouping strategies and pattern recognition as ways in which we can carry more bits of information than our WMC, but this is making the best of what we've got, not stretching the underlying capacity.  He didn't specifically discuss mnemonics, but I guess these are an example of a grouping strategy.
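The Krypton Factor exercise itself takes only a few lines to describe in code (a toy illustration of my own; the function name and the date handling are made up):

```python
def date_digits_sorted(day: int, month: int, year: int) -> list[int]:
    """Mimic the mental exercise from the talk: split a date into its
    single digits and put them in numerical order."""
    digits = [int(d) for d in f"{day}{month}{year}"]
    return sorted(digits)

# 7th April 96 -> digits 7, 4, 9, 6 -> in order: 4, 6, 7, 9
print(date_digits_sorted(7, 4, 96))
```

The point of the exercise, of course, is that humans run out of working memory well before a computer would: add a digit or two and performance doesn't decline gracefully, it collapses.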

The place of WMC in an information processing model was then fleshed out.  In addition to Working Memory and Long-Term Memory, an important role is also played by a Perception Filter.  I took the latter to be a subconscious self-recognition of the number of bits of information you can cope with.  To draw an analogy (my own, apologies to Prof Reid if I've got this wrong!) – if you were the captain of a ship, you would know how much cargo you could carry on board.  You would decline extra items, even if they were on offer.  In a similar vein, a perception filter allows you to 'know your limits' – there may be extra information on offer, but when you know you are in danger of overload you engage mechanisms that stop you taking too much on board, lest the 'ship' sink.  I guess, by extension of my image, there is benefit in being able to distinguish valuable cargo from junk, which is probably one reason why our previous experience and our long-term memory influence the effective working of our perception filter.  Norman used the term field dependency for the ability to see what is important, to distinguish the 'message' from the 'noise'.

Pushing my analogy to its conclusion, I suppose our role as educators would equate to the port authorities or harbour master.  We need to be aware of the number of fresh bits of cargo we are offering to our students, and ration their delivery so that we reduce the risk that anyone tries to set sail with too much on board (suspect I pushed that too far – Ed.).

In the next phase of his talk, Prof Reid moved on to consider the idea of pre-learning. At its most simple, this might be starting a lesson or a lecture with a couple of minutes of reflection ("OK, who can remember what we discussed last time?").  This is all about making connections between different nuggets of information.  Having a list of review questions up on the screen at the start of the lecture and asking students to work through them in pairs was a recommended model.  This might be extended to a formal short activity or exercise before a major lecture or laboratory practical, drawing attention in advance to the main points and thus equipping the students more effectively to distinguish message from noise.

Once again, these ideas rang true for me.  I know I'm not alone in seeing that one of the downsides of modularisation has been the compartmentation of knowledge.  Students do not necessarily see the connections between the different teaching sessions within a module, and even less so between units.  It is one of the roles of the educator to make explicit the links to previous and future teaching, since they (hopefully!) have a better grasp of how the bits fit together.

Prof Reid emphasised that reducing the working memory load was emphatically not a call for 'dumbing-down'.  The challenge is not to throw out the hard topics, but rather to give conscious consideration to the order in which material is covered, to make connections between material more overt, and to break down complex items into more comprehensible pieces.

As the session moved towards questions, much of the discussion focussed on the research methodologies employed to produce the scientific data undergirding these views.  In particular, delegates and speaker alike expressed a frustration that the demands for ‘fairness’ meant that it was becoming very difficult to conduct proper comparisons between groups experiencing different teaching.  True, crossover studies (where group A is taught using method X and group B is taught using method Y, and then the two groups are swapped over for a second phase of teaching using the other method) can partially fulfil this need, but there are plenty of occasions when this is not truly feasible.  In consequence, many of the most informative studies have been performed outside of the UK.  Food for thought.
