“Please send a photo”


One recent email exchange related to someone else’s order for running shoes, sent to me in error

I’ve recently had cause to contact three different companies about inadequacies in their service. The reasons for doing so in each case were very different, but there was a common thread to their replies: “Please send a photo of the [relevant item]”. When the third request came in, I started to see a pattern and this set me ruminating on why they were adding this extra step to dealing with my query.

And then it struck me that this was exactly the reason – it was an extra step. It is part of a filtering process. It is easy enough for all and sundry to fire off email requests willy-nilly. As a mechanism to weed out the serious appellant from the time-waster, there needed to be an additional hurdle. [I have vague memories from school history lessons that monasteries used to offer a similar process. Potential novices were never admitted at their first attempt; they were required to return on several occasions before securing entry into the monastic life.]

I mention this here, on my education blog, because I actually operate a similar system when it comes to requests from students. If you are involved in academia I am sure you recognise emails, particularly as exams loom, that go something like this…

Science Education in Europe: Plotting a course for the future?

Somewhat belatedly, I have been catching up on a couple of reports about the future of science teaching in Europe. Both were prompted by widespread concern that school science in its present form is not meeting the needs of society for the 21st century. The decline in students’ attitudes towards science – apparently universal across Europe – is a particular worry.

Science Education Now: A renewed pedagogy for the future of Europe was published in 2007 and Science Education in Europe: Critical reflections in 2008

Published in 2007, Science Education Now: A renewed pedagogy for the future of Europe was written at the behest of the European Commission with the specific objective “to examine a cross-section of on-going initiatives and to draw from them elements of know-how and good practice that could bring about a radical change in young people’s interest in science” (p2).

The second paper, Science Education in Europe: Critical reflections follows on from two seminars held in 2006 at the Nuffield Foundation in London. The final report was published in January 2008.


Making the best of “Bad Science” (Review)

Harper Perennial edition (2009)

If you have not yet read Ben Goldacre’s book Bad Science, then I thoroughly recommend that you do. As readers of his regular Guardian column or his website will already know, Goldacre has embarked on a campaign to root out examples of pseudoscience and shoddy science wherever they may be found.

All the usual villains are present – homeopaths, nutritionists, slack journalists, pharmaceutical companies and AIDS dissenters. Some are mentioned by name, but given their alleged predilection for litigation, and since I do not have the time, the money or the inclination to do battle with them in the courts, I shall not repeat their identities here!

It would be wrong, however, to give the impression that Goldacre is merely on a crusade against high profile exponents of “bad science”. True, the author does sometimes betray a little too much glee as he places a bomb under the throne of a media “health expert” (in a way that I found disturbingly reminiscent of the Physiology lecturer, when I was a first year undergraduate, recalling his boyhood experiments on frogs). Nevertheless, Goldacre is keen to emphasise that his purpose is to “teach good science by examining the bad” (p165 in my copy), adding that “the aim of this book is that you should be future-proofed against new variants of bullshit” (p87).

“Will this be in the test?”

Amongst the major science research journals, Science magazine has consistently been the most prominent in flying the flag for science education. I was very interested, therefore, in an Editorial by Carl Wieman in the September 4th 2009 issue of the magazine. In his piece Galvanising Science Departments, Wieman describes some fairly radical innovations in science education currently underway at the University of Colorado and the University of British Columbia. The aim is to adopt evidence-based teaching methodologies with emphasis on the development of scientific thinking and problem-solving skills rather than fact regurgitation.

I have no direct experience of teaching in the USA, either as provider or recipient. I know, for example, that much greater emphasis is placed on the recommended course text in the USA than in the UK, but beyond that I cannot speak with any authority. It does sound like some of the reported innovations are things that have taken place here for some while, such as the addition of specific (skill-centred) learning goals to modules. A cornerstone of the strategy has been the appointment of science education specialists, individuals who not only have expertise in their subject discipline, but are also au fait with educational and cognitive psychology studies, a variety of effective teaching strategies and – I note with some mirth – possess diplomatic skills! The programme is ongoing; the University of Colorado is in the fourth year of an initial six-year project, so the full impact of the developments will not be known for some while.

Learning and Teaching in the Sciences (conference report, part 2)

Professor Melanie Cooper from Clemson University, South Carolina came to Leicester’s Learning and Teaching in the Sciences conference as part of a UK tour sponsored by the Physical Sciences Centre of the Higher Education Academy.  In her talk, Using technology to investigate and improve student problem-solving strategies, Prof Cooper began by drawing an important distinction between problems and exercises.  Often when people set ‘problems’ what they are in fact asking students to do are ‘exercises’, activities designed to train the participants to be able to tackle similar future tasks in a formulaic way.  Problem-solving is about developing a range of skills that will equip students to “address novel situations and arrive at a suitable course of action” (Dudley Herron).  It is not, therefore, about knowing how to crank an equation to get the right answer.

In understanding how students approach problem-solving, there would clearly be huge value in directly observing them throughout the duration of a task. Such ethnographic research methods, however, have a number of difficulties. Firstly, the time required for the observations themselves, and even more so for the subsequent evaluation, is a vast commitment. Secondly, observations tend to be based, for reasons of practicality, on relatively small numbers of individuals.

In her education research, Prof Cooper has been able to access a very much larger cohort (several thousand students at Clemson take general chemistry each year) and has got around the need for direct observation of the students at work by exploiting the IMMEX software, developed principally by Ron Stevens at UCLA.  Not to be confused with any similar-sounding floor-to-ceiling cinematic experiences, IMMEX stands for Interactive Multi-Media EXercises. An example of IMMEX use (in the context of genetics education) can be seen in the open access journal Cell Biology Education (see Stevens, Johnson and Soller, 2005).

IMMEX seems to involve some pretty fearsome computing, but I hope the following catches the essence of it. Students carry out an on-line activity working from a single start-point towards a specific correct answer. Along the way they can select from a number of briefing sheets, experimental results and other lab data relating to the problem in order to help them to the solution. Not all of the available information is equally valuable or necessary to complete the task. The software records the route taken by each student from start to finish (a so-called ‘search path map’), and uses artificial neural network (ANN) clustering to identify and categorise common strategies. The technology has to be ‘trained’ by exposure to a large number of examples, and then generates a ‘topological map’, for example a 6×6 grid of ‘nodes’ where similar approaches are clustered together. Rather than contemplating 36 different approaches, these nodes can then be rationalised into a smaller set of ‘states’ representing similar models, in terms of strategies used and/or outcomes achieved. So, for example, a ‘novice’ strategy might be ineffective (i.e. the student is unable to solve the problem) and/or inefficient (i.e. they visit most or all of the pages before completing the task), whilst an ‘expert’ would take a more efficient and effective route encompassing only the necessary information sources.
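By way of a toy illustration of the end result of that categorisation – emphatically not the actual IMMEX code, which uses neural network clustering over thousands of recorded paths – the following sketch labels a student’s ‘search path map’ by the two dimensions described above: effectiveness (did they solve it?) and efficiency (how many of the available information sources did they open?). The page names and the 40% threshold are invented for illustration.

```python
# Hypothetical sketch: labelling a student's 'search path map' as a strategy
# state by effectiveness and efficiency. The real IMMEX system derives such
# states from artificial neural network clustering, not fixed thresholds.

TOTAL_PAGES = 10  # information sources available in the task (invented)

def classify_strategy(pages_visited, solved):
    """Label one student's path: 'expert', 'competent' or 'novice'."""
    efficiency = len(set(pages_visited)) / TOTAL_PAGES  # fraction of sources opened
    if solved and efficiency <= 0.4:
        return "expert"       # effective and efficient: only the necessary sources
    if solved:
        return "competent"    # effective but inefficient: solved after visiting most pages
    return "novice"           # ineffective (and typically inefficient)

# Example search paths (sequences of page visits), ending in success or failure
print(classify_strategy(["brief", "assay", "result"], solved=True))  # expert
print(classify_strategy(list(range(9)), solved=True))                # competent
print(classify_strategy(list(range(10)), solved=False))              # novice
```

The point of the real system, of course, is that the states emerge from the data rather than from hand-set rules like these.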

The use of ANN allows for helpful categorisation of students’ performance in a particular task. This can be used to provide them with formative advice on how they might improve their approach. At this stage a second technique is used to predict and to evaluate the changes in strategy that students make when offered the opportunity to undertake one or more similar tasks. With sufficient data, Hidden Markov Modelling (HMM) can make statistical predictions about the likelihood that students using strategy X will use the same approach again the next time, or whether they will change tack, and if so whether they will move to strategy Y or strategy Z. The challenge then is whether interventions that we make can move the students on towards a better strategy.
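The Markov idea at the heart of this can be sketched very simply. A full HMM also models hidden states and emission probabilities; the fragment below just estimates first-order transition probabilities – P(next strategy | current strategy) – from observed strategy labels across repeated attempts. The sequences are invented data, purely for illustration.

```python
# Hypothetical sketch of the Markov idea behind strategy prediction:
# estimate P(next strategy | current strategy) from observed label sequences.
from collections import Counter, defaultdict

# Each list is one (invented) student's strategy label on successive attempts
sequences = [
    ["novice", "novice", "competent"],
    ["novice", "competent", "competent"],
    ["competent", "expert", "expert"],
]

# Count transitions: current strategy -> next strategy
counts = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        counts[current][nxt] += 1

# Normalise the counts into conditional probabilities
transitions = {
    state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
    for state, c in counts.items()
}

print(transitions["novice"])  # {'novice': 0.333..., 'competent': 0.666...}
```

With enough real data, probabilities like these are what let the researchers ask whether an intervention shifts students towards the ‘expert’ state faster than repetition alone.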

Work by Prof Cooper and others has shown that individual students, be they ‘novice’, ‘competent’ or ‘expert’ at the outset, can improve their competence by repeating activities – but only up to a point.  After five performances, or fewer, none of the participants working on their own exhibited any further improvement in either their ability or strategies employed.  How, therefore, can educators help students to make further refinements in their problem-solving abilities?

Melanie’s evidence shows that an answer lies in group work. Working with others, particularly those with different approaches (see below), involves metacognition, i.e. it forces the students into explicit reflection about what they are doing. Tackling a problem collaboratively exposes students to new ways of thinking and/or offers them clarity about why certain approaches are less useful. What’s more, there is evidence that the improvements made by involvement in group work are retained if the students are subsequently required to work on their own again.

What group arrangements work best? Groupings should be organised by the tutor, not left to the students to choose. The maximum group size should be four, and the majority of the benefit can be achieved by students working in pairs. At Clemson, they use the GALT (Group Assessment of Logical Thinking) test as an initial means to identify the type of thinking employed by students – concrete (C), transitional (T) or formal (F), according to the Piagetian model. Following the GALT assessment, students in Prof Cooper’s research were assigned to pairs according to all possible combinations: FF, FT, FC, TT, TC and CC. It was clear from the research that there were distinct combinations that afforded greater improvement (to at least one of the pair). For example, ‘transitional’ students paired with ‘concrete’ improved the most. Overall, female students improved more than male students via the experience of group work, but there was no significant difference based on whether pairings were single-sex or mixed-gender. Male students, incidentally, improved more as a result of using concept maps than as a result of participation in groups – but that’s probably a story for a different report.
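For readers wondering where the six pairings come from: with three thinking types and unordered pairs (FT is the same pairing as TF), the combinations enumerate neatly. A throwaway snippet, using the type labels from the talk:

```python
# Enumerate the six unordered pairings of GALT thinking types
# (F = formal, T = transitional, C = concrete, per the Piagetian model).
from itertools import combinations_with_replacement

types = ["F", "T", "C"]
pairings = ["".join(p) for p in combinations_with_replacement(types, 2)]
print(pairings)  # ['FF', 'FT', 'FC', 'TT', 'TC', 'CC']
```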

Other talks at the Learning and Teaching in the Sciences conference, by Norman Reid and by Alan Cann, are discussed elsewhere on this site.