When the sum is better than the parts: combining the power of comparative genomics and experiments on model organisms

I have been doing some reading for a while now on the ethics of research involving model organisms, particularly the potential for studies on lower species to offer insights into human disease (and thereby contribute to the 3Rs). Some of my musings on the topic can be found here.

Aware of this interest, a colleague recommended that I read a 2004 paper published in the journal Cell. I am very grateful that he did, since the study really has the “wow” factor – demonstrating beautifully the potential of comparative genomics, experiments on model organisms and knowledge of human disease to work together to produce new insights that would have been much harder to obtain had any one component been missing. The paper is Comparative genomics identifies a flagellar and basal body proteome that includes the BBS5 human disease gene by Li JB et al. The following notes are my attempt to summarise the best bits.

The importance of cilia and basal bodies in disease
The role of cilia in respiration (and the detrimental effects of smoking on their function) were features of the school biology curriculum when I was a child. However, research over the last ten years or so has demonstrated that cilia have surprisingly diverse roles in development, from determination of left-right symmetry in the body, through to formation and function of specific organs such as the kidneys (for more detail see the Wikipedia entry on Ciliopathy or, if you have access permissions,  Badano et al (2006), The ciliopathies: an emerging class of human genetic disorders Annual Review of Genomics and Human Genetics 7:125-148). Bardet-Biedl syndrome (BBS) is one disorder associated with non-functional or malfunctional cilia. The clinical features can be varied, but include obesity, mental retardation, progressive-onset blindness and polydactylism (i.e. possession of extra digits). Continue reading

How widespread is scientific misconduct?

From time to time examples of scientific fraud come to light and raise questions about the integrity of scientific endeavour. The best-known example of recent years must surely be South Korean stem cell biologist Hwang Woo-Suk, whose ground-breaking discoveries in the field of therapeutic cloning were exposed as bogus (in addition to his scientific reputation being left in tatters, Hwang was convicted in October 2009 of embezzlement and violation of bioethics laws, although he escaped a custodial sentence).

In physics, the multiple re-use of the same graphs as data for entirely different experiments led to the downfall of a leading young nanoscientist (this was the subject of a 2004 episode of the BBC’s Horizon series The dark secret of Hendrik Schön). Are Hwang and Schön rare examples bringing unwarranted criticism to a body of otherwise exemplary scientists, or are their crimes indicative of much wider malpractice within the scientific community?


University of Edinburgh researcher Daniele Fanelli has shed some light on the extent of scientific fraud in an article How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. Published in the open access journal PLoS ONE in May 2009, the research brought together data from a number of earlier, smaller studies on scientific misconduct to generate “the first meta-analysis of these surveys” (p1).

Continue reading

Making the best of “Bad Science” (Review)

Harper Perennial edition (2009)

If you have not yet read Ben Goldacre’s book Bad Science, then I thoroughly recommend that you do. As readers of his regular Guardian column or his website will already know, Goldacre has embarked on a campaign to root out examples of pseudoscience and shoddy science wherever they may be found.

All the usual villains are present – homeopaths, nutritionists, slack journalists, pharmaceutical companies and AIDS dissenters. Some are mentioned by name, but given their alleged predilection for litigation, and since I have neither the time, the money nor the inclination to do battle with them in the courts, I shall not repeat their identities here!

It would be wrong, however, to give the impression that Goldacre is merely on a crusade against high profile exponents of “bad science”. True, the author does sometimes betray a little too much glee as he places a bomb under the throne of a media “health expert” (in a way that I found disturbingly reminiscent of the Physiology lecturer, when I was a first year undergraduate, recalling his boyhood experiments on frogs). Nevertheless, Goldacre is keen to emphasise that his purpose is to “teach good science by examining the bad” (p165 in my copy), adding that “the aim of this book is that you should be future-proofed against new variants of bullshit” (p87). Continue reading

Promoting the ethical conduct of science

Back in 2004, Sir David King (at the time, the Government’s Chief Scientific Adviser) initiated a discussion about generating a Code of Conduct for Scientists. The consultation process led, in 2006, to the publication of Rigour, respect and responsibility: a universal ethical code for scientists. None of the contents was particularly surprising or radical but it brought together in one place a list of seven key principles that ought to be foundational for the ethical conduct and communication of science.

The Code of Conduct emphasises seven key points

The Code received a public launch at the BA Festival of Science in September 2007 and was reported in the general press at the time (see, for example, UK science head backs ethics code). During the intervening two years, conversations with scientist colleagues (across a range of institutions) have revealed almost universal ignorance about the existence of the Code, let alone its content. Continue reading

Learning and Teaching in the Sciences (conference report, part 1)

The annual Learning and Teaching in the Sciences event at the University of Leicester was held on May 23rd 2007.  Three invited speakers brought very different insights into the effective communication of science. This entry focuses specifically on the first of the presentations.  Other talks, by Melanie Cooper (Clemson University, USA) and Alan Cann (University of Leicester) will follow in subsequent posts.


Norman Reid (Professor of Science Education, University of Glasgow) addressed the subject of the ways we can maximise the impact of our teaching by taking into account scientific studies into the factors that influence learning.  I had heard Norman speak previously on the subject of pedagogic research methodology (he has written a very useful booklet on the subject on behalf of the Physical Sciences Centre, Higher Education Academy).  I had high expectations, and I wasn’t disappointed. 

Early on in his talk, Norman emphasised the importance of Working Memory Capacity (WMC), in other words how many ideas we are capable of holding in our short-term memory at any one time.  In an exercise reminiscent of the 1980s gameshow The Krypton Factor, we were asked to convert a date into single digits and put them in numerical order (without writing them down).  So, for example, 7th April 96 would be 4-6-7-9.  As the number of digits involved increased, the capability to solve the puzzle diminished.  If, therefore, we are presenting students with more distinct pieces of information than they can cope with (in other words, if the information load of our teaching exceeds their working memory capacity), then this is going to have a detrimental impact on their learning.  Rather than a linear decline in success as information load increases, there is a sudden collapse in performance.  For most people, the WMC seems to be about 7 items.  This number varies from person to person and, it seems, we can do little to change it.  Norman mentioned grouping strategies and pattern recognition as ways in which we can carry more bits of information than our WMC would otherwise allow, but this is making the best of what we’ve got, not stretching the underlying capacity.  He didn’t specifically discuss mnemonics, but I guess these are an example of a grouping strategy.

The place of WMC in an information processing model was then fleshed out.  In addition to Working Memory and Long-Term Memory, an important role is also played by a Perception Filter.  I took the latter to be a subconscious self-recognition of the number of bits of information you can cope with.  To draw an analogy (my own, apologies to Prof Reid if I’ve got this wrong!) – if you were the captain of a ship, you would know how much cargo you can carry on board.  You would decline extra items, even if they were on offer.  In similar vein, a perception filter allows you to ‘know your limits’ – there may be extra information on offer, but when you know you are in danger of overload you engage mechanisms that stop taking too much on board, lest the ‘ship’ sinks.  I guess, by extension of my image, there is benefit in being able to distinguish valuable cargo from junk, which is probably one reason why our previous experience and our long-term memory influence the effective working of our perception filter.  Norman used the term field dependency for the ability to see what is important, to distinguish the ‘message’ from the ‘noise’.

Pushing my analogy to its conclusion, I suppose our role as educators would equate to the port authorities or harbour master.  We need to be aware of the number of fresh bits of cargo we are offering to our students, and ration their delivery so that we reduce the risk that anyone tries to set sail with too much on board (suspect I pushed that too far – Ed.). 

In the next phase of his talk, Prof Reid moved on to consider the idea of pre-learning. At its most simple, this might be starting a lesson or a lecture with a couple of minutes of reflection (“ok, who can remember what we discussed last time?”).  This is all about making connections between different nuggets of information.  Having a list of review questions up on the screen at the start of the lecture and asking students to work through them in pairs was a recommended model.  This might be extended to a formal short activity or exercise taking place before a major lecture or laboratory practical to draw attention to what the main points are going to be, thus equipping the students more effectively to distinguish the message from the noise.

Once again, these ideas rang true for me.  I know I’m not alone in seeing that one of the downsides of modularisation has been the compartmentation of knowledge.  Students do not necessarily see the connections between the different elements of teaching within a module, still less between modules.  It is one of the roles of the educator to make explicit the links to previous and future teaching, since they (hopefully!) have a better grasp of how the bits fit together.

Prof Reid emphasised that reducing the working memory load was emphatically not a call for ‘dumbing-down’.  The challenge is not to throw out the hard topics, but rather to give conscious consideration to the order in which material is covered, to make the connections between material more overt, and to break down complex items into more comprehensible sizes.

As the session moved towards questions, much of the discussion focussed on the research methodologies employed to produce the scientific data undergirding these views.  In particular, delegates and speaker alike expressed a frustration that the demands for ‘fairness’ meant that it was becoming very difficult to conduct proper comparisons between groups experiencing different teaching.  True, crossover studies (where group A is taught using method X and group B is taught using method Y, and then the two groups are swapped over for a second phase of teaching using the other method) can partially fulfil this need, but there are plenty of occasions when this is not truly feasible.  In consequence, many of the most informative studies have been performed outside of the UK.  Food for thought.