Getting referencing right: applying the 4 Cs

For a variety of reasons, I have been reflecting on the principles that undergird good citation practice. So far I’ve come up with “the 4 Cs guide” (I say I’ve come up with it myself, as I’ve not done any reading on this, but I wouldn’t be surprised if this or something similar has evolved independently elsewhere).

When advising students or colleagues on appropriate organisation of their references at the end of a document, I encourage them to check that they’ve followed the 4 Cs guide:

Correct: Have they cited the correct sources? This might mean drilling back to the first occurrence of an observation (e.g. in the primary literature) rather than a review article. Clearly we don’t want to be encouraging people to cite the original paper if they’ve not read it, but you sometimes see statements such as “Smith and Bloggs have shown…” when actually Smith and Bloggs wrote the review in which they discussed the experimental work of Ramone and Farnes-Barnes, who made the observation described.

Letting students know what we expect in essays

At a recent student-staff committee meeting, a first year student rep noted that it was difficult to know what sort of things markers would be looking for in an essay (especially since many people had no cause to write essays at all during their A level science courses).

I was able to point him to the generic guidance we offer in the Undergraduate Handbook, issued to all new students. However, I also wondered whether we ought really to give more information (particularly when we give *markers* of the work quite a thorough checklist). So this year I’ve decided to send an email overtly pointing out the kinds of things that gain or lose marks (see below). Critics might argue either (a) that this is undue spoon-feeding or (b) that it will make it harder for us to find criteria on which to comment. I would counter that ironing out some of these issues should free us to get into the *content* of the essay we are assessing, rather than being so focused on the *production and process* that we barely get into discussion of the substance of the essay proper.

Anyhow, we’ll see how it goes.

Criteria markers may be judging:

  • Has the essay got the correct title (not some vague approximation to it)?
  • Does the essay have a proper introduction and conclusion?
  • Are references cited in the text (using the Harvard system)? Is there a well-organised reference list at the end?
  • Does the essay answer the question posed in the title?
  • Is there a logical flow to what has been written (or is it a random collection of points, albeit valid ones)?
  • Is the sentence construction good? Are there issues with paragraphing? Is the story “well told”?
  • Has selective or partial coverage of the topic, inevitable in short essays, been justified in any way?
  • Have other instructions been followed, e.g. is the essay double-spaced? Are there page numbers? Is it within the word limit?
  • Are there diagrams? Do they have: Figure number? Title? Legend (if applicable)? Are they referred to in the text? Are they neat and fit for purpose? If “imported” from a source are they cited?
  • Is the title and/or legend “widowed” (on a different page) from the image?
  • Is there inappropriate use of quotes?
  • Is the essay clearly too long (or too short)?

Finding research articles that are worth finding

An enormous amount of scientific literature is generated each month

The world is awash with scientific papers. Even if we restricted a survey to research within the biological sciences, I guesstimate that there are more papers published each month than your average academic could be expected to read in a lifetime. In these rich fields of information, how are students unfamiliar with the genre to develop the ability to discern the wheat from the chaff (let alone the weeds)?

At the University of Leicester, we have a task for undergraduates, conducted towards the end of their second year, in which they produce a poster describing a particular research method (selected from a short list relevant to their chosen discipline). As part of the exercise, the students need to choose a primary research article which illustrates one application of that method.

It is fair to say that many of the students initially struggle to select an appropriate paper*. There are several recurring problems:

  • Failure to distinguish between a research article and a review
  • Failure to recognise that all journals are not equal in terms of their academic quality and rigour of the work they publish
  • Selection of papers that do not really utilise the technique that should be the focus of their poster.

In truth, the ability to select the right kind of paper is one of the important learning outcomes from this exercise; these students will commence their final year dissertations immediately after they return from summer vacation and need to avoid wasting hours reading papers that are not worthy of their attention. However, since I frequently find myself making the same points in email correspondence with individual students, I felt it was worth using this forum to share some of my overarching reflections on the fine art of finding appropriate research articles.

Research Article or Review?

Most searches these days are conducted online

When we’ve been involved in academia for many years, knowing whether a paper is a primary research paper or a review comes as second nature. This is not necessarily true for inexperienced students. In their defence, the variation in how different journals label research articles does not help. Cell and EMBO Journal call them “Articles”. Nature also has “Articles”, but the majority of its original research is labelled as “Letters”, and in Science primary literature mostly appears as “Reports”.

Although it ought to be possible to spot a research article by the sub-sections it contains (i.e. Abstract, Introduction, Materials and Methods, etc.), these are not necessarily given the same names in all journals, and the order in which they are included can differ. Of course, if the journal title includes the word “Review”, “Trends in…” or “Current Opinion in…” then it is likely to consist, almost exclusively, of articles summarising the research of other scientists rather than original reports of new experiments. But in other journals a review might actually be called “Perspectives”, “Commentary” or “News and Views”, to name but three.
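The clues above (experimental sub-sections versus review-style labels) can be sketched as a toy heuristic. This is purely illustrative, not a reliable classifier: the heading and label lists below are my own assumptions, and, as noted above, real journals vary considerably in their naming.

```python
# Toy heuristic: guess whether a paper is a primary research article or a
# review from crude textual clues. Heading/label lists are illustrative
# assumptions only; journals differ widely in practice.

RESEARCH_HEADINGS = {"materials and methods", "methods", "results",
                     "experimental procedures"}
REVIEW_LABELS = {"review", "minireview", "perspectives", "commentary",
                 "news and views"}

def guess_article_type(section_headings, article_label=""):
    """Return 'research', 'review' or 'unclear'."""
    headings = {h.strip().lower() for h in section_headings}
    if article_label.strip().lower() in REVIEW_LABELS:
        return "review"
    if headings & RESEARCH_HEADINGS:   # any experimental section present?
        return "research"
    return "unclear"

print(guess_article_type(
    ["Abstract", "Introduction", "Materials and Methods", "Results"]))
# research
print(guess_article_type(["Abstract", "Introduction"], "News and Views"))
# review
```

In practice, of course, a student still has to read the piece; the point is simply that the presence of a Methods/Results structure, or a review-style label, is the quickest first check.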

Review articles as a means to an end

For the specific activity we set our students (finding a primary research paper that demonstrates the use of a particular technique), reviews are not going to be a suitable destination. This does not mean, however, that review articles are of no merit in the search for good quality experimental data. In some senses, the authors of a review article have done crucial legwork for you. Already experts in the field, they have read broadly about the topic and will then have selected what they consider to be the most significant recent experiments for their reflections. Looking at the reference list in a review can therefore be an efficient way to shine a light on the best primary literature.

Is it a good paper?

If you are not familiar with a research technique or particular discipline, how can you know whether a paper is a “good” paper? There are three useful clues. The first is the one we have just described, namely do authors of reviews rate the work as worthy of their attention?

Secondly, where has it been published? Like it or not, there is a hierarchy of journals; some titles are in the Premier League, some are in the Championship, and some are non-league. Experienced heads know that research published in Nature, Science, Cell, Proceedings of the National Academy of Sciences and EMBO Journal is considered (usually with some justification) to be more worthy than papers coming out in the Journal of Knitting and Spectroscopy, but recognising this is, once again, an aspect of the maturing of undergraduates into fully-fledged scientists.

Thirdly, how many times has the work been cited by other researchers? If work has been in circulation for fifteen years but has only been quoted twice during that period, and both times by the author of the original paper, then it is fair to conclude that it did not contain ground-breaking discoveries. One of the useful features of Web of Knowledge is the citation count. If you don’t have access to Web of Knowledge, Google Scholar has a similar (though slightly less rigorous) feature.

Finally, if you want to refine a search so that it is restricted to only those “Premier League” journals named above, then why not go to a journal’s own search engine rather than, or in addition to, a more generic search tool (e.g. the search pages of Nature or Science)?

* Note: if you are wondering how this squares with my recent description of a source evaluation exercise for first years, I need to point out that this is NOT the same cohort.

But is it any good? An information literacy tutorial

At the Higher Education Academy STEM conference in April 2012, I presented a poster outlining a blended-learning tutorial we have produced to help undergraduates develop their ability to evaluate the academic merit of different resources they might find on the internet. The tutorial involves the students working individually to critique eight specially chosen online sources presented as the results of a search on the topic of “mitochondria”. This is followed up by a group tutorial in which the quality and relevance of the materials are discussed more fully.

To see a pdf version of the poster, click on this image

You know when you’ve been viper-ed

A tweet this morning from @jon_scott alerted me to the fact that sometime over the weekend, the University of Leicester was visited by the PR machine for the Viper service. Paving slabs had been stencilled with the company’s logo and web address. Rather ingeniously, the marketeers have jet-washed the image rather than painting it on, which I presume guards them against accusations of vandalism because all they’ve actually done is remove dirt (thanks to @jobadge for pointing this out; she is obviously more ‘direct action’ savvy than me). The image on the bumpy pavement at the traffic lights shows the wash-versus-paint strategy most clearly.


Viper is marketing itself as a way for students to check that their work is not guilty of plagiarism. Several institutions have already wrestled with the question of whether to let students pre-submit their work to Turnitin so that they can see for themselves whether it is going to get pinged by that software. However laudable this seems, one of the difficulties is that students may simply learn how to mask their tracks rather than developing bona fide study skills. The subversive nature of the current marketing strategy reinforces the view that this is a way to “beat the system”. I was interested also that one of the recommendations for the software on the Tucows site seems to come from a student who bought a ‘bespoke’ essay for her course and was now asking for a refund, as the software showed it was not quite the original work she thought she’d paid for!

Making the best of “Bad Science” (Review)

Harper Perennial edition (2009)

If you have not yet read Ben Goldacre’s book Bad Science, then I thoroughly recommend that you do. As readers of his regular Guardian column or his website will already know, Goldacre has embarked on a campaign to root out examples of pseudoscience and shoddy science wherever they may be found.

All the usual villains are present – homeopaths, nutritionists, slack journalists, pharmaceutical companies and AIDS dissenters. Some are mentioned by name, but given their alleged predilection for litigation, and since I have neither the time, the money nor the inclination to do battle with them in the courts, I shall not repeat their identities here!

It would be wrong, however, to give the impression that Goldacre is merely on a crusade against high-profile exponents of “bad science”. True, the author does sometimes betray a little too much glee as he places a bomb under the throne of a media “health expert” (in a way that I found disturbingly reminiscent of the Physiology lecturer, when I was a first year undergraduate, recalling his boyhood experiments on frogs). Nevertheless, Goldacre is keen to emphasise that his purpose is to “teach good science by examining the bad” (p165 in my copy), adding that “the aim of this book is that you should be future-proofed against new variants of bullshit” (p87).

Getting to grips with Information Literacy

From time to time I find myself ruminating on exactly how and where I acquired a variety of study skills. I have no recollection, for example, of any formalised training in finding and selecting source materials and yet even as an undergraduate I seemed to be reasonably adept at choosing relevant information.

Back in those days, of course, finding the material was a bigger task than it is today. The contemporary student is spared the need to leaf through the Index Medicus and then scour the shelves for the relevant tomes before staggering to the photocopier burdened by the combined weight of several impressively-bound volumes. A multitude of online search engines and databases, coupled with institutional access to a plethora of electronic journals, provides the 21st-century undergraduate with more information than they could ever hope to process in time to meet the assignment deadline (to say nothing of the less formal internet sources at their fingertips).

The challenge today is much less focussed on the location of documents and much more heavily skewed towards the evaluation and selection of appropriate material: discriminating good sources from bad and evaluating the relevance of the material to the specific task at hand. Having recently marked a set of essays where several submissions were characterised by failure to achieve these goals, I am reminded that the challenge of panning through tonnes of data in order to extract pertinent nuggets of gold is considerable, and the skills necessary to achieve this are unlikely to be acquired by osmosis.

It was against this backdrop that I was interested to read a paper, The annotated bibliography and citation behaviour: enhancing student scholarship in an undergraduate biology course, written by Molly Flaspohler and colleagues, and published in the Winter 2007 edition of CBE – Life Sciences Education. The paper describes an active intervention to develop the “Information Literacy” skills of students taking an immunology and parasitology module at a small college in Minnesota, USA.

In keeping with many institutions, our programme includes sessions on making the most of Web of Science, PubMed and other such search tools.  In addition to this, I’ve done some work myself on plagiarism-prevention.  What I don’t believe we’ve offered previously is any intervention of the type described by the Flaspohlers, in which students are encouraged to undertake a meta-critique of the reference materials they have assembled for a forthcoming assignment.

In the Minnesota model, the students are required to produce an annotated bibliography.  This is more than a citation list; each resource they are intending to use must be critiqued in a brief (approx 150 word) paragraph.  The students are advised that this annotation should demonstrate, as far as possible: the authority or background of the original author(s); their intended audience; the writing style of the author(s); how this article relates to their project; any bias or point of view apparent in the original work; and to highlight any tables, figures, etc which the student feels will be particularly relevant for their task.  Further to this, the annotations are supposed to cross-reference to one another as the student compares and contrasts the specific work with others that they have chosen to cite.

It strikes me that a task of this kind is “do-able” – it could be added as a training step on the way to submission of essays or dissertations that are already part of our courses. The authors describe the improvements they have seen in the seven-year evolution of this intervention – including a shift towards better quality and more authoritative sources, and a concomitant fall in instances of plagiarism. Admittedly they have been working with a relatively small cohort (average 17 per annum), but I suspect that the knock-on merits of adding an exercise of this type would justify the effort of introducing something similar to my group, where n is nearer 100.

To re-quote John Porter, author of an earlier work in the same journal (albeit under its former name): “An awareness of the current literature is as important to scientific research as the careful design of adequate controls”. This being the case, Information Literacy is too significant to be left to osmosis.