At the November 2011 meeting of our Bioscience Pedagogic Research group, attention was focused on Questionnaire Design. Emma Angell, from the University’s SAPPHIRE group (Social science APPlied to Healthcare Improvement REsearch) shared some tips she had picked up during a two-day course which she had attended in May 2011. The course took place at the London School of Economics and was led by Jon Krosnick of Stanford University, and Emma was keen to stress that credit for the insights was his not hers!
As the website advertising the original course points out: “Surveys and questionnaires are a common way of gathering data in the social sciences. The structuring, wording and ordering of questions has traditionally been viewed as an art, not a science, best guided by intuition. But in recent years, it has become clear that this is an antiquated and even dangerous view that does not reflect the accumulation of knowledge throughout the social sciences about effective question-asking. Intuition often leads us astray in the questionnaire design field, as becomes clear when putting intuitions to the test via scientific evaluation. A large body of relevant scientific studies has now accumulated, and when taken together, the findings point to a series of formal rules for how best to design questions.”
Emma talked us through a number of potential problems with questionnaires that can undermine the legitimacy of the data they generate. In gathering questionnaire-based data, we hope that the person surveyed correctly interprets the meaning of the question, searches for the most appropriate pre-set response (or offers a thorough and accurate open-text response) and in so doing gives a true reflection of their views and/or experiences. To do so will require them to search thoroughly for an appropriate memory and to convert that information into an answer that corresponds with the question asked. If they are doing this, then they are “optimising”.
One key danger is “satisficing”, a term coined by Herbert Simon as a combination of “satisfying” and “sufficing” to describe behaviour in which respondents do not fully engage with the exercise. It may be that they enter into the entire process in a superficial way, choosing the first answer that they think is acceptable, or picking an answer they think is expected. There may be a bias, for example, towards picking “agree” over “disagree”. Alternatively, they may take the easy way out, picking “don’t know” or omitting a question, since they can subsequently claim that they did not properly understand what was being asked.
Factors influencing whether a respondent “optimises” or “satisfices” may include the difficulty of the task, their ability to perform the task and their motivation; if any of these are compromised then the respondent may be more prone to satisficing.
- Difficulty might include complexity of language, answer options that do not include their preferred response, the presence of distracting elements (either within the survey or in the environment where the survey is conducted), the overall length of the survey and – for oral questionnaires – the speed at which questions are asked.
- Ability might include prior experience of this type of questioning (it was noted, for example, that parents of children with long-term illness become more adept at answering surveys as they are familiar with the pattern and mental processes involved). If someone has answered similar surveys before, or has actually entered into profound thought on the topic independent of the survey, then they may be able to offer a well-formulated “preconsolidated” answer.
- Motivation includes the respondent’s personal views on the importance of the information being collected, their accountability for the answers they give, and appropriate encouragement from an interviewer during the course of the task.
Take-home messages from the session included:
- the need to phrase questions in a way that fits, as far as possible, with conversational norms (e.g. “ladies and gentlemen” not “gentlemen and ladies”, avoiding double negatives where possible).
- If in doubt, open questions are preferable to closed questions. It was noted in discussion that this presupposes the necessary resource to subsequently code and interpret the open responses. However, provided this is feasible, an open question ought to generate a truer answer. A set of stated answers plus “other… please specify” was said to be an unsatisfactory model. If using questions with a pre-determined set of options, then it is vital that adequate pilot testing has been undertaken to ensure that all necessary options have been included.
- Question order effects can be significant, with a satisficing participant either picking the first response on the list (primacy effects) or the last option given, especially in oral surveys (recency effects).
- Where a scale of responses is offered, there ought to be an odd number of choices, ideally 7. The options should be offered in full, not as a series of numbers with only the ends and the middle options labelled.
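On the question-order point above, one standard safeguard against primacy and recency effects in computer-administered surveys is to randomise the order in which the response options are presented to each respondent, while recording answers against fixed canonical codes so the data remain comparable. This was not part of Emma’s session, so treat the following as an illustrative sketch; the function name `randomise_options` is my own invention.

```python
import random

def randomise_options(options, seed=None):
    """Return answer options in a shuffled presentation order.

    Each option keeps its canonical code (its position in the
    original list), so responses can be mapped back for analysis
    regardless of the order in which they were shown.
    """
    rng = random.Random(seed)          # per-respondent seed gives a reproducible order
    indexed = list(enumerate(options)) # (canonical_code, label) pairs
    rng.shuffle(indexed)
    return indexed

# Example: a fully labelled agreement scale, shuffled for one respondent.
scale = ["Strongly agree", "Agree", "Neither agree nor disagree",
         "Disagree", "Strongly disagree"]
for code, label in randomise_options(scale, seed=42):
    print(code, label)
```

Seeding with a respondent identifier means each person sees a stable order across pages, while the order still varies between respondents.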