The NSS and Enhancement (Review)

Coverage of the findings from the recent, new-style National Student Survey drew my attention to Making it Count, a report for the Higher Education Academy coordinated by Alex Buckley (I’m afraid I’ve lost details of who pointed me towards the report, so cannot offer credit where credit is due).

Making it Count is not new: it was published by the HEA in 2012, and therefore predates both the new NSS and the introduction of the TEF. Nevertheless I found it a fascinating and worthwhile read – hence this reflective summary.

As most readers of this blog will know, the UK National Student Survey was introduced in 2005 and draws inspiration from the Australian Course Experience Questionnaire (CEQ), which had been in use since the early 1990s. From inception until 2016 there was a standard set of 23 questions in the NSS (see this link for the complete list). The questions were all positively phrased, and students in their final year were invited to respond using a standard five-point scale from “definitely agree” through “mostly agree”, “neither agree nor disagree” and “mostly disagree” to “definitely disagree” (“not applicable” was also an option). Following extensive consultation, the questions were changed for the first time in 2017. A total of 27 questions were included, with some original questions retained, some rephrased and some brand new questions added (see this link for the 2017 questions).

As Buckley points out in Making it Count, the survey was originally conceived as a tool for applicants to compare institutions at which they might potentially study, and as a “light touch” measure of quality assurance. Over time, however, it has become an increasingly important instrument for informing the educational enhancement agenda, and the focus of the report was to look at the appropriateness of this application and to develop some best-practice dos and don’ts.

Before I got into the report proper I found myself disagreeing with the Foreword by Gwen van der Velden (then of Bath University, now at Warwick) when she said “student satisfaction – as measured by the NSS – is not a matter that the corporate level of the university can influence directly” (p4). In my experience central timetabling and allocation of rooms have a fundamental impact on satisfaction – weekly yomping from one end of the campus to the other on wet Monday afternoons in November, to get from your 4pm lecture to your 5pm lecture, leaves an indelible stain on the student experience!

The report is divided into three main sections: Staff-student partnerships; Institutional structures; and Analysis and exploration. Each section is subdivided into further topics.

Staff-student partnerships: Students need to be involved in the response to the NSS as well as its completion (incidentally, some of the new questions in the 2017 NSS are overtly capturing opportunities finalists had during their degree to contribute to “the student voice” and whether their opinions were taken seriously).

Buckley advocates the involvement of students as part of committees and working groups. This is certainly something I would endorse. In recent years I have chaired three working groups for my HEI: on the operation of student-staff committees; on personal tutoring; and on revision of undergraduate regulations. In all cases students have been integral members of the group. Making it Count recommends involvement of students at all levels of committee that have a bearing, directly or indirectly, on their education. It does, however, acknowledge that they will need training in order to maximise the potential of their contribution. The SU and its representatives need access to the full NSS data and other appropriate information, made available in advance of any meeting.

To maximise the involvement of students as genuine partners in the process of improvement, attention will need to be given to the sequencing of items on agendas and to ensuring that the educational jargon used is not impenetrable. The transient nature of representatives, including elected sabbaticals of the SU, is noted as one of the complicating factors.

Whilst the NSS should not be taken as the sole source of data on the student experience (see later), students can nevertheless play an active role in the analysis of NSS data. This might include employing some students as research assistants to interrogate the data and/or including analysis of the data as coursework on statistics or social science courses.

I was interested in the comments in the report about demonstrating to current students that we have taken seriously the feedback from previous cohorts (both within the NSS and from other sources such as end-of-module questionnaires). In common with other HEIs, we have rolled out the “You said… we did” model of response to highlight areas where we have changed aspects of a module as a consequence of listening to “the student voice”. Whilst acknowledging the contribution “You said… we did” can make to “closing the [feedback] loop”, the report does raise a concern that this phraseology reinforces the impression that students are customers or clients, and that HEIs ought to be doing more to foster a genuine partnership model. This might include promoting opportunities for more ongoing dialogue and feedback so that, where necessary, change can be made sufficiently quickly to impact the current cohort and not just those that will follow in their wake.

Two weaknesses regarding an overemphasis on the NSS as a tool for enhancement are identified. Firstly, it is only a survey of finalists and therefore does not directly capture views of students in earlier years (though anyone who has read the qualitative comments on the survey will know how a particularly bad one-off experience can be seared into the memory of a student and, perhaps, unduly influence their overall views of the course). Recognising this limitation, a number of universities now use the NSS framework of questions in earlier years to allow for monitoring as students progress through their course.

Secondly, the NSS only captures a narrow (and rather generic) range of observations about the student experience. Whilst it is a rich source of information about a programme, it needs to be combined with data that has arisen via other routes (end-of-module surveys, student-staff committees, etc.).

This chapter closes with a challenge to reflect on the importance of both the tone and the method of returning information to students in influencing the staff-student and institution-student relationships.

Institutional structures: The NSS can generate quite negative responses from academic staff. A significant contributory factor to this attitude has been the crude handling of the data by the management of some HEIs. It has been all too easy to berate staff for bad news stories emerging from the survey, whilst neglecting to give the same prominence to celebration of, and congratulations for, success stories.

A strong theme in this section of the report, therefore, is that senior management need to be wise about the ways they handle NSS data and the messages – implicit as well as direct – that they pass on to colleagues. Senior staff need to guard against transferring the NSS-related pressures that they [inevitably?] feel onto more junior staff in their area of responsibility. Even if managers feel that the messages they are receiving are all negative, they need to work hard to ensure that they are conveying a better balance of positives to their colleagues.

Effort needs to be made to switch staff attitudes from seeing the NSS as a tool of Quality Assurance (others checking up on them) to a means of Quality Enhancement (i.e. as a trigger for improvements in which they are going to play an active role). It is recognised that the free text comments by students can be more engaging than statistics, but we will come back to appropriate handling of the “comments” a bit later.

Other ways to bolster staff engagement with the NSS include ensuring that all staff are aware of the questions asked on the survey – especially relevant given that they have just changed – and the possibility of a parallel National Academic Staff Survey. Having seen the level of menacing students receive to complete the survey just once, the prospect of mandatory completion of a survey by staff on an annual basis does fill me with some foreboding!

It is also crucial that staff are given space in which they can respond meaningfully to aspects of the NSS where they could make a difference. Given the general busyness of academic life (apart, of course, from the 3 month holiday Lord Adonis believes we all take in the summer) it may be that institutions wanting staff to be proactive in NSS-driven change will need to remove some other administrative burden (possibly for one academic year) in order to facilitate this.

It is also important that all staff, however low they feel teaching comes on their personal list of priorities, recognise the importance of the NSS. I recall a few years ago when a “research-focused” member of staff who “didn’t do any teaching” nevertheless managed to single-handedly decimate the timely-feedback score for a cohort completing the survey by returning his one significant piece of coursework so spectacularly late that it coloured the perception of all their feedback.

As staff reflect on answers within the NSS it is worth remembering that the survey inevitably contains limited or skewed data. This is not an excuse for ignoring clear and consistent responses across the piece, but staff do need to be measured in their response to criticisms that are overtly wrong. By way of example, a couple of years ago a student used the free text section to state that they had never received feedback on any of their work in less than a month. This was palpable nonsense and it is frustrating to have no right to reply. On the other hand, there is no point getting hot under the collar or writing off all of the other responses simply because someone has elected to use this space to make a ridiculous claim. [However, it IS worth stopping to reflect on what it was about that student’s actual experience that led them either to believe that this was the case, or to consider it a view worth sharing even though they themselves knew it to be a caricature.]

So how should the NSS data be used to drive changes? As the report points out, the introduction of this annual survey of student satisfaction has helped to re-emphasise the importance of students as partners in the evolution of HE. “There is a broad consensus that the NSS has brought genuine benefits for learning and teaching” (p35).

As with any measure, there will be a temptation for staff to prioritise ways to “game” the survey and to seek ways to improve their scores without actually instigating any real changes. I would draw a distinction, however, between merely aiming to tweak the numbers and acting to improve scores by “expectation management”. For example, it may be that an aspect of the course is scoring badly because of an unrealistic expectation about provision. If a poor score is an indication of the gap between expectation and perceived reality, then this gap can legitimately be narrowed by explaining clearly why, with the best will in the world, something will not be feasible.

Used appropriately, the NSS can contribute to development of “a culture where students’ needs, learning and teaching are openly discussed” (p37).

Analysis and exploration: Several of the issues addressed in the third main section have, to some extent, been discussed in the preceding text. Buckley emphasises that his report was not investigating either the validity or the reliability of the NSS as an instrument (which had been considered elsewhere). The point here is what to do with the information generated by the survey.

As noted, “the NSS is an unprecedented tool for benchmarking students’ perceptions at institution and subject level” (p48). Senior management want to see how their courses stand alongside those of “similar” institutions, and/or those geographically close to them. There certainly needs to be some caution with simplistic interpretation of the data. Comparison of performance across your subject area in different institutions is likely to miss significant nuances that influenced the outcome (things such as local climate or quality of city nightlife that are beyond the remit of the academics delivering the course).

At the same time, comparisons of different subjects at the same HEI need to be handled carefully. This was brought home to me by Mark Langan (whose work is actually the first cited in the further reading on p61 of Making it Count). Mark is a fellow bioscientist and, like me, served for a while as the Editor-in-Chief of the journal Bioscience Education. When the NSS was a relatively new instrument, Mark was instructed by the senior management at his university to go and talk to the academic lead in a different discipline and find out why the latter had got better satisfaction scores than bioscience.

Looking solely at the crude numbers, Mark had a case to answer. However, to all of our benefit, Mark examined the data a little more carefully. He observed (and here I’m working from memory rather than quoting the exact details) that although his department had achieved a lower absolute score than the other subject at the same HEI, if you looked at the position of each relative to the same subject at other universities, the “lower”-achieving biology programme was actually in the top 50% of courses for that subject, whereas the “better” subject, from which he was supposed to learn, was actually in the bottom half of its discipline.
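Mark’s point is easy to illustrate with some made-up numbers. The short Python sketch below uses entirely hypothetical satisfaction scores (nothing here comes from real NSS data, and the percentile_rank helper is just my own illustration) to contrast a raw-score comparison between two subjects at the same institution with each subject’s position relative to its own discipline elsewhere.

```python
# Hypothetical illustration of within-subject benchmarking vs. raw-score comparison.
# All numbers are invented; they are not real NSS results.

def percentile_rank(score, peer_scores):
    """Fraction of peer courses (same subject, other institutions) scoring at or below `score`."""
    return sum(s <= score for s in peer_scores) / len(peer_scores)

# Overall satisfaction scores for the same subject at other universities (invented).
biology_elsewhere = [72, 75, 78, 80, 81, 83, 85, 88]
history_elsewhere = [85, 87, 88, 90, 91, 92, 94, 96]

# Scores at "our" institution (invented).
our_biology = 84   # lower raw score...
our_history = 89   # ...higher raw score

print(f"Raw scores: biology {our_biology}, history {our_history}")
print(f"Biology percentile within its discipline: {percentile_rank(our_biology, biology_elsewhere):.0%}")
print(f"History percentile within its discipline: {percentile_rank(our_history, history_elsewhere):.0%}")
# With these invented figures biology sits in the top half of its discipline,
# while history, despite the higher raw score, sits in the bottom half of its own.
```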

As a tool used across very diverse subjects at very diverse institutions, the NSS inevitably omits nuance and local factors. Failure to recognise this can lead to effort being directed at the wrong priorities. A low score on one question does not necessarily mean it ought to be a policy priority (Langan has, again, done good work here – pointing out the lack of correlation between poor scores on the “feedback questions”, which is widespread, and the scores for the overall satisfaction question).

Buckley notes that the NSS does not adequately address issues for part-time students, those taking their course by distance learning and those on joint Honours programmes. There is also a suspicion that student reporting about the quality of the “course” can actually be transference of reflection on their own engagement. An acknowledged weakness of the NSS is that it foregrounds student satisfaction. Alternative instruments such as the National Survey of Student Engagement in the USA and the AUSSE in Australia place greater emphasis on engagement, i.e. what the student themselves has put into the course. A recent local anecdote from an end-of-module survey: a student gave a low rating to all the lectures and criticised the quality of lecturing. The convenor knew who had filled in the questionnaire and knew that the person in question had in fact attended very few of the sessions (ironically, in the free text recommendations for improvements they also included “make attendance at all lectures compulsory”, so perhaps they knew that their criticisms were on shaky ground!).

 

The NSS throws little, if any, light on underlying factors leading to low or high scores. More questions could be added (as noted above, the survey has just got slightly longer), but there is a well-recognised tension in the design of any survey: a balance between adding more questions to gain richer information and crossing the point at which the questionnaire is perceived to take too long to complete, so that potential participants opt out entirely and the data is lost altogether.

To some extent the qualitative comments within the NSS can add valuable insight and, as noted above, some staff actually find this a useful way into the data, with the fuzziness of opinion-based responses more engaging than the stark, black-and-white nature of quantitative answers. A certain caveat is offered (p54) regarding the need for appropriate training or professional experience in interpreting qualitative data.

The importance of additional course information to supplement the NSS is emphasised throughout. We have mentioned other questionnaires already, but some institutions also run focus groups with students. The timing of conducting interviews with finalists is complex – there is a relatively tight window if you want to talk to them after they’ve completed the survey to gain additional insights into why they gave the answers they did.

There is a minimum threshold of 10 respondents before a programme can be included in the NSS. There is also a general reminder that student satisfaction is not a direct proxy for the quality of a university education.

The report also notes the simplistic reporting of the NSS by the media, which assumes it is “an accurate summative measure of teaching quality” (p47). This opportunity for misunderstanding is, of course, all the more prevalent in the recently launched Teaching Excellence Framework (TEF). With scores heavily influenced by data on employability and starting salaries, the TEF provides potentially valuable information for an applicant, but it certainly isn’t a measure of teaching excellence!

As mentioned at the outset, Making it Count is not a new report. My experience, hopefully captured in the preceding commentary, is that it remains a valuable and important document even with changes to the questions in the NSS and the introduction of the TEF.

 

 
