Capturing more than lectures with “lecture capture” technology (paper review)

The July 2017 edition of the British Journal of Educational Technology includes a pilot study, “The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement”, which investigates the potential to exploit lecture capture technologies for the production of teaching resources over and above the recording of lectures per se.

I was keen to read this paper because I am already using Panopto (the same software used in the study) to generate short “flipped classroom” videos on aspects of bioethics which, it is hoped, students will watch before participating in a face-to-face session. I have also produced some ad hoc materials (which author Gemma Witton terms “supplementary materials”), for example to clarify a specific point from my lectures about which several students had independently contacted me. Furthermore, I have written some reflections on the impact lecture capture is already having on our courses (see Reflecting on lecture capture: the good, the bad and the lonely).

In the study, Witton looks at the use of Panopto-generated resources in four disciplines: Maths & Computing; Business; Biomedical Science & Physiology; and Biology, Chemistry & Forensics. In her context (the University of Wolverhampton, UK), the most recent teaching building consists entirely of laboratories, without any traditional front-of-class teaching space. Instead, three recording stations are provided for the preparation of instructional videos.

A range of video formats was utilised across the disciplines studied – these included pre-laboratory practical demonstrations, unpacking of assessment and group feedback, as well as recorded lectures.

In a finding that will shock no one, Witton found that 100% of responding students wanted the university to continue using the capture technologies. This is not an indication that they will necessarily use them, only that – in common with responses to similar questions on, for example, styles of feedback – students would like every possible option presented to them, from which they can then pick and mix the resources they wish to exploit.

Reflections on and reactions to the article:

I enjoyed reading the paper and found it thought-provoking. It serves to shine a spotlight on the existing literature on lecture capture, highlighting some initial studies which I need to investigate in more detail. I did, however, also have a number of frustrations regarding aspects of the study, particularly relating to things that we were not told.

  1. Although labelled a “pilot” study, and therefore allowed a certain degree of leniency, it struck me that the paper was comparing apples and pears. For example (Figure 1), there were 40 sessions of materials captured in Biomedical Science & Physiology, amounting to a total of 28.39 hours (28 hours and 23 mins), whereas in Biology, Chemistry & Forensics there were only 3 sessions and a grand total of 0.34 hours (about 20 mins). Comparison of “consumption rates”, i.e. the ratio of hours recorded to hours viewed (see the short sketch after this list), was interesting, and the differences striking (1:0.04 for Business v 1:497 for Biology, Chemistry & Forensics); nevertheless, it would have been useful to see more granularity in the analysis of duration and usage of the different types of materials used in Biology, Chemistry & Forensics, which – we are told – included demonstrations, assessment unpacking and group feedback, plus materials captured in the “lecture theatre environment” (does this mean traditional lectures?). We are not told specifically, but I would infer that it does, since live streaming of lectures is included as one of the resources noted in the context of the largest cohort (see point 3, below).
  2. Evaluation surveys were conducted with staff: 13 of the 62 invited staff (21%) responded. The author argues that because this is a relatively low response rate, it is not appropriate to analyse differences of opinion between staff in each Department. Whilst I agree this is probably the case, I would have liked to know exactly how the 13 respondents were distributed across the programmes, and the absence of this data leaves me suspicious that the distribution was very uneven. If it is argued that the anonymous nature of the questionnaire does not facilitate this, then I would contend that this anonymisation was inappropriately offered (a question identifying the Department would not have jeopardised anonymisation of the more substantive parts of the survey).
  3. Similarly, for the student survey there were 111 responses from 650 students (17%). Here there is an allusion to likely skewing of the data, because one module alone represented 400 students (62% of the survey population). We are not told which module this is. If anonymisation has, again, limited examination of the course registration of the respondents, then this is a mistake.
  4. A comment that “not all types of captured content were available to all students” indicates that the response option “did not access” is inadequate. At a minimum, a “not applicable” choice ought also to have been offered.
  5. The format of the survey is not made clear. We are told how participants were recruited (via direct email) but not how the survey was implemented. I assume delivery was online (this has implications for completion rates), but this was not specified. Even then, it might have been a survey embedded within the VLE, or it might have been (for example) a SurveyMonkey study. We are told that “all students who had logged in to the system” (p1015) were invited to participate, but not the percentage of students who had versus had not used the system at all.
  6. Figure 4 has an intriguing graph with “Volume” on the x-axis and “Value” on the y-axis to examine the relative merit of content. If I am interpreting the graph correctly, the word “duration” would have been more appropriate than “volume”. Captured and live-streamed lectures are shown as high-volume but low-value items, whereas “assessment unpacking” and “supplementary materials” are considered high-value but low-volume items. I do not dispute these evaluations; however, it is not clear from the paper whether these represent an actual analysis of the content in the Wolverhampton pilot or theoretical designations. I believe them to be theoretical (this is the impression given in the discussion, p1017), but I would have liked this to be clearer (plus, of course, an analysis of the actual content would have been more interesting). It is also a little frustrating that “supplementary materials” is used as a category distinct from “assessment unpacking” and “capture on-location”, despite the latter two descriptions being identified as sub-categories of “supplementary materials” in the definitions section.
  7. We are told that “questions were categorised into five themes that addressed the project outcomes” (p1015), but I see no identification of what these themes actually were. Similarly “surveys included parallel questions to facilitate comparison of staff and student perspectives” (p1015), but no such comparison is reported.
  8. An argument is made regarding the workload implications for staff of rolling out more video-based resources. The observation is made, unquestionably correctly, that there is a need for an upfront investment of time by academics to produce the videos. My experience confirms this is the case. The suggestion is made that subsequently there will be a time saving, e.g. because you will not have to deliver the introduction to the class practical multiple times in person. This is also true for demonstrations, provided that the content of the session does not alter year to year; however, I would assume that “group feedback” and “assessment unpacking” will need to be produced de novo for each cohort.
  9. I would have liked a little more reflection on the impact of the duration of specific videos on their perceived merit. It would also be useful to critique the tone of delivery and the use of engaging visuals within each of the videos (though I acknowledge these are beyond the scope of a “pilot” study).

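As an aside on the “consumption rate” arithmetic in point 1, the ratio is simple to reproduce. Below is a minimal Python sketch, assuming the rate is simply hours viewed divided by hours recorded, normalised to the 1:x form used in the paper. The recorded-hours figure is the one quoted above; the viewed-hours value is not reported here, so it is back-calculated from the quoted 1:497 ratio purely for illustration.

```python
# Minimal sketch: compute a "consumption rate" (hours recorded : hours viewed)
# and normalise it to the 1:x form used in the paper.

def consumption_rate(hours_recorded: float, hours_viewed: float) -> str:
    """Return the recorded-to-viewed ratio normalised as '1:x'."""
    if hours_recorded <= 0:
        raise ValueError("hours_recorded must be positive")
    return f"1:{hours_viewed / hours_recorded:.0f}"

# Recorded hours for Biology, Chemistry & Forensics are quoted in the paper (0.34 h);
# the viewed-hours value is back-calculated from the quoted 1:497 ratio
# (0.34 * 497 is roughly 169 hours) and is NOT a figure from the study.
print(consumption_rate(0.34, 169.0))  # -> 1:497
```

Viewed this way, the striking 1:497 figure partly reflects a very small denominator: roughly 20 minutes of recorded material accounting for something like 169 hours of viewing in total.
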
Overall, therefore, I would argue that this is an interesting preliminary study, albeit with a frustratingly large number of flaws in the design and/or the reporting. I hope that the author will accept these as suggestions from a “critical friend”, and that they might feed into a better paper reporting more fully on the developments.

The citation is:

Witton, G. (2017). The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement. British Journal of Educational Technology, 48, 1010–1019. doi:10.1111/bjet.12470
