1 INTRODUCTION
As teaching academics, most of us want to provide an effective environment within which our students can learn. Pedagogical training can contribute to identifying the best approaches to take in our teaching, including what leads to greater levels of student learning and hence assessment attainment. However, few of us have the time to regularly take advantage of such training or to continually engage with the pedagogic literature. We thus find ourselves relying on educational instinct and intuition, developed perhaps as the result of experience, reflection, and (informal) discussions with colleagues. While this applies to any teaching-related issue, it is particularly relevant with regard to the use and impact of new technologies such as lecture capture, and to the challenges of teaching an increasingly diverse student body.
It is a widely held view that regularly attending lectures in person is good for student learning and that it leads to better exam performance than non-attendance. Our intuition, therefore, is to think that lecture capture technology and the availability of recorded lectures online may not be such a good thing for students since it may encourage them not to attend class and this is likely to impact negatively on their learning and academic performance. This intuition has led us to encourage students to come to class and to engage in face-to-face activities.
Many universities are currently experiencing, and in some cases actively courting, increasing diversity in the student body (along with increasing awareness of it). Intuition tells most of us that student background is likely to have an impact on academic performance, and that students experiencing physical/mental health issues, disabilities, or learning difficulties are not likely to do as well in their studies as other students. This of course is why universities, including our own, have equity strategies in place that are designed to “level the playing field” by helping students who face such challenges. The question is whether these support measures are appropriately effective, or whether they over- or under-compensate affected students. We were surprised to find that such measures had not been previously explored.
We thus decided to test our intuitions and explore these knowledge gaps. Academics may not have the time to engage continually with pedagogical research, but we formed the view that it may be valuable to stop periodically to undertake some formal evaluation of the habits, instincts, and intuitions that are important in our day-to-day teaching responsibilities. This led us not only to explore the literature on the questions raised above but also to scientifically analyze our own experiences with respect to these issues. This paper is the product of those reflections and explores the nature of factors associated with student performance in a face-to-face taught postgraduate (i.e., graduate) Business Economics module at a leading UK university. The prior university experience that these students possess makes our approach rather different from much of the existing pedagogic literature which has tended to focus on undergraduate level contexts (e.g., Stanca 2006; Jones and Olczak 2016). Postgraduates are generally older and better prepared for study, most likely because they already have some experience of university-based learning (and often several years in the workforce). Furthermore, there is a broad cultural expectation that young people participate in undergraduate education, with, for example, over 50% of English youth attending university (Bolton 2020; Universities UK 2018). To these younger students, university is, therefore, just a natural progression of school. However, there is no such expectation beyond gaining an undergraduate degree. Fewer potential students choose to pay the additional fees/costs associated with studying at the postgraduate level, and this suggests that those who do may have a higher level of commitment to learning. Furthermore, there is a greater mix of international students in UK taught postgraduate degree programmes, which is likely to enhance this effect. 
Fees are generally higher for international students in postgraduate programmes compared to undergraduate programmes, and we might expect that those prepared to pay such high fees will have higher levels of commitment and will engage more seriously with their studies than might be the case in typical undergraduate programmes (Universities UK 2018; Bolton 2020). All of this is consistent with the small amount of existing evidence that highlights potential differences between undergraduate and postgraduate students (Lindsay et al. 2002; Sabbir Rahman et al. 2014; Shukr et al. 2013), so that further exploration of the determinants of student performance in the postgraduate context is warranted.
In light of our starting motivations, this paper explores the impact of the use of lecture capture and of seminar attendance on student performance, areas in which there remains considerable uncertainty. Unlike many existing studies (e.g., Lin and Chen 2006; Edwards and Clinton 2019), we found we were able to explore both seminar attendance and lecture capture simultaneously. We were also able to explore, for the first time in any study that we are aware of, the impact of a student Disability Action Plan (DAP): a bespoke plan for any student impacted by health issues, disabilities, or learning difficulties. Finally, we explore the impact of pre-exam feedback through an original investigation as to how an interim formative assessment may improve performance, particularly for weaker students. When conducting these investigations, we control for a number of other factors that might impact performance, including prior subject experience, gender, and whether English is a foreign language.
The paper is set out as follows. Section 2 provides an overview of existing literature on these issues. Section 3 describes the institutional context within which our own investigation was undertaken. Section 4 specifies the econometric model used in the study, and Section 5 describes the data employed to estimate the model. Section 6 reports our results, and Section 7 offers some conclusions.
2 LITERATURE REVIEW
Contributions to the pedagogic literature identify a number of factors which are likely to influence student performance (i.e., grades) on a particular module of study. We discuss the literature on attendance, the nature of lecture capture technology and its effect on student behaviour, and broader factors affecting performance in turn.
2.1 Attendance
Many academics intuitively believe that students benefit from physically attending lectures and other classes. This may be because attendance implies that students have listened to and understood explanations of course content from lecturers, or because students have obtained an overview of content that they then use to guide further reading or revision of notes taken during class. Either way, attendance suggests that students have engaged in some useful way with their studies, and that this will enhance their learning and performance in exams or other assessments. Much of what we do as teachers is underpinned by this kind of belief.
This link between attendance, learning, and performance has been widely explored in the literature. A number of empirical studies have found, for example, that attendance benefits academic achievement (Marburger 2001; Lin and Chen 2006; Stanca 2006; Chen and Lin 2008; Edwards and Clinton 2019). The size of the benefit varies between studies, but it exists in multiple contexts/subjects including education, psychology, and economics (Cohn and Johnson 2006; Chen and Lin 2008). However, other studies have explored the flip side of this relation, examining absenteeism, and have generally found that while students may not suffer from missing a very small number of classes, excessive absenteeism does appear to have a negative impact on academic performance (Durden and Ellis 1995; Lin and Chen 2006; Cohn and Johnson 2006). These results are in line with similar studies which have investigated cumulative attendance (Chen and Lin 2008). The strongest evidence is perhaps reported by Marburger (2001), who found that in exams, students performed worse on the exact content of sessions that they missed.
Despite such studies, Newman-Ford et al. (2008) argue that there remains no clear consensus, with a number of other studies reporting no link between attendance and performance (cf. Durden and Ellis 1995; Moore et al. 2003). Indeed, while some research has found the link to be so strong that it advocates mandatory attendance, other work has not only found a lack of correlation between attendance and performance but has argued that mandatory attendance may actually harm performance (see Moore et al. 2003; Stanca 2006).
2.2 Lecture capture
Evidence that attendance may have a positive effect on learning outcomes raises questions about the increased prevalence of lecture capture technology, since the availability of this technology makes it more possible and more convenient for students not to attend face-to-face classes. The EDUCAUSE Learning Initiative (2008, p. 1, cited in Toppin 2011) defines lecture capture technology as “any technology that allows instructors to record what happens in their classrooms and make it available digitally”. This broad definition allows for a number of different technologies and methods of recording/distributing in-class activities and may include materials beyond a simple recording of the lecturer, such as slide content and audience responses (Dey et al. 2009; Edwards and Clinton 2019). Lectures themselves may include material and/or explanations which go beyond what is available in textbooks, and this could be included in recordings (Cohn and Johnson 2006). However, it is also important to note that not everything that occurs in a class (such as detailed audience participation) is always recorded by lecture capture technology, so that watching an online recording is not necessarily the same as being present in the class. Differences between actual and recorded classes can occur because of technical limitations (not all rooms may have the necessary equipment) or because the structure of some classes does not lend itself to comprehensive recording.
Sloan and Lewis (2014) provide a useful summary of what the literature has to say about the potential advantages and disadvantages of lecture capture. In short, they suggest that lecture capture can, amongst other things, act as both a substitute for attending in person and a complement to attendance by allowing students to review some/all of the content taught in the class. A similar literature review by O’Callaghan et al. (2017) found that the potential benefits of lecture capture appear to outweigh the potential disadvantages, noting that further research is needed in contexts such as that considered in this paper.
Students increasingly regard lecture capture (and other online resources) as essential to their studies (Aldamen et al. 2015; Pierce and Carosella 2016; Witton 2017) and evidence suggests that the technologies used to generate these resources are becoming more widely deployed (Witton 2017). Indeed, many studies have shown that irrespective of any link with performance, the mere presence of lecture capture technology is linked with enhanced student satisfaction (Sloan and Lewis 2014; Jones and Olczak 2016; Edwards and Clinton 2019; Witton 2017). However, there remains considerable uncertainty as to whether such technologies provide real benefits to students (Hadgu et al. 2016; Edwards and Clinton 2019).
Research on the use of lecture capture technology generally focuses on two things: its impact on class attendance and its link with student performance. In the case of its impact on physical attendance, there is again little consensus. Much of this research explores how attendance patterns change when some form of lecture capture is introduced. It is beset with issues around the measurement of attendance, since many universities do not monitor attendance (Pierce and Carosella 2016). There is also the issue of how other determinants of attendance are modelled in such studies. Lecture capture is predominantly found to have a small negative effect on attendance (Pierce and Carosella 2016; Edwards and Clinton 2019), although a number of studies have found either no effect (Aldamen et al. 2015; Pierce and Carosella 2016) or even a positive effect (Toppin 2011).
The link between lecture capture and student performance is a more nascent but growing area of research (Terry et al. 2015). There is, once again, little consensus on the existence or size of this link, with Hadgu et al. (2016) citing some studies that find positive links and others that find negative links. Given the context of the module explored herein, Jones and Olczak (2016) offer a particularly relevant exploration of an economics module and report an overall positive link. Further research explores the recorded lecture viewing patterns of high and low achievers (Pierce and Carosella 2016), with, for example, Sloan and Lewis (2014) finding that high achievers viewed fewer videos than low achievers. Other research has investigated how students use lecture capture and whether they replace physical lectures with lecture capture or supplement physical lectures with it, for example, by using it to review specific content rather than watching every lecture online (Newton et al. 2014; Edwards and Clinton 2019). Edwards and Clinton (2019) find a negative link between lecture capture and physical attendance, and a positive link between attendance and performance. They argue that lecture capture has no direct effect on performance, but since it reduces attendance, this ultimately harms performance. In effect, lecture capture may lead students astray and thereby compromise their grades.
2.3 Other factors affecting performance
Beyond attendance and lecture capture, there are a number of other factors which may affect student performance, but these have been relatively under-researched. One such factor is prior experience in the subject under consideration. Intuitively, it would be expected that such experience and/or knowledge would operate as an advantage and thus improve student performance. Durden and Ellis (1995) find evidence for this effect in that previous experience (i.e., high-school-level education) in economics improved college-level economics course performance. Similarly, Jones and Olczak (2016) find that prior economics experience is a key determinant of performance in their economics module. Interestingly, our own anecdotal experience seems to conflict with these findings, in that it appeared to us that students with significant past experience in economics often did relatively poorly on our module compared with students who were new to economics. We return to this issue in the discussion section below.
Another possible factor that may affect academic performance is whether a student has some form of disability (Fuller et al. 2004; Mercer-Mapstone and Bovill 2020). Much of the literature on this issue explores the experience of students with disabilities, including the difficulty of achieving similar academic outcomes as students without disabilities, and suggests measures which are designed to improve equity with respect to educational outcomes (Wilson and Martin 2017). One such measure at our university is the Disability Action Plan (DAP), which is a bespoke strategy that ensures a student is taught in a manner that takes account of their particular situation. This might include giving the student extra time and rest breaks in exams, making deadlines more flexible for the student in coursework across the semester, providing lecture notes in advance, or leaving teaching material on screen for extended periods of time. These plans are designed by an expert team within the university, following an assessment of the circumstances of each student. The team selects from a standardized menu of options, and the resulting recommendations apply to all modules in a degree. The support is reviewed each academic year and may be changed at any time, as the expert team deems appropriate.
Recording lectures could also be, and often is, included in DAPs at our university because this clearly offers a number of potential benefits for students with disabilities. Recordings provide many of the benefits required in DAPs, including wider access to class materials, greater flexibility on when and how lectures can be viewed, freedom from real-time limitations (i.e., allowing students to listen or watch at their own pace), and acting as a less expensive alternative to in-class support staff whose availability may be limited by budgetary constraints (Holloway 2001; Newton et al. 2014; Sloan and Lewis 2014; Kandler and Thorley 2016). No literature that we know of has explored the impact of such support on student performance, including whether it accurately, under-, or over-compensates students for their circumstances.
Further factors that may affect student performance, including age, gender, and country of origin (or home/international status), are often considered or included as controls in empirical studies (e.g., Lin and Chen 2006). They are, however, generally found to have little or no impact on performance despite the ex ante expectation that they have the potential to do so. They tend, therefore, to be reported briefly and quickly dismissed. It is nonetheless important to include such controls where possible, even if this is only to rule out their effect. It is worth observing that country of origin may be better replaced by a student’s proficiency in English (the language utilized on the module in question). Several studies have identified a link between English proficiency and academic performance, since one would expect a student’s capacity to digest, understand, and express concepts addressed in a course to be more closely tied to their ability to understand the lecturer than to cultural differences proxied by country of origin (Vinke and Jochems 1993; Andrade 2006; Fakeye and Ogunsiji 2009; Martirosyan et al. 2015).
Finally, we note Moore et al.’s (2003) important point that any link between causal factors and performance is likely to be affected by course content itself. Students can be bored easily (perhaps due to poorly designed content) or may simply choose to not pay attention, whether they attend classes or watch recordings. This speaks to their engagement with the subject matter (i.e., business economics in our case) and may affect learning, and ultimately academic performance. However, since data on student engagement is not available in this context (and engagement is often difficult to measure or acquire in any case), we suggest that a detailed description of module and class structure will instead allow researchers the opportunity to judge for themselves the likelihood of general student engagement. We therefore present such information below.
2.4 Summary
Overall, the link between attendance, learning, and performance has been widely explored in the literature, but there appears to be no clear consensus on the size or direction of effects between attendance and academic performance. The literature also suggests that lecture capture can, amongst other things, act as either a substitute for attending classes in person or as a complement to attendance. The impact on student performance from the use of lecture capture is also unclear. Given this range of findings, it appears that there is an opportunity for further work that explores the relationship between attendance, student use of lecture capture, and academic performance, especially in a postgraduate context. The following section therefore outlines our approach to exploring these relationships. In addition, the assessment of measures used to enhance equity for students with disabilities is under-researched. Through extant literature, we have identified a number of wider possible determinants, including prior background in the subject area, age, gender, and country of origin or English proficiency, and these controls will be incorporated into our investigation to the extent the available data allows.
3 INSTITUTIONAL AND MODULE CONTEXT
As suggested above, the central objective of our study was to examine the relationship between attendance, lecture capture, and student performance at the postgraduate level. In addition, the opportunity existed to explore the secondary question of whether the measures used at our university to enhance equity for students with disabilities were effective. The course within which data was collected for this study was a one-semester introduction to Business Economics module taught using a non-mathematical approach. This approach emphasizes critical thinking and the application of general economic concepts to real-world business contexts rather than economic or mathematical rigour for its own sake. The course is taken by approximately 200 postgraduate students who are enrolled in a variety of MSc degree programmes in the broad area of business and management. The diverse student body comprises a mix of those with and without prior economics experience or background, a variety of different undergraduate degrees, and a mix of home/EU/international students (with English as a first or foreign language). The module is taught in two cohorts and is the only module common to all of these different programmes of study.
Our sample consists of students who undertook the module during either the 2017/18 or the 2018/19 academic years. The teaching team was consistent both throughout the semester and across the two years, and was well-regarded in student feedback. The overall satisfaction score for the module on a 1–5 Likert scale was approximately 4.5 in both years. Furthermore, the “how the module contributed to my overall programme of study” question scored approximately 4.4 on the same Likert scale, suggesting students were generally well-engaged with the material taught in the module.
Teaching consisted of 11 weekly, two-hour lectures (which are each delivered twice, once for each cohort) and four one-hour tutorial seminars, which are taken fortnightly in the last eight weeks of the semester. Attendance at these lectures and seminars is voluntary. The lectures are delivered live and in person, with pacing adjusted dynamically to ensure that students can keep up with progress through the curriculum (given their different backgrounds and existing economic knowledge). The content and examples are also updated to be as topical as possible. The lectures are all recorded via the lecture capture system and made available through Moodle (the Virtual Learning Environment used at our university) within 24 hours of the lecture. These recordings, made using Panopto/Re:View, include both a video of the lecturer and a copy of the active presentation slides. Students are regularly reminded of the availability of this resource in lectures but are advised that recordings may stop being provided if lecture attendance falls significantly.1 Lecture attendance is not tracked, in line with university policy (and it was felt to be impractical to attempt to record/measure this informally). The Moodle site also provides an extensive array of other resources including lecture slides, seminar worksheets, MCQ (multiple-choice question) practice tests, past exam papers, supplementary readings, and useful media articles which illustrate economic theories in action. The final two lectures include significant exam preparation/revision and incorporate MCQ practice via use of an electronic audience response system.
Seminars are structured as informal classes in which students discuss questions linked to the content covered in previous lectures and to which answers with diagrams must be prepared beforehand. Students are thus given a worksheet containing the relevant questions in advance of the seminar. These classes are collaborative in nature, with students collectively charged with the task of developing “model answers” to the worksheet questions. Each student is expected to participate in this group task, thereby building a working knowledge of the relevant economic concepts. Seminars are not recorded but attendance is tracked by the tutor for each session. This informal approach is designed to create an environment in which students feel free to contribute without fear of ridicule, should their prepared answers be incorrect, while ensuring that they leave with the correct solution. The pedagogical objective of the course is to combine the relatively passive lecture format with the more active seminar format, so that students are able to receive intellectual input but also genuinely engage with subject content to develop a deep understanding of it (Moore et al. 2003).
Assessment comprises a two-hour, end-of-module exam, which consists of 40 MCQs (worth 40%) that assess breadth of knowledge, and one essay-style written answer from a choice of three, to assess depth and application of knowledge (worth 60%). During the module, students also undertake a Formative Assessment task (henceforth referred to as the FA) which consists of two essay-style questions that are indicative of questions on the exam. This task must be hand-written, which also serves as exam preparation. While formally marked, the FA does not count toward the overall grade for the module, but it provides experience and an interim opportunity for students to obtain feedback on how their learning in the module is progressing. It was introduced at the request of students for further information about exam requirements. Assessment items which contribute to a student’s final grade for the module are subject to internal moderation and external review by an independent examiner from a different university to ensure they are appropriate for the module, will allow student performances to be differentiated, and feature original/fresh questions. Furthermore, great care is taken to ensure that the FA and any past exam papers made available do not significantly overlap with these formal assessment items.
As suggested in the previous section, our university has in place a programme which attempts to address the learning equity issues faced by students with physical/mental health issues/disabilities or learning difficulties. This programme is built around the formulation of a DAP after an assessment of a student’s circumstances by the university student services team. Amongst other things, DAPs may include extra exam time and/or rest breaks; the opportunity to use a computer during exams; extensions to coursework deadlines; pre-lecture/tutorial access to learning materials; and access to slides, diagrams, and other presentation materials for longer periods of time than might normally be the case. DAPs are confidential, but once established or updated, relevant instructors are advised of the details that affect their teaching, so that they can implement the necessary adjustments or make appropriate arrangements. A number of the students in our module had DAPs in place.
4 MODEL SPECIFICATION
The primary objective of examining the relationship between attendance, lecture capture, and student performance and the secondary objective of assessing the effectiveness of measures to enhance equity for students with disabilities were pursued using a model for student performance. It contained attendance, lecture capture use, participation in the DAP programme, and the control factors identified in the literature considered earlier in the paper. The model is shown in Equation (1).
The dependent variable in Equation (1) representing our measure of student academic performance is Exam Score, which was the individual student percentage score on the final exam as approved by the Exam Board. This mark was not adjusted from the raw score obtained by the student in the exam. We propose to explain this performance in terms of two key explanatory variables: SA and RVC. SA or Seminar Attendance was measured by the number of seminar sessions which a student attended throughout the course (from 0 to a maximum of 4) as indicated in the attendance registers taken during every class by the tutor. This variable was regarded as an indication of active student engagement with the course and reflected both the instructors’ intuition and the idea discussed widely in the literature that student learning and performance is greatly enhanced by such engagement. We thus expected to find a positive influence of this variable on the dependent variable.
RVC or Recording View Category is a categorical variable indicating the number of times each student viewed lecture recordings through Moodle. This variable was coded as: 0 (the base reading) for 0 views; 1 for 1–5 views; 2 for 6–11 views; and 3 for 12 or more views. These categories were chosen after inspection of the raw data. They represent students who firstly do not use the lecture capture system at all (perhaps because they prefer to attend the lecture in person); those who use it a little (perhaps to accommodate an occasional missed lecture or review an uncertain concept); those who use it regularly (perhaps to review most lectures); and those who appear to rely upon it extensively. It will be recalled that the literature is conflicted on the direction of the effect we should expect from student use of recorded lectures. Recordings can be used as either a substitute for attendance and active engagement with the course, or as a complement to attendance and engagement. We saw earlier that the literature suggests that lower usage of recordings tends to be associated with the complementary function and heavier usage with their function as a substitute. Given the exploratory nature of our study, we had no prior expectations of what we would find in regards to the sign and significance of this variable. Lecture views, rather than raw minutes viewed, was chosen as the variable definition because minutes viewed would not support such categorization: it could not distinguish between a student who viewed all of a single lecture and one who viewed small fractions of multiple lectures. Lecture capture system use information was exported for each lecture once the module was completed and the marks approved.
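The coding scheme above can be expressed as a simple mapping from raw view counts to RVC categories. The following sketch is purely illustrative (the function name is ours, not taken from the study):

```python
def recording_view_category(views: int) -> int:
    """Map a raw count of lecture-recording views to the RVC category.

    0 -> no use (base); 1 -> 1-5 views; 2 -> 6-11 views; 3 -> 12+ views.
    """
    if views < 0:
        raise ValueError("view count cannot be negative")
    if views == 0:
        return 0
    if views <= 5:
        return 1
    if views <= 11:
        return 2
    return 3
```

In a regression, category 0 serves as the base, with the remaining categories entering as dummy variables.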
These two explanatory variables (Seminar Attendance and Recording View Category) were central to the primary question in which we were interested in this study and are listed as the final variables (numbers 8 and 9) on the right-hand side of Equation (1). Our second question of interest, however, was whether measures to enhance equity for students with disabilities at our university were appropriate. We thus also included the variable DAP as an explanatory variable, which was a dummy indicating whether a student had a DAP in place (1) or not (0). Inclusion in this manner allows us to test if they are inappropriate (by identifying under- or over-adjustment) rather than if they are appropriate per se. Given the exploratory nature of our study, and the fact that we were not involved in the assessment of students in regards to their DAPs, we had no prior expectations of what we would find in regards to the sign and significance of this variable.
In line with our final area of curiosity, we also included the student’s result on the Formative Assessment (FA) item (described in the previous section) as an explanatory variable. This assessment item did not contribute to the calculation of the final mark for the course and was designed to give students both practice of the kind of task that they would face in the final exam and also feedback on how they were getting on in the course so that they could make any changes to their study patterns that might improve their performance. Our intuition was that the FA could affect the dependent variable in different ways. The first channel of influence is that it could simply be taken as an independent measure of student ability. In this case we would expect a positive effect running from the FA to Exam Score. The second channel of influence would operate via the feedback provided to students performing below their potential. Thus a student scoring a Pass on the Formative Assessment would receive feedback that might help them to improve, and this could lead to a better performance on the Exam Score. The problem, of course, is that student response to feedback is unobserved, and any effect of this kind is likely to be conflated with the first channel of influence which is dominated by overall ability.
Given the stress we placed in communications with students on the potential use of feedback for improvement, and the fact that we thought postgraduates were more likely to use feedback in this way, we modelled the FA variable with two separate designs. The first design simply included the overall mark as the FAS variable; this, we thought, was more likely to control for overall ability. The second design broke this single variable into three separate dummy variables defined in terms of grade levels relative to a base grade of Fail. We refer to this more granular approach as the Formative Assessment Grade Category (FAGC). The base grade of Fail corresponded to a mark of less than 40% on the FA; a grade of Pass corresponded to a mark in the 40–59% range, Merit to the 60–69% range, and Distinction to the 70–100% range. This second approach, in our judgement, allowed a more refined examination of the influence that receiving different grades on the Formative Assessment had on the Exam Score. Were the role of the FA variable in controlling for overall ability dominant, we would expect the overall variable in the first regression to be statistically significant with a positive sign (Dixson and Worrell 2016; Owen 2016). On the same assumptions, in the second regression we would expect larger coefficients for grade dummies corresponding to progressively higher grades. If, however, the lower grade dummies carried the larger coefficients, this might suggest an additional effect, such as students responding to the feedback received at these lower levels. We thus ran one regression for each of the FA variable designs.
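The grade banding just described can be sketched as a small helper (a hypothetical illustration; the function name is ours, while the cut-offs are the university grade boundaries stated above):

```python
def fagc_dummies(fa_mark: float) -> dict:
    """Map a Formative Assessment mark (0-100) to the three FAGC dummy
    variables, with Fail (a mark below 40) as the omitted base category."""
    return {
        "Pass": int(40 <= fa_mark < 60),           # 40-59%
        "Merit": int(60 <= fa_mark < 70),          # 60-69%
        "Distinction": int(70 <= fa_mark <= 100),  # 70-100%
    }
```

A Fail is then represented by all three dummies being zero, so each estimated coefficient is read relative to the Fail base.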
Other explanatory variables were largely included as controls. The Year variable is a dummy indicating whether a student belonged to the 2017/18 cohort (0) or the 2018/19 cohort (1), included to account for differences between the year groups, such as unintentional variation in the difficulty of the relevant exams. The Lecture Group variable was a dummy indicating which lecture the student attended and the degree programme in which the student was enrolled: it was coded 0 for the Wednesday class, the largest class (all on one particular MSc programme), and 1 for the Thursday class, which was for students enrolled in other programmes. The Prior Economics variable was a dummy coded 0 for students who self-identified as having no prior economics knowledge and 1 for those who regarded themselves as having some previous exposure to economics. The Gender variable was a dummy coded 0 for female students and 1 for males. The EML (English as Main Language) dummy indicates whether a student originated from a country in which English, the language of instruction at our university, is recognised as a national language (coded 1, otherwise 0). This variable was chosen rather than one based on home/international status because evidence suggests that language proficiency matters more than cultural distance, distance from home, and similar factors, and is thus likely to predict performance more effectively (Andrade 2006).
5 DATA
Data for the variables outlined in Equation (1) was obtained from module and student records (for which university approval was granted). Table 1 presents descriptive statistics for these variables. The mean exam score is 58%, indicating a high pass grade (i.e., below the merit [60%+] and distinction [70%+] boundaries, but above the pass mark of 40%),2 with a standard deviation of 12.59. This is consistent with a good number of students achieving merit and distinction grades, given the approximately normal distribution around the mean, and the mark profile would be considered a typical grade distribution at this university. There is a very small but statistically significant correlation with the Year variable, which is to be expected as the mean Exam Score for 2018/19 was marginally higher than for 2017/18. There are also positive correlations between the Formative Assessment Grade Category/Formative Assessment Score and Exam Score, suggesting that, in general, those who scored higher in the FA went on to score higher in the exam. This is to be expected, as a student performing well in the FA would have confirmation that a similar approach in the final assessment would be rewarded. However, the fact that neither correlation is close to perfect (0.21 for FAGC and 0.24 for FAS) suggests that there may have been some successful utilisation of student learning from the FA. In that regard, we note that 287 students improved their performance in the final exam relative to the FA, while 76 students actually performed worse. We might cautiously attribute the latter to students ignoring their FA feedback, or to doing well in the FA and then hubristically failing to prepare adequately for the exam. There is a positive correlation between Seminar Attendance and Exam Score, suggesting a link between attendance and student performance.
Finally, the mean of Recording View Category indicates a preponderance of categories 1 and 2 (1–5 and 6–11 views, respectively), given the raw mean of 8.04 views (SD = 10.32) and a range from 0 to 60 views.
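On the same pattern, the Recording View Category can be read as a binning of the raw view count. A minimal sketch (the function name is ours; we assume from the categories quoted above that category 0 covers no views and the top category the heaviest viewers):

```python
def recording_view_category(views: int) -> int:
    """Bin a raw number of lecture-recording views into the RVC:
    0 = no views, 1 = 1-5 views, 2 = 6-11 views, 3 = heavier use."""
    if views <= 0:
        return 0
    if views <= 5:
        return 1
    if views <= 11:
        return 2
    return 3
```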
Table 1. Descriptive statistics


Regressions of this kind carry an inherent risk of omitted variable bias: there may be unidentified factors that influence the dependent variable, or overall engagement with the module, and which in turn could affect both the Exam Score and attendance variables (Durden and Ellis 1995; Hosman et al. 2010). Numerous studies have investigated different aspects of this issue, and there remains no consensus on how it should be addressed. For example, some studies include grades from other modules or the student’s overall GPA (Stanca 2006). In our case, the module was taken by students on a range of different programmes with no other universally taken module, ruling out a consistent alternative module to utilise. Furthermore, the diverse nature of the degree programmes taking the module means that overall GPA was likely to be a biased measure of a student’s ability to perform well on this particular module (some students might struggle with economics but perform better elsewhere, while others might be good at economics but struggle elsewhere). We therefore adopted the perspective of those studies in which attendance is best viewed as an indication of engagement rather than as a result of it (e.g. Edwards and Clinton 2019). This seems appropriate given our postgraduate-taught context, where students are not only paying considerable fees to attend but have also purposely chosen the specific programme in light of previous university experience.
Furthermore, we included every variable that we believed might influence the dependent variable and for which data was available. Such an approach is suggested as reasonable in the literature (e.g. Breiman 1992; Clarke 2005, 2009; Mitra and Washington 2012). In addition, the variables employed in our study and the method used are similar to previous work such as Dancer et al. (2015), giving further credence to our approach.
One student with a DAP was excluded from the dataset as an outlier due to their exceptionally high exam performance (the highest over the two years) and the nature of their DAP, which did not include learning deficits or meaningful changes to exam conditions. We therefore believe the DAP did not materially affect their exam performance; their inclusion as an outlier, however, risked biasing the model. We also excluded students who did not participate in the FA (only 12.86% of the two-year cohort),3 as by definition they have no FAS or FAGC and hence would bias the results. An independent assessment of these students found a similar grading profile to the students included in our model, with mean = 54.61, SD = 13.11, and Exam Scores ranging from 14% to 76%. This is slightly below the corresponding figures for the students who did complete the FA. Based on the available data, we would cautiously attribute this to these students benefitting from the wider feedback delivered online and in seminars while lacking any personal/individual feedback. To confirm that omitting students who did not take the FA introduced no bias, we ran the same specification of our model without the FAGC or FAS variables and with a dummy variable distinguishing those who did and did not take the FA. This dummy was insignificant, suggesting that no bias was created by omitting those who did not take the FA.
In addition, we verified the robustness of our model by examining the regression residuals, which were randomly distributed with no major deviations from normality (Pallant 2011; Pevalin and Robson 2009; Tabachnick and Fidell 2013). We also explored multicollinearity and found all VIFs to be within an acceptable range (Hair et al. 2010; Pallant 2011), both of which suggest the model is robust. Further robustness checks using IV regression were not possible with the available data.
6 RESULTS AND DISCUSSION
Table 2 presents results from the three OLS regression models for Exam Score described in Section 4. The first model contains only the control variables (Model 1); the second adds the core independent variables Seminar Attendance and RVC along with the basic FAS variable (Model 2); and the third includes the same core variables but replaces FAS with the more detailed FAGC category variables (Model 3). All estimations were obtained using Stata v13.0. Results improve with the addition of the core predictor variables (i.e., Models 2 and 3), which are also characterised by respectable and consistent values for the adjusted R-squared.
Table 2. Results from OLS regressions


We first consider the effect of Seminar Attendance on Exam Score. In both Model 2 and Model 3, there is a positive association between Seminar Attendance and Exam Score that is statistically significant at the 1% level. This suggests that each seminar attended is associated with an approximately 2% higher exam score, and implies that the seminars provide a useful resource for students.4 For students who attended every seminar, this equates to an 8% higher result than would otherwise be the case, almost a full grade difference in performance (e.g., a movement from merit to distinction). This is in line with prior research, which found similar effects of attendance on performance, with improvements ranging from 3.5% to 9.4% (Durden and Ellis 1995; Lin and Chen 2006; Chen and Lin 2008). It is important to note that seminar attendance was recorded by the instructor rather than self-reported by students, the latter being open to social desirability bias (Karnad 2013; Edwards and Clinton 2019). It is equally important to note that attendance was measured only for the seminar classes. This may go some way to explaining why our results support this stream of research, since these classes were designed to consolidate knowledge and practise skills valuable for the end-of-module assessment.
Table 2 also demonstrates a weak link between the Recording View Category variables and Exam Score. Model 3 suggests that 6–11 views of the lecture recordings is positively associated with Exam Score at the 10% level, corresponding to a 3.21% higher Exam Score relative to those who did not access any lecture recordings. In Model 2, the coefficient is similar but just beyond the threshold for 10% significance (a p value of 0.118, i.e. significant at the 12% level). These results suggest that students find it beneficial to have reviewed most lectures once. However, those viewing recordings 1–5 or more than 12 times demonstrated no statistically significant change in their Exam Score in either Model 2 or Model 3. The raw data suggest that students with 6–11 views generally reviewed (almost) every lecture once, or perhaps reviewed certain concepts multiple times to ensure their understanding. This suggests the value both of the lectures themselves (since nearly all lectures were accessed) and of lecture capture. At the extremes of 1–5 or more than 12 views, students may have reviewed a missed lecture or an occasional point of detail, or looked at some, if not all, lectures multiple times. This may indicate a number of issues. Perhaps they missed the majority of lectures and therefore wanted to review them later. While we cannot test for this, it is important to note that there is no correlation between views and seminar attendance, so it is unlikely that students with more than 12 views were completely disengaged from the module. A high view count may also suggest there were lectures or concepts that students found required further study. It is important to note that the association here is not statistically significant (though the coefficient is positive), which suggests not only that extensive use of recordings may not benefit student performance but also that it does not seem to harm it. Ultimately, it may not be the best use of students’ time, but it does not imply that students were being allowed to disadvantage themselves, and it might simply reflect individual preferences for learning.
The postgraduate nature of the module may explain this overall nonlinear relationship (which was somewhat expected). The relevant MSc programmes are generally taken by students interested in the given area who have paid significant course fees, and the programmes are by their nature intensive (so students face a number of competing responsibilities).
These findings on the use of lecture capture are broadly in line with prior research, although much of the extant literature (such as Newton et al. 2014; Sloan and Lewis 2014; Terry et al. 2015) focuses on a single overall effect (often around the introduction of lecture capture to a module). Indeed, Jones and Olczak (2016), in a context very similar to ours, report an overall effect, which they note is lessened when students replace live lectures with recordings. Our approach of analysing the number of views has yielded a more nuanced understanding by looking at different levels of use. A further qualitative study could explore this in greater depth.
The third variable for consideration is DAP, which explores the effectiveness of the disability action plan adjustments. The results in Table 2 show that the presence of a DAP has no statistically significant association with Exam Score. Had this coefficient been significant and negative, it would suggest that DAPs are insufficient, in that students with an identified disability remain more likely to achieve poorer grades. Similarly, had it been significant and positive, it would suggest that DAPs go too far and offer an advantage to students with disabilities. Our results suggest a more favourable outcome: the DAP adjustments seem to provide students with disabilities with a level playing field, in which they are neither advantaged nor disadvantaged relative to other students. We also explored the lecture recording viewing patterns of students with and without a DAP using a t-test, and found no significant difference in the mean number of views (or minutes viewed) between the two groups. In addition, we examined the use of lecture recordings by students with a DAP using a multiplicative dummy (to allow for different viewing patterns by students with a DAP), but this dummy was also insignificant (and hence was not included in the finalised model, for simplicity). To explore this issue further, future research would need to examine how students with a DAP would score without the DAP accommodations, which would require an entirely independent study that would be challenging to design.
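The group comparison just described can be sketched as follows (a hypothetical illustration with made-up view counts, not the study’s data; Welch’s version of the t-test does not assume equal group variances, and a small t statistic indicates no significant difference in means):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for the
    difference in means of two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Made-up recording view counts for students with and without a DAP
dap_views = [4, 7, 9, 12, 3, 8]
non_dap_views = [5, 6, 10, 11, 2, 9, 7, 8]
t, df = welch_t(dap_views, non_dap_views)  # |t| well below 2: no evidence of a difference
```

The same comparison applies to the prior-economics groups discussed below; in practice a statistics package would also report the p value from the t distribution with df degrees of freedom.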
We next consider the effect of the Formative Assessment on Exam Score. The results of Model 2 indicate a positive and statistically significant effect of the FAS variable on Exam Score: for each additional 1% attained in the FA, a student on average attains an additional 0.22% in the final exam. This is in line with the correlations in Table 1 and suggests a less-than-perfect relationship between the two assessments. This may simply indicate that FAS operates as a rough proxy for ability, with more able students achieving well in the Formative Assessment and then going on to perform well in the final exam. But the low value of the correlation in Table 1 and the small coefficient in Model 2 suggest that something else might be going on.
Model 3 explores this possibility with the use of grade bands in the FAGC variable. The results indicate positive associations (at the 1% level) with Exam Score for the lower levels of Pass and Merit (relative to a Fail) but no significant association at the Distinction level (relative to a Fail). If there were no possibility of students learning from the FA, and indeed learning to different extents, then all categories should have been similarly significant, with coefficients reflecting the differences in grade boundaries (e.g., a Distinction is on average at least 11% higher than a Pass, since 59% is the highest pass mark and 70% the lowest distinction). A Pass on the FA is associated with a 4.08% increase in Exam Score relative to a Fail, and a Merit with an 8.34% increase. This is consistent with the proposition that the FA may have been particularly useful for students in the lower to middle part of the FA marks range. In contrast, the results for those with a Distinction in the FA suggest no particular advantage (or detriment) relative to those who failed. Students who failed the FA may already have been struggling with the subject matter and hence continued to do relatively poorly in the final exam, while those who achieved a Distinction might have been sufficiently satisfied with their performance to ignore the feedback provided. Students receiving a Pass or Merit, however, sat between these two situations: they demonstrated a reasonable understanding of the subject but with plenty to gain from improvement, paid attention to the feedback on their performance, and then performed better on the final exam. In this regard, we note that mean performance was 47% on the FA and 58% on the exam, which implies at least the possibility that students did benefit to some degree.
As such, our results are at least sufficiently consistent with this possibility to warrant exploring the effect of formative feedback on academic performance more thoroughly and systematically in further work.
The lack of a statistically significant association between Exam Score and Prior Economics is interesting and runs counter to previous research such as Durden and Ellis (1995) and Jones and Olczak (2016). It may be that the explicit business focus and non-mathematical approach of our course reduced the relevance of previous exposure to standard economics content, although this would be difficult to demonstrate. Another possibility is an over-confidence effect: students with previous economics knowledge may simply have under-prepared in the mistaken belief that they already had adequate knowledge of the material. We also explored the recorded lecture viewing patterns of students with and without prior economics experience using a t-test, and found no significant difference in the mean number of views between the two groups. In addition, we compared the use of lecture recordings by students with prior economics experience to those without by including a multiplicative dummy (allowing use by students with prior knowledge to differ from that of students without). This dummy was not significant.
Table 2 identifies a statistically significant association between the Year variable and Exam Score. This is not surprising, given that, as noted earlier, the mean exam score was slightly higher for the second cohort. This is most likely to be due to a slight (unintentional) difference in the perceived difficulty of the end of module exam paper set for each year group.
Finally, there was an absence of any association between Exam Score and Gender, Lecture Group, and English as a main language. This suggests consistency in course delivery, assessment, and marking between lecture groups and, to a lesser extent, between years. Most notably, it also provides evidence of an accessible module that was free from bias relating to either gender or English proficiency. In the case of the latter, this appears in contrast to much of the literature such as Andrade (2006). However, we posit that it speaks to the appropriateness of English language requirements and provisions (both preparatory and continuing) within our university for admission to the relevant taught postgraduate programmes.
7 CONCLUSIONS AND RECOMMENDATIONS
In this paper, we report the outcome of an exercise in which we tested some pedagogical intuitions that had become important for our practice and experience. In doing so, we found results that offer broader insights into a number of issues facing those teaching Business/Management students, and perhaps social sciences more broadly. As suggested by Stanca (2006), it appears safe to claim that as academics, our efforts in providing effective education environments do promote student learning. With the results presented here, there is evidence that both our lectures and seminars provide information which is engaged with and then absorbed by students. It is clear that seminar attendance is linked with enhanced student performance, as is the moderate use of lecture capture technology. This will be an important area of study given the shift toward online delivery due to COVID-19.
This leads us to a number of implications for Higher Education provision/policy and future research. We conclude that academics should promote both attendance and the use of lecture recordings, while also highlighting that students would be unwise to entirely substitute watching recordings for attending live lectures. This should not be difficult, as Edwards and Clinton (2019) suggest that students appear to appreciate lecture capture and would likely resist its removal. Therefore, and unlike Edwards and Clinton (2019), we find no need to devise means of degrading the usefulness of recordings to encourage live attendance, and instead recommend that future research focus on how best to use them.
Furthermore, we propose that, should the lecture capture technology support it, academics be given the appropriate time and training to explore more extensively how students use the recordings. For example, a high number of views for a particular lecture would suggest that its content or delivery may benefit from revision to make it more accessible. Indeed, more detailed viewing statistics would provide the timecodes of the parts of lectures that are extensively reviewed, thereby highlighting the specific content that students revisit. A sufficiently sophisticated system could even flag such use automatically, making this process easier and more efficient.
We found that the DAP system at our university appears to offer an appropriate level of support, ensuring that students are neither disadvantaged nor overly advantaged. Since this is the first study we know of to explore the link between such adjustments and performance, it would be beneficial to explore this further, both from the university perspective (i.e., beyond the confines of the single module studied here) and from the student perspective, perhaps through interviews with students who hold a DAP. Much of the extant literature suggests that students with disabilities are not satisfied with (other) university actions; while our results suggest the DAPs work as intended, we presently have no way of knowing whether students perceive them to be effective. Future studies may also wish to explore the effectiveness of the individual accommodations within DAPs, going beyond their mere presence as examined at this early stage.
Lastly, the use of an FA was found to have a direct association with student performance and seems to offer the most benefit to those in the middle of the grade scale. Modules with no interim assessment or similar feedback may therefore benefit from the introduction of an FA. However, prior experience of the subject was found to have no impact on performance, although a number of possible reasons for this have been explored. Given the limited data available to us, this is an area for future research that may help to explain the benefits of the FA.
In order to address an obvious limitation in our data, examining student attendance at both seminars and lectures would allow for easier comparison with prior research and a more nuanced understanding of the impact of attendance. Similarly, detailed attendance records would allow an assessment of performance on specific exam content, as in Marburger (2001), and might also allow further investigation of the effectiveness of lecture recordings as a substitute for physical attendance at individual sessions (which seems particularly relevant in the post-pandemic world). However, lecture attendance is more difficult to capture than the smaller, closed environments of seminars, especially in large modules such as ours. Future research may also want to consider student engagement as well as attendance. Indeed, the sole use herein of “available data” has limited the analyses and verifications we have been able to conduct. While a “fully planned” research project may yield more information, we remain firmly of the belief that academics have much to gain by examining and sharing the information available to them, as we have done here. In particular, the changes required due to COVID-19 can be similarly explored, and such work, alongside this contribution, can provide a baseline for pre-, peri-, and post-COVID-19 comparisons.
Our findings also suggest a number of wider areas for further pedagogic research. Much of the extant literature, this study included, focuses on a single module, but it would be beneficial to explore a broader range of modules, perhaps at several institutions (Terry et al. 2015). This would, of course, be exponentially more difficult, but would provide evidence with greater generalisability.
Finally, we reflect on the overall nature of the exercise in which we have engaged. We made the point in the introduction that few academics have the time to continually engage with the pedagogic literature or development, and that as a result, we tend to rely on educational instinct and intuition. We would like to suggest that there is value in subjecting these instincts and intuitions to scrutiny at least periodically even if they have been developed as the result of considerable experience. Confirmation of our intuitions examined in this paper, and the wider exploration they inspired, has strengthened our confidence to act in the future, even if that examination has identified the need for further research and reflection.
1. This reminder is designed to encourage lecture attendance over sole reliance on recordings, but is essentially an idle threat in that it has never been actioned.
2. These are the standard grade boundaries for our university, over which we have no control. Assessments are written with these grade boundaries in mind so, for instance, only those deserving of a pass are able to score over 40%.
3. Informal feedback suggests this is mainly due to time constraints, rather than general motivation, which is not usually considered an issue with fee-paying postgraduate students.
4. We additionally explored the same model and variables with a categorical dependent variable of exam grade (Fail, Pass, Merit, Distinction) using ordered-probit techniques. In this case, we found that for each seminar attended, students had a 3% greater chance of achieving a Merit and a 4% greater chance of achieving a Distinction-level grade. However, since ordered-probit results are harder to interpret and require the loss of data variation due to conflating performance to only four possibilities, we adopted the OLS approach for our main results. It is nevertheless important to note that the results of these ordered-probit estimations are consistent with the main findings presented herein. Full details of our ordered-probit findings are available on request.
REFERENCES
Aldamen H., Al-Esmail R. & Hollindale J. , '‘Does lecture capturing impact student performance and attendance in an introductory accounting course?’ ' (2015 ) 24 (4 ) Accounting Education : 291 -317.
Andrade M.S. , '‘International students in English-speaking universities: Adjustment factors’ ' (2006 ) 5 (2 ) Journal of Research in International Education : 131 -154.
Bolton P. , Higher education student numbers , (House of Commons Library , London 2020 ) House of Commons Library Briefing Paper No. 7857.
Breiman L. , '‘The little bootstrap and other methods for dimensionality selection in regression: X-fixed prediction error’ ' (1992 ) 87 (419 ) Journal of the American Statistical Association : 738 -754.
Chen J. & Lin T.F. , '‘Class attendance and exam performance: A randomized experiment’ ' (2008 ) 39 (3 ) Journal of Economic Education : 213 -227.
Clarke K.A. , '‘The phantom menace: Omitted variable bias in econometric research’ ' (2005 ) 22 (4 ) Conflict Management and Peace Science : 341 -352.
Clarke K.A. , '‘Return of the phantom menace’ ' (2009 ) 26 (1 ) Conflict Management and Peace Science : 46 -66.
Cohn E. & Johnson E. , '‘Class attendance and performance in principles of economics’ ' (2006 ) 14 (2 ) Education Economics : 211 -233.
Dancer D., Morrison K. & Tarr G. , '‘Measuring the effects of peer learning on students’ academic achievement in first-year business statistics’ ' (2015 ) 40 (10 ) Studies in Higher Education : 1808 -1828.
Dey E.L., Burn H.E. & Gerdes D. , '‘Bringing the classroom to the web: Effects of using new technologies to capture and deliver lectures’ ' (2009 ) 50 (4 ) Research in Higher Education : 377 -393.
Dixson D.D. & Worrell F.C. , '‘Formative and summative assessment in the classroom’ ' (2016 ) 55 (2 ) Theory Into Practice : 153 -159.
Durden G.C. & Ellis L.V. , '‘The effects of attendance on student learning in principles of economics’ ' (1995 ) 85 (2 ) American Economic Review : 343 -346.
Educause Learning Initiative (2008), Seven things you should know about lecture capture, Friday December 19, Available at https://library.educause.edu/resources/2008/12/7-things-you-should-know-about-lecture-capture.
Edwards M.R. & Clinton M.E., ‘A study exploring the impact of lecture capture availability and lecture capture usage on student attendance and attainment’ (2019) 77 Higher Education 403–421.
Fakeye D.O. & Ogunsiji Y., ‘English language proficiency as a predictor of academic achievement among EFL students in Nigeria’ (2009) 37(3) European Journal of Scientific Research 490–495.
Fuller M., Bradley A. & Healey M., ‘Incorporating disabled students within an inclusive higher education environment’ (2004) 19(5) Disability and Society 455–468.
Hadgu R.M., Huynh S. & Gopalan C., ‘The use of lecture capture and student performance in physiology’ (2016) 5(1) Journal of Curriculum and Teaching 11–18.
Hair J.F., Black W.C. & Babin B.J., Multivariate Data Analysis: A Global Perspective (Pearson Education Limited, London 2010).
Holloway S., ‘The experience of higher education from the perspective of disabled students’ (2001) 16(4) Disability and Society 597–615.
Hosman C.A., Hansen B.B. & Holland P.W., ‘The sensitivity of linear regression coefficients’ confidence limits to the omission of a confounder’ (2010) 4(2) The Annals of Applied Statistics 849–870.
Jones C. & Olczak M., ‘The impact of lecture capture on student performance’ (2016) 13(1) Australasian Journal of Economics Education 13–29.
Kandler C. & Thorley M., ‘Panopto: The potential benefits for disabled students’ (2016) 8(12) Journal of Learning and Teaching 1–5.
Karnad A., Student Use of Recorded Lectures: A Report Reviewing Recent Research into the Use of Lecture Capture Technology in Higher Education, and Its Impact on Teaching Methods and Attendance (London School of Economics and Political Science, London 2013).
Lin T.F. & Chen J., ‘Cumulative class attendance and exam performance’ (2006) 13(14) Applied Economics Letters 937–942.
Lindsay R., Breen R. & Jenkins A., ‘Academic research and teaching quality: The views of undergraduate and postgraduate students’ (2002) 27(3) Studies in Higher Education 309–327.
Marburger D.R., ‘Absenteeism and undergraduate exam performance’ (2001) 32(2) Journal of Economic Education 99–109.
Martirosyan N.M., Hwang E. & Wanjohi R., ‘Impact of English proficiency on academic performance of international students’ (2015) 5(1) Journal of International Students 60–71.
Mercer-Mapstone L. & Bovill C., ‘Equity and diversity in institutional approaches to student–staff partnership schemes in higher education’ (2020) 45(12) Studies in Higher Education 2541–2557.
Mitra S. & Washington S., ‘On the significance of omitted variables in intersection crash modeling’ (2012) 49 Accident Analysis and Prevention 439–448.
Moore R., Jensen M., Hatch J., Duranczyk I., Staats S. & Koch L., ‘Showing up: The importance of class attendance for academic success in introductory science courses’ (2003) 65 The American Biology Teacher 325–329.
Newman-Ford L., Fitzgibbon K., Lloyd S. & Thomas S., ‘A large-scale investigation into the relationship between attendance and attainment: A study using an innovative, electronic attendance monitoring system’ (2008) 33(6) Studies in Higher Education 699–717.
Newton G., Tucker T., Dawson J. & Currie E., ‘Use of lecture capture in higher education - lessons from the trenches’ (2014) 58(2) TechTrends 32–45.
O’Callaghan F.V., Neumann D.L., Jones L. & Creed P.A., ‘The use of lecture recordings in higher education: A review of institutional, student, and lecturer issues’ (2017) 22(1) Education and Information Technologies 399–415.
Owen L., ‘The impact of feedback as formative assessment on student performance’ (2016) 28(2) International Journal of Teaching and Learning in Higher Education 168–175.
Pallant J., SPSS Survival Manual: A Step by Step Guide to Data Analysis Using the SPSS Program (Allen & Unwin, Sydney 2011).
Pevalin D. & Robson K., The Stata Survival Manual (McGraw-Hill Education, New York 2009).
Pierce R. & Carosella W., ‘Exam performance and recorded lecture viewing: A closer look’, in Proceedings of EdMedia 2016 – World Conference on Educational Media and Technology (Association for the Advancement of Computing in Education, Vancouver, BC, Canada 2016) 35–41. Available at https://www.learntechlib.org/primary/p/172929/.
Sabbir Rahman M., Khan A.H., Mahabub Alam M.M., Mustamil N. & Chong C.W., ‘A comparative study of knowledge sharing pattern among the undergraduate and postgraduate students of private universities in Bangladesh’ (2014) 63(8/9) Library Review 653–669.
Shukr I., Zainab R. & Rana M.H., ‘Learning styles of postgraduate and undergraduate medical students’ (2013) 23(1) Journal of College of Physicians and Surgeons Pakistan 25–30.
Sloan T.W. & Lewis D.A., ‘Lecture capture technology and student performance in an operations management course’ (2014) 12(4) Decision Sciences Journal of Innovative Education 339–355.
Stanca L., ‘The effects of attendance on academic performance: Panel data evidence for introductory microeconomics’ (2006) 37(3) Journal of Economic Education 251–266.
Tabachnick B.G. & Fidell L.S., Using Multivariate Statistics (Pearson Education, Boston, MA 2013).
Terry N., Macy A., Clark R. & Sanders G., ‘The impact of lecture capture on student performance in business courses’ (2015) 12(1) Journal of College Teaching and Learning 65–74.
Toppin I.N., ‘Video lecture capture (VLC) system: A comparison of student versus faculty perceptions’ (2011) 16(4) Education and Information Technologies 383–393.
Universities UK, Patterns and Trends in UK Higher Education 2018 (Universities UK, London 2018).
Vinke A.A. & Jochems W.M.G., ‘English proficiency and academic success in international postgraduate education’ (1993) 26(3) Higher Education 275–285.
Wilson L. & Martin N., ‘Disabled student support for England in 2017: How did we get here and where are we going? A brief history, commentary on current context and reflections on possible future directions’ (2017) 9(1) Journal of Inclusive Practice in Further and Higher Education 5–10.
Witton G., ‘The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement’ (2017) 48(4) British Journal of Educational Technology 1010–1019.