
Appendix 1: Research methods—steps, techniques, and tools

Edward S. Dove

In this appendix, I describe the research methods undertaken for my empirical work, which together define an anthropology of regulation, including the justification for undertaking a ‘research trinity’ of document analysis, semi-structured interviews, and naturalistic observation. Specifically, I link the anthropology of regulation methodology with its methods by discussing procedural aspects such as recruitment strategy, interview topic design, data analysis, ethical considerations, and potential limitations of my methods.

Data Sources, Types, and Forms

As regards the first arm of the research trinity, I undertook a literature review that centred on qualitative document analysis of legal rules and academic and grey literature from different disciplinary fields—primarily law, anthropology, sociology, and biomedical science—as well as ‘human subjects’ research regulations. These texts were examined both for substance and context through thematic analysis. This document analysis was coupled with obtaining primary data in word and visual form (through interviews and observations) from individuals and groups in natural and semi-natural settings, as I explain below.

I observed REC meetings to gather data on actual behaviours and practices and develop a detailed description of how RECs operate and make decisions. By observing RECs, I aimed to witness what members of these committees do in their natural settings.1 This meant that I observed not only REC members, but also a fluctuating array of other actors that form part of the ethics review system, for example, REC Managers, REC Assistants, investigators, patient advocates, and others. Some of these other actors varied from one meeting to another for different reasons. Individual REC members could be absent for a meeting due to illness or scheduling conflict, investigators and patient advocates would appear only for their specific application, REC Assistants and REC Managers occasionally would be replaced by a substitute, and observers generally would attend only one meeting. On one occasion, for example, the REC Chair was ill and a Chair from another REC in another city came in to replace him, creating an interesting dynamic with the other REC members. Observations took place at the site where full committee REC meetings occur; usually these are in hotel conference rooms, NHS Health Board buildings, or NHS hospital conference rooms. I collected, with permission, some social artefacts of RECs, such as the agendas of each meeting and, occasionally, a REC member’s review of an application as written in the HRA Ethical Review Form.

I use the term ‘naturalistic observation’ in contradistinction to ‘participant observation’, as the latter implies that the observer becomes part of the group being observed in order to gain deeper insight. Because I was an ‘observer’ of RECs, required to remain silent during the meetings, the term ‘participant’ seems inappropriate, even though I attended multiple REC meetings over one year. Moreover, naturalistic observation describes the technique of observing people in their natural environment, usually episodically rather than continuously (e.g. REC members at their monthly full committee meetings) and without any manipulation by the observer, which more accurately describes the empirical research I conducted.

Selection of Data Sources: REC Observations

The sample size for interviews and observations was largely dictated by resource and time constraints. I determined that it would be sufficient to select four RECs across both England and Scotland for observation over the period of approximately one year, though as I explain below, this eventually increased to five RECs. I identified RECs on both sides of the border. This was not out of an explicit desire for a comparative approach, but rather, to collect data in different settings. Nonetheless, throughout my research, I intended to account for any perceivable cultural and regulatory differences between these two nations.

One REC was identified through a serendipitous encounter with an academic colleague who was a member of a REC in England. When we chatted at a biobank conference in London a year earlier, she suggested that I write to her REC Chair to see if it would be possible to observe the committee. Accepting her advice and invitation, I did so, and the Chair invited me to observe the REC over the course of the year. The other three RECs I purposively selected through browsing the HRA’s online REC directory:2 I selected one REC in England and two RECs in Scotland. These RECs were deliberately chosen for their geographic differences and for their different ‘committee flags’, the term used by the HRA to denote specific areas of health research that RECs are authorized to review (e.g. gene therapy clinical trials, Phase 1 studies involving healthy volunteers). The fifth REC was also added serendipitously: an interviewee suggested I speak with the Chair of this REC; I did so, and he invited me to observe his REC. I was also invited by two interviewees to observe two of the HRA’s five offices in England: the Skipton House office in London and another in the North of England. A third interviewee (a REC Manager) invited me to the NHS Scotland Health Board office where her REC meets, to get a sense of how her job and the HARP system work.3 Table A1.1 lists the five RECs observed.

Table A1.1   Attributes of RECs observed

REC pseudonym    Location   Committee type
REC 1            England    RECs recognized to review CTIMPs in patients—type iii
REC 2            England    RECs recognized to review CTIMPs in patients—type iii
REC 3            England    RECs recognized to review CTIMPs in healthy volunteers—type i; RECs recognized to review CTIMPs in patients—type iii
REC 4            Scotland   Authorized REC
Scotland A REC   Scotland   RECs recognized to review CTIMPs in healthy volunteers—type i; RECs recognized to review CTIMPs in patients—type iii

I agreed with the REC Chairs not to identify the observed RECs in any publications. However, I obtained explicit consent to identify one of the RECs, the Scotland A REC, which meets monthly in Edinburgh.4 This was done because of the unique nature of this REC; indeed, it is the only REC in Scotland authorized to review ‘Phase 1 studies in healthy volunteers’ and ‘research involving adults lacking capacity’, in HRA parlance. Even a brief description of the REC and its dynamics would likely enable someone to identify it. The Scotland A REC was specifically constituted by statutory regulation5 in 2002, following the enactment of the AWI Act. Uniquely, members of the Scotland A REC are appointed not by a Health Board, but by the Scottish Ministers.

Selection of Data Sources: Interviews

As to the third arm of my ‘research trinity’, I planned to approach targeted RECs and regulatory bodies to interview individuals situated within RECs (as members, Chairs, and Managers) and regulatory bodies (e.g. HRA), or straddling both (Scientific Officers). These were conducted as one-on-one, in-depth, semi-structured interviews. The interviews were conducted in a semi-natural setting, specifically in person at the individual’s office or over Skype, to discuss the activities in which these individuals were engaged in their natural settings: REC(s) or regulatory authorities that oversee RECs.

My strategy for the (managing) regulator-associated interviewees was to accumulate names through snowball sampling. After initially identifying a couple of individuals based on recommendations to me from a Scientific Officer and the HRA’s Head of RES (England), I asked interviewees who else they thought would be valuable to speak with, whether they be regulators or REC members. This strategy worked well in accumulating a list of names, including the Chair of the fifth REC I came to observe. My strategy for the REC-associated interviewees was to approach the Chairs of the two initially identified RECs in England to see if they would be willing to be interviewed. Both obliged. I also asked each Chair if they would be comfortable asking their members and the REC Managers to share their email addresses with me, so that I could then contact those members who responded affirmatively. Again, the REC Chairs obliged with this request, the first one very early on in 2016. A somewhat different strategy was employed in Scotland, where the responsible Scientific Officer requested that I work through them and the REC Managers rather than directly contacting the REC Chairs. This difference signified to me quite early on the crucial gatekeeping role of the Scientific Officer in the Scottish RECs.

Remaining mindful of resource and time constraints, I intended to interview no more than 25 individuals, constituting a mix of REC members and regulators involved in health research ethics and RECs particularly. Ultimately, emails were sent to 30 individuals, some of whom were REC members who had contacted me first after their REC Chair shared my email address and interview request with them. In the end, 28 individuals were interviewed over the course of 2016; two others expressed initial interest but did not respond to follow-up emails. Of these 28 interview participants, 7 were affiliated with the HRA (1 was a member of the HRA’s NREAP), and the rest were REC members or Scientific Officers.6 This number exceeds what some scholars have deemed necessary to achieve both ‘code saturation’ (i.e. adequate identification of the range of thematic issues) and ‘meaning saturation’ (i.e. adequate textured understanding of the issues).7 Eleven of the participants were located in Scotland; the remainder were located in England. The average interview time was 65 minutes (ranging from 27 minutes to 99 minutes). I sought and obtained written consent (via email) and verbal consent (prior to the interview commencing) from each interview participant. Table A1.2 lists attributes of each of the interviews. As the chapters of this book indicate, I refer to each interview participant as P1, P2, and so on.

Interview Guides

As these interviews were semi-structured, two interview guides were designed, one for REC members (including Chairs and Managers) and another for the regulators at the HRA and the Scientific Officers. The interview guides were formulated based on findings from the document analysis conducted in 2015 and were influenced by an anthropology of regulation methodology: many of the questions were crafted to draw out the experiences of REC members and regulators, and to understand the ways in which they themselves affect and are affected by processes of regulation. Though the structure of questioning was consistent (beginning with biographical background and ending with questions about overall satisfaction with the ethics review system), many of the specific questions were modified as the study progressed to iteratively explore themes that appeared to emerge in prior interviews. Likewise, though the interviews were semi-structured, they were also open-ended, leaving participants free to form and express multiple associations with the concepts of ‘protection’ and ‘promotion’ and with how these twin regulatory demands were seen to be operationalized, if at all, in the everyday practice of RECs.

Table A1.2   Attributes of interviews and interview participants

P#    Location   Role                 Location of interview
P1    England    Regulator (HRA)      In person
P2    England    Regulator (HRA)      In person
P3    England    REC (Chair)          Skype
P4    England    Regulator (NREAP)    In person
P5    England    REC (member)         Skype
P6    England    REC (member)         Skype
P7    England    REC (Vice Chair)     Skype
P8    England    REC (member)         Skype
P9    England    REC (Vice Chair)     Skype
P10   England    REC (Chair)          Skype
P11   England    REC (Chair)          Skype
P12   Scotland   REC (member)         In person
P13   England    Regulator (HRA)      Skype
P14   England    REC (member)         Skype
P15   England    REC (Manager)        Skype
P16   Scotland   Scientific Officer   In person
P17   England    Regulator (HRA)      Skype
P18   Scotland   REC (member)         Skype
P19   Scotland   REC (member)         Skype
P20   Scotland   REC (member)         Skype
P21   Scotland   REC (member)         Skype
P22   Scotland   REC (member)         Skype
P23   Scotland   Scientific Officer   Skype
P24   Scotland   Scientific Officer   Skype
P25   Scotland   REC (Manager)        Skype
P26   England    Regulator (HRA)      Skype
P27   Scotland   Scientific Officer   Skype
P28   England    Regulator (HRA)      Skype

Regulatory Approvals

Following identification of the RECs I wished to observe and drawing up an initial list of interview participants, I made inquiries with the Research Governance and Quality Assurance Office at the University of Edinburgh concerning the regulatory approvals needed for the empirical research. The Office suggested that I contact one of the Scientific Officers responsible for the RECs in the South East Scotland area (which covers Edinburgh), who could advise on the regulatory approvals needed. The Scientific Officer replied stating that Edinburgh Law School’s Research Ethics and Integrity Committee (REIC) would be appropriate and sufficient for ethics approval, and that NHS research ethics approval was unnecessary for my project. The Scientific Officer also informed me that I would need to obtain ‘management’ approval from the HRA, relevant Health Boards in Scotland, and the CSO since Scotland A REC members are appointed directly by the Scottish Ministers and the CSO runs the RES in Scotland. This necessitated completing the electronic IRAS Application Form (Parts A–D),8 along with other documents, for review and approval by both the HRA and the Health Boards and CSO in Scotland.

Following the Scientific Officer’s confirmation, and with the assistance of the Scientific Officer, I was put in contact with the HRA’s Head of RES (England) to begin the process of obtaining HRA management approval to observe the RECs in England and interview individuals. She informed me that she could arrange my observation of REC meetings and interviewing of REC members in England if I let her know which RECs I was interested in; she also suggested that I approach REC members via the REC Chair or Manager, which she also could arrange, and that ultimately it would be up to the individual REC members to decide whether to participate.

I then submitted a ‘Level 2’ ethics application form (and related documents, such as consent forms and interview topic guides) to Edinburgh Law School’s REIC; approval was received in November 2015. I then drafted the IRAS application in consultation with the point person in the Research Governance and Quality Assurance Office at the University of Edinburgh, who commented on draft versions of the 29-page IRAS NHS R&D application form, and informed me of the relevant materials I would need to include with my submission, including a ‘study protocol’, interview topic guides, and consent forms. The Research Governance and Quality Assurance Office then signed off on my IRAS form,9 which enabled me to submit it for review by the HRA and Health Boards. That same day, I received approval from the HRA’s Head of RES (England) and the following day, received R&D acknowledgement from NHS Lothian Health Board in Scotland. Shortly thereafter, I received confirmation from the CSO that they had no objection to my approaching NHS RECs in Scotland for the purpose of my project, enabling me to commence the empirical research.

Data Collection and Timing

Data from the interviews were collected at one-off points in 2016, while data from the REC meeting observations were collected at multiple points in 2016 and early 2017.

RECs hold full committee meetings monthly, up to 11 times per year. Knowing that two of the identified RECs had overlapping meeting dates and that I had competing academic commitments in my diary, I aimed to observe at least four meetings of each REC over 2016, though this came to depend not only on my own schedule but also, in the case of Scotland, on unforeseen situations such as one of the RECs cancelling a meeting when no new applications were received. It also depended on ongoing approval from the REC Chairs, sought via the Scientific Officer and REC Managers, which seemed to turn on whether other observers were already scheduled to attend a meeting (a recurring issue for the Scotland A REC), in which case I was unable to attend. The concern was that REC Chairs did not want too many observers attending a meeting, which might distract the REC members and/or the investigators attending in person. In total, I attended 24 REC meetings. The REC observation schedule is set out in Table A1.3.

Table A1.3   Number of REC observations in 2016/17

REC              Times observed
REC 1            5
REC 2            6
REC 3            5
REC 4            5
Scotland A REC   3

Before each REC meeting commenced, I would greet the REC Chair and Manager, the latter of whom would sometimes hand me a standard HRA confidentiality agreement form (tailored only to state which REC it applied to), which I was asked to sign and date. (Other times, the REC Manager would email the form for me to sign and return by email in advance of the meeting.) The confidentiality agreement required me, as an ‘observer’ (a term discussed in the GAfREC and REC SOPs), to agree to treat in complete confidence all information disclosed to me, whether in the meeting documentation or in matters discussed at the meeting. In addition, some of the Chairs would verbally inform each investigator who attended the meeting that I was an observer conducting research on RECs, and give the investigator an opportunity to object to my presence (if there had been an objection, I would have been asked to leave the meeting room for the REC’s face-to-face discussion with the investigator). No investigator ever objected to my presence; indeed, the most common reaction was one of casual indifference, focused as they were on soon being interrogated by the REC members. This action by the Chairs is recommended (phrased as a ‘should’) in the SOPs,10 but it was not always followed. Some Chairs never informed investigators of my presence as an observer; others would inform the first few who appeared at the meeting but then apparently forget my presence as the hours of the meeting progressed.

To ensure that the data were accurate and comprehensive, I audio-recorded the interviews with the permission of each participant. To record behaviours, actions, and settings of the REC meetings, I wrote fieldnotes on a laptop computer. This was not an extraordinary sight; reflecting the increasingly digital nature of ethics review, at least one member (and often several) operated from a laptop at each of the REC meetings.

Data Analysis

Digital files of the audio-recorded interviews were immediately uploaded securely and transcribed in intelligent verbatim style by a specialist digital audio transcription company based in Scotland. Via written agreement, the company agreed to treat all transcribed interviews in confidence. Once the professional transcribers completed each transcript, I compared it with the audio recording to ensure accuracy. The transcripts and fieldnotes were then anonymized by removing all identifying information that enabled indirect or inferential identification. The audio files of the interviews were then deleted from both my computer and the company’s server within three months of the recording. Once both the interview transcripts and the majority of the fieldnotes were completed, I printed hard copies of both and put them into binders. Coding was done manually and in multiple stages, with a Microsoft Office spreadsheet (Excel) and Microsoft Word used as electronic aids (e.g. keeping track of codes and developing a systematic and iterative codebook), as I felt I could obtain a deeper connection with the data and see patterns more clearly than I could with qualitative research software, which, though a powerful tool for assisting data analysis, is more prone to overwhelm than to enlighten me. Several scholars have noted that simple word processing and spreadsheet applications can be used effectively with qualitative data.11 During the coding process, I took notes in a memo-style format, writing down words and thoughts that I considered could be of use during the data analysis and serve as a reference for potential coding ideas.
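
For readers curious how such spreadsheet-based bookkeeping might be automated, the sketch below shows one minimal way to tally code frequencies across a set of coded transcript files and write the result to a CSV that opens directly in Excel. It is purely illustrative and not the procedure I followed (my coding was manual, as described above); the directory name, file layout, and code labels are hypothetical.

    # Illustrative sketch only (not the manual coding procedure described above).
    # Assumed, hypothetical layout: one plain-text file of coded segments per
    # transcript, each line formatted as "<code label> :: <excerpt>".
    import csv
    from collections import Counter
    from pathlib import Path

    transcript_dir = Path("coded_transcripts")  # hypothetical directory name

    tally = Counter()
    for path in sorted(transcript_dir.glob("*.txt")):
        with path.open(encoding="utf-8") as f:
            for line in f:
                if "::" in line:
                    code_label = line.split("::", 1)[0].strip()
                    tally[code_label] += 1

    # Write the tally to a CSV that can be opened in Excel or any spreadsheet.
    with open("codebook_tally.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["code", "frequency"])
        for code_label, count in tally.most_common():
            writer.writerow([code_label, count])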

The analysis was inductive (i.e. data-driven) in that I coded the data without attempting to fit them into a pre-existing coding frame or analytic pre-conceptions. This is not to say that I coded the data absent any theoretical and epistemological commitments, as anthropology of regulation, elaborated in Chapter 4, is underpinned by theoretical concepts drawn from regulatory theory and anthropology. However, I made a conscious effort to strongly link the identified themes discussed to the data themselves, rather than casually map or force the data onto any of my theoretical underpinnings or analytic interests in the area.

The data from both the interviews (transcripts) and observations (fieldnotes) were coded using qualitative thematic analysis. Thematic analysis is a popular qualitative analytic method for ‘identifying, analysing and reporting patterns (themes) within data. It minimally organises and describes [the] data set in (rich) detail. However, frequently it goes further than this, and interprets various aspects of the research topic.’12 Several scholars describe thematic analysis as a process for encoding qualitative data, rather than a theoretically informed model for research and analysis.13 Indeed, thematic analysis is an analytic tool for making sense of the data, whereas anthropology of regulation is underpinned by sensitizing concepts that are brought to bear in the encoding process. The encoding requires explicit ‘codes’, which are ‘a form of shorthand that researchers repeatedly use to identify conceptual reoccurrences and similarities in the patterns in the data’,14 and which are usually situated in a ‘codebook’, which is the compilation of the codes in a study. A theme is ‘a pattern found in the information that at the minimum describes and organizes the possible observations or at the maximum interprets aspects of the phenomenon’;15 it ‘captures something important about the data in relation to the research question and represents some level of patterned response or meaning within the data set’.16 To facilitate coding and the generation (and interpretation) of themes in the data, the empirical investigation was theoretically informed by two key strands of literature that form the theoretical backbone of anthropology of regulation: regulatory theory and liminality.

The process was such that I generated initial codes by comparing each of the transcripts and fieldnotes. I started ‘open coding’ by reading each transcript and the fieldnotes (collated into five bundles for each observed REC) word by word and line by line. After completion of the open coding, I constructed initial codes that emerged from the text and then coded the remaining transcripts and fieldnotes with those codes. When I encountered data that did not fit into an existing code, I added new codes (the total number of codes exceeded 250). I then grouped the similar codes and placed them into categories. These categories were reorganized into broader, higher order categories, then grouped, revised, and refined, and finally checked to determine whether the categories were mutually exclusive. At this point I formed final categories, identifying subthemes both within and across the categories, which were then organized into main themes.17 This process of coding using qualitative thematic analysis enabled me to fulfil the goal of anthropology of regulation: to explain and understand the processual nature of regulation and the experiences of regulatory actors who both regulate and are regulated (i.e. how they understand their own actions), thereby providing larger theoretical insight into regulatory processes within a given space and within a given society.
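
The nested structure that results from this process (codes grouped into categories, categories organized into themes) can be pictured schematically. The sketch below represents a hypothetical codebook fragment as a nested mapping and checks that no code sits in more than one category, mirroring the mutual-exclusivity check described above. The themes, categories, and codes shown are invented placeholders, not entries from my actual codebook.

    # Illustrative sketch only: a hypothetical codebook fragment represented as
    # theme -> category -> list of codes, with a check that categories are
    # mutually exclusive (no code assigned to more than one category).
    from collections import defaultdict

    codebook = {
        "protection and promotion": {
            "participant protection": ["consent scrutiny", "risk discussion"],
            "research promotion": ["timeliness", "facilitative feedback"],
        },
        "regulatory stewardship": {
            "guiding applicants": ["chair advice", "provisional opinion"],
            "working with regulators": ["HRA guidance", "scientific officer liaison"],
        },
    }

    # Record where each code appears in the hierarchy.
    code_locations = defaultdict(list)
    for theme, categories in codebook.items():
        for category, codes in categories.items():
            for code in codes:
                code_locations[code].append((theme, category))

    # Any code listed under more than one category breaks mutual exclusivity.
    overlaps = {c: locs for c, locs in code_locations.items() if len(locs) > 1}
    if overlaps:
        print("Codes assigned to more than one category:", overlaps)
    else:
        print(f"Categories are mutually exclusive across {len(code_locations)} codes.")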

Research Method Limitations, Challenges, and Successes

There are some limitations to my research design. Regarding the research strategy, analysis derived from an inductive approach is limited in time and space—in my case, broad generalizations from the research regarding RECs, the nature of ethics review, or health research regulation are not possible. I can only present themes and concepts that emerged from the data as situated in the locations under study and in the time period in which the data were collected. As Sally Falk Moore says, ‘life in society should always be conceived in a time-conscious frame, as in process, as in motion, and as a conglomeration of diverse activities noted at a particular time’.18 For anthropology of regulation to have methodological integrity and resonance with liminality, attention to time-conscious frames and processes is a sign of strength rather than weakness. Semi-structured interviews are also necessarily limited to capturing a moment in time. This does not mean, however, that the themes and normative findings discussed in this book cannot be abstracted beyond the RECs observed and individuals interviewed, nor that the findings cannot be situated in their larger political, social, and regulatory contexts (which themselves contain past and present stories).

There are limitations to the naturalistic observations. First, I observed only a snippet of what happens in ethics review processes. The full REC meetings that occur monthly are but one of the many activities that RECs perform; for example, I have noted in this book that there is sub-committee work (e.g. Proportionate Review, substantial amendments) conducted ‘by correspondence’ (i.e. email), and there are multiple documents that circulate among the REC members that I never had access to, the most important of which were the research applications and attendant documents themselves. This limited my ability to understand the intricate details of the discussions during REC meetings; I could only surmise what REC members were talking about for a given research application as I never could see the documents themselves. Second, the observations do not constitute a representative sample of RECs across the UK and may not be reliable as variables cannot be controlled, which also means cause and effect relationships cannot be established (e.g. that the Care Act 2014 and the HRA’s regulations cause RECs to instantiate research promotion in their practices). I did not perceive any pronounced observer effects, however. This was likely due to the fact that observers are a regular presence at REC meetings and I sat quietly either at a corner of the conference table, or in a chair in the corner of the room, taking notes by hand or on my tablet computer. Occasionally, REC members would make a joking remark to the effect of ‘Are you recording that, Edward?’, but my impression was that my presence did not impact the style and substance of meeting dynamics.

Reliability is a further concern with thematic analysis (particularly for those working within a positivist tradition) because of the wide variety of subjective interpretations that can arise from the themes, as well as the challenge of applying themes to large amounts of data (in my case, a daunting corpus of approximately 1,000 pages of transcripts and fieldnotes). To increase reliability as much as possible, I monitored themes and code tables throughout the data analysis process through memos and detailed progress tracking. Regarding limitations to the sampling strategy, it may both under- and over-represent particular groups (e.g. RECs and individuals) within the sample. For instance, many of the REC member interviewees were members of the same REC in England; also, I interviewed only two REC Managers and three REC Chairs, which consequently may not provide a comprehensive portrait of these roles. Since the sample of interviewees and RECs was not chosen at random, there is an inherent selection bias such that the samples are unlikely to be representative of the target population of RECs, REC members, and regulators. Again, this can undermine my ability to make generalizations from my sample to RECs and health research regulation at large. Nonetheless, purposive and snowball sampling afforded me relatively easy access in a short amount of time and yielded significant data that, in my firm belief, addressed the research questions.

One of the challenges anticipated was access to meetings. RECs are notoriously difficult to access for those wishing to make them the object of investigation.19 Similarly, regulators can be difficult to access and may not speak forthrightly about their views. Yet, few access difficulties were encountered. Though I was expecting the HRA, the CSO, or a specific REC Chair to decline my requests, none did, and on the contrary, all were accommodating. I was particularly surprised at how accommodating the HRA was in both allowing me to speak with employees within the Authority, and also expressing interest in my research project. This is not to say that no challenges were encountered during the course of the empirical studies. Gaining ongoing access to RECs and REC members in Scotland, particularly the Scotland A REC, proved more challenging than I had expected. This was due to the Scientific Officer and REC Chairs acting as first-order gatekeepers, something I had not appreciated until I had largely completed the data collection. It was not unusual for the Scientific Officer or REC Manager to inform me that I could not attend a REC meeting, even if previously agreed, because other observers (including from the Scottish Government) had requested to attend the meeting and they took priority. Though this was a frustrating experience in terms of slightly delaying the period of data collection, overall, it did not impact my research findings. I was able to attend each REC several times and gain access to the individual members with whom I wanted to speak.

1   Yvonna Lincoln and Egon Guba, Naturalistic Inquiry (SAGE 1985).

2   Health Research Authority, ‘Search RECs’ http://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/search-research-ethics-committees/ accessed 23 October 2019.

3   HRA Assessment Review Portal (HARP) http://www.harp.org.uk/Account/Login accessed 23 October 2019.

4   Consent was obtained in the Scotland A REC meeting held on 19 January 2017.

5   Adults with Incapacity (Ethics Committee) (Scotland) Regulations 2002, as amended 2007.

6   Several of these participants emphasized to me that they were speaking in their individual capacity and not on behalf of their organization.

7   Monique Hennink and others, ‘Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?’ (2017) 27 Qualitative Health Research 591.

8   Integrated Research Application System (IRAS) http://www.myresearchproject.org.uk/ accessed 23 October 2019.

9   IRAS ID 194243; Study title: ‘The Changing Health Research Regulatory Environment and NHS RECs’. The University of Edinburgh was my project sponsor.

10 REC SOPs para 2.72.

11 Daniel Meyer and Leanne Avery, ‘Excel as a Qualitative Data Analysis Tool’ (2009) 21 Field Methods 91; Johnny Saldaña, The Coding Manual for Qualitative Researchers (3rd edn, SAGE 2016).

12 Virginia Braun and Victoria Clarke, ‘Using Thematic Analysis in Psychology’ (2006) 3 Qualitative Research in Psychology 79.

13 Richard Boyatzis, Transforming Qualitative Information: Thematic Analysis and Code Development (SAGE 1998); Greg Guest and others, Applied Thematic Analysis (SAGE 2012).

14 Melanie Birks and Jane Mills, Grounded Theory: A Practical Guide (2nd edn, SAGE 2015) 89.

15 Boyatzis (n 13) 161.

16 Braun and Clarke (n 12) 82.

17 This inductive approach is adopted from Ji Young Cho and Eun-Hee Lee, ‘Reducing Confusion About Grounded Theory and Qualitative Content Analysis: Similarities and Differences’ (2014) 19 The Qualitative Report 1.

18 Sally Falk Moore, ‘An Unusual Career: Considering Political/Legal Orders and Unofficial Parallel Realities’ (2015) 11 Annual Review of Law and Social Science 1, 2.

19 Such access challenges for empirical investigations of ethics committees are noted, for example, by Will van den Hoonaard, The Seduction of Ethics: Transforming the Social Sciences (University of Toronto Press 2011) 10, 39 and Robert Klitzman, The Ethics Police? The Struggle to Make Human Research Safe (OUP 2015) 360–61.