UBC Theses and Dissertations


Identifying indicators of respect in maternity care in high resource countries: a Delphi study. Clark, Esther (2019)



Full Text

IDENTIFYING INDICATORS OF RESPECT IN MATERNITY CARE IN HIGH RESOURCE COUNTRIES: A DELPHI STUDY

by

Esther Clark

BScN Bilingual, University of Alberta, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN NURSING in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2019

© Esther Clark, 2019

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, a thesis entitled:

IDENTIFYING INDICATORS OF RESPECT IN MATERNITY CARE IN HIGH RESOURCE COUNTRIES: A DELPHI STUDY

submitted by Esther Clark in partial fulfillment of the requirements for the degree of Master of Science in Nursing

Examining Committee:
Wendy Hall, Co-supervisor
Saraswathi Vedam, Co-supervisor
Emily Jenkins, Supervisory Committee Member

Abstract

Mistreatment and disrespectful care of women in childbirth has been studied and documented internationally. Measurement tools that capture (dis)respectful behaviour could support quality care guidelines and policies intended to advance respectful maternity care (RMC); however, measurement tools relevant to high-resource contexts are lacking. This Delphi study aimed to identify indicators of RMC, including indicators capturing disrespect, from the point of view of a panel of international researchers, practitioners, and service users, while attending to the appropriateness of the indicators for use among populations that experience stigma. These indicators are intended to create a registry of items for use in research on RMC in high-resource contexts. Two rounds of online instrument review and analysis were completed with a Delphi panel. Consensus was assessed according to agreement on the importance and relevance of the indicators in the first round, and agreement on the priority of the indicators for inclusion (by rank) in the second round.
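The two consensus statistics used in this study, an item-level content validity index in Round One and Kendall's coefficient of concordance (W) in Round Two, can be illustrated with a short sketch. This is not the study's actual analysis code, and the panel ratings and rankings below are hypothetical:

```python
# Illustrative sketch of the two consensus statistics. Data are hypothetical.

def content_validity_index(ratings, relevant=(3, 4)):
    """Item-level CVI: proportion of panelists rating an item as relevant
    (e.g. 3 or 4 on a 4-point relevance scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m panelists each ranking
    the same n items (assumes no tied ranks)."""
    m = len(rankings)        # number of panelists
    n = len(rankings[0])     # number of items ranked
    # Sum of the ranks each item received across panelists.
    rank_sums = [sum(panelist[i] for panelist in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    # S: squared deviations of rank sums from their mean.
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    # W = 12S / (m^2 (n^3 - n)); 0 = no agreement, 1 = perfect agreement.
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Five hypothetical panelists rate one indicator on a 4-point scale.
print(content_validity_index([4, 4, 3, 4, 2]))  # 0.8

# Four hypothetical panelists rank five indicators within one domain.
rankings = [
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
    [2, 1, 4, 3, 5],
]
print(kendalls_w(rankings))  # 0.7875
```

On this reading, an indicator passes the Round One threshold when its CVI exceeds 0.80, and a domain's W near 0 (as in the 0.081 value reported below) signals weak agreement on rankings, while values approaching 1 signal strong concordance.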
An initial set of indicators (n = 201) was drawn from the literature to populate the Round One online instrument. In Round Two, the panelists reviewed a revised list of 156 indicators grouped into 17 domains. In both rounds, consensus was supported using qualitative feedback gathered on individual indicators and groups of indicators. Findings showed that the panelists generally supported the indicators but demonstrated weak to moderate agreement with each other. In Round One, 191 out of 201 indicators exceeded a content validity index of 0.80. In Round Two, Kendall's W ranged from 0.081 (p = 0.209) to 0.425 (p < 0.001) across the domains. After the two rounds, 14 indicators could be said to be strongly supported by the panel. These indicators represented the domains of verbal mistreatment, stigma and discrimination, physical exams and procedures, family and cultural support, autonomy (in decision making), and health system conditions and constraints (physical). Stability of panel agreement between rounds was difficult to assess. The strongly supported indicators identify care behaviours that are important for RMC and suggest areas of improvement for practice and education of healthcare providers.

Lay Summary

Disrespectful care of women during childbirth is an important global issue. This study involved a panel of experienced researchers, care providers, and Canadian maternity care recipients evaluating and identifying relevant survey questions. The goal was to create a registry of items that could be used in research to measure experiences of respectful and disrespectful care among maternity care users in a high-resource country. This review occurred via two online instruments that were distributed to the panel.
The panel agreed on the importance of most of the questions presented to evaluate respectful care and identified 14 questions that they strongly supported about verbal behaviours, discrimination, decision making, family support, and conditions of health care facilities. Gathering information about the issues that these questions capture could lead to policies and health care practices that create more respectful care environments for women during pregnancy, labour, birth, and post-partum.

Preface

This work was completed in collaboration with the UBC Birth Place Lab as part of the project titled "Giving Voice to Mothers: Measuring access to high quality, respectful maternity care in Canada". Ethics approval for this project was provided by the UBC Ethics Board and is contained within the ethics approval for the above project: certificate ID H12-02418. I, Esther Clark, was responsible for quantitative and qualitative analysis of the data arising from the two Delphi rounds. Item selection and construction of the online instruments was completed in collaboration between myself, my supervisors Dr. Wendy Hall and Saraswathi Vedam, and members of the Birth Place Lab. Recruitment for the Delphi panel was facilitated by the Birth Place Lab. Thesis writing was completed with editorial and supervisory feedback and input from my committee: Dr. Wendy Hall, Saraswathi Vedam, and Dr. Emily Jenkins.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
  1.1 Background
  1.2 Problem Statement
  1.3 Significance
  1.4 Purpose
  1.5 Research Question
  1.6 Summary
Chapter 2: Literature Review
  2.1 Introduction
  2.2 Methods
  2.3 Synthesis
    2.3.1 Theoretical approaches
    2.3.2 Item generation
    2.3.3 Survey and tool format
    2.3.4 Validity and reliability
    2.3.5 Domains measured
    2.3.6 Sample characteristics
  2.4 Summary
Chapter 3: Methods
  3.1 Introduction
  3.2 Design
    3.2.1 Sampling
  3.3 Procedure
    3.3.1 Round One online instrument creation
    3.3.2 Round One quantitative analysis
    3.3.3 Round One qualitative analysis
    3.3.4 Round Two online instrument creation
    3.3.5 Round Two quantitative analysis
    3.3.6 Round Two qualitative analysis
  3.4 Rigour
  3.5 Ethical Considerations
  3.6 Summary
Chapter 4: Findings
  4.1 Introduction
  4.2 Round One Findings
    4.2.1 Panel characteristics and response rate
    4.2.2 Quantitative findings
    4.2.3 Qualitative findings
    4.2.4 Results of combined findings
  4.3 Round Two Findings
    4.3.1 Panel characteristics and response rate
    4.3.2 Quantitative results
    4.3.3 Qualitative findings
      4.3.3.1 Domain and indicator feedback
      4.3.3.2 Generality or specificity of indicators
      4.3.3.3 Option to identify healthcare providers
      4.3.3.4 Missing indicators or domains
  4.4 Summative findings
  4.5 Conclusion
Chapter 5: Discussion
  5.1 Introduction
  5.2 Study Summary
  5.3 Discussion
    5.3.1 Agreement or consensus of panel
    5.3.2 Strongly supported indicators
    5.3.3 Termination criteria for the Delphi method
    5.3.4 Conceptual challenges
    5.3.5 Situating the findings
    5.3.6 Research method
      5.3.6.1 Strengths
      5.3.6.2 Limitations
  5.4 Implications
    5.4.1 Research
    5.4.2 Practice
    5.4.3 Education
  5.5 Summary
References
Appendices
  Appendix A Complete Search Histories
  Appendix B List of Round One indicators
  Appendix C Abbreviated Round One online instrument
  Appendix D List of Round Two indicators
  Appendix E Abbreviated Round Two online instrument
  Appendix F Delphi Round One report

List of Tables

Table 3-1: Round Two online instrument domains
Table 4-1: Round One expert panel regional distribution and backgrounds
Table 4-2: Indicators organized by level of panel support
Table 4-3: Round Two expert panel regional distribution and backgrounds
Table 4-4: Agreement of panelists on rankings of indicators by domain
Table 4-5: Panel support for indicators by domain
Table 4-6: Characteristics of strongly supported indicators

List of Figures

Figure 2-1. Selection process of studies included in literature review
Figure 4-1. Process of indicator selection for Round Two online instrument

Acknowledgements

I wish to extend immense gratitude to my co-supervisors and committee member who have supported this work: Dr.
Wendy Hall, who supported and advocated for me throughout this project, and offered extensive, thoughtful feedback that led my learning process through this thesis; Saraswathi Vedam, who invited me into the Birth Place Lab and the Speaking of Respect study, and encouraged my involvement in this incredible work to hold up the voices of women; and Dr. Emily Jenkins, who asked important questions that strengthened my writing and supported the development of my research question.

Additionally, I would like to thank members of the Birth Place Lab who supported the process of this work and gave important feedback throughout: specifically Kathrin Stoll, who acted as an invaluable resource throughout the whole process, and Winnie Lo, who provided much needed administrative support.

Many thanks to my husband Robert, who has loved and supported me emotionally through this process, and to my children for their patience and love.

Dedication

To my dear family who have given me such grace as I completed this work. To my committee, Dr. Wendy Hall, Saraswathi Vedam, and Dr. Emily Jenkins, for sharing their immense wisdom. To the women that I have been honoured to witness in their vulnerability and power as they birth their children.

Chapter 1: Introduction

1.1 Background

The ethical treatment of women during care in pregnancy, labour, and childbirth has become an issue of global importance. Phrases such as "disrespect and abuse", "mistreatment", and even "obstetric violence" have been used to communicate unethical treatment (Bohren et al., 2015; Freedman & Kruk, 2014; Sadler et al., 2016; WHO, 2015). With increasing documentation of mistreatment of women during perinatal care, several international bodies have released both a charter of rights of childbearing women (The White Ribbon Alliance for Safe Motherhood, 2011) and quality care guidelines for maternity care (WHO, 2016) that detail recommendations for respectful care.
Specifically, the World Health Organization has declared that "Every woman has the right to the highest attainable standard of health, which includes the right to dignified, respectful health care" (WHO, 2015, p. 1). The term "Respectful Maternity Care" (RMC) refers to a rights-based and equitable approach to care (Shakibazadeh et al., 2017), as well as to characteristics of caregiver behaviours or beliefs about maternity care (Downe, Lawrie, Finlayson, & Oladapo, 2018). The term attends to mistreatment while being sensitive to the risk of antagonizing health care providers (Sen, Reddy, & Iyer, 2018). While the absence of mistreatment is not the sole marker of respectful care (Shakibazadeh et al., 2017), drawing attention to experiences of disrespectful care makes it possible to see where fundamental human rights are being ignored (Khosla et al., 2016) and health outcomes affected (Shakibazadeh et al., 2017; Vedam, Stoll, Mcrae, et al., 2019; Vedam, Stoll, Rubashkin, et al., 2017). A result of increased awareness of mistreatment has been a desire to deepen understanding of how women experience, or do not experience, respect in maternity care. Documenting and ameliorating disrespectful care are globally important objectives for women's health research.

There are several models and typologies that describe disrespect in the context of maternity care. A landscape analysis reported by Bowser and Hill (2010) used both published academic and grey literature, as well as interviews with a small sample of experts, to generate seven categories of disrespect and abuse. These categories were: physical abuse, non-consented care, non-confidential care, non-dignified care, discrimination based on specific patient attributes, abandonment of care, and detention in facilities (Bowser & Hill, 2010).
Also explored in this analysis were factors influencing disrespect and abuse, and several interventions intended to promote respectful care. The interventions discussed in the analysis involved quality improvement designs, strategies to decrease stigma towards specific populations, and the development of measurement tools for the assessment of respect in maternity care.

Although the above landscape analysis has provided an important framework for understanding disrespect and abuse in maternity care, it was criticized for not providing operational definitions for each of the categories; lack of operational definitions created ambiguity about the study and measurement of the concepts (Bohren et al., 2015). The categories created from the landscape analysis also did not specify any structural or institutional practices that are experienced as disrespectful (Freedman et al., 2014). The inclusion of measurement tools in the presentation of interventions is also problematic as it assumes that intervention will take place as a result of measurement. Outcomes from measurement may influence policy or education but do not directly address the underlying structures that lead to disrespectful care (Sen et al., 2018).

Using a structured systematic review method, Bohren and colleagues (2015) synthesized findings from 65 quantitative and qualitative studies to develop a multi-level typology of mistreatment. The authors argued for the use of the term "mistreatment" as they felt this concept captured both intentional and non-intentional actions that people experienced as disrespectful. The themes identified through this review were: physical abuse, verbal abuse, stigma and discrimination, failure to meet professional standards of care, poor rapport between women and providers, and health system conditions and constraints.
Each of these first order themes incorporated second and third order themes intended to provide a more detailed picture of specific behaviours or examples of types of mistreatment; the themes were extracted from the studies in the review. For example, the theme of stigma and discrimination incorporates second order themes of stigma related to sociodemographic characteristics and medical conditions. These second order themes are further broken down into stigma related to ethnicity/race, age, and socioeconomic status, and stigma related to HIV status, respectively. Bohren and colleagues' (2015) systematic review has generated an evidence-based framework of mistreatment that is grounded primarily in qualitative research. The resulting framework is comprehensive and broad in nature, but does not clearly distinguish between intentional forms of abuse and structural forms of abuse (Jewkes & Penn-Kekana, 2015).

Aiming to create a working definition of disrespect and abuse, Freedman and colleagues developed a definition drawing on both the importance of experiential data and a critical institutional approach (Freedman et al., 2014). The authors developed a model to illustrate different levels of the experience of disrespect and abuse, as well as different contributors to this experience. They were careful to demonstrate both the overt and hidden nature of this concept by differentiating between what is intended and not intended as abuse, as well as what actions would be generally understood as abuse and what abusive behaviours may be accepted as normative. From this model, disrespect and abuse are defined as "interactions or facility conditions that local consensus deems to be humiliating or undignified, and those interactions or conditions that are experienced as or intended to be humiliating or undignified" (Freedman et al., 2014, p. 916).
Their definition has been commended as attending to variations between contexts (Jewkes & Penn-Kekana, 2015), and drawing more attention to the structural forces involved in the phenomenon.

The typologies and definitions detailed above have provided an important starting point for the measurement of respect in maternity care (Savage & Castro, 2017; Vedam, Stoll, Rubashkin, et al., 2017). An important goal in this field of inquiry is to better estimate the prevalence of this phenomenon; however, this task has been difficult because definitions or typologies have not been taken up in a homogenous way that would allow for meta-analysis (Bohren et al., 2015). Measurement of mistreatment is critical to the development of RMC policies at the political and institutional level (Jewkes & Penn-Kekana, 2015). The development of measurement tools has been affected by both a lack of discussion around best practice for measurement and an unclear definition of RMC (Savage & Castro, 2017). Concerns have been raised about the need to understand the ethical considerations inherent to this field of research, as a subset of violence research, as well as acceptable procedures for measurement (Jewkes & Penn-Kekana, 2015). Geographic differences also have implications for measuring mistreatment in contexts that are neither reflected in, nor share characteristics associated with, the majority of this research (Savage & Castro, 2017). While there has been some work done to measure RMC in a high resource country context (e.g., Taavoni, Goldani, Rostami Gooran, Haghani, & Gooran, 2018; Vedam, Stoll, Taiwo, et al., 2019), low resource contexts are more strongly represented in this literature than high income contexts (Vedam, Stoll, Rubashkin, et al., 2017).
RMC in the Canadian context has specifically been examined in the Changing Childbirth in British Columbia (BC) study (Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017; Vedam, Stoll, Mcrae, et al., 2019), but these studies have not addressed the breadth of domains articulated by the above models and typologies. Furthermore, the considerable diversity of the Canadian population, including unique groups that experience various forms of stigma, calls for a more comprehensive analysis.

1.2 Problem Statement

Indicators that comprehensively measure the phenomenon of respect in maternity care in the context of high resource countries are necessary for our understanding of the experience of disrespect in these locations. Outcomes from a measurement tool using indicators designed to capture the unique presentation of RMC in the high resource context are important to inform policies and interventions to improve RMC in these countries. Recognizing diversity in the population of birthing women is particularly important because of the possibility of intersections between mistreatment during maternity care and other forms of discrimination and stereotyping (Shaw et al., 2016). In fact, an example of this intersectionality has been presented in the Canadian maternity care context as an opportunity to reconsider how structural inequities impact care for women that are traditionally stigmatized due to multiple characteristics (Hankivsky et al., 2014). Additionally, more empirical data are necessary to represent the experiences of childbearing women living in high income countries (Savage & Castro, 2017; Vedam, Stoll, Rubashkin, et al., 2017) so that empirically-based interventions can be developed.

1.3 Significance

Respectful care is an important aspect of maternity care. Women have described themes related to care provider behaviour as leading to the experience of birth as traumatic (Beck, 2004).
Furthermore, negative childbirth experience has been linked with the development of post-partum post-traumatic stress disorder (PPTSD) (Verreault et al., 2012). Research on the effects of symptoms of PPTSD has reported that these symptoms impact parent-baby attachment (Parfitt & Ayers, 2009). Others have noted that respectful care underpins care quality (McConville, 2014). Mistreatment is also considered a violation of human rights in childbirth (Khosla et al., 2016; The White Ribbon Alliance for Safe Motherhood, 2011). It has been argued that respectful care, in addition to being an important contributor to safety in patient care, is a human right in terms of autonomy and dignity and should therefore be independently considered as a health outcome (Vedam, Stoll, Mcrae, et al., 2019; Vedam, Stoll, Rubashkin, et al., 2017). Thus, demonstrating respectful care is an important part of safe and ethical practice in maternity care.

1.4 Purpose

The purpose of this research is to develop consensus around key indicators of respectful care in the context of high resource countries, using Canada as an example of this type of context, by drawing on an international panel of experts and Canadian service users. These indicators will be used to create a registry of items that can be used for research in these contexts. This registry is an important resource for collecting data about the care experiences of women and their babies in pregnancy, labour, birth, and post-partum.

1.5 Research Question

The question guiding this study was: What are indicators that can be used to measure RMC in high resource countries based on consensus from a panel of international experts and Canadian service users?

1.6 Summary

RMC is an important component of providing high quality, safe, and ethical care to women in high resource countries. Relevant measurement tools that capture exposure to disrespectful care are lacking.
Furthermore, existing measures do not comprehensively assess the concerns or experiences of diverse groups, particularly those who experience stigma. Facilitating consensus on key indicators of RMC, which can be used to develop a registry of items for use in research in high resource contexts as represented by the Canadian context, is an important task to address these limitations. Measuring disrespectful maternity care is important to provide empirical data to inform the development of meaningful policy and strategies to ensure ethical practice and access to respectful care in the childbearing year. The next chapter describes a literature review of current measurement strategies used for measuring RMC.

Chapter 2: Literature Review

2.1 Introduction

In this chapter, I present a synthesis of the current body of literature on the measurement of RMC in middle- and high-resource countries. A description of the methods used to search for appropriate literature opens this section.

2.2 Methods

My literature search focused on identifying studies describing measurement of RMC in middle- and high-income countries. Studies describing tool development or scale validation, as well as studies describing surveys of respectful maternity care, were retained in the final review. The databases CINAHL and Medline (Ovid) were searched in October and November of 2018 using the following search terms combined by Boolean operators: Childbirth, Birth, Labour, Pregnancy, Women, Respect, Disrespect, Discrimination, Mistreatment, Rights, Questionnaire, Survey, Scale, Instrument, Development, Validation (see Appendix A for complete search histories). Additionally, subject headings and MeSH headings in these respective databases were used as part of the search strategy. Searches were limited to English language and published, peer-reviewed journal articles. Studies were excluded if they did not address maternity care, did not address respect, or were situated in low-income countries.
Studies were included if they described the development of tools or surveys focused on the exploration of patient experiences and/or if any items in the tools related to the experience of respect, mistreatment, interactions with care providers, or decision making. Fifteen studies were retained for analysis and included in the following review (see Figure 2-1).

Figure 2-1. Selection process of studies included in literature review (1714 articles identified across databases; 1698 retained after removing duplicates; 295 reviewed at abstract level; 20 reviewed in full text; 15 included in the review)

2.3 Synthesis

The following synthesis discusses the selected tools in terms of the theoretical approaches used, item generation, survey and tool format, validity and reliability analysis, domains measured, and the characteristics of the samples within which these tools were used. The tools represented in articles retained in this review were published between 2006 and 2018 in a variety of high- and middle-income countries including Canada, the United States, Iran, the Netherlands, Hungary, the United Kingdom, Australia, and Turkey. The authors represented a variety of disciplines including midwifery, nursing, health systems and policy, obstetrics and gynecology, global health, and population health.

2.3.1 Theoretical approaches

Several theoretical approaches to measuring the concept of respect in maternity care are represented in the retained studies. The most common approach noted in the literature was patient satisfaction (Gungor & Beji, 2012; Heaman et al., 2014a; Hollins Martin & Martin, 2014; Janssen, Dennis, & Reime, 2006; Stevens, Wallston, & Hamilton, 2012). These studies included a variety of items relating to satisfaction with the care experience.
Items commonly described aspects of interpersonal care and respect for privacy and decision making. Another theoretical approach used was woman-centred care (Attanasio & Kozhimannil, 2015; Rubashkin et al., 2017; Vedam, Stoll, Martin, et al., 2017); these papers included a stronger focus on interpersonal respect and support for decision making.

Only two tools were identified that explicitly measured respect in maternity care: the Quality of Respectful Maternity Care Questionnaire in Iran (QRMCQI) (Taavoni et al., 2018) and the Mothers on Respect Index (MORi) (Vedam, Stoll, Rubashkin, et al., 2017). The tools drew from the descriptions of disrespect in maternity care articulated by Freedman et al. (2014) and Bowser and Hill's (2010) review. Other theoretical lenses used to underpin the tools reviewed included system responsiveness, which refers to the quality of non-clinical aspects of care (Scheerhagen, Van Stel, Birnie, Franx, & Bonsel, 2015; van der Kooy et al., 2014), communication (Heatley, Watson, Gallois, & Miller, 2015), discrimination (Yelland, Sutherland, & Brown, 2012), and care quality (Garrard & Narayan, 2013).

The approaches of existing tools encompass various strengths and weaknesses. Measuring patient experience using satisfaction surveys has received several criticisms. For example, the concept of patient satisfaction, or satisfaction with care, has lacked a common definition across tools and questionnaires (Hekkert, Cihangir, Kleefstra, van den Berg, & Kool, 2009; Manary, Boulding, Staelin, & Glickman, 2013). Furthermore, satisfaction research has provided conflicting evidence about whether patient satisfaction is related to positive health outcomes or to other aspects of care such as interpersonal experiences (Manary et al., 2013). Other authors have identified satisfaction as difficult to measure (Escuriet et al., 2015) and vulnerable to a positive bias, indicated by minimal variation in scores (Janssen et al., 2006).
A Dutch study that sought to explore variance in patient satisfaction scores found that the variance was accounted for by patient characteristics such as age, education, and health status rather than by hospital or department factors (Hekkert et al., 2009). In light of these criticisms, suggestions have been made to design satisfaction surveys to measure specific care practices or to generate evaluative feedback on care models (Manary et al., 2013).

The approach of woman-centred care comes closer to measuring the concept of respect. Several key characteristics of woman-centred care are collaboration, sharing responsibility, respectful actions, shared decision making, and open communication (Maputle & Donavon, 2013). These characteristics represent the positive counterparts of some of the categories described in typologies of mistreatment or disrespect and abuse, such as poor rapport between women and providers (Bohren et al., 2015) and non-consented care and non-dignified care (Bowser & Hill, 2010).

Finally, as discussed above, two tools were underpinned by typologies and models of disrespect in maternity care. These typologies, and the language they use to describe disrespect, have developed over time to highlight different approaches to the concept, such as the role of intentionality and whether underlying drivers for disrespect are considered (Sadler et al., 2016). The variations in language use have been criticized as detracting from conceptual clarity (Sen et al., 2018).

2.3.2 Item generation

Many authors have described literature review processes that led to item creation for respectful maternity care tools.
Primarily, these literature reviews were directed towards previous surveys or tools that aligned with the theoretical approach of the tool that the authors were creating (Rubashkin et al., 2017; Scheerhagen et al., 2015; Stevens et al., 2012; Taavoni et al., 2018; van der Kooy et al., 2014; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017; Yelland et al., 2012). For example, Taavoni et al. (2018) developed items for the QRMCQI primarily from a focused literature review of WHO documents describing respectful maternity care. Scheerhagen et al. (2015) described a four-step process of reviewing responsiveness items; they drew on previously developed interview questions, reviewed items from previously developed surveys, and used client experience items from the Dutch Consumer Quality Index.

Other strategies for item creation have been used. Janssen and colleagues (2006) used a combination of focus group data and previously used satisfaction surveys to develop items for the Care in Obstetrics: Measure for Testing Satisfaction (COMFORTS) scale. The items for the Quality of Prenatal Care Questionnaire (QPCQ) were generated using both interview data and existing quality guidelines (Heaman et al., 2014a). A SERVQUAL questionnaire developed in the UK to assess how patients' experiences fit with their expectations was created directly by the authors, who used observational data from the care service they wished to evaluate together with other similar SERVQUAL measures (Garrard & Narayan, 2013). Several articles did not describe the processes used to develop the surveys presented (Attanasio & Kozhimannil, 2015; Heatley et al., 2015; Hollins Martin & Martin, 2014).

2.3.3 Survey and tool format

Lengths of the various tools ranged from 5 to 111 items. In some cases, especially in the context of surveys aimed at measuring satisfaction, only a portion of the items applied specifically to the experience of disrespect or mistreatment during care.
For example, the Scale for Measuring Maternal Satisfaction (SMMS), developed by Gungor and Beji (2012), included several factors that were intended to measure satisfaction in a comprehensive way. Not all of these factors (for example, "hospital room" and "meeting baby") could be said to be applicable to a measure of mistreatment (Gungor & Beji, 2012). In other cases, all the items in the scale were applicable to the measurement of respect: the scales developed with the goal of measuring respect or mistreatment (Rubashkin et al., 2017; Taavoni et al., 2018; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017); the scales measuring system responsiveness (Scheerhagen et al., 2015; van der Kooy et al., 2014); and the scales measuring discrimination (Attanasio & Kozhimannil, 2015; Yelland et al., 2012).

Most survey items were developed with Likert-style responses ranging from 3- to 7-point scales. Several tools used dichotomous yes/no responses (Attanasio & Kozhimannil, 2015; Heatley et al., 2015; Vedam, Stoll, Rubashkin, et al., 2017), and one survey also included two open-ended response items (Rubashkin et al., 2017); a thematic analysis of the participants' responses was presented in the report.

Tools to measure patient experience and RMC have been administered in a variety of ways. Several tools were administered as online surveys (Attanasio & Kozhimannil, 2015; Rubashkin et al., 2017; Scheerhagen et al., 2015; Stevens et al., 2012; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017); others were paper-and-pencil measures presented as part of survey packages, which were either given to individuals during late pregnancy or early post-partum or distributed by mail (Garrard & Narayan, 2013; Gungor & Beji, 2012; Heaman et al., 2014; Heatley et al., 2015; Hollins Martin & Martin, 2014; Janssen et al., 2006; Yelland et al., 2012).
Two of the tools were administered via interview (Taavoni et al., 2018; van der Kooy et al., 2014), and one of the tools used interviews during a scale validation phase with a small sample (Scheerhagen et al., 2015).

Publications show some diversity in the timing of administration of tools measuring respectful maternity care. Timing of measurement was considered by Heaman and colleagues (2014) in the administration of the QPCQ but was not found to significantly impact the scores generated by this scale. Other authors have raised concerns that measuring birth experiences directly following a birth may create a "halo effect" that causes inaccuracies in reporting (Fereday, Collins, Turnbull, Pincombe, & Oster, 2009). While it is convenient to recruit a sample in the early post-partum period, when most individuals are easily accessed through care providers, it is also important to consider whether this timing or context for measurement (associated with health care providers) biases the data generated.

2.3.4 Validity and reliability

Several strategies have been used to attend to rigour in tool development. About half of the reports described using a panel of experts or a steering committee to aid in item choice and development. A number of these panels included consumer partners (van der Kooy et al., 2014; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017). Content validity, defined as the relevance of the items in a tool to the underlying construct, was assessed for several tools through discussion with an expert panel (Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017) or through measurement using a content validation index (Gungor & Beji, 2012; Rubashkin et al., 2017; Taavoni et al., 2018).
In assessing validity and reliability for other tools, content validation as well as testing for clarity was completed using a small sample of the target population (Heaman et al., 2014; Scheerhagen et al., 2015; van der Kooy et al., 2014). Several authors used pilot testing of tools with a small sample of a target population as a strategy to test for item importance (Garrard & Narayan, 2013) and item clarity (Gungor & Beji, 2012) and as an additional content validation step (Janssen et al., 2006).

Validity testing was described by most authors. Face validity was specifically attended to in the development of three of the tools: the QPCQ (Heaman et al., 2014), the QRMCQI (Taavoni et al., 2018), and Yelland et al.'s (2012) discrimination items. Face validity was addressed through pilot testing with a sample of women (Heaman et al., 2014; Yelland et al., 2012) and through consultation with an expert panel (Taavoni et al., 2018). Most authors described analysis to assess construct validity. Factor analysis may be used as a strategy to analyze whether tools correspond to theoretical understandings of a construct (Polit, 2010). Most tools that were identified by the authors as scales were subjected to factor analysis to demonstrate construct validity. Tools that were identified as surveys or survey instruments did not undergo this type of testing (Garrard & Narayan, 2013; Heatley et al., 2015; Rubashkin et al., 2017; Yelland et al., 2012). Some authors presented analyses that supported discriminant validity, that is, ensuring outcomes from the tool do not correlate with unassociated variables (Streiner & Norman, 2008), through subgroup analysis (van der Kooy et al., 2014) and known-groups comparisons (Scheerhagen et al., 2015). Some authors also presented analyses that supported convergent validity, that is, testing the correlation of the tool with known measures of the construct or a similar construct (Streiner & Norman, 2008).
The strategies used were to test tools against other scales measuring similar constructs (Gungor & Beji, 2012; Scheerhagen et al., 2015) and to test the correlations of the subscale scores (Heaman et al., 2014).

In determining the validity of tools, the inclusion of the perspectives of service users, specifically those who have experienced maternity care, has been an important consideration. Two of the tools clearly involved service users in their creation (Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017); the authors attributed a component of the validity of the tools to their participatory processes. Authors have argued that the inclusion of women consumers' voices is an ethical way to engage in the measurement of RMC because of the concept's similarity to violence against women (Jewkes & Penn-Kekana, 2015). However, other authors have expressed concern about the potential for misrepresenting women's experiences, either because of recall bias (not accurately remembering events) or because of social desirability bias (feeling unable to describe the full experience, or obscuring details to minimize complaint) (Savage & Castro, 2017). These considerations affect the creation of tools and their administration.

Almost all of the authors reported a Cronbach's alpha for the tools they created. Cronbach's alpha is a measure of the internal consistency reliability of a set of items and reflects how much of the variability in scores from a scale is due to individual differences versus random variation (Polit, 2010). The closer this value is to 1.00, the higher the reliability of the scale; however, the value of alpha can be affected by the length of the scale, with longer scales being more likely to have items that correspond to the sample variability (Polit & Beck, 2017). Values over 0.80 are considered desirable. Among the tools presenting full-scale analysis, Cronbach's alpha scores ranged from 0.79 to 0.97.
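To make the statistic concrete, the following is a minimal sketch of how Cronbach's alpha is computed from an item-response matrix: the ratio of summed item variances to total-score variance, scaled by the number of items. The ratings below are invented for illustration and do not come from any of the studies reviewed.

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency
# statistic reported for the scales reviewed above.
def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])  # number of items

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-point Likert ratings: 5 respondents x 4 items
ratings = [
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [4, 4, 4, 3],
]
print(round(cronbach_alpha(ratings), 2))  # prints 0.92
```

In this invented example the items co-vary strongly across respondents, so alpha lands above the 0.80 threshold described above; a scale whose items varied independently would yield a much lower value.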
Some tools were broken down into subscales, either theoretically or by timing of care: prenatal, labour and delivery, and postpartum (Attanasio & Kozhimannil, 2015; Heatley et al., 2015; Taavoni et al., 2018; van der Kooy et al., 2014). Cronbach's alpha scores among the subscales ranged from 0.62 to 0.94.

2.3.5 Domains measured

For each of the tools reviewed, the authors presented the domains of the primary theoretical concept that the tool measured. These domains were either identified as components of the theory leading the creation of the tool or developed and identified through exploratory factor analysis (EFA). Theoretical domains included those from the WHO responsiveness scale for the tools measuring healthcare system responsiveness: dignity, autonomy, confidentiality, communication, prompt attention, social consideration, quality of basic amenities, choice, and continuity (Scheerhagen et al., 2015; van der Kooy et al., 2014). Heatley and colleagues (2015) used Street's Linguistic Model of Patient Participation in Care for their tool measuring communication; the domains used were self-confidence, client-centred communication, and communication about choices.

The domains identified through EFA varied based on the theoretical approach used to develop the tool. The patient satisfaction measures tended to include perceptions of the quality of care received and environmental or structural factors, in addition to factors addressing interpersonal care and support in decision making (Gungor & Beji, 2012; Heaman et al., 2014; Hollins Martin & Martin, 2014; Janssen et al., 2006). Although some of the patient satisfaction domains are somewhat applicable to the measurement of disrespect or mistreatment, the domains in these tools are not as explicit as those in tools designed to measure disrespect or discrimination.
The tools intended to measure disrespect or discrimination had domains that directly captured autonomy, perceptions of discrimination, barriers to communication, and changes to behaviour as a result of disrespect (Attanasio & Kozhimannil, 2015; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017; Yelland et al., 2012).

Some of the most commonly measured domains identified across the tools were communication, autonomy in decision making, and quality of interpersonal care. These domains have commonalities with definitions and typologies of respectful maternity care (Bohren et al., 2015; Bowser & Hill, 2010). However, only the QRMCQI (Taavoni et al., 2018), which was developed using Bowser and Hill's (2010) framework, had items that specifically addressed experiences of mistreatment such as verbal abuse, physical harm, or non-consent. Other tools had items that were worded more broadly. For example, one tool had the item "Were you treated with respect by your care provider?" (van der Kooy et al., 2014, p. 4), and another tool had the item "Overall while making decisions during pregnancy I felt my personal preferences were respected" (Vedam, Stoll, Rubashkin, et al., 2017). These examples demonstrate the difficulty of measuring the phenomenon of respect in maternity care in a comprehensive and direct way. In contrast, existing typologies of mistreatment have informed the development of items that measure specific care experiences, such as restriction of movement in labour (Taavoni et al., 2018). As a result of these different approaches, RMC measurement remains conceptually inconsistent.

In most cases, authors developed items for a tool by following theoretical domains (Garrard & Narayan, 2013; Gungor & Beji, 2012; Heaman et al., 2014; Heatley et al., 2015; Hollins Martin & Martin, 2014; Scheerhagen et al., 2015; Taavoni et al., 2018; van der Kooy et al., 2014; Vedam, Stoll, Martin, et al., 2017).
Some authors followed this initial step with EFA to determine how items represented those domains in the tool (Gungor & Beji, 2012; Janssen et al., 2006; Stevens et al., 2012). For example, in the PCCh scale (Stevens et al., 2012), factor analysis was used after item creation to demonstrate that the scale items all measured the theoretical construct of control that the authors intended to measure. Over half of the originally developed items were excluded after this analysis because they did not fit with the conceptual definition that the authors had specified. Other authors presented domains as a result of post-hoc factor analysis. For some tools, the domains identified through EFA described the underlying construct more specifically than the theoretical domains that were used to develop the tool (Gungor & Beji, 2012; Heaman et al., 2014a; Vedam, Stoll, Rubashkin, et al., 2017).

While analyzing the domains of measurement across these tools, I identified an important assumption: that the quality of communication and involvement in decision making are key components of the experience of quality care and respect. The presence of these domains in multiple tools suggests that these domains can be reliably broken down into measurement items and that experiences of mistreatment may be easily recognized within experiences of poor communication and restricted autonomy in decision making (Heatley et al., 2015b; Vedam, Stoll, Martin, et al., 2017). This assumption overlooks types of mistreatment that have been culturally normalized despite their effects on patients' experience of respectful care, such as power dynamics between providers and service users that can negatively influence care provision and the woman's agency (Freedman et al., 2014; Vedam, Stoll, Mcrae, et al., 2019). This can include decision making about care that is directed primarily by health care providers using the biomedical model (Sadler et al., 2016).
Following a review of quantitative measurement and qualitative description of respect in maternity care, Savage and Castro (2017) suggested including observational items in the measurement of respectful care as a strategy to document these normalized behaviours of mistreatment. Socially normalized mistreatment and the structural components that contribute to mistreatment are important factors that require more consideration in measurement (Jewkes & Penn-Kekana, 2015).

2.3.6 Sample characteristics

The samples recruited within the studies reviewed were generally homogeneous. Most studies recruited women who had experienced an uncomplicated pregnancy and birth at term (greater than 37 weeks gestation). Two studies limited their sample inclusion to women who had experienced vaginal birth (Janssen et al., 2006; Taavoni et al., 2018). Demographic data from the samples showed that, in some of the studies, participants represented limited diversity in terms of ethnicity and education level; they included women who were generally well-educated and Caucasian (Scheerhagen et al., 2015; Stevens et al., 2012; Vedam, Stoll, Martin, et al., 2017; Vedam, Stoll, Rubashkin, et al., 2017).

The two studies that focused on measuring discrimination used data from large national samples, which enabled them to focus their analysis on sub-populations of individuals who experience stigma, such as racialized groups, people with existing medical conditions including obesity, and people of lower socioeconomic status (Attanasio & Kozhimannil, 2015; Yelland et al., 2012). I was unable to locate any studies that reported on the experiences of LGBTQ+ individuals. The use of these types of samples indicates that existing tools may not be appropriately designed to account for the unique maternity care experiences of individuals who experience stigma in different ways.
2.4 Summary

The tools that I reviewed were developed using a variety of sources and methods, and their authors described analyses supporting their validity and reliability. Nevertheless, the tools that formed the basis of my review lacked comprehensive approaches to respectful care as a concept. Some features of respectful care, such as autonomy in decision making and communication, were thoughtfully approached in measurement and analysis. However, incidences of abusive behaviours were measured in only one tool, the QRMCQI (Taavoni et al., 2018). Some concepts, such as informed consent for care, were not specifically measured in the published papers I reviewed. Further development of measures identifying respectful care that include the perspectives and experiences of a diversity of childbearing women, as well as more domains of respect, is needed. In the next chapter I will describe my method for generating consensus on indicators of RMC in a high-resource context.

Chapter 3: Methods

3.1 Introduction

In this chapter I present a Delphi study undertaken with the aim of developing consensus on core indicators of respectful maternity care in a high resource country context.

3.2 Design

The Delphi method is generally employed as an iterative method to determine consensus among members of an expert panel (Boulkedid, Abdoul, Loustau, Sibony, & Alberti, 2011). It is a useful method for gathering feedback and opinions from a panel of geographically dispersed individuals in a way that prevents any one individual from dominating the conversation (Boulkedid et al., 2011).
This study design has been useful for identifying quality indicators (Boulkedid et al., 2011) and for tool development (Li et al., 2016), as well as in situations where the heterogeneity of a panel contributes to study validity by capturing a broad range of expertise and perspectives (Hallowell & Gambatese, 2010; Skulmoski, Hartman, & Krahn, 2007). Members of a Delphi panel are usually identified and recruited because they are considered to be experts in the topic under study; however, representation of a variety of stakeholders in a more heterogeneous panel may lead to a more meaningful consensus (Boulkedid et al., 2011).

The Delphi method relies on an initial instrument developed by the investigators; that instrument is then used to gather opinions and feedback from a panel on the subject of the research (Trevelyan & Robinson, 2015). The instrument is completed anonymously by panel members. Following receipt of the panel's responses, the responses are analyzed to determine any consensus, using a quantitative, qualitative, or mixed-methods approach, and reported back to the panel with a next-round instrument (Skulmoski et al., 2007). This process is repeated over several rounds, either for a pre-determined number of rounds or until consensus or stability in responses is reached (Hallowell & Gambatese, 2010; von der Gracht, 2012). Reaching consensus may include in-person or telephone meetings among the group of panel members if significant disagreement is present in the responses (Boulkedid et al., 2011).

The above characteristics are highly applicable to my research aim, which was to generate key indicators of RMC from the points of view of diverse experts and service users. These indicators are intended to be used as a registry of items for use in research on RMC in high resource countries.
The investigative team paid particular attention to the appropriateness of the registry for use among populations that experience stigmatization in the context of high resource countries. Two rounds of online instrument review and analysis were completed with the Delphi panel. For this study, consensus was assessed according to agreement on the importance and relevance of the indicators in the first round and agreement on the priority of the indicators for inclusion (by rank) in the second round. Consensus was further supported using qualitative feedback on individual indicators and the overall group of indicators, gathered in both rounds of review.

3.2.1 Sampling

This Delphi study is situated within a larger investigation, Giving Voice to Mothers – Canada, which is being carried out by researchers at the Birth Place Lab, a research group affiliated with the Division of Midwifery at the University of British Columbia. The primary investigator and co-investigators of this study nominated members for a Delphi expert panel. The investigative team sought members internationally for their expertise in RMC measurement or for their practice expertise in working with populations that experience stigma. They sought members with prolonged engagement in the field (10 to 20 years of experience). The investigative team and project collaborators also recruited and nominated Canadian maternity care service users for inclusion in the panel. These service users were invited to join the panel to represent a diversity of viewpoints and communities, such as recent immigrants, women with disabilities, Indigenous women, rural populations, and LGBTQ+ groups.

The final panel was heterogeneous, including researchers, health care practitioners, and service users of maternity care. Authors undertaking a systematic review of the Delphi method reported that panel sizes ranged from 3 to 418 members (Boulkedid et al., 2011).
The total number of panel members invited to the first round was n = 37. Additional experts were invited to the panel in the second round, raising the invited panel size to n = 56.

3.3 Procedure

The Delphi study was completed over two rounds of online instrument review. The following section describes the process of developing the Round One and Round Two online instruments and the analysis of the data from each round. Analytic approaches, both quantitative and qualitative, used to express the levels of consensus among panel members are described. The software I used for the quantitative analysis was IBM SPSS Statistics Version 25.

3.3.1 Round One online instrument creation

I collaborated with the team at the Birth Place Lab to develop the initial online instrument for Round One of the Delphi process. It included indicators of RMC identified through an environmental scan of previous international studies focused on measurement of RMC. Inclusion criteria were quantitative RMC indicators that were developed using patient input in middle- and high-resource countries and indicators that aligned with the domains identified in Bohren and colleagues' (2015) mistreatment typology. These inclusion criteria were expanded to include indicators developed in low-resource countries and without patient input because we found few indicators that met the initial criteria. Indicators were also included from surveys such as the Changing Childbirth in BC survey and the Giving Voice to Mothers – US survey (Vedam, Stoll, Taiwo, et al., 2019). Additionally, indicators were aligned with WHO quality indicators, and a few were identified and/or adapted from national surveys of hospital inpatient care ("Canadian Patient Experiences Survey - Inpatient Care (CPES-IC)," 2017) and primary healthcare experience (Wong & Haggerty, 2013).
Exclusion criteria were any indicators designed for low-resource countries that did not apply to the Canadian context, such as indicators regarding whether electricity was available at the birthplace. Our team organized the indicators using the mistreatment typology (Bohren et al., 2015) and completed an initial review of indicators to remove those that were repetitive. The final list of indicators for panel review (n = 201), with sources, can be seen in Appendix B.

The online instrument was created by loading the indicators onto a secure online survey platform (Qualtrics) for electronic distribution to the members of the panel. Panel members were given instructions to rate the indicators for importance to the concept of RMC, relevance to their community (of practice, research, or geography), and clarity. Indicators were first rated on importance and relevance using 4-point Likert scales ([1] not important/relevant, [2] unable to assess without revision, [3] important/relevant but needs minor revision, [4] very important/relevant), with logic branching at that point determining whether the indicator was then rated on clarity. If indicators were rated as 1 or 2 for either importance or relevance, participants were asked whether they would support discarding the indicator or had suggestions for editing it. If indicators were rated as 3 or 4 on importance and relevance, panel members were asked to rate the indicators on clarity using a 4-point Likert scale ([1] not clear, [2] unable to assess without revision, [3] clear but needs minor revision, [4] very clear and succinct). After rating an indicator on clarity, panelists were asked whether they would support leaving the indicator as it appeared or would edit it in any way. Thus, panel members were given an opportunity to provide qualitative feedback, if they chose, for each rating they gave.
At the conclusion of the online instrument, panel members were also given an opportunity to recommend additional indicators. See Appendix C for an abbreviated example of the online instrument.

3.3.2 Round One quantitative analysis

I used the data collected in the first round to calculate a content validation index (CVI) for each indicator, as well as to calculate the proportion of respondents who supported the indicators with no change. Item-level CVI can be used to express the proportion of agreement between raters in terms of the relevance to the underlying construct (Polit & Beck, 2006). Because the overall goal was to create a registry of items for measurement of respectful care in high-resource countries, using a content validation framework in the first round was a helpful initial step to identify whether the indicators identified through the literature review needed revision to suit the context or whether they adequately described the concept (Polit, Beck, & Owen, 2007). I asked the Delphi panel members to rate both the importance of each indicator to the measurement of the underlying construct, RMC, and each indicator's relevance to the respondents' associated community (geographic and/or social). Thus, the two CVI values calculated reflected the relevance of the indicators to the concept of respect as well as the relevance of the indicators to context. The chosen cut-off value was 0.80 because we had an expert panel of n > 6 (Lynn, 1986). Additionally, I used the frequency data collected from the question asking whether the panel members supported each indicator as it appeared to calculate the proportion of panel members who supported the indicators without revision. This value reflects a level of acceptability of the indicator.

3.3.3 Round One qualitative analysis

Panel members had the opportunity to provide qualitative comments for each indicator or group of indicators in the online instrument.
I analyzed the comment data using an inductive approach for applied qualitative research by identifying themes and using these themes to develop an interpretation of the feedback (Pope, Ziebland, & Mays, 2000). The interpretation of the feedback was used to adjust indicators and was primarily focused on themes of clarity, relevance, and priority for inclusion. Because other themes around missing domains and over-representation of some domains were identified, I responded by developing new indicators that were then included in the Round Two online instrument. This iterative step, which occurred after the first round, invited further qualitative feedback from the panel members in the second round (Pope et al., 2000).

3.3.4 Round Two online instrument creation

After data collection and analysis from Round One, the Round Two online instrument was developed. Firstly, findings from the quantitative and qualitative analysis were used to discard, merge, reword, or retain indicators reviewed during Round One. These decisions and indicator adjustments were made in consultation with my supervisors. Qualitative feedback also led to the addition of new indicators. I carefully considered panelists' suggestions about adding indicators and created five indicators based on these suggestions. Two indicators were added from previously used surveys identified in the initial literature review (Vedam, Stoll, Taiwo, et al., 2019; Wool, 2015). The total number of indicators that populated the Round Two online instrument was n = 156. I regarded decreasing the size of the online instrument as an important consideration in enhancing response rates. In consultation with my supervisors and the investigative team, prioritization of indicators drove the data collection method in Round Two. Prioritization permitted the collection of rank data that could describe panel consensus using different statistical techniques.
Thus, indicators were presented in Round Two as groups to be prioritized according to the domains in which they were placed. Rank-style questions have been presented as a method to collect data on participant values (Alwin & Krosnick, 1988; Smyth, Olson, & Burke, 2018), which was appropriate considering the goals of the Delphi study. Qualitative feedback from Round One informed the process of grouping indicators, which was primarily completed by me and a small team. An important consideration was the size of the domain groups, because large groups of indicators can be more difficult for participants to rank; they require considerably more effort to compare indicators against each other (Smyth et al., 2018). I began placing the Round Two indicators into groups as a deductive process; that is, I used an existing typology to organize the indicators. Indicators were initially organized using the third-order themes of Bohren and colleagues' (2015) mistreatment typology in Round One and so were grouped initially according to these themes. I regrouped several indicators that fit better with other themes based on participants' feedback and indicator wording. When I found that two of the themes, "failure to meet professional standards of care" and "poor rapport with healthcare providers", had large groups of indicators, I used the second-order themes of the Bohren et al. (2015) typology to break down these larger groups into smaller groups. Those themes were lack of informed consent and confidentiality, physical exams and procedures, neglect and abandonment, ineffective communication, lack of supportive care, and loss of autonomy. Following this, I created new categories of indicators inductively to attend to the size of the groups of indicators that would be presented to the Delphi panel. This was done by carrying out a content analysis of the indicators, seeking to identify commonalities based on the concepts being measured by the indicators (Morse, 2015).
When I carefully reviewed the indicators, I found that they either described specific experiences of patients, such as communication between healthcare providers and patients, or experiences of supportive care, or reflected systemic respect or disrespect of a woman or her birthing experiences. Along with my supervisors, I created categories of indicators that fit with these divisions; a small investigative team with content knowledge and clinical expertise at the Birth Place Lab refined these categories and indicator groupings. We also reworded several domains, with attention to the importance of our use of language to promote a stronger focus on positive aspects of experiences of respect. The domains that were drawn from the mistreatment typology (Bohren et al., 2015) and created inductively (see Table 3-1) were provided to the panel members in the Round Two online instrument. These domains ranged in size from five to 19 indicators. See Appendix D for a complete list of the Round Two indicators sorted by domain.

Table 3-1 Round Two online instrument domains

Deductive themes (Bohren et al., 2015 theme, followed by its reworded theme where one was created) and inductive themes:
- Physical Abuse: Physical mistreatment
- Sexual Abuse*
- Verbal Abuse: Verbal mistreatment
- Stigma and Discrimination
- Failure to meet professional standards of care
  - Lack of informed consent and confidentiality: Information and consent; Privacy and confidentiality
  - Physical exams and procedures
  - Neglect and abandonment: Availability and responsiveness of healthcare providers
- Poor rapport between women and providers (inductive theme: Patient reactions to experiences of care)
  - Ineffective communication: Verbal communication; Non-verbal communication
  - Lack of supportive care: Supportive behaviours of healthcare providers (inductive theme: Cultural support and family involvement)
  - Loss of autonomy: Autonomy (about care decisions) (inductive theme: Choice of evidence-based care options)
- Health system conditions and constraints: Health system conditions and constraints – physical; Health system conditions and constraints – human resources

Notes: The last-named theme in each entry is the domain that was used to group indicators and presented to panelists in the Round Two online instrument; *absorbed into physical mistreatment.

Panel members were given instructions to rank the indicators from high priority (top of the list) to low priority (bottom of the list). Along with the drag-and-drop style rank questions, comment boxes invited panel members to add any qualitative feedback they wished alongside their rankings. Three open-ended questions concluded the Round Two online instrument. Firstly, panel members were asked at the end of the online instrument to give feedback about their preferences for organizing indicators according to care timing (prenatal, intrapartum, postnatal). Secondly, in response to panel member feedback in the first round, I had streamlined the language in all of the indicators to read "healthcare provider", whereas in the first round several indicators specifically used the language "doctor" or "midwife".
Because this had been a common theme in the panel members' feedback, the investigative team and I felt it was important to collect their views through an open-ended question on how to address the issue of identifying care providers. Finally, a concluding question asked whether the panel members felt that there were any missing indicators or domains. See Appendix E for an abbreviated example of the Round Two online instrument. With the provision of the Round Two online instrument, a report of the findings from Round One was also circulated to the panel members (see Appendix F).

3.3.5 Round Two quantitative analysis

In Round Two, I used the rankings data to calculate a minimum, maximum, and median rank for each indicator, as well as to calculate Kendall's W coefficient of concordance for each group of indicators. Kendall's W is a non-parametric statistic that reflects the level of agreement among raters, ranging from 0 (no agreement) to 1 (complete agreement) (von der Gracht, 2012). This statistic has been identified as a useful measure of consensus for content validation (Slocumb & Cole, 1991), as well as in Delphi studies (Schmidt, 1997; von der Gracht, 2012). It was appropriate for this analysis given the non-random nature of the sample of panel members as well as the non-normal distribution of the data. Interpretation of this statistic was guided by Schmidt's guidelines (1997, p. 767). I recorded another measure of agreement by calculating the percentage of panel members who ranked each indicator in the top half of the list in each domain (Schmidt, 1997).

3.3.6 Round Two qualitative analysis

In Round Two, qualitative feedback was sought to address additional tool-building-related concerns that were identified in the feedback from the first round. The same process used to analyze the feedback in the first-round analysis was applied to the second-round feedback.
I analyzed the panelists' comments on the indicators or groups of indicators for common themes. I applied the same process to the feedback given for the final three open-ended questions on the Round Two online instrument by generating themes from the responses to these questions.

3.4 Rigour

Several steps in the design affected the rigour of the results. Features of the Delphi method, such as the number of rounds, the quality of the feedback given to the panel members, and the quality of the ongoing analysis, are important to the development of group consensus (Hallowell & Gambatese, 2010; Keeney, Hasson, & McKenna, 2001). Other authors have described these features as the components of anonymity, iteration, provision of feedback, and statistical aggregation of group response (Skulmoski et al., 2007; Trevelyan & Robinson, 2015). In the method employed for this study, I attended to these features in the following ways. Anonymity of the process allowed each panel member to express open and honest feedback and prevented domination by influential panel members that could have occurred in a face-to-face group meeting (Keeney et al., 2001). I protected panel members' anonymity by using the online survey tool and reporting collated feedback to the group. Iteration refers to the use of multiple rounds of review to allow panel members to refine their feedback and take nuances into account while developing consensus (Skulmoski et al., 2007). Two rounds were completed in this study. The number of rounds has been argued to be very important in allowing meaningful consensus to build and in avoiding prematurely ending the process, unnecessarily complicating the analysis, or increasing attrition rates (Boulkedid et al., 2011; Keeney et al., 2001; Trevelyan & Robinson, 2015). Having clearly defined limits for consensus was an important consideration that guided the number of rounds needed for this Delphi study.
Other considerations that affected the number of rounds were the logistics of how the Delphi study fit in with the larger project, and the response rates of the invited panel members. Provision of feedback during the Delphi rounds allowed each panel member to see where their opinions fit with those of the larger group, and to consider the range of opinions in the group (Boulkedid et al., 2011; Hallowell & Gambatese, 2010). This step has been described as essential to the Delphi process; it builds in a qualitative element of rigour because it represents a form of "member checking". I provided feedback to the panel members in the form of a Round One report along with the second-round online instrument; the report summarized the outcomes of the quantitative analysis and the themes developed from the qualitative analysis. Judgement-based bias comes in many forms that must be considered in a Delphi study (Hallowell & Gambatese, 2010). Two practices carried out in the procedure and analysis of this Delphi study attended to bias. Firstly, anonymity of the panel members controlled for bias introduced by dominance of any panel members (Hallowell & Gambatese, 2010). Secondly, anonymizing the data prevented any responses from being privileged during the data analysis by blinding me to which panel member provided what feedback. Finally, statistical aggregation of the group response refers to the quantitative analysis of the collected scores on the online instruments. My analysis, detailed above, used both descriptive and inferential statistics over the two rounds to express the level of consensus among the panel members. The formation of the Delphi panel is a critical element of rigour that affects the outcome of the entire process.
Panel formation is significant due to the importance of the participation of the panel members in the consensus process and the potential to introduce sources of bias through the selection of the panel members (Boulkedid et al., 2011; Keeney et al., 2001). Care was taken during sampling to recruit researchers, practitioners, and community members who represented a broad range of experience and a diversity of communities. This type of heterogeneous panel fit with the study objective of giving attention to the perspectives of diverse communities in the selection of the indicators.

3.5 Ethical Considerations

Because the Delphi study component of the larger Giving Voice to Mothers – Canada project falls under survey development and pilot testing, ethics board review was not required for this phase of the project. Nonetheless, ethical considerations remain important during the research process. Because anonymity is an important aspect of this method, my feedback reports did not identify participants, and I protected the confidentiality of participants throughout the review process.

3.6 Summary

This chapter has described my study design and methods to address the research question: What are indicators for respectful maternity care based on consensus from a panel of international experts and Canadian consumers? Using the Delphi method allowed for input from a variety of experts and service users to determine indicators, and for the collection of both quantitative and qualitative data to identify and refine indicators for creation of an item registry. Rigour was supported throughout the process through attention to anonymity, transparent reporting, and analytic techniques that allowed identification of consensus over the rounds.
Ethical considerations were taken into account to protect anonymity.

Chapter 4: Findings

4.1 Introduction

This chapter details the outcomes of the Delphi online instrument review rounds, describing the study sample and presenting the quantitative and qualitative results of my analysis. The findings are organized according to Delphi round.

4.2 Round One Findings

4.2.1 Panel Characteristics and Response Rate

Invitations to complete the Round One online instrument (in two parts) were sent to 37 expert panel members. Response rates were 49% (n = 18) and 54% (n = 20) for part one and part two, respectively. Demographic data collected from the members of the expert panel who responded to one or both parts of the Round One online instrument are detailed in Table 4-1. Panelists' characteristics were described by region and self-identification of background: researcher/academic, care provider, service user, or other. Panelists also had the option of selecting an additional label to describe their background. In both samples secondary background data were provided by some respondents, n = 12 (part 1) and n = 14 (part 2). Qualitative descriptions were available for panelists who used the "other" option to identify their backgrounds. Those respondents who chose this option reported backgrounds of being fertility care users, doulas, or members of minority populations.
Table 4-1 Round One expert panel regional distribution and backgrounds

Region               Part 1, N (%)   Part 2, N (%)
British Columbia     6 (33.3)        6 (30)
Alberta              -               1 (5)
Ontario              5 (27.8)        4 (20)
Quebec               1 (5.6)         2 (10)
Yukon                1 (5.6)         1 (5)
United States        3 (16.6)        4 (20)
United Kingdom       2 (11.1)        2 (10)
Total                18 (100)        20 (100)

Panelist Background   Part 1 Primary   Part 1 Additional   Part 2 Primary   Part 2 Additional
Care Provider         1 (5.6)          6 (33.3)            1 (5)            5 (25)
Researcher/Academic   11 (61.1)        5 (27.8)            12 (60)          3 (15)
Service User          6 (33.3)         0                   7 (35)           1 (5)
Other                 -                1 (5.6)             -                5 (25)
Total                 18 (100)         12 (66.7)           20 (100)         15 (75)

Beyond the heterogeneity in expert backgrounds detailed above, the panel respondents represented national and international content experts on respectful maternity care (n = 5), measurement experts (n = 1), and researchers whose work is focused on populations that commonly experience stigma, including LGBTQ+ groups, obese women, women with disabilities, visible minorities, Indigenous groups, refugees, and mothers under age 20 (n = 7). Community service user experts represented a variety of viewpoints, including women with disabilities (n = 1), visible minorities (n = 1), Indigenous groups (n = 1), rural groups (n = 1), single parents (n = 1), and the LGBTQ+ community (n = 1).

4.2.2 Quantitative findings

The set of indicators generated from the literature review and previously developed surveys was assessed in the two parts of the Round One online instrument. Four-point Likert scales were used to generate ratings of importance, relevance, and clarity (not important/not relevant/unclear [1] to very important/very relevant/very clear and succinct [4]). Additionally, respondents indicated whether they would support keeping the indicator as presented or wished to discard the indicator. A text box was provided for feedback on editing the items.
The total number of indicators included in both parts of the Round One online instrument was 201. I calculated median ratings and the indicator-level content validation index (CVI) using the Likert ratings for importance and relevance for each indicator. Most indicators had a median rating of four for importance (n = 196), relevance (n = 194), and clarity (n = 195). Indicator-level CVIs calculated for importance ranged from 0.78 to 1.00, and for relevance from 0.67 to 1.00. Several indicators (n = 10) did not meet the cut-off value of 0.80 for the CVI for importance, relevance, or both. In addition to the CVI, I calculated the proportion of respondents supporting the indicator as presented, to reflect a measure of acceptability of each indicator. This proportion was calculated using the number of participants who chose the response "Leave item as is" to the question "What would you like to do with this item?" Across both parts of the online instrument, indicators received from 31% to 94% panel support; stratifications can be seen in Table 4-2.
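The two Round One agreement measures can be sketched as follows. This is a minimal illustration with hypothetical ratings, not the study's SPSS procedure: the item-level CVI is the proportion of panelists rating an indicator 3 or 4 on the 4-point scale, and acceptability is the proportion who chose to leave the item as is.

```python
# Sketch of the item-level CVI and acceptability proportion described above
# (hypothetical data; the actual analysis was done in SPSS).

def item_cvi(ratings):
    """Proportion of raters scoring the item 3 or 4 on the 4-point scale
    (item-level CVI in the sense of Polit & Beck, 2006)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def acceptability(choices):
    """Proportion of raters who chose 'Leave item as is' for the item."""
    return sum(1 for c in choices if c == "Leave item as is") / len(choices)

# Hypothetical example: 10 panelists rate one indicator for importance.
ratings = [4, 4, 3, 4, 2, 4, 3, 4, 4, 3]
print(item_cvi(ratings))  # 0.9, above the 0.80 cut-off used for a panel of n > 6
```

An indicator with a CVI below 0.80 on either importance or relevance would be flagged for discarding, as described above.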
Table 4-2 Indicators organized by level of panel support

Respondents supporting indicator "as is" (%)   Number of indicators   Actual Indicators*
90-100   8    57, 139, 149, 155, 181, 182, 200, 201
80-89    40   10, 11, 13, 14, 18, 28, 55, 58, 62, 70, 72, 81, 82, 88, 93, 94, 111, 115, 117, 126, 127, 131, 140, 141, 146, 150, 151, 152, 154, 157, 158, 159, 160, 162, 168, 178, 179, 183, 185, 192
70-79    55   6, 12, 17, 20, 25, 30, 59, 60, 65, 67, 69, 71, 80, 83, 84, 85, 87, 89, 101, 106, 107, 118, 119, 120, 121, 122, 123, 125, 128, 129, 130, 132, 143, 145, 147, 148, 153, 156, 161, 163, 164, 165, 167, 170, 174, 176, 177, 186, 188, 191, 194, 195, 196, 198, 199
60-69    50   3, 5, 9, 15, 16, 21, 22, 23, 24, 26, 31, 33, 39, 40, 42, 43, 44, 45, 52, 54, 56, 61, 66, 74, 75, 77, 78, 79, 86, 90, 92, 95, 97, 102, 103, 105, 112, 116, 133, 134, 135, 142, 166, 169, 173, 175, 180, 184, 187, 190
50-59    33   2, 4, 7, 8, 19, 27, 29, 32, 34, 35, 37, 38, 41, 46, 53, 64, 68, 76, 91, 96, 98, 99, 100, 104, 114, 124, 137, 138, 144, 171, 172, 193, 197
40-49    11   48, 49, 50, 51, 63, 73, 108, 109, 110, 113, 189
30-39    4    1, 36, 47, 136

Notes: *Please see Appendix B for the list of Round One indicators.

4.2.3 Qualitative findings

The indicator-by-indicator feedback, as well as responses to the final summary questions in the second part of the Round One online instrument, provided the qualitative data. The primary themes derived were identifying timing of care, identifying a specific type of provider, and improvements to the clarity of the indicators, including their response options. For many indicators, several panel members commented about whether the indicators would apply to all interactions with healthcare providers over the course of pregnancy, childbirth, and postpartum care, or would be better suited to specific types of care or interactions.
Another primary theme repeated throughout the feedback concerned how to identify care providers by profession (e.g., doctor, midwife, or nurse), and whether care providers should be individually identified in each indicator. Responses to requests for overall feedback primarily identified overlapping indicators and indicators that the panelists felt could belong in other groups. Two comments indicated that the presented group of indicators was "comprehensive" and "extensive". The most common type of feedback received was about improving the clarity and specificity of the indicators. Many panelists gave suggestions for rewording indicators or simply pointed out phrases that were unclear or not applicable to all birthing contexts or types of birthing experiences. Their feedback also included suggestions for indicator structure and presentation, such as combining indicators into a matrix format, or providing more opportunity for open-ended response indicators that could allow for the collection of detailed data. Some panelist feedback pointed out that certain indicators had better wording than others, or that certain indicators could be combined or discarded in favour of other indicators. Some comments that recommended rewording drew attention to the predominantly negative phrasing of the indicators. This concern was echoed by one panel member who sent additional feedback in a separate e-mail indicating the need to focus more heavily on respect as the underlying construct rather than on mistreatment. Respondents also described concepts relevant to respectful care that were not captured by existing indicators. Such concepts included coercion, care philosophy, and setting priorities for care. Some panel members suggested the addition of indicators that were more specific to rural, Indigenous, and LGBTQ+ populations. Several panelists also provided feedback about the structure of the Round One online instrument.
Specifically, several panelists commented that the response options for the questions about importance and relevance were not on a scale that allowed them to differentiate between indicators that were important but not very important, or relevant but not very relevant. Some panelists also commented that it was difficult to rate individual indicators without being able to see all the indicators and compare them to each other. Responding to these criticisms about the response options for the evaluation of importance and relevance, as well as to the length of the Round One online instrument, was necessary to maintain response rates.

4.2.4 Results of combined findings

The initial indicators (n = 201) were considered based on the quantitative and qualitative findings from the Round One online instrument. Using a cut-off point of 0.80 for the CVI for importance, relevance, or both, I discarded 10 indicators. I also combined the indicator-by-indicator feedback, general themes, and acceptability proportions to merge and edit indicators to improve their clarity and to reduce repetition. Figure 4-1 shows the process I used to select the indicators that populated the Round Two online instrument.

Figure 4-1. Process of indicator selection for Round Two online instrument

The creation of the categories for the Round Two online instrument involved a content analysis of the remaining indicators. In Round One, although the indicators were grouped by the third-order themes of Bohren and colleagues' (2015) mistreatment typology (physical and sexual abuse, verbal abuse, stigma and discrimination, failure to meet professional standards of care, poor rapport with healthcare providers, and health system conditions and constraints), the panel members only saw these grouping labels at the end of the online instrument. The labels primarily served to organize the Round One indicators.
By searching for common concepts measured by the indicators that I retained and adjusted for Round Two, I was able to analyze how the indicators were conceptually organized, especially the indicators already sorted into the Bohren themes of failure to meet professional standards of care and poor rapport with healthcare providers. The outcome was the creation of the domains (n = 17) detailed in section 3.3.4 of the method chapter. The sizes of these domains ranged from five to 19 indicators.

4.3 Round Two Findings

4.3.1 Panel characteristics and response rate

Invitations to complete the Round Two online instrument were sent to 56 individuals. The response rate was 46% (n = 26). Similar demographic and background data were collected in this round as in Round One, with panelists responding to questions about geographic distribution as well as areas of primary background and any additional background identifications.

Table 4-3 Round Two expert panel regional distribution and backgrounds

Region             N (%)
British Columbia   10 (38.5)
Alberta            1 (3.8)
Ontario            6 (23.1)
Quebec             2 (7.7)
Yukon              1 (3.8)
United States      4 (15.4)
Chile              1 (3.8)
Netherlands        1 (3.8)
Total              26 (100.0)

Panelist Background   Primary N (%)   Additional N (%)
Care Provider         3 (11.5)        1 (3.8)
Researcher/Academic   14 (53.8)       5 (19.2)
Service User          7 (26.9)        1 (3.8)
Other                 2 (7.7)         4 (15.4)
Total                 26 (100)        11 (42.3)

Panelists who chose the "other" category indicated that their backgrounds were a policy maker (n = 1), a researcher not primarily focused on maternity care (n = 1), a community-based researcher and activist (n = 1), an educator (n = 1), a member of the LGBTQ+ community (n = 1), and a fertility treatment service user (n = 1). Content expertise of the members of the expert panel covered a range from previous experience in research in RMC (n = 6) or activism in RMC (n = 1), to measurement expertise (n = 5), and representation of communities that commonly experience stigma, including LGBTQ+ populations (n = 3), obese women (n = 1), visible minorities (n = 3), and Indigenous populations (n = 3).

4.3.2 Quantitative results

Indicators (n = 156) were presented to the panelists in lists sorted into 17 domains of respect, disrespect, and mistreatment. Panelists ranked the priority of the indicators from greatest priority (rank 1) to least priority within each of the 17 domains. Ranking data were used to calculate the mean, median, minimum, and maximum rank for each indicator. The level of agreement between panel members in each domain was expressed by calculating Kendall's W. In addition to agreement by domain, agreement about priority, indicator by indicator, was expressed through calculating the percentage of respondents ranking the indicator in the top half of the group of indicators in each domain. Table 4-4 reports the agreement by domain along with the range of support for the indicators in the top half of each group. Table 4-5 reports the level of support for each indicator per domain.
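The two Round Two agreement measures can be sketched as follows. This is a minimal pure-Python illustration of the standard Kendall's W formula and the top-half support percentage, using a hypothetical rank matrix; it is not the SPSS output reported in the tables below.

```python
# Sketch of the Round Two agreement measures (hypothetical rank matrix:
# rows = panelists, columns = indicators in one domain; rank 1 = highest
# priority). Kendall's W = 12*S / (m^2 * (n^3 - n)), where S is the sum of
# squared deviations of the indicator rank sums from their mean.

def kendalls_w(rank_matrix):
    """Kendall's coefficient of concordance for m raters ranking n items,
    assuming complete rankings with no ties."""
    m = len(rank_matrix)      # number of raters
    n = len(rank_matrix[0])   # number of items ranked
    rank_sums = [sum(rater[i] for rater in rank_matrix) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((r - mean_sum) ** 2 for r in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def top_half_support(rank_matrix, item):
    """Percentage of raters who ranked the given item in the top half."""
    n = len(rank_matrix[0])
    hits = sum(1 for rater in rank_matrix if rater[item] <= n / 2)
    return 100 * hits / len(rank_matrix)

# Three panelists in complete agreement on four indicators yield W = 1.0.
ranks = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(kendalls_w(ranks))           # 1.0
print(top_half_support(ranks, 0))  # 100.0
```

With raters in complete disagreement (e.g., two raters giving exactly reversed rankings), the rank sums are all equal and W falls to 0, matching the 0-to-1 interpretation described above.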
Table 4-4 Agreement of panelists on rankings of indicators by domain

Domain                                                       Total indicators (n)   Kendall's W (p)   Panel item support* (% range)
Verbal Communication                                         13   0.114 (0.001)    20 - 68
Information and Consent                                      9    0.102 (0.009)    12 - 64
Privacy and Confidentiality                                  6    0.131 (0.006)    20 - 68
Physical Exams and Procedures                                8    0.369 (<0.001)   12 - 92
Availability and Responsiveness of Healthcare Providers      8    0.208 (<0.001)   16.7 - 75
Patient Reactions to Experiences of Care                     9    0.146 (<0.001)   16.7 - 79.2
Non-Verbal Communication                                     10   0.164 (<0.001)   25 - 70.8
Cultural Support and Family Involvement                      8    0.425 (<0.001)   0 - 91.7
Stigma and Discrimination                                    5    0.248 (0.002)    5.9 - 76.5
Stigma and Discrimination (personal characteristics)         19   0.639 (<0.001)   0 - 100
Verbal Mistreatment                                          12   0.110 (0.008)    23.8 - 85.7
Physical Mistreatment                                        5    0.081 (0.209)    22.2 - 61.1
Supportive Behaviours of Healthcare Providers                9    0.207 (<0.001)   13 - 73.9
Choice of Evidence-Based Care Options                        9    0.266 (<0.001)   8.7 - 69.6
Autonomy (about care decisions)                              6    0.524 (<0.001)   0 - 86.4
Health System Conditions and Constraints (Physical)          10   0.382 (<0.001)   4.8 - 95.2
Health System Conditions and Constraints (Human Resources)   9    0.184 (<0.001)   22.7 - 68.2

Notes: *indicates the lowest and highest percentage of panelists per indicator who ranked the indicator in the top half of each domain group.

Table 4-5 Panel support for indicators by domain

Domain                                                       Ranked in top half by ≥50% of panelists   Ranked in top half by <50% of panelists
Verbal Communication                                         2, 4, 5, 7                 1, 3, 6, 8, 9, 10
Information and Consent                                      1, 2, 3, 4                 5, 6, 7, 8, 9
Privacy and Confidentiality                                  1, 2, 3, 6                 4, 5
Physical Exams and Procedures                                1, 2, 7, 8                 3, 4, 5, 6
Availability and Responsiveness of Healthcare Providers      1, 2, 4, 8                 3, 5, 6, 7
Patient Reactions to Experiences of Care                     1, 2, 3, 6                 4, 5, 7, 8, 9
Non-Verbal Communication                                     1, 2, 3, 4, 6, 9           5, 7, 8, 10
Cultural Support and Family Involvement                      1, 2, 3, 4, 7              5, 6, 8
Stigma and Discrimination                                    1, 2, 3                    4, 5
Stigma and Discrimination (personal characteristics)         1, 2, 3, 4, 5, 6, 7, 8     9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
Verbal Mistreatment                                          1, 2, 4, 5, 6, 11          3, 7, 8, 9, 10, 12
Physical Mistreatment                                        3                          1, 2, 4, 5
Supportive Behaviours of Healthcare Providers                6, 7, 8                    1, 2, 3, 4, 5, 9
Choice of Evidence-Based Care Options                        3, 4, 5, 6, 7, 8           1, 2, 9
Autonomy (about care decisions)                              1, 2, 4, 5                 3, 6
Health System Conditions and Constraints (Physical)          1, 2, 3, 5                 4, 6, 7, 8, 9, 10
Health System Conditions and Constraints (Human Resources)   1, 2, 3, 4                 5, 6, 7, 8, 9

Notes: See Appendix D for the Round Two online instrument indicators.

4.3.3 Qualitative findings

Respondents were given opportunities to comment on indicators or domains of indicators after each ranking question. These comments were analyzed for feedback on specific indicators and groups of indicators. At the end of the online instrument, two questions sought specific feedback on concerns raised during Round One: the applicability of indicators to types of care received at different times over the course of childbearing (e.g., pregnancy, birth, or postpartum), and how or whether to identify different care providers. In addition, a final open-ended question gave panel members an opportunity to identify any concepts or indicators they felt were missing from the groups of indicators they had prioritized.

4.3.3.1 Domain and indicator feedback

The panelists gave feedback, along with the ranking questions, that was directed both at the domains as a whole and at specific indicators. Feedback directed at specific indicators commonly identified them as unclear, as overlapping or sharing major characteristics with other indicators, or as particularly important or relevant to the measurement of RMC. Some panelists also raised concerns through their comments about whether certain indicators belonged in the domains in which they had been placed.
Some panelists provided suggestions to reword indicators to reflect a broader view of a particular concept. For example, one panelist pointed out that simply asking about consent assumes the person’s answer is positive and suggested rewording the indicator to ask about consent or refusal.

Panelist feedback was also directed at the domains as a whole. The most common theme in the feedback, across all the domains, was that all the indicators in the domain were important, that the indicators were difficult to rank, or that the panelists did not feel they could rank the indicators in a meaningful way. Similar to this feedback were comments that explained or justified how the panelist had ranked the indicators. One panelist wrote a very reflective comment about the tensions experienced in practice that impacted their ranking of the indicators:

I recently provided care in acute emergency (severe [Post-Partum Hemorrhage]) in a remote setting and am aware of how acuity impacts the informed choice process – I found myself focused (perhaps too much) on performing urgent procedures vs communicating well – it is hard to balance all of the responsibilities of being a care provider in some situations – I have answered the above as if in a non-urgent situation but feeling very humble. (Round 2 panel member - In response to ranking Information and Consent items)

The other main comment made at the domain level regarded overlap between domains or whether the domain names were accurately represented by the indicators provided.

4.3.3.2 Generality or specificity of indicators

A concern was raised by the panelists in the Round One feedback, as well as by the investigative team through the process of building the Round Two online instrument, about how general or specific the indicators should be to different care timing (prenatal, labour, birth, and postpartum).
The panelists were invited to give feedback on this issue at the end of the Round Two online instrument by answering the question: “Should items be separated or specified for different care timing (such as prenatal, labour, birth, and postpartum), repeated for different care timings, or left general (leave it to the participants to answer for whenever they feel this item or experience was relevant for them)?” The responses were split: some panel members (n = 6) favoured leaving the indicators as general statements, while more (n = 15) favoured separating the indicators by care timing, for a variety of reasons.

Several panelists commented on the importance of balancing the number of indicators necessary for collecting comprehensive data with the respondent burden of a long survey, especially if indicators were repeated for different care timing. Panelists suggested that a balance could be achieved by careful selection of items that were more important at different stages over the course of pregnancy, labour and birth, as well as by continuing to reduce the current number of indicators. Those panelists in favour of leaving the indicators general to the childbearing period also indicated that staying general would allow respondents to report on disrespect that happened at any stage of care, and that the conceptual nature of the indicators was better suited to this approach. The panelists who supported separating the indicators by care timing gave different rationales for this approach. One rationale was that the resulting data could identify when disrespect was most likely to be encountered during the course of care. This was also important to some panelists because some women receive care in different communities or contexts at different times and from different providers.
Other rationales given were that women often experience contact with different providers at different times, and that women may feel increasingly vulnerable at different points in their childbearing trajectories.

4.3.3.3 Option to identify healthcare providers

Another concern raised by the panel members in the Round One feedback that we chose to explore in Round Two was how particular healthcare providers would or would not be identified in a potential survey using these indicators. The question posed to the panel members was: “What is a good approach to take when it comes to specifying between different care-providers in the items?” Most of the panelists indicated that identifying the healthcare providers in some way was important. Some panel members suggested that identifying whether women received care primarily from physicians or midwives would be sufficient, while others suggested ways to collect more detailed data in cases where respondents had encountered multiple types of healthcare providers throughout their course of care. One panelist suggested including a way to determine whether participants experienced nursing care because nurses generally play a much different role than physicians or midwives. In some feedback, the panelists suggested methods of survey construction such as drop-down menus, logic branching, or options to ask about each type of healthcare provider over the course of a survey. One panelist suggested adding an indicator that would identify whether disrespect was encountered from one provider, some providers, or all of them.

Other panel members wanted to keep the indicators general.
They provided several rationales: adding indicators or steps to identify different healthcare providers would lengthen a questionnaire and increase respondent burden; all healthcare providers can learn from the outcomes of these measures; and identifying a source of disrespect may be difficult if the “culture or team” is disrespectful. Other panelists suggested considering how the data would be used in analysis or implementation when wording the indicators, or simply asking whether the respondents had interactions with hospital settings or not.

4.3.3.4 Missing indicators or domains

At the end of the online instrument review, panelists were invited to suggest indicators or concepts they felt were missing. Several comments indicated that the indicators were comprehensive in measuring the concept of RMC, and that the indicators were general enough to apply to a variety of populations in a variety of contexts. There were also a number of comments that suggested adding indicators. Suggestions for additions primarily concerned indicators that would be important to particular populations, such as women undergoing fertility treatments, women who had experienced a fetal loss, or Indigenous women. One comment was made about including indicators about recourse for those who experienced disrespectful care.

In addition to feedback about the addition of specific indicators, several panel members used the final question to make additional comments about the current indicators and the online instrument as a whole. Two comments identified areas of overlap between several indicators. One panel member remarked that the ranking exercise did not make sense. Another panel member provided additional feedback via e-mail to the investigative team indicating that they found the variation between positive and negative wording in the indicators made assigning a ranking difficult.
4.4 Summative Findings

Of the 156 indicators reviewed in the Round Two online instrument, 74 indicators were ranked in the top half of their respective domains by more than 50% of the panelists, and 14 indicators were ranked in the top half of their respective domains by more than 80% of the panelists (see Table 4-6). The domains within which the rankings showed the most agreement among panelists were Autonomy (about care decisions), Stigma and Discrimination (personal characteristics), and Cultural Support and Family Involvement. The lowest levels of agreement were seen in the domains of Physical Mistreatment, Verbal Mistreatment, and Information and Consent.

Table 4-6  Characteristics of strongly supported indicators

Indicator | Panelists rating indicator in top half of domain (Round Two) (%) | Median rank | Indicator retained unchanged from Round One | Proportion of panelists supporting indicator unchanged (Round One)
The healthcare provider(s) or other staff member(s) made negative comments about my physical appearance (such as my weight, private parts, cleanliness, or other parts of my body)a | 85.7 | 5 | Y | 0.81
During my pregnancy, I felt that I was treated poorly by my healthcare provider(s) BECAUSE of… | | | |
My race, ethnicity, cultural background or languageb | 100.0 | 1 | Y | 0.56
My sexual orientation and/or gender identityb | 90.5 | 2 | Y | 0.75
A difference in opinion with my caregivers about the right care for myself or my babyb | 81.0 | 5 | Y | 0.56
My ageb | 85.7 | 6 | Y | 0.56
My healthcare provider(s) explained to me why they were doing examinations or procedures on mec | 92.0 | 2 | Y | 0.81
The healthcare provider(s) asked for permission before performing a vaginal examinationc | 84.0 | 2 | Y | 0.88
I was able to have the people I wanted supporting me during labour and birthd | 91.7 | 1.5 | N |
My partner, family, or friends were involved as much as I wanted in decisions about care and treatment for me or my babyd | 83.3 | 3 | N |
I felt like an active participant in my labour and deliverye | 86.4 | 2 | N |
Curtains, partitions, or other measures were used to provide privacy for mef | 95.2 | 3 | N |
I had to discuss my care with my healthcare provider(s) in a place that was not privatef | 95.2 | 2 | N |
I was not able to be admitted to the facility of my choice because it was overfilled or did not have enough bedsf | 81.0 | 4 | N |
The birth setting where I gave birth was adequately cleanf | 81.0 | 3 | Y | 0.66

Note. Indicator domains as follows: aVerbal Mistreatment; bStigma and Discrimination (personal characteristics); cPhysical Exams and Procedures; dCultural Support and Family Involvement; eAutonomy (about care decisions); fHealth System Conditions and Constraints (Physical)

The feedback received in relation to survey development, specifically about whether to separate indicators according to the trajectory of care and whether or how to identify healthcare providers, was varied; however, the feedback did indicate what the majority of the panelists thought would be good approaches to these challenges. The majority of the panelists suggested separating the indicators by care timing (prenatal, labour and birth, and postpartum), with careful consideration of respondent burden if indicators would be repeated for different timing. In terms of identifying healthcare providers, the majority of the panelists supported using some method to identify the primary care provider from whom the respondent had received care and gave a variety of suggestions for strategies that could support this survey structure.

4.5 Conclusion

This chapter has described the samples of the Delphi panel members that reviewed the Round One and the Round Two online instruments. The analysis of quantitative and qualitative data collected in both rounds was presented. In general, a small number of indicators were identified by a high proportion of panelists as important.
Panelists showed stronger agreement in supporting indicators related to autonomy, stigma, cultural support and family involvement, and the physical conditions of healthcare services, with less agreement on indicators capturing mistreatment and information and consent. Interpretations of the analytic methods used to determine the level of consensus among panel members, together with the qualitative feedback, will be explored in the next chapter, along with a discussion of the implications of the findings and the strengths and weaknesses of the research method.

Chapter 5: Discussion

5.1 Introduction

This chapter opens with a summary of the findings followed by discussion of the key findings. Reflections on the research method, including strengths and limitations, are included in the discussion. The chapter concludes with implications of these findings for nursing research, practice, and education.

5.2 Study Summary

The Delphi study involved a process with a panel of purposefully selected experts to identify core indicators of RMC in Canadian and other high resource contexts. An online instrument was populated with indicators selected from a literature review of previously developed tools and surveys measuring RMC and perinatal care experience. Two rounds of online instrument review to elicit panelist feedback were completed. In Round One, 18 (part one) and 20 (part two) panelists rated the indicators’ importance for the concept of RMC, relevance in a high-income country context, and clarity of expression using a four-point Likert scale. Panelists were also encouraged to indicate whether there were any missing indicators that could be applicable to RMC. Based on the Round One findings, 10 indicators were eliminated, 79 indicators were retained unchanged, and 112 indicators were modified. Seven indicators were added based on the feedback from the first round. This process resulted in 156 indicators that populated the Round Two online instrument.
I sorted the items into 17 domains with input from my supervisors and a small team from the Birth Place Lab. In Round Two, 26 panelists ranked the priority of the indicators from highest to lowest within each domain. In addition to ranking the indicators, panelists were also asked in this round to comment on whether indicators should be separated based on care timing, how or whether care providers should be identified, and whether any indicators were missing. Findings from the Round Two online instrument showed that 74 indicators were ranked in the top half of their respective domains by greater than 50% of the panelists, and 14 indicators were ranked in the top half of their respective domains by greater than 80% of the panelists.

5.3 Discussion

5.3.1 Agreement or consensus of panel

In the first Delphi round, agreement among the panel members was expressed by calculating the content validity index (CVI) for both the importance and the relevance of each indicator. Using a cut-off value of 0.80, only 10 indicators were eliminated on the basis of low indexes of importance and relevance. The small number of low-indexed indicators suggests that most participants found the indicators included in Round One to be both important and relevant to the measurement of RMC in a Canadian or other high resource country context. However, this finding could also reflect that the response options provided in the Likert scale limited the range of ratings panelists could give for the importance and relevance of the indicators. In terms of clarity of the indicators, the proportion of panel members who supported keeping each indicator as presented was more useful as a discriminatory value in distinguishing which indicators were significant to the panel. For about half the indicators in Round One (n = 98), less than 70% of the panel supported the indicator unchanged. This finding suggests that many indicators were seen by the panelists as unclear or repetitive.
In conjunction with the qualitative feedback given for each indicator, indicators that were repetitive, that were confusing as worded, or that could be merged with other indicators were identified. These findings were vital to the development of the indicators for the Round Two online instrument.

In a traditional Delphi study, round one consists of open-ended questions that generate statements from the panel members, which are rated in subsequent rounds (Keeney et al., 2001; Trevelyan & Robinson, 2015). In this study, this initial step was modified to have panelists review indicators identified through a literature review; this has been previously identified as a modification that may help to control the amount of feedback that panelists give in the first round (Trevelyan & Robinson, 2015). Many modifications of the Delphi method exist (Keeney, Hasson, & McKenna, 2006), and there is debate about whether statements or items should be repeated in subsequent rounds or eliminated based on the level of consensus the items achieve (either low or high levels) (Trevelyan & Robinson, 2015). Because of this flexibility, it is important, as emphasized by some authors (Keeney et al., 2006), to consider research goals when making methodological decisions in the Delphi process.

In the Round Two online instrument, indicators were grouped into seventeen domains, some drawn from the Bohren et al. (2015) mistreatment typology, while others were domains I created inductively. The panelists were then asked to rank the indicators from highest priority to lowest priority within these domains. Kendall’s W was used as a measure of agreement among the panelists. Across the domains, the Kendall’s W ranged from 0.081 (p = 0.209) to 0.639 (p < 0.001) (see Table 4-4), showing very weak to moderate agreement among the panelists (Schmidt, 1997).
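For readers unfamiliar with the statistic, Kendall's W for m panelists each ranking the same n indicators can be computed from the rank sums. The sketch below is illustrative only (it is not the study's analysis code, and this minimal form ignores tied ranks):

```python
# Illustrative sketch of Kendall's coefficient of concordance W for m panelists
# ranking n indicators (rank 1 = highest priority). Ties are not handled here.

def kendalls_w(rankings):
    """rankings: list of m lists, each a permutation of the ranks 1..n."""
    m = len(rankings)          # number of panelists
    n = len(rankings[0])       # number of indicators in the domain
    # Total rank each indicator received across all panelists
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    # Sum of squared deviations of rank sums from their mean
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    # W = 12S / (m^2 (n^3 - n)); W = 1 is perfect agreement, W = 0 is none
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Perfect agreement: three panelists assign identical ranks
print(kendalls_w([[1, 2, 3, 4]] * 3))        # 1.0
# Exactly opposed rankings cancel out
print(kendalls_w([[1, 2, 3], [3, 2, 1]]))    # 0.0
```

Values near zero, as observed in most domains here, indicate that panelists' rank orderings had little in common.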
It has been argued that Kendall’s W, as a measure of agreement between panelists, reflects individuals applying the same judgement in ranking the indicators (Brancheau & Wetherbe, 1987). Additionally, Schmidt (1997) has suggested that in panels of greater than 10 individuals, low values of W can be significant, and thus it is important to interpret consensus based on the value of W and not only the significance level. The values of Kendall’s W in the current study suggest that the panelists found it difficult to rank the indicators according to their priority in measuring RMC, or were inconsistent in the criteria they applied when assigning the rankings. This interpretation is strengthened by the qualitative feedback received with the rankings; panelists felt that the indicators were “all important” or “difficult to rank” and made comments detailing their rationales for how the ranks were assigned. A particularly problematic domain was “Stigma and Discrimination (personal characteristics)”. Several of the panelists left comments indicating that they did not rank these indicators as they were “all important” or could not rank these indicators “in a meaningful manner”. Overall, the number of indicators that were ranked in this round, as well as conceptual challenges with the sorting of the indicators into domains, could also have contributed to the inconsistency in ranking between panelists.

Further analysis of the Round Two online instrument involved an indicator-by-indicator calculation of the percentage of panelists that ranked the indicator in the top half of the list. The distribution of the indicators that were ranked in the top half of the domain (as presented in Table 4-5) shows the difficulty the panelists had in ranking the indicators. In nine out of the 17 domains, the first three indicators presented in the list were ranked in the top half of the domain by over 50% of the panelists.
This finding raises concerns about a potential judgement bias operating in the rankings of the indicators, either due to the presentation of the indicators (primacy effect) (Hallowell & Gambatese, 2010) or fatigue due to the number of indicators in each domain (Smyth et al., 2018).

In the domains where the Kendall’s W showed stronger agreement between panelists, a larger proportion of panelists supported the indicators ranked in the top half of the domain. These strongly supported indicators also had higher median rankings. In the domains where the Kendall’s W showed weaker agreement, the proportions of panelists supporting indicators in the top half of the domain were not as high and the median rankings were more similar across all indicators in the domain. These findings suggest that, within some domains, certain indicators were more highly prioritized by the panelists and could be said to be more significant to the measurement of RMC than others. These findings also suggest that, despite the generally low agreement shown by the Kendall’s W, this analysis of concordance was helpful in identifying the most highly supported indicators.

One goal of the Delphi process is to determine whether consensus is achieved across the rounds. Although the quantitative analysis of the responses was different for the Round One and Round Two online instruments, these analyses suggested that the panel members found the majority of the indicators presented to them to be valid for RMC. Since this outcome was present in both rounds, stability of response between the two rounds can be said to have been achieved. This is supported by the feedback given by the panelists that the indicators presented in the Round Two online instrument were comprehensive and did not lack any major concepts related to the experience of RMC in high income countries.
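The indicator-by-indicator top-half support statistic reported in Tables 4-4 and 4-5 can be sketched as follows; the rankings shown are made-up illustrative data, not study data:

```python
# Hypothetical illustration of the "top-half support" statistic: for each
# indicator, the percentage of panelists who ranked it in the top half
# of its domain (rank 1 = highest priority).

def top_half_support(rankings):
    """rankings: list of m lists of ranks over the same n indicators.
    Returns, per indicator, the % of panelists ranking it in the top half."""
    m = len(rankings)
    n = len(rankings[0])
    cutoff = n / 2  # ranks 1..n/2 count as the top half of the domain
    return [
        100 * sum(1 for r in rankings if r[i] <= cutoff) / m
        for i in range(n)
    ]

# Three panelists ranking four indicators (made-up data)
ranks = [[1, 2, 3, 4],
         [2, 1, 4, 3],
         [1, 3, 2, 4]]
print([round(v, 1) for v in top_half_support(ranks)])  # [100.0, 66.7, 33.3, 0.0]
```

Under this statistic, an indicator with support above 50% was prioritized by a majority of the panel even when overall concordance (Kendall's W) in the domain was low.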
The panel members provided a variety of responses to the additional questions at the end of the Round Two online instrument related to specifying care providers and timing of care. Their responses indicated a range of opinions about how these considerations could be used to structure a potential survey. Some panelists favoured keeping the indicators as general as possible, whereas other panelists wanted to make the indicators very specific to care providers and care timing. Two themes seen in the panelists’ comments were that women see different care providers at different times and that separating indicators by care timing may aid in distinguishing between care providers. Based on these themes, it is possible that these two issues could be linked: specifying care providers in the indicators could potentially also distinguish the care timing. Identifying care providers in an assessment of RMC is an important question because RMC is seen as an issue of care provider behaviour (Downe et al., 2018). Nonetheless, identifying care providers is complicated by the diversity of maternity care structures. Because feedback on the responses to these questions was not returned to the Delphi panel and further feedback was not sought on these issues, stability or consensus cannot be said to have been reached on these issues.

5.3.2 Strongly supported indicators

Table 4-6 presented the 14 indicators that were the most strongly supported by the Delphi panel in Round Two. These were the indicators ranked in the top half of their respective domains by greater than 80% of the panel members. Eight of these 14 indicators were kept unchanged from the Round One online instrument, and for these indicators, the proportion of panelists who wished to keep the indicator unchanged was also presented. This allowed for a comparison of panel support between the two rounds.
Three of these indicators had strong panel support (over 80%) in both rounds, showing good consensus from the panel on these indicators. The indicator “I was able to have the people I wanted supporting me during labour and birth” echoes the WHO recommendation of choice of birth companion for a positive childbirth experience (WHO, 2015b).

5.3.3 Termination criteria for the Delphi method

Stability of response is an important consideration when determining whether additional rounds are needed during a Delphi method study (von der Gracht, 2012). After considering the interpretation of the quantitative and qualitative data, it is possible to say that there was enough stability in responses to justify stopping the review of the indicators after two rounds. In addition to this criterion, the requirement to meet a study timeline affected the decision to stop after two rounds.

5.3.4 Conceptual challenges

The initial group of indicators was selected based on a literature search of previously developed tools and surveys and mapped according to an evidence-based typology of mistreatment in maternal care (Bohren et al., 2015). These themes initially provided an organizational framework for selecting indicators for the first-round online instrument. However, while sorting the indicators for the Round Two online instrument, I found the language of the mistreatment typology limiting in terms of describing domains of respectful care. Analytic challenges with grouping the indicators for the second round emphasized the limitations of using this typology. Several panel members also expressed concerns both about the limited focus of the indicators on mistreatment and about the confusion experienced while trying to evaluate positively worded indicators alongside negatively worded indicators.

Conceptual concerns have been identified and discussed in several ways in the literature surrounding RMC.
Firstly, other authors have argued that Bohren et al.’s (2015) typology encompasses interactions between women and health care providers, health system constraints, and care quality standards; the combination of those elements makes measurement tools based on this typology conceptually vague (Sen et al., 2018). Each of these layers can be said to contribute to disrespect and abuse in some way (Freedman & Kruk, 2014), but they may be difficult to measure using one tool (Sen et al., 2018). The difficulty of identifying intentionality and underlying causes is a further critique of the concept of “respectful maternity care” (Sen et al., 2018). Beyond the absence of conceptual clarity, these critiques expose an important challenge to the development of measurement tools (Sen et al., 2018).

In this Delphi process, I found that panelists overwhelmingly agreed on the importance of the indicators and the comprehensive way in which they addressed respectful maternity care, but they also struggled with the overlapping concepts of intentionality and systemic structures that lead to mistreatment or disrespectful care. An example could be seen in the Round One feedback on the indicator “I was physically injured by a health worker or other staff”. In the feedback on this indicator, panelists identified that this experience could refer to an intentional injury, an injury sustained through non-consented care, an unintentional injury, or an injury sustained through a medical procedure such as episiotomy. Each of these experiences involves a different level of intentionality on the part of the health care provider, but each also exposes the structural component of RMC: power being held by the health care provider.
Other studies have found that some of the most common types of disrespectful care that nurses and doulas witness during labour and birth are carrying out procedures without informed consent from the woman and giving the woman inadequate time to consider proposed interventions before they are carried out (Morton, Henley, Seacrist, & Roth, 2018). These examples show some of the complexity of evaluating RMC, which was articulated in some panelists’ qualitative feedback.

With our indicators, we were able to grapple conceptually with indicators of respectful care in addition to disrespectful care. While our indicators were initially selected to cover the themes identified in a typology of mistreatment (Bohren et al., 2015), our resulting domains expanded beyond these themes over the course of the Delphi process. When compared to other lists of indicators, such as the domains of respectful care developed from a qualitative synthesis of 67 studies (Shakibazadeh et al., 2017) and the WHO list of elements of health system responsiveness (Gostin, Hodge, Valentine, & Nygren-Krug, 2003), the indicators identified in this Delphi study reflect these domains of respectful care and system responsiveness and are not limited to indicators measuring mistreatment. The indicators also reflect the patient perspective of the 10 criteria listed in the FIGO Mother-Baby Friendly Birthing Facilities guidelines, which are founded in a rights-based approach to women receiving positive childbirth experiences (International Federation of Gynecology and Obstetrics, International Confederation of Midwives, White Ribbon Alliance, International Pediatric Association, & World Health Organization, 2015).
While the domains and the corresponding indicators developed for the Round Two online instrument appear to address domains identified in previously developed frameworks, further analysis, such as an exploratory factor analysis, would be needed to determine whether these indicators function together to measure these domains and to demonstrate construct validity (Polit, 2010). Some panelists gave feedback indicating that they felt some indicators belonged in different domains than those to which they were assigned. This feedback suggests that the current organization of these indicators may be problematic. Additionally, a valid question is whether some indicators would have been rated differently if they had been grouped in different domains.

The problem of conceptual clarity persists with these indicators because the items and domains cover a broad range of concepts rooted in a typology of mistreatment (Bohren et al., 2015) as well as expanded domains that reflect respectful care. This raises the question of whether these indicators truly measure experiences of respect, and whether this will be a valid tool to measure this concept when tested for concurrent validity or discriminatory validity (Streiner & Norman, 2008). Some panel members suggested focusing on positive and strength-based wording of the indicators. This may be one way of bringing more conceptual clarity to this group of indicators. However, the absence of mistreatment is an important factor in respectful care (Shakibazadeh et al., 2017). Mistreatment indicators could be reverse-coded to address the need to include this concept among the measurement indicators.

5.3.5 Situating the findings

Several tools have been specifically designed to measure elements of mistreatment and respect in maternity care.
The Mothers Autonomy in Decision Making (MADM) scale (Vedam, Stoll, Martin, et al., 2017) and the Mothers on Respect Index (Vedam, Stoll, Rubashkin, et al., 2017) were both developed in Canada and were based on frameworks of person-centred care. Internationally, tools specific to respectful maternity care have been developed in Iran (Taavoni et al., 2018) and during the recent WHO multi-country study of treatment of women in facility-based care (Bohren et al., 2018). The indicators identified through this Delphi process represent a more comprehensive approach to measuring respect/disrespect because they could be applied across the full spectrum of care that takes place in the childbearing year (antenatal, labour and delivery, postpartum, and newborn care) and touch on domains beyond those of Bohren et al.’s (2015) mistreatment typology. Much of the work around respectful maternity care measurement has been done in low and middle income countries and in relation to facility-based childbirth (e.g. Bohren et al., 2018), and this limitation has been recognized by several authors (Savage & Castro, 2017; Vedam, Stoll, McRae, et al., 2019). This study sought to consider this phenomenon specifically in high resource contexts, especially the Canadian context. Shakibazadeh and colleagues (2017) noted, in their qualitative evidence synthesis, that experiences of respectful care in high resource settings were more related to involvement in, and the right to, decision making in care. It can be argued that this view is reflected in the distribution of indicators in the Round Two online instrument, where 47 indicators relate to decision making and involvement (the domains of autonomy, choice of evidence-based care options, non-verbal communication, verbal communication, and information and consent) but only 22 indicators relate to mistreatment (the domains of physical and verbal mistreatment, and stigma and discrimination).
Additionally, panel feedback suggested that these indicators be developed with the intention of being applicable to births whether they take place in health care facilities or in homes. The Public Health Agency of Canada reported in 2009 that 1.2% of births in Canada happened at home (Public Health Agency of Canada, 2009). More recently, the 2016/2017 Perinatal Health Report for BC indicated that 3.1% of births in the province occurred at home with midwifery care (Perinatal Services BC, 2018). Statistics Canada data from 2017 indicate that 1.98% of births occurred in non-hospital locations (private homes or other care locations not registered as hospitals) (Statistics Canada, n.d.). Although home birth still represents a minority of the births occurring in Canada, it is important to include these cases in the assessment of respectful maternity care.

I found that many of the indicators in the Round Two online instrument were similar to those found in tools developed for use in low resource settings (e.g. Afulani, Diamond-Smith, Golub, & Sudhinaraset, 2017; Bohren et al., 2018). It has been argued that the roots of disrespectful care lie in complex structures of ingrained understandings about gender and the biomedical approach to birth (Sadler et al., 2016), which, while they manifest to different extents in different geopolitical contexts, affect maternity care in a way that is not addressed by the indicators present in current tools and measures that seek to assess RMC (Sen et al., 2018). This gap may limit the ability of RMC measurement to attend to these structures. Strategies to address RMC within the current healthcare structure could include using RMC measurement as an assessment of patient safety (Vedam, Stoll, Rubashkin, et al., 2017) by exposing disrespect, and developing and promoting adherence to context-appropriate care guidelines for respectful care (Miller et al., 2016).
An important consideration in the assessment of indicators in this study was their appropriateness for use in populations that experience stigma. Recent studies in high-resource contexts that have sought to describe the prevalence of mistreatment have focused on these populations (Vedam, Stoll, Mcrae, et al., 2019; Vedam, Stoll, Taiwo, et al., 2019). Specifically, the Giving Voice to Mothers study, carried out with an American population, reported findings that linked ethnicity with higher rates of mistreatment in maternity care (Vedam, Stoll, Taiwo, et al., 2019). The intersectional nature of characteristics that may be associated with respect or disrespect requires further investigation (Vedam, Stoll, Taiwo, et al., 2019), and the indicators reviewed in this study could be useful for measuring this experience.  5.3.6 Research method 5.3.6.1 Strengths  The Delphi method has been used in many contexts of health care research, including to develop quality indicators (Boulkedid et al., 2011) and survey tools (e.g., Li et al., 2016). The method involves iterative rounds with a panel of experts whose opinions are kept anonymous (Trevelyan & Robinson, 2015). In this study, along with the investigative team, I chose to include experts from a range of backgrounds in RMC and women's health research and practice, as well as community members, to create a heterogeneous panel. This heterogeneity contributes to the validity of the results of this Delphi procedure because a wide variety of stakeholders were involved in the assessment of the indicators (Boulkedid et al., 2011).   A review of 80 studies using the Delphi method to select health care quality indicators identified validity as the most common criterion used to select indicators over the rounds (Boulkedid et al., 2011). These authors defined validity as "the extent to which the characteristics of the indicator are appropriate for the concept being assessed" (Boulkedid et al., 2011, p. 5).
The present study applied this criterion by using ratings of importance, relevance, and clarity, followed by a prioritization round, to identify indicators of RMC. Both the Round One and Round Two online instruments collected data used to identify the most appropriate indicators. The two-round method presented here attended to feasibility concerns around attrition and response rates, another key consideration of the Delphi method (Trevelyan & Robinson, 2015).  5.3.6.2 Limitations  Several panel members indicated that the Round One online instrument, which measured the importance, relevance, and clarity of the items using the response options we gave (see Appendix C), was very difficult to complete due to its length and the difficulty of identifying overlap and redundancy among so many indicators. They also indicated there was limited room for nuance in the ranking of the indicators (for example, saying an item was important but not most important). Additionally, the use of logic branching, which allowed panelists who gave indicators low rankings of importance and relevance to skip evaluating the clarity of those items, affected the reliability of the clarity rankings. The indicators in the Round One online instrument were presented to the panelists using wording and response options identical to their sources. The direct wording of some of the indicators and the focus on mistreatment were troublesome to some panel members, who were concerned that the indicators did not seem to focus on the concept of respectful care. Furthermore, many panelists commented on the appropriateness of the response options and the language used in the indicators to refer to care providers (e.g., doctor or midwife, hospital staff, care workers).
While some of these comments were used to inspire the open-ended questions in Round Two about how or whether to specify care providers, I argue that the form these indicators took distracted many panelists from assessing the importance and relevance of the concepts the indicators represented. Furthermore, the assessment of nursing care was lacking, a gap that was identified by one panelist in Round Two who stated that "Nurses have a very different role than MDs or midwives…they have the most contact with patients and have higher potential for disrespect/mistreatment in many settings."  In the development of the Round One online instrument, indicators were drawn individually from existing surveys and tools and selected for inclusion based on their applicability to the domains of the mistreatment typology (Bohren et al., 2015). The exceptions to this practice were two scales, the Mothers on Respect index (MORi) (Vedam, Stoll, Rubashkin, et al., 2017) and the Mother's Autonomy in Decision Making (MADM) scale (Vedam, Stoll, Martin, et al., 2017). These scales were assessed as wholes in both the Round One and Round Two online instruments instead of by indicator. While they were well rated in their entirety by the expert panel, there is potential for bias because there was no opportunity to break the scales into parts. Thus, their presentation may have changed the way that panel members evaluated these indicators in comparison with other individual, independent indicators.  In the Delphi method, it is important to compare findings between rounds to assess panel consensus and the stability of responses (von der Gracht, 2012). Because the quantitative data collected in the first and second rounds were different in nature, there is no way to compare them statistically to test for change in agreement.
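To make the between-round comparison concrete, the sketch below illustrates one common Delphi consensus check: percent agreement around the panel median, applied to two identically formatted rating rounds. The indicator name, ratings, 75% consensus threshold, and 15-point stability band are all hypothetical assumptions for illustration, not values from this study.

```python
# Minimal sketch of a between-round Delphi consensus check (hypothetical data).
# "Consensus" here means >= 75% of panelists rate within one point of the
# panel median; "stability" means the agreement level changes little between
# rounds. Thresholds and ratings are illustrative, not from the present study.

def percent_agreement(ratings, target, tolerance=1):
    """Proportion of panelists whose rating is within `tolerance` of `target`."""
    return sum(abs(r - target) <= tolerance for r in ratings) / len(ratings)

def round_stability(round1, round2, consensus=0.75, band=0.15):
    """Compare per-indicator agreement across two identically formatted rounds."""
    results = {}
    for indicator, r1 in round1.items():
        r2 = round2[indicator]
        med1 = sorted(r1)[len(r1) // 2]  # simple upper-median shortcut
        med2 = sorted(r2)[len(r2) // 2]
        a1 = percent_agreement(r1, med1)
        a2 = percent_agreement(r2, med2)
        results[indicator] = {
            "consensus_r1": a1 >= consensus,
            "consensus_r2": a2 >= consensus,
            "stable": abs(a1 - a2) <= band,
        }
    return results

# Hypothetical 5-point importance ratings from a seven-member panel.
round_one = {"autonomy_01": [5, 5, 4, 5, 4, 5, 3]}
round_two = {"autonomy_01": [5, 4, 5, 5, 4, 5, 4]}
print(round_stability(round_one, round_two))
```

A check of this kind only becomes possible when both rounds share a response format; because this study collected ratings in Round One and prioritization rankings in Round Two, no such direct comparison could be applied.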
The findings would have been strengthened by carrying out both rounds as prioritization rounds, or by adding a third prioritization round that also incorporated feedback about timing and care providers. That approach would have allowed both the identification of the most important indicators and the ability to statistically compare findings for consensus between the rounds. The change in assessment between the rounds in this study threatened the rigour of the Delphi method because consensus between rounds is more difficult to demonstrate (Boulkedid et al., 2011).  5.4 Implications 5.4.1 Research  The findings of this Delphi study contribute to reducing a geographical gap in RMC measurement (Vedam, Stoll, Rubashkin, et al., 2017) by identifying indicators of RMC that are relevant to the high-resource context and, further, by intentionally including diverse (and marginalized) perspectives in the process. The broader set of domains developed in this Delphi process also contributes to the conceptual understanding of RMC, a concept that is described using a variety of language, ranging from framing the lack of RMC as a form of violence to language that promotes cooperation with providers (Sen et al., 2018).  While the experience of respect is important to measure from a patient's perspective, in the context of structurally embedded disrespect in the maternity care system, some authors have suggested that observational tools are necessary for a full description of this phenomenon (Savage & Castro, 2017). A recent tool developed to measure RMC has included both a survey component and an observational component (Bohren et al., 2018). The indicators and domains included in this Delphi study could have been expanded to include observational indicators or to assess care providers' perspectives on what enables or prevents them from providing respectful care.
A possible research question that could address this perspective might be: what are the barriers to and facilitators of respectful care from the point of view of maternity care providers?  5.4.2 Practice  The indicators identified through this Delphi process draw attention to practices in maternity care that contribute to experiences of respect or disrespect during women's pregnancies, births, and postpartum periods. Feedback from one panel member spoke to concerns about the management of risk in the obstetric context and how it may affect care providers' ability to provide respectful care according to these indicators. Other research has explored how decision making is affected by relationships with providers and by the way evidence is communicated from a risk perspective; both are important aspects of how women and providers manage birth care (Hall, Tomkinson, & Klein, 2012). The indicators identified through this study can be used to further describe this phenomenon.  5.4.3 Education  The education and socialization of health care professionals are important components of the underlying structures that allow for systemic disrespect of women undergoing reproductive care (Sadler et al., 2016). The indicators identified through this Delphi method can point out areas requiring awareness in order to critically examine and undermine these socialized structures. This practice could be incorporated into both formative and continuing education for health care providers. An example of these areas might be communication techniques in practice based on attention to the domains of Verbal Communication and Information and Consent. Teaching models that focus on woman-centred care during the childbearing cycle (Sen et al., 2018) could be used to adapt formative education to promote RMC.  5.5 Summary  The findings of this Delphi study indicate that an expert panel reached some consensus on indicators for RMC measurement in a high resource context.
The heterogeneity of the panel experts contributed to the validity of the consensus, and the method followed several standards of rigour associated with the Delphi method. Findings would have been strengthened by applying the same online instrument format and type of analysis in both rounds and through more attention to developing conceptual clarity. Naming these indicators as determinants of RMC draws attention to disrespectful practices in perinatal care and to gaps in health care provider education. Implications for further research include expanding the indicators to include observations of care and caregiver experiences.  References Abuya, T., Warren, C. E., Miller, N., Njuki, R., Ndwiga, C., Maranga, A., … Bellows, B. (2015). Exploring the prevalence of disrespect and abuse during childbirth in Kenya. PloS One, 10(4). https://doi.org/10.1371/journal.pone.0123606 Afulani, P. A., Diamond-Smith, N., Golub, G., & Sudhinaraset, M. (2017). Development of a tool to measure person-centered maternity care in developing settings: validation in a rural and urban Kenyan population. Reproductive Health, 14(1), 118. https://doi.org/10.1186/s12978-017-0381-7 Alwin, D. F., & Krosnick, J. A. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. The Public Opinion Quarterly, 52(4), 526–538. Attanasio, L., & Kozhimannil, K. B. (2015). Patient-reported communication quality and perceived discrimination in maternity care. Medical Care, 53(10), 863–871. https://doi.org/10.1097/MLR.0000000000000411 Beck, C. T. (2004). Birth trauma: in the eye of the beholder. Nursing Research, 53(1), 28–35. Bohren, M. A., Vogel, J. P., Fawole, B., Maya, E. T., Maung, T. M., Baldé, M. D., … Tunçalp, Ö. (2018). Methodological development of tools to measure how women are treated during facility-based childbirth in four countries: labor observation and community survey.
BMC Medical Research Methodology, 18(1). https://doi.org/10.1186/s12874-018-0603-x Bohren, M. A., Vogel, J. P., Hunter, E. C., Lutsiv, O., Makh, S. K., Souza, J. P., … Gülmezoglu, A. M. (2015). The mistreatment of women during childbirth in health facilities globally: A mixed-methods systematic review. PLoS Medicine, 12(6), 1–32. https://doi.org/10.1371/journal.pmed.1001847 Boulkedid, R., Abdoul, H., Loustau, M., Sibony, O., & Alberti, C. (2011). Using and reporting the Delphi method for selecting healthcare quality indicators: A systematic review. PLoS ONE, 6(6). https://doi.org/10.1371/journal.pone.0020476 Bowser, D., & Hill, K. (2010). Exploring evidence for disrespect and abuse in facility-based childbirth. Brancheau, J. C., & Wetherbe, J. C. (1987). Key issues in information systems management. MIS Quarterly, 11(1), 23–45. Canadian Institute for Health Information (CIHI). (2017). Canadian Patient Experiences Survey – Inpatient Care (CPES-IC). Retrieved from https://www.cihi.ca/en/access-data-reports/results?f%5B0%5D=field_primary_theme%3A2065 Clark, K., Beatty, S., & Reibel, T. (2016). Maternity-care: measuring women's perceptions. International Journal of Health Care Quality Assurance, 29(1), 89–99. Dencker, A., Taft, C., Bergqvist, L., Lilja, H., & Berg, M. (2010). Childbirth experience questionnaire (CEQ): development and evaluation of a multidimensional instrument. BMC Pregnancy and Childbirth, 10(81). https://doi.org/10.1186/1471-2393-10-81 Downe, S., Lawrie, T. A., Finlayson, K., & Oladapo, O. T. (2018). Effectiveness of respectful care policies for women using routine intrapartum services: A systematic review. Reproductive Health, 15(1), 1–13. https://doi.org/10.1186/s12978-018-0466-y Escuriet, R. R., White, J., Beeckman, K., Frith, L., Leon-Larios, F., Loytved, C., … EU COST Action IS0907 "Childbirth Cultures and Consequences." (2015).
Assessing the performance of maternity care in Europe: a critical exploration of tools and indicators. BMC Health Services Research, 15(1), 491. https://doi.org/10.1186/s12913-015-1151-2 Fereday, J., Collins, C., Turnbull, D., Pincombe, J., & Oster, C. (2009). An evaluation of midwifery group practice. Part II: women's satisfaction. Women and Birth: Journal of the Australian College of Midwives, 22(1), 11–16. https://doi.org/10.1016/j.wombi.2008.08.001 Freedman, L. P., & Kruk, M. E. (2014). Disrespect and abuse of women in childbirth: Challenging the global quality and accountability agendas. The Lancet, 384(9948), e42–e44. https://doi.org/10.1016/S0140-6736(14)60859-X Freedman, L. P., Ramsey, K., Abuya, T., Bellows, B., Ndwiga, C., Warren, C. E., … Mbaruku, G. (2014). Defining disrespect and abuse of women in childbirth: a research, policy and rights agenda. Bulletin of the World Health Organization, 92, 915–917. https://doi.org/10.2471/BLT.14.137869 Garrard, F., & Narayan, H. (2013). Assessing obstetric patient experience: a SERVQUAL questionnaire. International Journal of Health Care Quality Assurance, 26(7), 582–592. https://doi.org/10.1108/IJHCQA-08-2011-0049 Gostin, L., Hodge, J. G. J., Valentine, N., & Nygren-Krug, H. (2003). The domains of health responsiveness – A human rights analysis (EIP Discussion Paper No. 53). Retrieved from https://www.who.int/responsiveness/papers/human_rights.pdf?ua=1 Gungor, I., & Beji, N. K. (2012). Development and psychometric testing of the scales for measuring maternal satisfaction in normal and caesarean birth. Midwifery, 28(3), 348–357. https://doi.org/10.1016/j.midw.2011.03.009 Hall, W. A., Tomkinson, J., & Klein, M. C. (2012). Canadian care providers' and pregnant women's approaches to managing birth: Minimizing risk while maximizing integrity. Qualitative Health Research, 22(5), 575–586. https://doi.org/10.1177/1049732311424292 Hallowell, M.
R., & Gambatese, J. A. (2010). Qualitative research: Application of the Delphi method to CEM research. Journal of Construction Engineering and Management, 136(1), 99–107. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000137 Hankivsky, O., Grace, D., Hunting, G., Giesbrecht, M., Fridkin, A., Rudrum, S., … Clark, N. (2014). An intersectionality-based policy analysis framework: Critical reflections on a methodology for advancing equity. International Journal for Equity in Health, 13(1), 1–16. https://doi.org/10.1186/s12939-014-0119-x Heaman, M. I., Sword, W. A., Akhtar-Danesh, N., Bradford, A., Tough, S., Janssen, P. A., … Helewa, M. E. (2014a). Quality of prenatal care questionnaire: instrument development and testing. BMC Pregnancy and Childbirth, 14(188), 1–16. https://doi.org/10.1186/1471-2393-14-188 Heaman, M. I., Sword, W. A., Akhtar-Danesh, N., Bradford, A., Tough, S., Janssen, P. A., … Helewa, M. E. (2014b). Quality of prenatal care questionnaire: Instrument development and testing. BMC Pregnancy and Childbirth, 14(1), 1–16. https://doi.org/10.1186/1471-2393-14-188 Heatley, M. L., Watson, B., Gallois, C., & Miller, Y. D. (2015a). Women's perceptions of communication in pregnancy and childbirth: Influences on participation and satisfaction with care. Journal of Health Communication, 20(7), 827–834. https://doi.org/10.1080/10810730.2015.1018587 Heatley, M. L., Watson, B., Gallois, C., & Miller, Y. D. (2015b). Women's perceptions of communication in pregnancy and childbirth: Influences on participation and satisfaction with care. Journal of Health Communication, 20(7), 827–834. https://doi.org/10.1080/10810730.2015.1018587 Hekkert, K. D., Cihangir, S., Kleefstra, S. M., van den Berg, B., & Kool, R. B. (2009). Patient satisfaction revisited: A multilevel approach. Social Science and Medicine, 69(1), 68–75. https://doi.org/10.1016/j.socscimed.2009.04.016 Hollins Martin, C. J., & Martin, C. R. (2014).
Development and psychometric properties of the Birth Satisfaction Scale-Revised (BSS-R). Midwifery, 30(6), 610–619. https://doi.org/10.1016/j.midw.2013.10.006 International Federation of Gynecology and Obstetrics, International Confederation of Midwives, White Ribbon Alliance, International Pediatric Association, & World Health Organization. (2015). Mother–baby friendly birthing facilities. International Journal of Gynecology and Obstetrics, 128, 95–99. Janssen, P. A., Dennis, C.-L., & Reime, B. (2006). Development and psychometric testing of The Care in Obstetrics: Measure for Testing Satisfaction (COMFORTS) scale. Research in Nursing & Health, 29(1), 51–60. https://doi.org/10.1002/nur Jewkes, R., & Penn-Kekana, L. (2015). Mistreatment of women in childbirth: Time for action on this important dimension of violence against women. PLoS Medicine, 12(6), 6–9. https://doi.org/10.1371/journal.pmed.1001849 Keeney, S., Hasson, F., & McKenna, H. (2006). Consulting the oracle: ten lessons from using the Delphi technique in nursing research. Journal of Advanced Nursing, 52(2), 205–212. https://doi.org/10.1111/j.1365-2648.2006.03716.x Keeney, S., Hasson, F., & McKenna, H. P. (2001). A critical review of the Delphi technique as a research methodology for nursing. International Journal of Nursing Studies, 38(2), 195–200. https://doi.org/10.1016/S0020-7489(00)00044-4 Khosla, R., Zampas, C., Joshua, P., Bohren, M. A., Roseman, M., & Erdman, J. N. (2016). International human rights and the mistreatment of women during childbirth. Li, H., Liu, Y. L., Qiu, L., Chen, Q. L., Wu, J. B., Chen, L. L., & Li, N. (2016). Nurses' empowerment scale for ICU patients' families: an instrument development study. Nursing in Critical Care, 21(5), e11–e21. https://doi.org/10.1111/nicc.12106 Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35, 382–386. https://doi.org/10.1097/00006199-198611000-00017 Manary, M.
P., Boulding, W., Staelin, R., & Glickman, S. W. (2013). The patient experience and health outcomes. New England Journal of Medicine, 368(3), 201–203. https://doi.org/10.1056/NEJMp1213134 Maputle, M. S., & Donavon, H. (2013). Woman-centred care in childbirth: A concept analysis (Part 1). Curationis, 36(1), 1–9. https://doi.org/10.4102/curationis.v36i1.49 McConville, B. (2014). Respectful maternity care – how the UK is learning from the developing world. Midwifery, 30(2), 154–157. https://doi.org/10.1016/j.midw.2013.12.002 Miller, S., Abalos, E., Chamillard, M., Ciapponi, A., Colaci, D., Comandé, D., … Althabe, F. (2016). Beyond too little, too late and too much, too soon: a pathway towards evidence-based, respectful maternity care worldwide. The Lancet, 388(10056), 2176–2192. https://doi.org/10.1016/S0140-6736(16)31472-6 Morse, J. M. (2015). Analytic strategies and sample size. Qualitative Health Research, 25(10), 1317–1318. https://doi.org/10.1177/1049732315602867 Morton, C. H., Henley, M. M., Seacrist, M., & Roth, L. M. (2018). Bearing witness: United States and Canadian maternity support workers' observations of disrespectful care in childbirth. Birth, 45(3), 263–274. https://doi.org/10.1111/birt.12373 Parfitt, Y. M., & Ayers, S. (2009). The effect of post-natal symptoms of post-traumatic stress and depression on the couple's relationship and parent-baby bond. Journal of Reproductive and Infant Psychology, 27(2), 127–142. https://doi.org/10.1080/02646830802350831 Perinatal Services BC. (2018). Perinatal Health Report: Deliveries in Vancouver Coastal Health 2014/15. Retrieved from http://www.perinatalservicesbc.ca/Documents/Data-Surveillance/Reports/PHR/PHR_VCH_Deliveries_2014_15.pdf Polit, D. F. (2010). Statistics and data analysis for nursing research (2nd ed.). Upper Saddle River, New Jersey: Pearson Education Inc. Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations.
Research in Nursing & Health, 29, 489–497. https://doi.org/10.1002/nur.20147 Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing research (10th ed.). Philadelphia, Pennsylvania: Wolters Kluwer. Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30, 459–467. https://doi.org/10.1002/nur.20199 Pope, C., Ziebland, S., & Mays, N. (2000). Qualitative research in health care: Analysing qualitative data. British Medical Journal, 320, 114–116. https://doi.org/10.1136/bmj.320.7227.114 Public Health Agency of Canada. (2009). What Mothers Say: The Canadian Maternity Experiences Survey. Retrieved from http://www.publichealth.gc.ca/mes Raj, A., Dey, A., Boyce, S., Seth, A., Bora, S., Chandurkar, D., … Silverman, J. G. (2017). Associations between mistreatment by a provider during childbirth and maternal health complications in Uttar Pradesh, India. Maternal and Child Health Journal, 21(9), 1821–1833. https://doi.org/10.1007/s10995-017-2298-8 Rubashkin, N., Szebik, I., Baji, P., Szántó, Z., Susánszky, É., & Vedam, S. (2017). Assessing quality of maternity care in Hungary: expert validation and testing of the mother-centered prenatal care (MCPC) survey instrument. Reproductive Health, 14(1), 152. https://doi.org/10.1186/s12978-017-0413-3 Sadler, M., Santos, M. J., Ruiz-Berdún, D., Rojas, G. L., Skoko, E., Gillen, P., & Clausen, J. A. (2016). Moving beyond disrespect and abuse: addressing the structural dimensions of obstetric violence. Reproductive Health Matters, 24(47), 47–55. https://doi.org/10.1016/j.rhm.2016.04.002 Savage, V., & Castro, A. (2017). Measuring mistreatment of women during childbirth: A review of terminology and methodological approaches. Reproductive Health, 14(1), 1–27. https://doi.org/10.1186/s12978-017-0403-5 Scheerhagen, M., Van Stel, H.
F., Birnie, E., Franx, A., & Bonsel, G. J. (2015). Measuring client experiences in maternity care under change: Development of a questionnaire based on the WHO responsiveness model. PLoS ONE, 10(2), 1–19. https://doi.org/10.1371/journal.pone.0117031 Schmidt, R. C. (1997). Managing Delphi surveys using nonparametric statistical techniques. Decision Sciences, 28(3), 763–774. https://doi.org/10.1111/j.1540-5915.1997.tb01330.x Sen, G., Reddy, B., & Iyer, A. (2018). Beyond measurement: the drivers of disrespect and abuse in obstetric care. Reproductive Health Matters. https://doi.org/10.1080/09688080.2018.1508173 Shakibazadeh, E., Namadian, M., Bohren, M. A., Vogel, J. P., Rashidian, A., Pileggi, V. N., … Gülmezoglu, A. M. (2017). Respectful care during childbirth in health facilities globally: a qualitative evidence synthesis. BJOG: An International Journal of Obstetrics & Gynaecology. https://doi.org/10.1111/1471-0528.15015 Shaw, D., Guise, J. M., Shah, N., Gemzell-Danielsson, K., Joseph, K. S., Levy, B., … Main, E. K. (2016). Drivers of maternity care in high-income countries: can health systems support woman-centred care? The Lancet, 388(10057), 2282–2295. https://doi.org/10.1016/S0140-6736(16)31527-6 Sheferaw, E. D., Mengesha, T. Z., & Wase, S. B. (2016). Development of a tool to measure women's perception of respectful maternity care in public health facilities. BMC Pregnancy & Childbirth, 16, 1–8. https://doi.org/10.1186/s12884-016-0848-5 Sjetne, I. S., Iversen, H. H., & Kjøllesdal, J. G. (2015). A questionnaire to measure women's experiences with pregnancy, birth and postnatal care: Instrument development and assessment following a national survey in Norway. BMC Pregnancy and Childbirth, 15(1), 1–11. https://doi.org/10.1186/s12884-015-0611-3 Skulmoski, G. J., Hartman, F. T., & Krahn, J. (2007). The Delphi method for graduate research. Journal of Information Technology Education, 6, 1–21. Slocumb, E. M., & Cole, F. L.
(1991). A practical approach to content validation. Applied Nursing Research, 4(4), 192–195. Smyth, J. D., Olson, K., & Burke, A. (2018). Comparing survey ranking question formats in mail surveys. International Journal of Market Research, 60(5), 502–516. https://doi.org/10.1177/1470785318767286 Statistics Canada. (n.d.). Live births and fetal deaths (stillbirths), by place of birth (hospital or non-hospital). https://doi.org/10.25318/1310042901-eng Stevens, N. R., Wallston, K. A., & Hamilton, N. A. (2012). Perceived control and maternal satisfaction with childbirth: a measure development study. Journal of Psychosomatic Obstetrics and Gynaecology, 33(1), 15–24. https://doi.org/10.3109/0167482X.2011.652996 Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: a practical guide to their development and use (4th ed.). New York, New York: Oxford University Press. Taavoni, S., Goldani, Z., Rostami Gooran, N., & Haghani, H. (2018). Development and assessment of respectful maternity care questionnaire in Iran. International Journal of Community Based Nursing and Midwifery, 6(4), 334–349. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/30465006 The White Ribbon Alliance for Safe Motherhood. (2011). Respectful maternity care: The universal rights of childbearing women. Retrieved from http://whiteribbonalliance.org/wp-content/uploads/2013/10/Final_RMC_Charter.pdf Trevelyan, E. G., & Robinson, N. (2015). Delphi methodology in health research: How to do it? European Journal of Integrative Medicine, 7(4), 423–428. https://doi.org/10.1016/j.eujim.2015.07.002 van der Kooy, J., Valentine, N. B., Birnie, E., Vujkovic, M., de Graaf, J. P., Denktaş, S., … Bonsel, G. J. (2014).
Validity of a questionnaire measuring the World Health Organization concept of health system responsiveness with respect to perinatal services in the Dutch obstetric care system. BMC Health Services Research, 14(1), 622. https://doi.org/10.1186/s12913-014-0622-1 Vedam, S., Stoll, K., Martin, K., Rubashkin, N., Partridge, S., Thordarson, D., … Changing Childbirth in BC Steering Council. (2017). The Mother's Autonomy in Decision Making (MADM) scale: Patient-led development and psychometric testing of a new instrument to evaluate experience of maternity care. PloS One, 12(2), e0171804. https://doi.org/10.1371/journal.pone.0171804 Vedam, S., Stoll, K., Mcrae, D. N., Korchinski, M., Velasquez, R., Wang, J., … Elwood, R. (2019). Patient-led decision making: Measuring autonomy and respect in Canadian maternity care. Patient Education and Counseling, 102, 586–594. https://doi.org/10.1016/j.pec.2018.10.023 Vedam, S., Stoll, K., Rubashkin, N., Martin, K., Miller-Vedam, Z., Hayes-Klein, H., … Changing Childbirth in BC Steering Council. (2017). The Mothers on Respect (MOR) index: measuring quality, safety, and human rights in childbirth. SSM – Population Health, 3, 201–210. https://doi.org/10.1016/j.ssmph.2017.01.005 Vedam, S., Stoll, K., Taiwo, T. K., Rubashkin, N., Cheyney, M., Strauss, N., … Schummers, L. (2019). The Giving Voice to Mothers study: inequity and mistreatment during pregnancy and childbirth in the United States. Reproductive Health, 16(77), 1–18. Verreault, N., Da Costa, D., Marchand, A., Ireland, K., Banack, H., Dritsa, M., & Khalifé, S. (2012). PTSD following childbirth: A prospective study of incidence and risk factors in Canadian women. Journal of Psychosomatic Research, 73(4), 257–263. https://doi.org/10.1016/j.jpsychores.2012.07.010 von der Gracht, H. A. (2012). Consensus measurement in Delphi studies. Review and implications for future quality assurance. Technological Forecasting and Social Change, 79(8), 1525–1536.
https://doi.org/10.1016/j.techfore.2012.04.013 Warren, C. E., Njue, R., Ndwiga, C., & Abuya, T. (2017). Manifestations and drivers of mistreatment of women during childbirth in Kenya: Implications for measurement and developing interventions. BMC Pregnancy and Childbirth, 17(1), 1–14. https://doi.org/10.1186/s12884-017-1288-6 WHO. (2015). The prevention and elimination of disrespect and abuse during facility-based childbirth. WHO statement: Every woman has the right to the highest attainable standard of health, which includes the right to dignified, respectful health care (WHO/RHR/14.23). Geneva: World Health Organization. WHO. (2016). Standards for improving quality of maternal and newborn care in health facilities. Geneva: World Health Organization. Wong, S. T., & Haggerty, J. L. (2013). Measuring patient experiences in primary health care. Health Services and Policy Research, 1–34. https://doi.org/10.14288/1.0048528 Wool, C. (2015a). Instrument development: Parental satisfaction and quality indicators of perinatal palliative care. Journal of Hospice and Palliative Nursing, 17(4), 301–308. https://doi.org/10.1097/NJH.0000000000000163 Wool, C. (2015b). Instrument psychometrics: Parental satisfaction and quality indicators of perinatal palliative care. Journal of Palliative Medicine, 18(10), 872–877. https://doi.org/10.1089/jpm.2015.0135 Yelland, J. S., Sutherland, G. A., & Brown, S. J. (2012). Women's experience of discrimination in Australian perinatal care: The double disadvantage of social adversity and unequal care. Birth, 39(3), 211–220.
https://doi.org/10.1111/j.1523-536X.2012.00550.x

Appendices

Appendix A   Complete Search Histories

CINAHL

Search Number  Description (terms used, searches combined)
S1  (MH "Childbirth") OR (MH "Term Birth") OR (MH "Childbirth, Premature")
S2  (MH "Maternal-Child Care") OR (MH "Prepregnancy Care") OR (MH "Prenatal Care") OR (MH "Lactation Suppression") OR (MH "Postnatal Care") OR (MH "Obstetric Care") OR (MH "Birthing Positions") OR (MH "Delivery, Obstetric") OR (MH "Breech Delivery") OR (MH "Vaginal Birth") OR (MH "Vaginal Birth After Cesarean") OR (MH "Version, Fetal") OR (MH "Intrapartum Care") OR (MH "Amnioinfusions") OR (MH "Labor, Induced") OR (MH "Fetal Membranes, Artificial Rupture") OR (MH "Labor Support") OR (MH "Management of Labor") OR (MH "Pushing (Childbirth)") OR (MH "Umbilical Cord Clamping") OR (MH "Perinatal Care")
S3  TI ( childbirth* OR labour* OR birth* OR labor* OR pregnan* ) OR AB ( childbirth* OR labour* OR birth* OR labor* OR pregnan* )
S4  S1 OR S2 OR S3
S5  (MH "Respect")
S6  (MH "Human Dignity")
S7  (MH "Patient Rights") OR (MH "Treatment Refusal") OR (MH "Patient Autonomy") OR (MH "Patient Access to Records") OR (MH "Right to Die") OR (MH "Right to Life") OR (MH "Fetal Rights") OR (MH "Women's Rights")
S8  TI ( (Women* OR woman* OR patient*) N2 (discriminat* OR respect* OR disrespect* OR mistreat*) ) OR AB ( (Women* OR woman* OR patient*) N2 (discriminat* OR respect* OR disrespect* OR mistreat*) )
S9  TI (
"patient* right*" OR "wom#n* right*" ) OR AB ( "patient* right*" OR "wom#n* right*" )   S10  S5 OR S6 OR S7 OR S8 OR S9   S11  S4 AND S10   S12  (MH "Research Instruments+")   S13  TI ( questionnaire OR survey OR scale OR instrument* ) OR AB ( questionnaire OR survey OR scale OR instrument* )   S14  TI ( scale develop* OR scale valid* OR item valid* OR scale valid* ) OR AB ( scale develop* OR scale valid* OR item valid* OR scale valid* )   S15  S12 OR S13 OR S14   S16  S4 AND S10 AND S15   86  S17  S4 AND S10 AND S15   S18  Employ* or job or work* or occupation* or tax* or "lab#r market" or antitrust   S19  S17 NOT S18   S20  "Birthweight" or "Birth weight" or "Birth order" or "Birth cohort"   S21  S19 NOT S20   S22  Diet or Nutrition or death or die* or dying   S23  S21 NOT S22   S24  parenting or children   S25 S23 NOT S24    Medline (OVID)  Search Number Description (terms used, searches combined) 1 exp Parturition/ 2  maternal health services/ or maternal-child health services/ or perinatal care/ or postnatal care/ or prenatal care/ 3  exp LABOR, OBSTETRIC/ or exp DELIVERY, OBSTETRIC/ or exp OBSTETRIC LABOR, PREMATURE/ or exp OBSTETRIC LABOR COMPLICATIONS/ 4 (childbirth* or birth* or labour* or labor or laboring or labors).ti,ab. 5 1 or 2 or 3 or 4 6 exp "Surveys and Questionnaires"/ 7 (questionnaire* or survey* or scale* or instrument*).ti,ab. 8 ("patient* right*" or "wom#n* right*").ti,ab. 9  ((Women* or woman* or patient* or mother* or matern*) adj2 (discriminat* or respect or respectful* or respecting or respects or respected or disrespect* or mistreat*)).ti,ab. 
10  exp dehumanization/ or incivility/ or exp prejudice/ or stereotyping/
11  "Discrimination (Psychology)"/
12  confidentiality/ or informed consent/ or treatment refusal/ or women's rights/
13  nurse-patient relations/ or physician-patient relations/
14  quality indicators, health care/ or risk adjustment/ or "standard of care"/
15  Personal Satisfaction/
16  (respect or respectful or respecting or respected).ti,ab.
17  6 or 7
18  8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16
19  5 and 17 and 18
20  19 and "Journal Article".sa_pubt.
21  8 or 9 or 10 or 11 or 12 or 13 or 14 or 15
22  5 and 17 and 21
23  19 not 22
24  (Employ* or job or work* or occupation* or tax* or "lab#r market" or antitrust).mp. [mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
25  19 not 24
26  ("Birthweight" or "Birth weight" or "Birth order" or "Birth cohort").mp. [mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
27  25 not 26
28  (clustering or chemistry or mice or optics or dental or ontology or dimorph*).mp. [mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
29  27 not 28
30  limit 29 to (english language and humans)
31  (Diet or Nutrition or death or die* or dying).mp. 
[mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
32  29 not 31
33  limit 32 to (english language and humans)
34  (parenting or children).mp. [mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
35  32 not 34
36  limit 35 to (english language and humans)

Appendix B  List of Round One indicators

Physical and Sexual Abuse  Q1 I was physically injured by a health worker or other staff (Taavoni et al., 2018) Q2 I was forced to stay in my bed (Taavoni et al., 2018) Q3 I was held down to the bed forcefully by a health worker or other staff *Appendix 2 of (Bohren et al., 2018)  Q4 I was physically tied to the bed by a health worker or other staff * Q5 I had forceful downward pressure placed on my abdomen before the baby came out (fundal pressure) * Q6 I had another form of physical force used against me (please specify) * Q7 A healthcare provider treated me roughly (Afulani et al., 2017) Q8 I experienced physical abuse, such as aggressive physical contact, being pinched, etc. 
GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q9 I experienced inappropriate sexual conduct GVTM-US (Vedam, Stoll, Taiwo, et al., 2019)  Verbal Abuse Q10 The health worker or other staff member made negative comments to me regarding my sexual activity * 90  Q11 The health worker or other staff member made negative comments about my physical appearance (such as my weight, private parts, cleanliness or other parts of my body) * Q12 The health worker or other staff member made negative comments about my baby's physical appearance * Q13 I was mocked by a health worker or other staff * Q14 I was shouted or screamed at by a health worker or other staff * Q15 I was blamed by the health worker or other staff for something that happened to me or my baby during my time in hospital * Q16 I felt uncomfortable because of words or the tone my healthcare provider used while speaking to me (Abuya et al., 2015) Q17 I felt uncomfortable because of a facial expression my healthcare provider made while in the room with me (Abuya et al., 2015) Q18 I felt that my healthcare provider(s) talked to me rudely (Afulani et al., 2017) Q19 I was yelled at for calling for help (Warren, Njue, Ndwiga, & Abuya, 2017) Q20 I was insulted by my care provider (Taavoni et al., 2018) 91  Q21 My support person was insulted by my healthcare provider (Taavoni et al., 2018) Q22 Care providers threatened to withhold treatment I wanted. 
(Taavoni et al., 2018) Q23 Care providers threatened to give treatment I did not want (Taavoni et al., 2018) Q24 The health worker or other staff threatened to withhold care from me or my baby * Q25 The health worker or other staff threatened that I or my baby would have a poor outcome if I didn't comply * Q26 Health care providers (doctors, midwives, or nurses) shouted at or scolded you GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q27 Health care providers threatened me in other ways GVTM-US (Vedam, Stoll, Taiwo, et al., 2019)  Stigma and Discrimination Q28 A health worker or staff made negative comments to me regarding my ethnicity, race, or culture * (Q29–53 use this stem: During my pregnancy I felt that I was treated poorly by my doctor or midwife BECAUSE of...)  Q29 My race, ethnicity, cultural background or language (Vedam, Stoll, Rubashkin, et al., 2017) Q30 My sexual orientation and/or gender identity (Vedam, Stoll, Rubashkin, et al., 2017) Q31 My type of health insurance or lack of insurance (Vedam, Stoll, Rubashkin, et al., 2017) Q32 A difference in opinion with my caregivers about the right care for myself or my baby (Vedam, Stoll, Rubashkin, et al., 2017) Q33 Because of a physical disability or chronic illness C. Limmer (personal communication with K. Stoll, 2018) Q34 Because of my HIV status C. Limmer (personal communication with K. Stoll, 2018) Q35 Because of my age C. Limmer (personal communication with K. Stoll, 2018) Q36 Because of my social position C. Limmer (personal communication with K. Stoll, 2018) Q37 Because of another reason Birth Place Lab original item Q38 Because of my weight C. Limmer (personal communication with K. 
Stoll, 2018) Q39 Because of my religion Birth Place Lab original item Q40 Because of my marital status Birth Place Lab original item Q41 Because of my level of education Birth Place Lab original item Q42 Because of my economic circumstances Birth Place Lab original item Q43 Because of my substance use or history of substance use Birth Place Lab original item Q44 Because of my incarceration or history of incarceration Birth Place Lab original item Q45 Because of my choice of birth place Birth Place Lab original item Q46 Because of my decision to have a home birth Birth Place Lab original item Q47 Because of a difference in opinion between your midwife and hospital staff Birth Place Lab original item Q48 Because of my medical or birth related profession (I am a midwife, doula, doctor, nurse etc…) Birth Place Lab original item Q49 Because of my occupation (lawyer, etc…) Birth Place Lab original item Q50 Because of my partner or support person’s medical or birth related profession (They are a midwife, doula, doctor, nurse etc…) Birth Place Lab original item 94  Q51 Because of my partner or support person’s occupation (lawyer, etc…) Birth Place Lab original item Q52 for reasons I do not understand Birth Place Lab original item Q53 for another reason, please specify ________________ Birth Place Lab original item Q54 Doctors, midwives, nurses, or other health professionals treated me with less courtesy than other people (Yelland et al., 2012) Q55 Doctors, midwives, nurses, or other health professionals talked down to me (Yelland et al., 2012) Q56 Doctors, midwives, nurses, or other health professionals treated me with less respect than other people (Yelland et al., 2012)   Failure to meet professional standards Q57 My care provider kept me informed about what was happening during labour and birth (Dencker, Taft, Bergqvist, Lilja, & Berg, 2010) Q58 My healthcare provider(s) always took time to check that I understood what was happening to me and my baby (Clark, Beatty, & 
Reibel, 2016) Q59 My healthcare providers seem informed and up-to-date about my care (“Canadian Patient Experiences Survey - 95  Inpatient Care (CPES-IC),” 2017) Q60 My medical care providers asked my opinion about each unplanned procedure before it was performed (Stevens et al., 2012) Q61 My doctor or midwife asked me what I wanted to do before these procedures were done:  Natural, medicine, or surgical ways to deliver the baby  Timing of cord clamping  Giving vitamin K to my baby either by mouth or as an injection  Having a support person present  Putting erythromycin ointment into my baby's eyes  Immediate skin to skin with my baby  Whether or not I had an injection before delivery of the placenta  Screening tests (genetic, bloodwork, ultrasounds)  Listening to the baby continuously (external or internal monitor)  Having doctor or midwife break my water bag before or during labor  Cutting my vaginal opening when the baby was coming out (episiotomy)  Vaginal exams  Having a student/trainee perform a procedure  GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q62 My healthcare providers explained to me why they were doing examinations or procedures on me (Afulani et al., 2017) 96  Q63 When I was in labour, my medical care providers decided what procedures I would have (Stevens et al., 2012) Q64 I was not involved in decision making in my care (Warren et al., 2017) Q65 My healthcare provider always made sure I knew I had a choice about whether or not to go ahead with any tests or scans (Clark et al., 2016) Q66 At any time during your recent labour or birth did you DECLINE care offered or recommended by a doctor, nurse or midwife Respondents choose from a drop down list of tests and procedures; those who checked one or more tests/procedures are directed to the follow up questions below:                         After I declined the doctor or nurse or midwife reacted by (check all that apply):        They did the procedure against my will     They alerted 
child protective services     They accepted my decision    They kept asking me until I agreed    They asked my midwife or doctor to convince me    They tried to get my family to convince me    They suggested that the decision would cause harm to my baby    They suggested that I cared more about myself than my baby   GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) 97   They suggested that my refusal was a liability concern   They shamed me    Other - specify Q67 My healthcare providers accepted my refusal of treatment (Scheerhagen et al., 2015) Q68 My pubic hair was shaved * Q69 Tubal ligation/sterilization was performed on me without my consent * Q70 Before giving me any medicine, hospital staff described possible effects and implications for my well-being or the progress of my labour in a way I could understand (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q71 The health worker explained to me why a vaginal examination was needed * Q72 The health worker asked for permission before performing the vaginal examination * Q73 I felt I had more vaginal examinations than was necessary (Warren et al., 2017) Q74 A doctor was not available to conduct the caesarean section when I needed it (Warren et al., 2017) Q75 I was told to stop pushing because there was no healthcare provider available to attend me yet (Warren et al., 2017) 98  Q76 A trainee was responsible for stitching or cutting me without supervision (Warren et al., 2017) Q77 My health care provider acknowledged that he/she had made a medical error in my care Birth Place Lab original item Q78 My health care provider denied that he/she had made a medical error in my care Birth Place Lab original item Q79 I would have liked to receive an apology about how I was treated or about the care I received from my healthcare provider Birth Place Lab original item Q80 I received an apology about how I was treated or about the care I received from my healthcare provider Birth Place Lab original 
item Q81 I felt I could choose the pain relief method to use (Dencker et al., 2010) Q82 Hospital staff did everything they could to help me with my pain (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q83 During my time in hospital for childbirth I felt neglected by the health workers or staff * Q84 During my stay for this delivery I was left unattended by health providers when I needed care (Abuya et al., 2015) Q85 I was told different things by different care providers (that didn't make sense together) about my health (Wong & Haggerty, 2013) 99  Q86 The nurse responded to my needs in the postpartum period in a timely manner (Janssen et al., 2006) Q87 The housekeeping staff respected my privacy (Janssen et al., 2006) Q88 When there was a decision to make, I knew what all my options were (Heatley, Watson, Gallois, & Miller, 2015a) Q89 My care providers were open and honest (Heatley et al., 2015a) Q90 My care providers communicated well with my other care providers (Heatley et al., 2015a) Q91 My maternity care provider(s) discussed with me the pros and cons (benefits and risks) of having and not having the following procedures:  ultrasound scan  blood test  induction of labour  pre-labour caesarean section  vaginal examination  fetal monitoring during labour  post-labour caesarean section  epidural anesthesia  episiotomy For each person who checked “yes” or “no” above, they would be directed to follow up question:  “Who decided if you would or would not have a caesarean?’’ Response options are:   I decided from all my available options  My maternity care provider(s) decided and checked if it was OK with me  My care provider(s) decided without checking with me Question repeated for all nine procedures  (Heatley et al., 2015a) Q92 I asked my healthcare provider to stop pressuring me (Afulani et al., 2017) 100  Q93 Your private or personal information was shared without your consent GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q94 
Your physical privacy was violated GVTM-US (Vedam, Stoll, Taiwo, et al., 2019)  Poor Rapport between Women and Providers Q95 My care provider always let me know the tests or scans results (Clark et al., 2016) Q96 My care provider always made sure I knew what any tests or scans were for (Clark et al., 2016) Q97 When the health care team could not meet my wishes they explained why (Wool, 2015) Q98 They respected my knowledge of my baby (Clark et al., 2016) Q99 Healthcare professionals talked with me about whether I would have the help I needed when I left the hospital (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q100 (For people who planned a community birth and transferred to hospital) During the transfer and after my arrival at the hospital, the hospital provider and staff were sensitive to the emotional impact of my change in birth place GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) 101  Q101 My family or friends were involved as much as I wanted in decisions about my care and treatment (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q102 If I wanted, my family was included in discussions about my baby's plan of care (Wool, 2015) Q103 The healthcare team was sensitive in explaining the range of possible outcomes about my baby's condition (Wool, 2015) Q104 I felt pressure from any doctor or midwife to HAVE:   Medication to start labour   An epidural  Continuous fetal monitoring (listen to baby’s heart by wearing a belt or wire)   Episiotomy (cut vaginal opening)   Medicine for pain relief      A caesarean  They told me I needed to do this BECAUSE:  Staffing constraints  They said there was a risk to the baby  It had to do with a time issue  If yes, please tell us more about this experience:   GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q105 I felt pressure from any doctor or midwife to AVOID:   Medication to start labour   An epidural          Continuous fetal monitoring (listen to baby’s heart by 
wearing a belt or wire)   Episiotomy (cut vaginal opening)  Medicine for pain relief       A caesarean GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) 102   They told me I needed to do this BECAUSE:  Staffing constraints  They said there was a risk to the baby  It had to do with a time issue  If yes, please tell us more about this experience:   Q106 During my visits, I was told about the different roles midwives, doctors, and other health professionals would play in my care (Clark et al., 2016) Q107 I understood the explanations of the treatment I received (Scheerhagen et al., 2015) Q108 I understood the explanations of the treatment I received (Scheerhagen et al., 2015) Q109 I was injured physically by a health care provider (Taavoni et al., 2018) Q110 After my doctor or midwife explained different options for my care during labour and delivery:   I did not understand my options  I understood some explanations  I understood most explanations  I understood everything  Birth Place Lab Q111 My care provider took my health concerns very seriously (Wong & Haggerty, 2013) Q112 My care provider was concerned about my feelings (Wong & Haggerty, 2013) Q113 My care provider found out what my concerns were (Wong & Haggerty, 2013) 103  Q114 My care provider let me say what I thought was important (Wong & Haggerty, 2013) Q115 The healthcare provider answered all of my questions (Raj et al., 2017) Q116 During my visits, I was never made to feel that I was wasting time by asking questions (Clark et al., 2016) Q117 During my pregnancy I held back from asking questions or discussing my concerns BECAUSE my doctor or midwife seemed rushed (Vedam, Stoll, Rubashkin, et al., 2017) Q118 During my pregnancy I held back from asking questions or discussing my concerns BECAUSE I wanted maternity care that differed from what my doctor or midwife recommended (Vedam, Stoll, Rubashkin, et al., 2017) Q119 During my pregnancy I held back from asking questions or discussing my concerns 
BECAUSE I thought my doctor or midwife might think I was being difficult (Vedam, Stoll, Rubashkin, et al., 2017) Q120 During my pregnancy I held back from asking questions or discussing concerns BECAUSE I felt discriminated against GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q121 During my pregnancy I held back from asking questions or discussing concerns BECAUSE I felt my doctor or midwife didn't value my opinion GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) 104  Q122 During my pregnancy I held back from asking questions or discussing concerns BECAUSE I felt my doctor or midwife didn't use language that I could understand GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q123 The healthcare team listened to me (Wool, 2015) Q124 They made sure I had the contact details of someone I could get in touch with at any hour if I had concerns about my baby (Clark et al., 2016) Q125 An interpreter was available *Appendix 2 of (Bohren et al., 2018) Q126 The healthcare team spoke to me using words I could understand (Wool, 2015) Q127 If I needed it, the health care system provided an interpreter (Wool, 2015) Q128 I was given paperwork I could understand (Warren et al., 2017) Q129 During this hospital stay, how often did healthcare professionals explain things in a way you could understand (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q130 During my time in hospital for childbirth I felt my presence was a nuisance for the health workers or staff * 105  Q131 The clerks and receptionists at the clinic treated me with courtesy and respect (Wong & Haggerty, 2013) Q132 Healthcare providers spoke down to me (Yelland et al., 2012) Q133 Healthcare providers came to speak to me when I was not fully clothed or comfortably covered Birth Place Lab Q134 Healthcare providers asked for my permission for a student or trainee to accompany them and/or do examinations or procedures on me Birth Place Lab Q135 Healthcare providers asked for my permission for non-essential 
personnel being present during my care Birth Place Lab Q136 I was addressed informally (by first name) but all the healthcare providers addressed each other more formally (doctor, nurse, etc.) Birth Place Lab Q137 Healthcare providers made negative or inappropriate comments about my personal characteristics Birth Place Lab Q138 The DIGNITY the doctor or midwife showed during my pregnancy, labour and/or birth was… (excellent, very good, good, fair, poor, don’t know, not applicable) GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q139 The healthcare team was compassionate (Wool, 2015) Q140 I was treated with kindness (Scheerhagen et al., 2015) Q141 I felt that my healthcare provider/s respected my knowledge and experience (Heaman et al., 2014a) 106  Q142 I was able to have exactly the people I wanted with me during labour and birth (Stevens et al., 2012) Q143 I was able to select my preferred type of health professional (Scheerhagen et al., 2015) Q144 I felt invisible during my birth (Warren et al., 2017) Q145 I felt like a passive participant in my labour and delivery (Warren et al., 2017) Q146 Healthcare providers talked about me as if I was not there Birth Place Lab team Q147 Healthcare providers only asked my partner to make decisions about my care Birth Place Lab team Q148 During my labour and birth, when I was told about the procedures I felt that I could not question my medical care providers decisions (Stevens et al., 2012) Q149 Overall while making decisions about my pregnancy or birth care …   I felt comfortable asking questions  I felt comfortable declining care that was offered   I felt comfortable accepting the options for care that my doctor or midwife recommended   I felt pushed into accepting the options my doctor or midwife suggested   I chose the care options that I received  My personal preferences were respected   My traditional and/or cultural preferences were respected   (Vedam, Stoll, Rubashkin, et al., 2017) 107  Q150 Please describe your 
experiences with decision making during your pregnancy, labour, and/or birth:     My healthcare provider(s) asked me how involved in decision making I wanted to be    My healthcare provider(s) explained different options for my maternity care    My healthcare provider(s) helped me understand all the information    I was given enough time to thoroughly consider the different care options    I was able to choose what I considered to be the best care options    My healthcare provider(s) respected my choices   (Vedam, Stoll, Martin, et al., 2017) Q151 I was involved in decision-making on my preferred setting of birth (Scheerhagen et al., 2015) Q152 I felt judged or criticized about where I chose to give birth by (check all that apply):  My doctor  My midwife  The hospital staff  Health care providers or hospital staff  Other (please specify) GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q153 I was not allowed to have food or drink during my labour (Warren et al., 2017) Q154 I was able to have a bath or shower if I wanted one (when it was appropriate) (Stevens et al., 2012) Q155 I was able to move around freely during labour if I wanted to (Stevens et al., 2012) 108  Q156 While I was in labour, I was able to decide how to be most comfortable (Stevens et al., 2012) Q157 I felt I could choose the delivery position (Dencker et al., 2010) Q158 I was able to hold the baby immediately after the birth if I wanted to (skin to skin) (Stevens et al., 2012) Q159 The healthcare team asked about my spiritual or religious customs (Wool, 2015) Q160 The healthcare team supported my spiritual and religious customs (Wool, 2015) Q161 The healthcare team asked about my cultural or family traditions (Wool, 2015) Q162 The healthcare team supported my cultural or family traditions (Wool, 2015) Q163 I was discouraged from engaging in cultural, traditional or religious practices (Sheferaw, Mengesha, & Wase, 2016) Q164 Finding a midwife or doctor who spoke my language or shared my 
heritage, gender, sexual orientation, race, ethnic or cultural background was important to me GVTM-US (Vedam, Stoll, Taiwo, et al., 2019)  Health System Constraints  109  Q165 The hospital, clinic, or healthcare providers’ office was easily accessible (physical accessibility, etc.) (Scheerhagen et al., 2015) Q166 The birth setting where I gave birth was adequately clean * Q167 I was not able to be admitted to the facility of my choice because it was overfilled, did not have enough beds, or did not have enough staff (Afulani et al., 2017) Q168 In general, I felt safe in the birth setting (Afulani et al., 2017) Q169 I had difficulty finding a doctor or midwife GVTM-US (Vedam, Stoll, Taiwo, et al., 2019) Q170 I experienced the following:  Difficulty contacting a physician  A specialist was unavailable  Difficulty getting an appointment  Waited too long to get an appointment  Waited too long in the waiting room  Service not available in the area  Transportation problems  Cost issues  Language problems  Did not feel comfortable with the available doctor or nurse  Did not know where to go (for example, I didn’t have enough information in order to get the help I needed)  Unable to leave the house because of a health problem  Other (please specify): _________________________  (Wong & Haggerty, 2013) 110  Q171 There was a lack of equipment or forced bed sharing in my birth setting (Warren et al., 2017) Q172 There were no curtains or clean linen at my birth setting (Warren et al., 2017) Q173 There was no water for bathing at my birth setting (Warren et al., 2017) Q174 Curtains, partitions, or other measures were used to provide privacy for me from other patients, patients family members, or health workers/staff not involved in providing care for me * Q175 I could not control the number of people in the labour/birth room (Stevens et al., 2012) Q176 Hospital staff were talking about my health in front of me without including me Birth Place Lab team Q177 Hospital 
staff were walking in and out of my room while I was uncovered Birth Place Lab team Q178 Healthcare professionals walked in and uncovered me without speaking to me first Birth Place Lab team Q179 Healthcare professionals walked in and were looking at my chart without speaking to me first Birth Place Lab team Q180 I had to discuss my care with my healthcare provider with other people listening Birth Place Lab team Q181 The room I was in allowed me to get enough peace and rest after birth (Sjetne, Iversen, & Kjøllesdal, 2015) 111  Q182 Staff respected my need for rest (Sjetne et al., 2015) Q183 Staff worked around my need for rest after giving birth (Sjetne et al., 2015) Q184 I felt like my labour and birth were disrupted/unsafe/disrespected BECAUSE of a difference of opinion between healthcare providers Birth Place Lab team Q185 The area around my room was quiet at night (“Canadian Patient Experiences Survey - Inpatient Care (CPES-IC),” 2017) Q186 I felt like my labour and birth were disrupted/unsafe/disrespected BECAUSE  of a lack of privacy Birth Place Lab team Q187 I felt like my labour and birth were disrupted/unsafe/disrespected BECAUSE  I felt rushed Birth Place Lab team Q188 I felt like my labour and birth were disrupted/unsafe/disrespected BECAUSE I felt like I was being observed for teaching or demonstration Birth Place Lab team Q189 I was detained because I was unable to pay (Taavoni et al., 2018) Q190 If you did not have healthcare coverage for your pregnancy, at any point during this delivery in this facility were you asked for money other than the official cost of service (Abuya et al., 2015) 112  Q191 I experienced barriers to accessing health care coverage to my preferred care provider, or place of birth (Abuya et al., 2015) Q192 I felt pressured to stay in the hospital when I was ready to go home (Abuya et al., 2015) Q193 After giving birth I was instructed to clean up my own blood, urine, feces, or amniotic fluid * Q194 The room was spacious and 
adequate for my needs (Janssen et al., 2006) Q195 The food was acceptable in quantity (Janssen et al., 2006) Q196 The food was acceptable in quality (Janssen et al., 2006) Q197 I was able to find the supplies that I needed (Janssen et al., 2006) Q198 The housekeeping staff respected my privacy (Janssen et al., 2006) Q199 There were times when I felt unsafe Birth Place Lab team Q200 I knew the roles of each care provider involved in my care Birth Place Lab team Q201 Each provider introduced themselves and their role Birth Place Lab team      113  Appendix C  Abbreviated Round One online instrument Instructions for Delphi Panel      The goal of the Delphi process is to reach agreement about the best way to capture people’s experiences of respect, discrimination and mistreatment during pregnancy, labour, birth, postpartum and newborn care. We have gathered survey items from previous international studies that measure different aspects of experience of respect and disrespect during pregnancy and childbirth. The Delphi process will involve approximately 50 experts (such as yourself) identified by members of our team across Canada and internationally.  This team will review and come to agreement on a set of survey items across seven categories:  (1) physical abuse, (2) sexual abuse, (3) verbal abuse, (4) stigma and discrimination, (5) failure to meet professional standards of care, (6) poor rapport between women and providers, and (7) health system conditions and constraints.       
In this first Delphi phase please rate each item for 1) importance, 2) relevance, and 3) clarity selecting one of the following answer options:      Importance: This question is important to the measurement of respectful maternity care         1 – Not important  2 – Unable to assess without revision  3 – Important but needs minor revision  4 – Very important        Relevance: This question is relevant to the community/communities I most identify with, OR This question is relevant to the community/communities currently I work/have worked with AND/OR This item is relevant in my geographical area (e.g. where I live; at my local hospital/clinic etc.)   114    1 – Not relevant   2 – Unable to assess without revision   3 – Relevant but needs minor revision   4 – Very relevant.        Clarity: This question is clear/easy to understand.          1 – Not clear   2 – Unable to assess without revision   3 – Clear but needs minor revision   4 – Very clear and succinct.         As you proceed through this online survey you will have opportunities to provide comments, or edits for each item, including suggestions for rewording.   Please review all of the items first and then you will have an opportunity to suggest additional items that may not appear but do reflect your experience or the experiences of your community.       Please take a few moments to answer three questions about yourself.     115  Q22 Which region are you from?  o British Columbia  (2)  o Quebec  (3)  o Ontario  (4)  o Alberta  (5)  o … o Outside of Canada; please specify country:  (15) ________________________________________________    Q23 Which group do you most identify with?  
o Maternity care user (past or present)  (1)
o Maternity care provider  (2)
o Maternity care researcher  (3)
o Other  (4) ________________________________________________

Q24 If you identify with more than one group, please select a choice below:
o Maternity care user (past or present)  (1)
o Maternity care provider  (2)
o Maternity care researcher  (3)
o Other  (4) ________________________________________________

Q26 Now you will begin answering questions about the importance, relevance and clarity of items. Please note that you will only be asked about clarity if you rated the item as important or relevant.

IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of the times to (0) Never)
o 1 – Not important  (1)
o 2 – Unable to assess without revision  (2)
o 3 – Important but needs minor revision  (3)
o 4 – Very important  (4)

RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the times to (0) Never)
o 1 – Not relevant  (1)
o 2 – Unable to assess without revision  (2)
o 3 – Relevant but needs minor revision  (3)
o 4 – Very relevant  (4)

Display This Question:
If IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... = 3 – Important but needs minor revision
Or IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... = 4 – Very important
Or If RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the... = 3 – Relevant but needs minor revision
Or RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the...
= 4 – Very relevant

CLARITY: I was injured physically by a health worker or other staff (response scale from (5) All of the times to (0) Never)
o 1 – Not clear  (1)
o 2 – Unable to assess without revision  (2)
o 3 – Clear but needs minor revision  (3)
o 4 – Very clear and succinct  (4)

Display This Question:
If IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... = 1 – Not important
Or IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... = 2 – Unable to assess without revision
And If RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the... = 1 – Not relevant
Or RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the... = 2 – Unable to assess without revision
And If IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... != 3 – Important but needs minor revision
Or IMPORTANCE: I was injured physically by a health care provider (response scale from (5) All of th... != 4 – Very important
And If RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the... != 3 – Relevant but needs minor revision
Or RELEVANCE: I was injured physically by a health care provider (response scale from (5) All of the... != 4 – Very relevant

What would you like to do with this item? (I was injured physically by a health worker or other staff (response scale from (5) All of the times to (0) Never))
▢ Discard from survey  (1)
▢ Revise as follows:  (2) ________________________________________________

Display This Question:
If CLARITY: I was injured physically by a health worker or other staff (response scale from (5) All... = 1 – Not clear
Or CLARITY: I was injured physically by a health worker or other staff (response scale from (5) All...
= 2 – Unable to assess without revision
Or CLARITY: I was injured physically by a health worker or other staff (response scale from (5) All... = 3 – Clear but needs minor revision
Or CLARITY: I was injured physically by a health worker or other staff (response scale from (5) All... = 4 – Very clear and succinct

Q12 What would you like to do with this item? (I was injured physically by a health worker or other staff (response scale from (5) All of the times to (0) Never))
▢ Leave item as is  (1)
▢ Revise as follows:  (2) ________________________________________________

Appendix D  List of Round Two indicators

VERBAL COMMUNICATION
1. Each care provider introduced themselves and their role
2. I understood the explanations of the treatment or procedures I received
3. I was told different things by different care providers about my health that led to confusion
4. My care provider(s) asked me what my concerns were
5. My care provider(s) listened to my priorities
6. My care provider(s) took my concerns very seriously
7. My care provider(s) were open and honest about their plan to care for me and my baby
8. The healthcare provider(s) answered all of my questions
9. The healthcare team explained things to me in a way I could understand
10. The healthcare team listened to me
11. When the health care team could not meet my wishes they explained why
12. The healthcare team took time to explain the range of possible outcomes of my baby's condition
13. Healthcare provider(s) talked about me as if I was not there, without including me

INFORMATION & CONSENT
1. I was forced to have procedures against my will
2. My health care provider(s) asked for my consent for each exam or procedure before it was performed
3. My health care provider(s) asked what I wanted to do (my preferences) before doing procedures
4. My healthcare provider always made sure I knew I had a choice about whether or not to go ahead with any exams or procedures
5. My healthcare provider(s) kept me informed about what was happening during labour and birth
6. My healthcare provider(s) took time to check that I understood what was happening to me and my baby
7. My care provider(s) explained the reasons for any tests or scans
8. When there was a decision to make, I knew what all my options were
9. Before giving me any medicine, my healthcare provider(s) described possible effects and implications for my well-being or the progress of my labour

PRIVACY AND CONFIDENTIALITY
1. My private or personal information was shared without my consent
2. Healthcare provider(s) walked in and uncovered me without speaking to me first
3. Healthcare provider(s) asked my permission for non-essential personnel (such as trainees, students, or observers) to be present during my care
4. Healthcare provider(s) or other staff were walking in and out of my room while I was uncovered
5. I felt disrespected BECAUSE of a lack of privacy
6. My privacy was respected

PHYSICAL EXAMS AND PROCEDURES
1. My healthcare provider(s) explained to me why they were doing examinations or procedures on me
2. Healthcare provider(s) asked my permission for a student or trainee to do examinations or procedures on me
3. I felt I could choose which pain relief method to use
4. I felt I had more vaginal examinations than was necessary
5. I felt that trainees and students involved in my care were adequately supervised
6. My healthcare provider(s) explained or told me the results of any tests or scans
7. The health care provider(s) asked for permission before performing a vaginal examination
8. Tubal ligation/sterilization was performed on me without my consent

AVAILABILITY & RESPONSIVENESS OF HEALTHCARE PROVIDERS
1. During my childbirth I felt neglected by healthcare provider(s)
2. Healthcare provider(s) responded to my needs in the postpartum period in a timely manner
3. Healthcare provider(s) responded to my baby's needs in the postpartum period in a timely manner
4. I was left unattended by healthcare provider(s) when I needed care
5. I was told to stop pushing because there was no healthcare provider(s) available to attend me
6. My healthcare provider(s) acknowledged that they had made a medical error in my care
7. My healthcare provider(s) did everything to help me with my pain
8. The healthcare team listened to me

PATIENT REACTIONS TO EXPERIENCES OF CARE
1. During my childbirth I felt my presence was a nuisance for the healthcare provider(s)
2. Overall while making decisions about my pregnancy or birth care:
   - I felt comfortable asking questions
   - I felt comfortable declining care that was offered
   - I felt comfortable accepting the options for care that my healthcare provider(s) recommended
   - I felt pushed into accepting the options my healthcare provider(s) suggested
   - I chose the care options that I received
   - My personal preferences were respected
   - My traditional and/or cultural preferences were respected
3. During my pregnancy I held back from asking questions or discussing concerns:
   - BECAUSE my healthcare provider(s) seemed rushed
   - BECAUSE I wanted maternity care that differed from what my healthcare provider(s) recommended
   - BECAUSE I thought my healthcare provider(s) might think I was being difficult
   - BECAUSE I felt discriminated against
   - BECAUSE I felt my healthcare provider(s) didn't value my opinion
   - BECAUSE I felt my healthcare provider(s) used language that was difficult to understand
4. During my labour I felt disrespected BECAUSE I felt like I was being observed for teaching or demonstration
5. I felt invisible during my birth
6. In general, I felt safe in the birth setting
7. After I declined care or treatment, the healthcare provider(s) alerted child protective services
8. After I declined care or treatment, the healthcare provider(s) kept asking me until I agreed
9.
After I declined care or treatment, the healthcare provider(s) asked other support people to convince me

NON-VERBAL COMMUNICATION
1. I knew the roles of each provider involved in my care
2. If I needed it, an interpreter was available
3. Paperwork for me to read was in a language I could understand
4. Healthcare professional(s) walked in and were looking at my chart without speaking to me first
5. I felt disrespected because of a difference of opinion between healthcare providers
6. I felt that I could not question my healthcare provider(s’) decisions or recommendations
7. My care provider(s) communicated well with my other care providers
8. My care provider(s) seemed informed and up-to-date about my care
9. I felt pressure from a healthcare provider to have treatment or procedures (for example: induction or augmentation, epidural anesthesia, caesarean section)
10. I felt pressure from a healthcare provider to avoid treatment or procedures (for example: induction or augmentation, epidural anesthesia, caesarean section)

CULTURAL SUPPORT AND FAMILY INVOLVEMENT
1. Finding a healthcare provider who spoke my language or shared my heritage, gender, sexual orientation, race, ethnicity, or culture was important to me
2. I was able to have the people I wanted supporting me during labour and birth
3. I was discouraged from engaging in cultural, traditional, or religious practices
4. My partner, family or friends were involved as much as I wanted in decisions about care and treatment for me or my baby
5. The health care team asked about my cultural or family traditions
6. The health care team asked about my spiritual or religious customs
7. The health care team supported my cultural or family traditions
8. The health care team supported my spiritual and religious customs

STIGMA AND DISCRIMINATION
1. A healthcare provider made negative comments regarding my ethnicity, race, or culture
2. Healthcare provider(s) treated me with less courtesy or respect than they showed other people
3. Healthcare provider(s) implied or stated I would not be a good parent because of my personal characteristics
4. I felt judged or criticized by healthcare provider(s) about where I chose to give birth
5. The clerks and receptionists at the clinic treated me with courtesy and respect

STIGMA AND DISCRIMINATION
During my pregnancy, I felt that I was treated poorly by my healthcare provider(s) BECAUSE of…
1. My race, ethnicity, cultural background, or language
2. My sexual orientation and/or gender identity
3. My type of health insurance or lack of insurance
4. A difference in opinion with my caregivers about the right care for myself or my baby
5. A physical disability or chronic illness
6. My HIV status
7. My age
8. My weight
9. My religion
10. My marital status
11. My level of education
12. My economic circumstances
13. My substance use or history of substance use
14. My incarceration or history of incarceration
15. My choice of birth place
16. My occupation (lawyer, etc…)
17. My partner or support person's occupation
18. Reasons I do not understand
19. Another reason, please specify:

VERBAL MISTREATMENT
1. The healthcare provider(s) or other staff member(s) made negative comments to me regarding my sexual activity
2. The healthcare provider(s) or other staff member(s) made negative comments about my physical appearance (such as my weight, private parts, cleanliness, or other parts of my body)
3. The healthcare provider(s) or other staff member(s) made cruel or negative comments about my baby's physical appearance
4. I was mocked by healthcare provider(s) or other staff
5. I was shouted at or yelled at by healthcare provider(s) or other staff
6. I was blamed or made to feel guilty by healthcare provider(s)/staff for something that happened to me or my baby
7. My healthcare provider(s) talked to me rudely
8. My healthcare provider(s) insulted me
9. Healthcare provider(s) threatened to withhold treatment from me or my baby
10. Healthcare provider(s) threatened to give treatment I did not want
11. Healthcare provider(s) threatened that I or my baby would have a poor outcome if I did not comply with their advice
12. Healthcare providers talked down to me

PHYSICAL MISTREATMENT
1. I was physically restrained to the bed by healthcare provider(s)
2. I had strong, forceful downward pressure placed by healthcare provider(s) on my upper abdomen while I was pushing my baby out (fundal pressure)
3. Healthcare provider(s) was/were physically rough in their treatment of me
4. I experienced aggressive physical contact such as being pinched, slapped or hit by healthcare provider(s)
5. I experienced inappropriate sexual conduct from healthcare provider(s)

SUPPORTIVE BEHAVIOURS OF HEALTHCARE PROVIDERS
1. (For people who planned a birth at home, clinic, or birth centre and transferred to hospital) During the transfer and after my arrival at the hospital, health care provider(s) were sensitive to the emotional impact of my change in birth place
2. Healthcare professional(s) talked with me about whether I would have help at home after the birth
3. I felt that my healthcare provider(s) respected my knowledge and experience of caring for myself
4. I felt that my healthcare provider(s) respected my knowledge and experience of caring for my baby
5. I was treated with kindness
6. The health care team helped create a comfortable, caring environment
7. The healthcare team was compassionate
8. My healthcare provider(s) treated me in a way that supported my dignity
9. Healthcare providers respected my need for rest

CHOICE OF EVIDENCE-BASED CARE OPTIONS
1.
Healthcare provider(s) asked before helping me with activities of daily living
2. I did not feel I could go home when I wanted to due to standard policies and procedures
3. Healthcare provider(s) talked about me as if I was not there, without including me
4. My healthcare provider(s) accepted when I refused treatment or procedures
5. I felt I could choose my delivery position
6. I was able to choose my preferred place of birth
7. I was able to hold the baby immediately after the birth if I wanted to
8. I was able to move around freely and choose my labouring position if I wanted to
9. I was not allowed to have food or drink during my labour

AUTONOMY (ABOUT CARE DECISIONS)
1. I felt like an active participant in my labour and delivery
2. I was not involved in decision making in my care
3. Healthcare provider(s) only asked my partner to make decisions about my care
4. When I was in labour, my health care provider(s) decided what procedures I would have without my involvement
5. Please describe your experiences with decision making during your pregnancy, labour, and/or birth:
   - My healthcare provider(s) asked me how involved in decision making I wanted to be
   - My healthcare provider(s) explained different options for my maternity care
   - My healthcare provider(s) helped me understand all the information
   - I was given enough time to thoroughly consider the different care options
   - I was able to choose what I considered to be the best care options
   - My healthcare provider(s) respected my choices
6. I felt pressured to stay in the hospital when I was ready to go home

HEALTH SYSTEM CONDITIONS AND CONSTRAINTS (PHYSICAL)
1. Curtains, partitions, or other measures were used to provide privacy for me
2. I had to discuss my care with my healthcare provider(s) in a place that was not private
3. I was not able to be admitted to the facility of my choice because it was overfilled or did not have enough beds
4. The area around my room was quiet at night
5. The birth setting where I gave birth was adequately clean
6. The food given to me was acceptable
7. The hospital, clinic, or healthcare provider(s)' office was accessible given my needs (e.g., specialized equipment, extra space)
8. The room I was in allowed me to get enough peace and rest after birth
9. The room was adequate for my needs
10. I was able to have a bath or shower if I wanted one (when it was appropriate)

HEALTH SYSTEM CONDITIONS AND CONSTRAINTS (HUMAN RESOURCES)
1. After giving birth I was told to clean up my own blood, urine, feces, or amniotic fluid
2. I experienced difficulty accessing health care coverage (financial coverage) for my preferred care provider
3. I experienced difficulty accessing health care coverage (financial coverage) for my preferred place of birth
4. I experienced the following:
   - Difficulty contacting a healthcare provider
   - A specialist was unavailable
   - Difficulty getting an appointment
   - Waiting too long to get an appointment
   - Waiting too long in the waiting room
   - Service not available in the area
   - Transportation issues
   - Cost issues
   - Language problems
   - Uncomfortable with the available healthcare provider(s)
   - Did not know where to go (for example: I didn't have enough information in order to get the help I needed)
   - Unable to leave the house because of a health problem
   - Other (please specify):_____________
5. I had difficulty finding my preferred type of health care provider
6. I was not able to be admitted to the facility of my choice because it did not have enough staff
7. (For individuals who had to relocate from their home community for birth) There was personal support in place in the community where I delivered my baby
8. I was able to have my preferred type of health professional(s) present at my birth
9. I had difficulty locating a healthcare provider who spoke my language or shared my heritage, gender, sexual orientation, race, ethnicity, or culture

Appendix E  Abbreviated Round Two online instrument

Speaking of Respect Study: Delphi Round 2

Delphi Round 1 identified a set of core indicators of respect, disrespect, and mistreatment during pregnancy and childbirth that are relevant to high and middle resource countries. In Delphi Round 2, participants will prioritize those items within several domains of patient experience.

We anticipate that this round will take between 35 and 45 minutes of your time. You can complete your review over more than one session, as long as you use the same computer and web browser. Please note that there is no Save function on the tool; your responses are saved automatically and you will be taken to the item that you last completed. Please submit your review by Wednesday, May 8th, or contact us if you need an extension.

To indicate your preferred ranking, please use the "Drag and Drop" function by clicking on an item and moving it to the desired position on the list. The items are grouped under several domains related to experiences of care. When making your decisions about priority order, please rank within the context of the domain heading. Also consider relevance to a range of populations (e.g., people with disabilities, immigrants, and people of diverse backgrounds, family structures, and health statuses). Items should also have sufficient specificity to inform implementation of findings.

Following Delphi Round 2, the team will reword some items to ensure a balance of positively and negatively worded items. You may wish to suggest such edits. The response options for most items will be in a Likert format (e.g., 6 options ranging from strongly agree to strongly disagree) unless the question is clearly a matter of fact (i.e., yes/no format).
There are also comment boxes at the end of each set of items, so you may provide additional comments.

At the end of the survey, there are a few areas for your consideration: i.e., decisions about whether to use "care provider" or "doctor, midwife, nurse" in the stem of each question, and whether to separate items by prenatal, labour and birth, postpartum, and newborn care events.

We will analyze and collate findings based on your anonymous responses. To allow us to summarize the characteristics of the Delphi expert panel, please take a few moments to answer three questions about yourself.

Which region are you from?
o Canada  (1)
o Outside of Canada; please specify country:  (2) ________________________________________________

Which province/territory are you from?
o British Columbia  (2)
o Quebec  (3)
o Ontario  (4)
o Alberta  (5)
o …

Which group do you most identify with?
o Pregnancy and/or birth service user (past or present); please tell us when you were last pregnant (year)  (1) ________________________________________________
o Pregnancy and/or birth service provider  (2)
o Pregnancy and/or birth researcher  (3)
o Other  (4) ________________________________________________

If you identify with more than one group, please select a choice below:
o Pregnancy and/or birth service user (past or present); please tell us when you were last pregnant (year)  (1) ________________________________________________
o Pregnancy and/or birth service provider  (2)
o Pregnancy and/or birth researcher  (3)
o Other  (4) ________________________________________________

Now you will begin prioritizing items.
VERBAL COMMUNICATION

Please rank the following items from greatest priority (top of list) to least priority (bottom of list):
______ Each care provider introduced themselves and their role (1)
______ I understood the explanations of the treatment or procedures I received (2)
______ I was told different things by different care providers about my health that led to confusion (3)
______ My care provider(s) asked me what my concerns were (4)
______ My care provider(s) listened to my priorities (5)
______ My care provider(s) took my concerns very seriously (6)
______ My care provider(s) were open and honest about their plan to care for me and my baby (7)
______ The healthcare provider(s) answered all of my questions (8)
______ The healthcare team explained things to me in a way I could understand (9)
______ The healthcare team listened to me (10)
______ When the health care team could not meet my wishes they explained why (11)
______ The healthcare team took time to explain the range of possible outcomes of my baby's condition (12)
______ Healthcare provider(s) talked about me as if I was not there, without including me (13)

Is there anything you would like to comment on about these items or this group of items?
________________________________________________________________

Appendix F  Delphi Round One report

Speaking of Respect: Delphi Round One Report

With thanks to panel members who were able to participate in Round 1:

After reviewing the panel rankings for importance and relevance, Esther Clark removed 10 items from the initial 201 items. Then, based on the narrative comments and the clarity rankings, a small team from the Birth Place Lab merged, edited, or discarded items in favour of other items. 156 items will be assessed in Round 2.
Phase of Care & Type of Provider

A significant portion of the feedback received in Round 1 centred on two major themes: care timing and care providers. Panel members raised concerns that some items were more appropriate for particular types of care (for example, prenatal care as opposed to labour and delivery care), or that some items measured different behaviours at different times. Additionally, many panel members asked in their feedback whether items should distinguish between care providers, either individually or by provider group (for example, midwives and doctors). We have carefully considered and discussed these two themes.

For Round 2, you will notice that we have adjusted the items to focus more on the concepts or experiences of respect and disrespect, and less on identifying specific healthcare providers (such as doctors or midwives) in the item stem. However, we have included some questions to seek your input about whether and how to separate items by stage of pregnancy, and whether to use a general term like "health care providers" for most items or to specify the type of provider (e.g., obstetrician, physician, midwife, nurse, and hospital staff). We are keen to hear your ideas in these areas.

Positive and Negative Phrasing

Our intent when we mount a final survey instrument is to balance positive and negative phrasing. By grouping the items by domain in Round 2, it may be more obvious that we seek to measure both experiences of respectful care and of disrespectful care/mistreatment. While many items can be rephrased to change the direction from negative to positive, some will remain more logically in one form. Once we have a finalized set, researchers who select from these best indicators may need to consider rephrasing to present a balanced survey and avoid influencing responses in one direction.
Response Options

Many panel members provided feedback related to the response options presented with the items. Many of the items will be appropriate for Likert-type responses, allowing for some nuance and grading during analysis; others are clearly set up for a binary response. Please make your rankings based on the concept measured; as we further develop the survey, we will select response options that fit the analytic design.

Delphi Round 2

In Delphi Round 2, we focus on prioritizing items within the different domains of respect and mistreatment that service users have identified. The items have been sorted into groups based on the themes from Bohren and colleagues' mistreatment typology (2015), as well as other inductively derived themes. You will notice that some items look identical to the items you saw on the first-round survey, but you will also see that a significant number of items have been adjusted or created as described above.
