Studies, Experience, and Reflection on the Promotion of Standardized Outcome Measures in Physical Therapy

by

ALLAN JOHN KOZLOWSKI

B.Sc. (PT), The University of British Columbia, 1991

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE STUDIES

(Rehabilitation Sciences)

THE UNIVERSITY OF BRITISH COLUMBIA

(Vancouver)

July 2010

© Allan John Kozlowski, 2010

ABSTRACT

The use of standardized self-report questionnaires to measure outcome has been promoted in physical therapy (PT) for two decades but has not been widely adopted. The knowledge translation literature has disclosed complex and multidimensional factors associated with practice change in healthcare; specific barriers and facilitators need to be identified so that knowledge translation interventions can be tailored to them. The dissertation is framed with the Ottawa Model of Research Use.

Chapter Two describes a comprehensive literature review of interventions to facilitate the development of reflective abilities by clinicians, to use reflection to change specific practice behaviors, and to implement outcome measures in healthcare settings. Chapter Three describes a review of Canadian regulatory and professional documents regarding elements of PT practice. Examination and diagnosis elements were well represented, whereas prognosis and outcome evaluation were less prevalent; systematic integration of these latter elements into practice is recommended. Chapter Four describes a statistical evaluation of outcome data collected by a motivated physical therapist and the insights she gained from her experience. The evaluation offered an interpretation of data partitioned into meaningful subsets, and the clinician's reflection on her experience provides insight into the changes she made in her practice. Chapter Five describes the development of a clinical decision-making model to integrate PT practice elements with the components of the International Classification of Functioning, Disability, and Health. Chapter Six describes a preliminary survey and interview on practices and attitudes towards outcome measurement, and on barriers and facilitators to the use of self-report questionnaires to measure outcome. Despite the small sample, insight was gained into some limitations of a scale to measure attitude towards outcome measurement and into the reported barriers. Chapter Seven provides a summary and synthesis of the findings, and reflections on my experience.

Studies on interventions to promote changes in clinical decision-making through the adoption of outcome measures in healthcare are sparse. Reflection is one cognitive process that can influence decision-making, but the extent to which these cognitive processes can change is unclear. The meaning of outcome must be understood from the clinician's perspective and integrated into clinical practice. The Ottawa Model of Research Use functions as a framework to guide planning and redirection of implementation studies.

ABSTRACT  .........................................................................................................
ii TABLE OF CONTENTS TABLE OF CONTENTS  .......................................................................................iv LIST OF TABLES  ................................................................................................ix LIST OF FIGURES  ..............................................................................................xi LIST OF ABBREVIATIONS  ................................................................................ xii ACKNOWLEDGEMENTS  .................................................................................. xiv CO-AUTHORSHIP STATEMENT  ...................................................................... xvi   CHAPTER 1.  INTRODUCTION  ......................................................... 1  1.1 INTRODUCTION  ........................................................................................... 1 1.1.1 Outcome Measurement in Rehabilitation  .......................................... 4 1.1.2  The Prevention and Early Active Return-to-work Safely (PEARS) Program  ............................................................................................ 6 1.1.3  Knowledge Translation in Healthcare  ............................................. 10 1.2 EXPERIENCE NARRATIVE  ........................................................................ 11 1.3 THE OTTAWA MODEL OF RESEARCH USE (OMRU)  .............................. 15 1.4 FRAMING OF THE DISSERTATION WITH THE OTTAWA MODEL OF RESEARCH USE  ......................................................................................... 18 1.5 SCOPE OF THE DISSERTATION  ............................................................... 22 1.6 OBJECTIVES OF THE DISSERTATION  ..................................................... 25 1.7 REFERENCES  ............................................................................................ 29  CHAPTER 2.  REVIEW OF INTERVENTIONS TO PROMOTE CHANGES IN CLINICAL REFLECTION AND DECISION- MAKING IN HEALTHCARE PROFESSIONS WITH SPECIAL REFERENCE TO THE STANDARDIZED OUTCOME MEASURES  ........................................... 38  2.1 BACKGROUND  ........................................................................................... 38 2.2 METHODS  ................................................................................................... 41 2.3 RESULTS  .................................................................................................... 43 2.3.1  Search Results  ................................................................................ 43 2.3.2  Studies of Students Developing Reflective Abilities  ........................ 44 2.3.3  Studies of Students Using Reflection to Develop Clinical Behaviors  ........................................................................................ 47 2.3.4  Studies of Clinicians Using Reflection to Improve Aspects of Clinical Practice ............................................................................... 48 2.3.5  Studies of Clinicians Implementing Standardized Outcome Measures in Practice ....................................................................... 51     v 2.4 DISCUSSION  .............................................................................................. 53 2.4.1  Limitations  ....................................................................................... 56 2.4.2  Future Research  ............................................................................. 
57 2.4.3  Conclusion  ...................................................................................... 57 2.5 REFERENCES  ............................................................................................ 65  CHAPTER 3.  STANDARDS OF PHYSICAL THERAPY PRACTICE RELATED TO OUTCOMES, MEASUREMENT, AND EVALUATION IN ENGLISH-SPEAKING CANADA: A REVIEW OF REGULATORY AND RESOURCE DOCUMENTS  ........................................................... 71  3.1 INTRODUCTION  ......................................................................................... 71 3.1.1  Mandate  .......................................................................................... 73 3.1.2  Meaning: The Physical Therapy Practice Model  ............................. 73 3.1.3  Key Resources in Outcome Measurement and Evaluation  ............. 76 3.1.4  Purpose  .......................................................................................... 78 3.2 METHODS  ................................................................................................... 78 3.2.1  Review Framework  ......................................................................... 79 3.2.2  College Regulatory Document Review  ........................................... 79 3.2.3  College and Canadian Physiotherapy Association Website Review   ........................................................................................... 80 3.3 RESULTS  .................................................................................................... 80 3.3.1  Regulatory Review  .......................................................................... 80 3.3.1.1  Definition of Physical Therapy Practice 3.3.1.2  ICF Constructs 3.3.1.3  American Physical Therapy Association (APTA) Practice Model Concepts 3.3.1.4  Outcome Measurement and Evaluation Concepts 3.3.2  Regulatory and Professional Website Search  ................................. 83 3.3.3  Select Resource Review  ................................................................. 84 3.3.3.1  Physical Rehabilitation Outcome Measures, Second Edition 3.3.3.2  Essential Competency Profile for Physiotherapists in Canada 3.3.3.3  Review of Supplemental Resources and Supporting Material 3.4 DISCUSSION  .............................................................................................. 94 3.4.1  Mandate  .......................................................................................... 94 3.4.2  Meaning  .......................................................................................... 96 3.4.2.1  International Classification of Functioning, Health and Disability (ICF) 3.4.2.2  Elements of Practice 3.4.2.3  System of Outcome Measurement and Evaluation 3.4.2.4  Evolution of Practice 3.4.3  Limitations  ..................................................................................... 104 3.5 CONCLUSIONS AND CONSIDERATIONS  ............................................... 104 3.6 REFERENCES  .......................................................................................... 116   vi CHAPTER 4.  OUTCOME EVALUATION IN ORTHOPEDIC PHYSICAL THERAPY: APPLICATION OF AND REFLECTION ON A SIMPLE METHOD TO QUANTIFY CLINICAL PRACTICE  ................................................................................. 125  4.1 BACKGROUND  ......................................................................................... 125 4.2 METHODS  ................................................................................................. 
128 4.2.1  Practitioner’s Rationale for Adoption Decision  .............................. 128 4.2.2  Measurement Process  .................................................................. 129 4.2.3  Evaluation Process  ....................................................................... 130 4.2.3.1  Post-Diagnostic Analysis 4.2.3.2  Data Integrity 4.2.3.3  Case Validity 4.2.3.4  Body Region/Questionnaire 4.2.3.5  Change Indices 4.2.3.6  Response Comparison 4.3 RESULTS  .................................................................................................. 137 4.3.1  Post-Diagnostic Analysis  .............................................................. 138 4.3.2  Data Integrity  ................................................................................. 138 4.3.3  Case Validity  ................................................................................. 139 4.3.4  Body Region/ Questionnaire  ......................................................... 139 4.3.5  Change Indices  ............................................................................. 139 4.3.6  Response Comparison  ................................................................. 140 4.3.7  Impact on the Clinician  .................................................................. 141 4.4 DISCUSSION  ............................................................................................ 146 4.4.1  Limitations  ..................................................................................... 150 4.4.2  Future Research  ........................................................................... 151 4.4.3  Conclusion  .................................................................................... 152 4.5  REFERENCES  ......................................................................................... 160  CHAPTER 5.  INTEGRATING THE INTERNATIONAL CLASSIFICATION OF FUNCTIONING (ICF), CLINICAL DECISION-MAKING, AND OUTCOME ASSESSMENT INTO PHYSICAL THERAPY PRACTICE: A PROPOSED FRAMEWORK  ........................................................ 165  5.1 BACKGROUND AND PURPOSE  .............................................................. 165 5.1.1  The International Classification of Functioning, Disability, and Health (ICF)  .................................................................................. 167 5.1.2 Physical Therapy Models  .............................................................. 169 5.1.3 Outcome Measurement and Evaluation Processes  ...................... 170 5.2 MODEL DEVELOPMENT  .......................................................................... 171 5.2.1 Elements of Practice  ..................................................................... 172 5.2.2 Diagnostic and Outcome Evaluation Processes  ........................... 174   vii 5.2.3 Integration of Practice Elements, Measurement Processes, and ICF Components  ........................................................................... 177 5.3 DISCUSSION  ............................................................................................ 180 5.3.1 Limitations  ..................................................................................... 184 5.3.2 Conclusion  .................................................................................... 185 5.4 REFERENCES  .......................................................................................... 192  CHAPTER 6.  
OPINIONS OF PHYSICAL THERAPISTS ON OUTCOME MEASUREMENT IN A WORK DISABILITY PREVENTION PROGRAM FOR HEALTHCARE WORKERS: PILOT DATA TO INFORM A KNOWLEDGE TRANSLATION INTERVENTION  ........................... 197  6.1 BACKGROUND AND PURPOSE  .............................................................. 197 6.1.1 Environmental Scan  ...................................................................... 200 6.2 Methods  ..................................................................................................... 203 6.2.1 Subject Survey  .............................................................................. 203 6.2.2 Subject Interviews  ......................................................................... 204 6.2.3 Stakeholder Interviews  .................................................................. 205 6.3 Results  ....................................................................................................... 206 6.3.1 Attitudes-Beliefs  ............................................................................ 207 6.3.2 Current Outcome Measurement Practices  .................................... 208 6.3.3 Barriers to Outcome Measurement  ............................................... 208 6.3.4 Qualitative Responses  .................................................................. 209 6.4 Discussion  ................................................................................................. 211 6.4.1 Attitudes-Beliefs  ............................................................................ 212 6.4.2 Current Outcome Measurement Practices  .................................... 213 6.4.3 Barriers to Outcome Measurement  ............................................... 214 6.4.4 Qualitative Responses  .................................................................. 215 6.4.5 Intervention Mapping  .................................................................... 218 6.4.6 Limitations  ..................................................................................... 218 6.4.7 Conclusion  .................................................................................... 219 6.5 REFERENCES  .......................................................................................... 227  CHAPTER 7.  GENERAL DISCUSSION AND CONCLUSION  ...... 230  7.1 GENERAL DISCUSSION  .......................................................................... 230 7.1.1  Review of interventions to promote changes in clinical reflection and decision-making in healthcare professions with special reference to the standardized outcome measures  ........................ 233 7.1.1.1  Summary of Findings 7.1.1.2  Contributions 7.1.2  Standards of Physical Therapy Practice Related to Outcomes, Measurement, and Evaluation in English-speaking Canada: A Review of Regulatory and Resource Documents  .......................... 235   viii 7.1.2.1  Summary of Findings 7.1.2.2  Contributions 7.1.3  Outcome Evaluation in Orthopedic Physical Therapy: Application of and Reflection on a Simple Method to Quantify Clinical Practice... 238 7.1.3.1  Summary of Findings 7.1.3.2  Contributions 7.1.4  Integrating the International Classification of Functioning (ICF), Clinical Decision-Making, and Outcome Assessment into Physical Therapy Practice: A Proposed Framework  ..................... 
241 7.1.4.1  Summary of Findings 7.1.4.2  Contributions 7.1.5  Opinions of Physical Therapists on Outcome Measurement in a Work Disability Prevention Program for Healthcare Workers: Pilot Data to Inform a Knowledge Translation Intervention  ........... 243 7.1.5.1  Summary of Findings 7.1.5.2  Contributions  7.2 SYNTHESIS OF RESEARCH FINDINGS  ................................................. 246 7.3 STRENGTHS OF THE DISSERTATION RESEARCH  .............................. 249 7.4 LIMITATIONS OF THE DISSERTATION RESEARCH  .............................. 251 7.5 FUTURE RESEARCH DIRECTIONS  ........................................................ 251 7.6 POTENTIAL APPLICATIONS OF RESEARCH  ......................................... 254 7.7 REFLECTION ON THE DISSERTATION  .................................................. 254 7.8 CONCLUSION  ........................................................................................... 261 7.9 REFERENCES  .......................................................................................... 263  APPENDICES     ............................................................................................ 272  APPENDIX A-1    Operational Definitions for Independent and Outcome Variables    ............................................................................................ 273  APPENDIX A-2    Cues to Reflection in the Outcome Evaluation Process  ..... 274  APPENDIX B-1     Questionnaire on Attitudes, Current Practices, Barriers, and Facilitators to Measurement of Physical Therapy Outcomes      ....................................................................... 277  APPENDIX B-2    Script for Semi-Structured Interview Questions Regarding the Affective Component of Attitudes, Barriers, and Facilitators to the Use of Standardized Disability Questionnaires to Measure Physical Therapy Outcomes ... 281  APPENDIX C    Ethics Review Certificates   ................................................. 282   ix  LIST OF TABLES Table 2-1  Levels of Evidence  .......................................................................... 62 Table 2-2  Search results for studies evaluating the acquisition of reflective skills by students  .......................................................................................... 62 Table 2-3  Search results for studies evaluating the use of reflection to develop clinical skills by students  ................................................................. 63 Table 2-4  Search results for studies evaluating the use of reflection to change aspects of clinical practice  .............................................................. 63 Table 2-5  Search results for studies evaluating the implementation of standardized outcome measures in practice  ........................................................ 64 Table 3-1  Definition of Physiotherapy and/or Physical Therapy in English-Speaking Canadian Provincial Legislation  .................................................... 108 Table 3-2  Definition and Use of Terms Descriptive of Concepts in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation  ....................................................................................................... 110 Table 3-3   Summary of Terminology Definition and Usage in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation  ..... 
113 Table 3-4  Additional Documents and Resources Found on College Websites with Content Relating to Outcome, Outcome Measurement, or Outcome Evaluation  ..................................................................................... 114     x Table 3-5  Documents and Resources Found on Canadian Physiotherapy Association (CPA) National and Provincial Branch Websites with Content Relating to Outcome, Outcome Measurement, or Outcome Evaluation  ..................................................................................... 115 Table 4-1  Summary of Measurement Properties for Standardized Outcome Measures  ...................................................................................... 155 Table 4-2  Patient Data by Partitioned for Post-Clinical Evaluation and Data Integrity  ......................................................................................... 155 Table 4-3 Response Comparison for Neck Disability Index (NDI) Data  ........ 156 Table 4-4  Response Comparison for Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire Data  .......................................................... 157 Table 4-5  Response Comparison for Oswestry Disability Index (ODI) Data .. 158 Table 4-6 Response Comparison for Lower Extremity Functional Scale (LEFS) Data  .............................................................................................. 159 Table 5-1  Mapping of Elements of Practice from Two Practice Models to the APTA Practice Model  .............................................................................. 186 Table 6-1 Attitudes to Outcome Measurement  .............................................. 224 Table 6-2  Current Use of Measures for Outcome Measurement Rated from Never (1) to Always (5)  ............................................................................ 225 Table 6-3 Barriers to Use of Outcome Measures in PEARS Clinical Practice, Rated from No Barrier (1) to “Extreme Barrier (5)  .................................... 226     xi  LIST OF FIGURES Figure 1-1  The Ottawa Model of Research Use (OMRU) ............................... 27 Figure 1-2  Dissertation chapters mapped to the Ottawa Model of Research Use  ...................................................................................................... 28 Figure 2-1  Wainwright’s revised conceptual framework for the use of reflection to inform the clinical decision-making process within the patient/client management model. Adapted from Wainwright et al (2010)  ........ 59 Figure 2-2  Database Search Strategy ............................................................ 60 Figure 2-3  Snowball Search Results  ............................................................. 61 Figure 4-1  Outcome Evaluation Process  ..................................................... 154 Figure 5-1  International Classification of Functioning, Disability, and Health (ICF) Framework ......................................................................... 187 Figure 5-2 The APTA Elements of Physical Therapy Practice ........................... 187 Figure 5-3 Practice elements derived from model comparison ..................... 188 Figure 5-4 ICF-Integrated Clinical Decision-Making Model  ......................... 189 Figure 6-1 PEARS Program Organization: Management Hierarchy  ............ 222 Figure 6-2  PEARS Program Regional Organization  ................................... 
223

LIST OF ABBREVIATIONS

A          Activity [a component of the ICF]
APTA       American Physical Therapy Association
BC         British Columbia
b/s        Body Functions and Structures [components of the ICF]
CDMM(s)    clinical decision-making model(s)
CIHR       Canadian Institutes of Health Research
CPA        Canadian Physiotherapy Association
d          Disability, under which Activity and Participation are jointly coded [a component of the ICF]
DASH       Disabilities of the Arm, Shoulder, and Hand questionnaire
e          Environmental Factors [a component of the ICF]
HC         health condition, which may be modified by diseases or disorders [a component of the ICF]
HOAC (II)  Hypothesis-Oriented Algorithm for Clinicians (second edition)
ICD-10     International Classification of Diseases, Tenth Revision
ICF        International Classification of Functioning, Disability, and Health
LEFS       Lower Extremity Functional Scale
MDC        minimal detectable change
MeSH       Medical Subject Headings
NDI        Neck Disability Index
OCP        outcome completion proportion
ODI        Oswestry Disability Index
OHSAH      Occupational Health & Safety Agency for Healthcare
OMRU       Ottawa Model of Research Use
P          Participation [a component of the ICF]
p          Personal Factors [a component of the ICF that are not coded in the classification system but are important in physical therapy]
PEARS      Prevention and Early Active Return-to-work Safely
PT         physical therapy
RCP        reliable change proportion
RISe       Researcher Information System
the Alliance   Canadian Alliance of Physiotherapy Regulators
the Guide      Guide to Physical Therapist Practice, Second Edition
the Profile    Essential Competency Profile for Physiotherapists in Canada
the handbook   Physical Rehabilitation Outcome Measures, Second Edition: A Guide to Enhanced Clinical Decision-Making
UBC        University of British Columbia
WDP        Work Disability Prevention

ACKNOWLEDGEMENTS

I would like to acknowledge the contributions of others whose support has contributed to the completion of my doctoral program and this dissertation.

First I would like to thank my supervisor, Dr. Elizabeth Dean, for providing years of guidance, direction, and support, and for editing all chapters of this dissertation. Her constant optimism and vision provided a foundation for success.

I would like to thank my committee members, Dr. Annalee Yassi, Dr. Anita Hubley, and Dr. Maziar Badii, for providing challenges and guiding me towards alternate paths of inquiry, perspective, and understanding.

I would like to thank my colleagues in the rehabilitation sciences research doctoral and master's programs, in particular Dr. Jocelyn Harris, Dr. Dana Anaby, and Dr. Mike Bodner, for sharing the experience and leading the way to the end.

I would like to thank Dr. Lyn Jongbloed, Graduate Program Advisor for the Graduate Programs in Rehabilitation Sciences, and Dr. Jim Thompson and Rebecca Trainior from the Faculty of Graduate Studies for their assistance in navigating the final steps.

I would like to thank Charlotte Beck from the UBC Libraries for her assistance with the literature search strategy for Chapter 2.

I would like to thank present and past faculty and staff of the Department of Physical Therapy, in particular Dr. Brenda Loveridge, Dr. Darlene Redenbach, Dr. Darlene Reid, Claudia Buffone, Agnes Zee, Jennifer Talbot, Mark Meheriuk, and Larry Smithe.
I would like to thank the Canadian Institutes of Health Research (CIHR) Strategic Training Program in Rehabilitation Research, also known as the Quality of Life Program, and its mentors, in particular Dr. Joy MacDermid and Dr. Janice Eng.

I would like to thank the CIHR Strategic Training Program in Work Disability Prevention (WDP) and its mentors, particularly Dr. Jaime Guzman, Dr. Patrick Loisel, Dr. Renee-Louise Franche, and Dr. Han Anema.

I would like to thank the trainees of the WDP Program, particularly Dr. Douglas Gross, Åsa Tjulin, Dörte Bernhard, Maurice Driessen, and Sandra van Oostrom.

I would like to thank the Michael Smith Foundation for Health Research, WorkSafe BC, and the Occupational Health & Safety Agency for Healthcare for their funding and support, and the EMGO Institute at the Vrije Universiteit in Amsterdam, the Netherlands, for partnership support in the WDP program.

CO-AUTHORSHIP STATEMENT

Sections of this dissertation are in preparation for publication in peer-reviewed journals. Some of these manuscripts have multiple authors. The details of authorship contributions are listed below.

Chapter 4: Coauthors Selena Horner, PT, and Elizabeth Dean, PhD, PT. Ms. Horner was responsible for the original study proposal and data collection, and for reviewing and editing all manuscript drafts. Dr. Dean was responsible for jointly developing the study concept for this secondary data analysis with me, and for reviewing and editing all manuscript drafts.

Chapter 5: Coauthors Dr. Joy MacDermid and Dr. Patty Solomon. Both Drs. MacDermid and Solomon contributed to the study design, data collection, and interpretation, and reviewed and edited all manuscript drafts.

With respect to these coauthor contributions, I was primarily responsible, either jointly (as listed above) or with guidance from my doctoral supervisor, Dr. Dean, and my supervisory committee, for the development of the concepts and design of the research program represented in this dissertation and in each chapter; I conducted the data collection and analyses and prepared the manuscripts.

CHAPTER 1. INTRODUCTION

1.1 INTRODUCTION

The concept of change being the only constant in life is not new. The science of attempting to understand the process of change in the practice behaviors of healthcare practitioners, and to promote such change, however, is a relatively new development. The physical therapy (PT) profession in Canada has a 20-year history of promoting the implementation of outcome measurement and evaluation processes into clinical practice. However, evidence suggests that this element of PT practice has evolved little in this time.1-3 Such limited advancement may be due in part to differences between current research and practice regulation, insufficient definition of an outcome evaluation system suitable for practitioners, failure to develop and implement adequate knowledge translation initiatives, failure to orchestrate a combination of such strategies, or a host of other influential factors relating to individuals, organizations, society, and human behavior in general. Further, the challenge of promoting change in professional practice, such as the adoption of standardized measures of clinical outcome, extends beyond the PT profession. This dissertation explores dimensions of these issues through a series of studies on the state of outcome evaluation in healthcare with specific attention to PT practice.
A review of literature reporting on interventions to promote the use of standardized questionnaires to measure outcomes and reflecting on practice in healthcare precedes a review of literature and policy, an outcome evaluation study, a proposal for a PT practice model integrating an outcome evaluation system, and a   2 preliminary environmental scan for a proposed knowledge translation intervention study. These studies cover the historical and professional bases for standardized clinical outcome evaluation, clinical application of outcome evaluation methods, and research to promote professional behavior change. These studies are integrated into a cohesive unit with a framework for translating research findings into clinical practice. The concluding chapter presents a synthesis of the findings from these studies and personal reflections on these findings and on the experience gained through my doctoral program. This dissertation is one of the requirements for obtaining a Doctor of Philosophy in Rehabilitation Sciences at the University of British Columbia (UBC). It has been prepared in a manuscript format which is a style approved by the Faculty of Graduate Studies of UBC. In addition to the introductory and concluding chapters, five research chapters are presented, each with an introduction or background, methods, results, discussion, and other relevant sections. As each chapter has been drafted as an independent manuscript with the intention of submission for peer- reviewed publication, there is some repetition across chapters. This chapter outlines some of the literature relating to a work disability prevention program for healthcare workers, the promotion of standardized self-report questionnaires to measure PT outcomes, and early literature on the promotion of uptake of research evidence. Development and implementation of my proposed research project is presented in the context of a framework for promoting use of research, along with a narrative of my experience. This experience is germane to the integration of the research program into a collective work, and provides a basis for   3 reflection on how the experience and the findings have shaped my perspective on the application of health services research. Regarding the promotion of outcome measurement in PT, the literature demonstrates a gap between PT practice and the ideal promoted within the profession. For almost two decades the integration of self-report standardized disability questionnaires as a method to measure clinical PT outcomes and enhance clinical decision making has been encouraged by the profession.1-3 Recent evidence, however, suggests that although change has been demonstrated with some studies4 physical therapists generally have not adopted this practice over this time.3, 5, 6 The Prevention and Early, Active, Return-to-work Safely (PEARS) program is another focus of this dissertation. This work disability prevention program for healthcare workers in British Columbia (BC) was central to my initial research proposal as it provided an accessible population of physical therapists who had demonstrated use and subsequent abandonment of an outcome measurement system that used a battery of standardized self-report questionnaires. Literature on the science of implementation of research evidence into practice has been described with many titles including implementation science, knowledge transfer and/or exchange, and knowledge translation. 
The Canadian Institutes for Health Research (CIHR) has adopted the term knowledge translation, defining it as “a dynamic and iterative process that includes synthesis, dissemination, exchange, and ethically-sound application of knowledge to improve the health of Canadians, provide more effective health services, and products, and   4 strengthen the health care system. This process takes place within a complex system of interactions between researchers and knowledge users which may vary in intensity, complexity, and level of engagement depending on the nature of the research and the findings, as well as the needs of the particular knowledge user.”7 Although this branch of science and healthcare practice has evolved greatly over the past five years, select literature available during the development of my research proposal is of relevance to this dissertation. This overview is followed by a narrative describing my experience, a description of a framework for promoting the implementation of research evidence into healthcare practice, and an application of this framework to link the research chapters of this dissertation.  1.1.1 Outcome Measurement in Rehabilitation Awareness of the importance of outcome evaluation in rehabilitation and its implications has increased over 20 years,1, 3, 8-10 but the rate of adoption of standardized measures by physical therapists in clinical practice has lagged.3, 5, 11 In part, this may be explained by the multitude of factors that influence behavioral change in healthcare professionals.12, 13, 14 The concept of outcome measurement has become common in professional terminology, partly in response to an increasing demand for accountability in practice.2, 15-21 This trend has been supported by the development of print and web-based knowledge transfer resources1, 2, 22-28 and workshops29 aimed at providing clinicians with information on measurement tools, and their properties and clinical applications. However,   5 evidence of the current state of adoption of self-report questionnaires and outcome evaluation in practice remains sparse.4, 5, 11, 30 Surveys have been reported on the barriers and facilitators to clinicians’ use of outcome measures and measurement issues.1-3, 10 Barriers included limited knowledge of instruments and of their development, time, applicability to clients, consensus, availability of equipment, administrative support, knowledge of utility,1 low priority, and lack of personal interest.3, 10 Facilitators included provision of lists of measures and their characteristics, standardized forms and directions, guidance on evaluating client outcome, and a system for their use.10 A 1998 survey6 identified the most commonly used measures for low back pain-associated disorders in five European countries. These were the Visual Analogue Scale for pain,31, 32 the Oswestry Disability Index,33, 34 and Roland-Morris Questionnaire;35 the latter two tools evaluate disability associated with back pain. However, no report was made of barriers or facilitators to use, or the extent to which these measures were used. A 2002 study reported low use of standardized measures of pain and activity- level disability outcomes by physical therapists based on a random sample of their clinical records in eastern Canada.5 Specifically, the use of a pain rating scale (visual analogue or numeric) was noted in 31% of records on the initial assessment and in 5% of records at both initial and discharge visits. 
Use of only one self-report questionnaire (the Roland-Morris) was reported, in 2% of initial records and 1% of discharge records. These findings could have overestimated prevalence due to a participation rate of 71%.5

A 2003 longitudinal study examined self-reported use of measures by physical therapists in Australia.4 Physical therapists were surveyed about their use of measures and attitudes towards outcome measurement before and after a six-month marketing campaign to increase awareness and use. An increase in their use was reported over the six-month interval (i.e., use of the Oswestry Disability Index increased from 69% to 81%) with no change in attitude,4 but the proportion of complete data was not discussed. A 2006 study promoting the use of evidence-based guidelines to manage low back pain found no change between intervention and control groups following a training program.30 Although change in health clinician behavior might be expected, only modest changes have been reported.14, 36

1.1.2 The Prevention and Early Active Return-to-work Safely (PEARS) Program

Increasing incidence of musculoskeletal disorders in the BC healthcare sector during the 1990s and aging of the healthcare workforce prompted OHSAH (the Occupational Health & Safety Agency for Healthcare) to develop an evidence-based program to provide primary37 and secondary38 prevention of work disability. The PEARS program was developed to reduce the injury rate and to reduce pain and disability arising from musculoskeletal disorders sustained by healthcare workers. Evaluation of pilot data provided support for reductions in both primary39 and secondary40 prevention outcomes. Tracking of secondary outcome data, however, was abandoned after the pilots were completed, leaving a gap in the ability to report comprehensively on PEARS program outcomes, and thus to comment on both program effectiveness and efficiency. This kind of deficiency has been noted as a major factor challenging reform of Canada's healthcare system.41

According to the 2006 Canadian census, the oldest working-age class (55-64 years) is growing the fastest due to the aging of the baby boomers. This group represents 16.9% of the population and is expected to reach 20% by 2016.42 The ratio of people entering the work force to those leaving it is currently 1.1:1, but by 2016 more people are predicted to leave the labor force than to begin working.42 The Canadian Nurses Association has predicted a shortage of 113,000 nurses by 2016.41, 43 As the demands on healthcare workers increase due to the aging population and the decline in healthcare worker supply, the risks of injury and disability to the aging healthcare workforce could increase markedly. Injured workers with longer and non-standard shifts already risk poorer vocational outcomes from rehabilitation.44 To combat these risks, the PEARS program could lead in injury and disability prevention. In addition to addressing labor force issues such as employee retention, the health of older workers, and the continuous training of employees, the development of effective knowledge translation strategies is needed to offset these increasing demands on the Canadian healthcare system. Achieving this, however, would require comprehensive measurement of relevant healthcare outcomes and routine evaluation and reporting to support evidence-based decisions clinically and in program development initiatives.
The PEARS program was founded on 20 principles established from the partnership of the union and healthcare management and based on evidence from research.39, 45 Primary37 and secondary38 prevention strategies were integrated into   8 one program to address both injury prevention and injury-related disability in a timely manner. Active surveillance of reported workplace incidents was followed by contact of the involved healthcare workers by the PEARS program coordinator within three days of the incident report. Workers were offered access to the PEARS program whether they reported a resulting musculoskeletal disorder or not, but participation was voluntary. Interested workers were offered an assessment of their workplace with ergonomic advice to address environmental risk factors and PT to treat signs and symptoms of the musculoskeletal disorder and any resulting disability. The program objectives were to reduce workplace hazards before further injuries occurred and to reduce disability by facilitating early and safe return to the workplace.39, 46 Primary outcomes defined for the pilot studies were injury incidence, and time off work which was measured as claim duration from WorkSafeBC data. The secondary study outcome to reduce disability was measured using valid and reliable standardized questionnaires. Each participant was to complete a visual analogue scale for pain and one of four self-report questionnaires each relating to a region of the body. These were the Neck Disability Index,47, 48 the Oswestry Disability Index,33, 34 the DASH or Disabilities of the Arm, Shoulder, and Hand questionnaire,49, 50 and the Lower Extremity Functional Scale.51, 52 Selected measures were to be administered on admission to the PEARS program, at discharge, and at one and three months after discharge. The administration of self-report questionnaires over multiple time-points as a component of an outcome evaluation model was thought to be necessary in contemporary work disability prevention research53, 54 and in PT.2, 3   9 However, a gap apparently existed with respect to the congruence between the rhetoric of outcome evaluation and its application in rehabilitation contexts. The PEARS program was launched with a one-year pilot study at two sites in 2002 followed by two more pilot sites in 2003, and was subsequently adopted by four of the five regional BC health authorities and expanded to sites across the south and central province. OHSAH retained responsibility for program evaluation. Reductions in musculoskeletal injuries and work time-loss were comparable to findings at the pilot control sites.39 The healthcare worker participants reported large reductions in pain and disability.40 However, complete pain and disability data were available for only 36-39% of participants.40 A linear regression model supported a relationship between high disability scores at discharge on the Oswestry Disability Index and risk of re-injury, but the model was limited by missing data for 61% of participants.40, 55 Due to the low completion rates for these outcomes, the requirement to administer the visual analogue scale and one of the four self-report questionnaires was dropped at the conclusion of the pilot studies. With only Participation-level measures being reported, a gap in the outcome evaluation of the PEARS program was created. 
Thus, evaluation of outcomes may not be fully descriptive of health benefits56 and may not fully inform decisions for the clinical component of the PEARS program.2 This gap may have represented a failure of knowledge translation regarding adoption of an outcome evaluation system consistent with the rehabilitation1-3 and work disability prevention literature53, 54 in what was otherwise seen to be a successful program.39, 57 This experience also represented an opportunity to apply intervention mapping in the development of a   10 tailored, multifaceted knowledge translation intervention, and assess the impact of such a knowledge translation intervention on a small population of physical therapists practicing in the PEARS program.  1.1.3 Knowledge Translation in Healthcare The study of knowledge translation in healthcare is emerging as a recognized field of inquiry.58, 59 In rehabilitation the use of stimulated chart recall interviews60 and social marketing campaigns have been employed. Radio and bus advertising campaigns were implemented in Alberta to promote adoption of an evidence-based practice guide for work disability prevention. Despite the collaboration of academic, regulatory, and professional PT bodies with the workers’ compensation board, no change in practice was found from pre- to post-campaign surveys.11 Current evidence supports that healthcare professionals can change their practice behaviors but that effective knowledge translation intervention requires multifaceted approaches to address barriers at multiple ecological levels (i.e., individual, interpersonal, and organizational) that are tailored to the characteristics of the situation.58, 61-63 As most individual strategies and some combinations provide small to moderate improvements in care,14, 36 logical integration of strategies tailored to the specific barriers and incentives of the target healthcare clinicians and settings may yield more rapid adoption rates and larger changes.64 Strategies showing some success include reminders, interactive small group meetings, audit and feedback, and multifaceted interventions that included educational outreach.14 Other factors include an understanding of the knowledge translation process, the relevant   11 stakeholders, and the conceptual framework used to facilitate the knowledge translation intervention.58 An approach has been advanced to guide the systematic integration of multiple explanatory theories to inform matching of intervention strategies with specific barriers in the development of health promotion programs.63, 65 Intervention mapping is one approach developed in the health promotion arena which may provide a methodology to guide matching of strategies to barriers based on level, type, and content of knowledge translation interventions. In addition to early applications in HIV education66 and asthma management,67 intervention mapping has recently been used to tailor a return-to-work intervention developed for musculoskeletal disorder- related work disability to a stress-related mental health disorder context.68 Since the issues of tailoring knowledge translation interventions and matching strategies to barriers in health promotion appear to be similar, exploration of this process in designing knowledge translation interventions is warranted.  1.2  EXPERIENCE NARRATIVE The impetus for this dissertation shifted from the proposed research to develop and implement the knowledge translation intervention study towards a reflection on my experience in this pursuit. 
My research interest in outcome measurement and the use of standardized self-report questionnaires arose from my practice as a physical therapist. In my doctoral program, I integrated evidence from a range of research areas to develop my research question into a detailed proposal. However, circumstances changed, which necessitated an alternate path to complete the dissertation with a focus on reflection as a means to learn from the experience.

My research interest developed over 11 years of clinical and managerial experience as a physical therapist in a rehabilitation center for injured workers. Early on I experienced growing frustration with my inability to measure clinical outcome in a meaningful way. The PT profession concurrently began to address this issue nationally.1 In 1994 I accepted a senior therapist position with the task of promoting outcome measurement to the PT staff in the center. After three years I had no evidence of any change in clinical outcome measurement practices in the center. I was promoted to a managerial position where, among other objectives, such practices could be mandated. Over a five-year period I held two management roles in which I observed little change in the use of standardized outcome measures.

In 1997 I became the manager of a program with 12 PT and occupational therapy staff. I set a requirement for the clinical staff to select and implement an appropriate battery of measures. After 18 months the staff had not agreed on which measures would be appropriate. In 1999 I was charged with managing services provided to injured workers by private PT practices. This role coincided with a second effort by the profession to promote outcome measurement in clinical practice.2 The PT contract required reporting progress on measured Activity-level change, but questionnaires were not specifically required. Over a three-year contract period, I again observed little change. In sampling PT progress reports I cannot recall having once found evidence of the use of self-report questionnaire scores to measure activity-level status or to report change over time. This was of concern as the most common recommendation made was to extend the PT intervention. A desire to understand the dynamics of this paradox and to explore ways of promoting change became the impetus for my pursuit of graduate studies.

In January 2004, I began graduate studies and became aware of the PEARS program. The abandonment of questionnaire use offered an ideal test case for my research interest. The question I asked was: could a tailored, multifaceted knowledge translation intervention to promote the adoption of standardized self-report questionnaires to measure outcome facilitate a change in physical therapists' practice in the PEARS programs?

By early 2007 I had addressed many methodological challenges of studying this question. I had opted for an interrupted time-series design to measure change over time for the small number of subjects, defined a means of measuring use of questionnaires, and addressed issues of statistical power and recruitment. I selected the Intervention Mapping63 method to tailor knowledge translation strategies to the subjects. I also chose an operational knowledge translation framework, the Ottawa Model of Research Use (OMRU; Figure 1-1), to guide development and implementation of the study. My final proposal included three phases. A preliminary phase would gather information about the Practice Environment and the Potential Adopters (PEARS physical therapists).
This information would have been integrated with literature evidence through the Intervention Mapping process, yielding a range of strategies to promote adoption of the questionnaires based on research evidence and the subjects' preferences for change strategies.63 From this list, those that were best matched to the barriers and facilitators identified from the environmental scan would be included in the intervention. The battery of four questionnaires used in the PEARS pilot studies was considered the Evidence-Based Innovation. Transfer Strategies were to be applied in a three-month intervention phase. A follow-up phase would have repeated the interview and survey, and gathered information on outcomes and perceptions of the study methodology. Use of the measures would have been quantified from data extracted from clinical records to evaluate change over the three phases, with 13 weekly time-points per phase.

As the ethical review process was being completed in 2008, changes in the four health authorities were announced that directly impacted implementation of the study. One health authority announced in March 2008 that it was cancelling its PEARS programs in a departmental restructuring, and another reported a change in priorities that precluded its staff from enrolling in the study. The two remaining health authorities eliminated the in-house PT component, leaving only the preliminary phase viable. I proceeded with the survey in these two health authorities while considering alternative directions of inquiry to complete my dissertation. Reflection on this change led to the exploration of other aspects of the promotion of outcome measures in PT practice. These included reviews on the use of reflection on outcomes to facilitate decision-making, the gap between professional regulations and ideal practice, an outcome evaluation of data collected by a motivated physical therapist, and development of a clinical decision-making model that explicitly integrates an outcome evaluation process.

1.3 THE OTTAWA MODEL OF RESEARCH USE (OMRU)

The OMRU provides a framework in which to explore the promotion of the use of standardized disability questionnaires to evaluate outcome, and to reflect on my experience. This model was developed to guide policy makers seeking to increase research use by clinicians and to assist researchers interested in studying the process of integrating research into practice.69 The model consists of six key elements (Figure 1-1) thought to be central to the research use process. These are the Practice Environment, the Potential Adopters of the evidence, the Evidence-Based Innovation, the research Transfer Strategies, the resultant Adoption decision, and the Outcomes (health-related and otherwise).69 These elements are described as an interactive model that "views research use as a dynamic process of interconnected decisions and actions by different individuals relating to each of the model elements."69 Although depicted as linear, the application of the model requires a cyclical or iterative approach to decisions and actions, as all of the model elements influence and are influenced by each other; linearity is merely a consequence of the unidirectional nature of time.69 The first three elements constitute a scan of the environment, stakeholders, and research innovation of interest.
The Practice Environment can influence practitioners, policy makers, and researchers70 and can do so across many organizational and social levels.63 These influences can facilitate or inhibit the process of adoption, act positively or negatively under different circumstances, and may do both concurrently through different mechanisms.13, 71 Described in the model as structural, social, patient-related, and other situation-specific factors, these may present as regulations, physical structures, or workload; politics and personalities; and patient influences, respectively. The objective of this element is to identify, describe, and assess the nature and magnitude of their influence on the research innovation to be promoted for use.69

The Potential Adopters of the evidence may include clinicians, patients, policy makers, and any other target audience that will have a stake in the implementation of the innovation. The objective of this element is to identify who they are and to describe them in terms of their knowledge, attitudes, skills, and current practices. Individual interests may vary widely and may interact to magnify or offset the positive or negative influences of others. Creating adopter profiles can guide planning to select and implement strategies to address inhibitors and facilitators to research use.69

The third element, the Evidence-Based Innovation, is focused on identifying the perceptions that the adopters may hold towards the innovation, which may relate to the attributes of the evidence itself or to the process through which it was translated for use. Attributes of the evidence may be perceived differently by different audiences (e.g., clinicians and administrators) or by individuals within a group (e.g., early and late adopters). Attributes include compatibility with current practice, complexity of the innovation, competitive (dis)advantage, and risk-benefit balance. Transparency of the process through which the evidence was translated, or lack thereof, may also influence adopters' willingness or reluctance to use it. Staging of strategies over time may be necessary to address the facilitators and inhibitors presented by the interactions of different groups or individuals with the innovation's attributes and process.69

The remaining three elements represent the process of implementation of the research. Transfer Strategies represent the specific methods that will be employed to promote adoption of the research. These may include passive methods like diffusion, dissemination, and social marketing, or interactive methods like outreach, audit and feedback, reminders, incentives and sanctions, or patient-mediated strategies.69

The resultant Adoption decision represents a behavioral change central to knowledge translation science. This element represents two actions: the decision to adopt or not adopt the evidence, and the use or application of that decision. Monitoring of the process can identify the nature of the adoption decision and the extent to which it was or was not impacted by the transfer strategies. Monitoring can also identify the extent to which actual use reflects intended use of the evidence, and whether the change was durable or transient. Desired results can then be encouraged and undesired consequences can be addressed as they appear.69

The Outcomes element represents an evaluation of the impact that the resulting use of the innovation has in healthcare delivery.
Improvement of patient health outcomes is the primary reason for implementing evidence from research, thus evaluating the nature and extent of the impact is important. Additional insight can be gained by evaluating the extent to which practitioners have been impacted and by conducting cost-benefit or other types of economic analyses. This   18 information can then be fed back to the stakeholders and used to inform other implementation projects.69 The six elements are interconnected through the process of evaluation.72 Prior to, during, and following a research transfer effort, each element undergoes a systematic process to Assess, Monitor, and Evaluate. This iterative process can potentially identify inhibiting and facilitating factors for the research use as they relate to the practice environment, potential adopters, and the evidence-based innovation. The assessment process can then guide selection and tailoring of transfer strategies. Monitoring can identify the extent to which transfer strategies impact adoption. The impact of unintended consequences or new information on (changes in) the practice environment, adopters’ attributes, or adopters’ perceptions of the innovation can be addressed by adapting the transfer strategies. Finally, the evaluation process can determine any change in the use of the evidence-based innovation and its impact on health outcomes and economics.69, 72  1.4  FRAMING OF THE DISSERTATION WITH THE OTTAWA MODEL OF RESEARCH USE Each study in this dissertation represents the application of a different element of the OMRU in the exploration of my original research question (Figure 1-2). Chapter 3 presents a review of regulatory and professional documents with respect to the elements of physical therapy practice. This represents part of the structural Practice Environment under which PEARS physical therapists were operating. Professional regulations and supports represent an important element of the   19 structural environment at the level of society in which health professionals practice. Understanding gaps or discrepancies that may exist between the requirements established for minimal standards of practice by regulatory bodies and the ideals promoted by a voluntary membership association has implications for understanding how individuals perceive their obligations and options in defining their individual practice styles. Depending on the individuals, the social and physical environments, and the organizational structure within which individuals function, different inhibiting and facilitating factors may have to be addressed using a variety of combinations of strategies to change practice behaviors. The environmental scan presented in Chapter 6 represents an attempt to document and understand such variations from the perspectives of the individual potential adopters (Figure 1-2). This study sought to disclose the current state of outcome evaluation practiced by physical therapists in the PEARS program. Therapists were surveyed on the nature and level of their use of outcome measurement methods, their attitudes towards outcome measurement practices, and the factors they reported as inhibiting and facilitating to adoption. A subsequent interview was conducted to gain deeper insight to the reported inhibiting factors and the affective component of attitude towards outcome measurement. 
The clinical decision-making model presented in Chapter 5 represents an attempt to redefine the framework in which physical therapists conceptualize their practices by integrating the familiar elements of practice with those that represent the outcome evaluation process. This study fits into the OMRU under the evidence-based innovation element (Figure 1-2). The innovation is represented by both the wide array of standardized questionnaires that can be used to measure outcome, which in my proposal was reduced to a battery of four activity-level measures, and the conceptual process of using them to measure and interpret change over time in a construct (disability) argued to be meaningful to clinicians and their clients.2 The proposed clinical decision-making model represents a practical effort in knowledge translation by framing the evaluation process as a logical extension of a practice model with which physical therapists are already familiar.

The outcome evaluation study presented in Chapter 4 provides an analysis and interpretation of outcome data for self-report questionnaires collected on a consecutive sample of patients by a motivated physical therapist. The outcome evaluation is supplemented with the clinician's insights on how she used reflection in and on her practice to develop meaning for the questionnaires with respect to her clients, and to make decisions about changing her practice to improve patient care. This chapter represents the result of an adoption decision and an evaluation of the resulting outcome data. Although the transfer strategies are not detailed in this chapter, they are implicit in the clinician's implementation of the outcome measurement process (Figure 1-2).

Although presented as the first study in the dissertation, the literature review in Chapter 2 of implementation studies using reflection on outcomes to facilitate decision-making in healthcare represents a reappraisal and redirection of my research question as a consequence of changes in my program. Reflection on the collapse of my proposed implementation study raised the possibility that the objective of promoting the use of standardized questionnaires to measure outcome is not only important in PT, but may also be of importance in other healthcare professions. The question of measuring outcome as a means of promoting reflection on one's practice and clinical decision-making abilities has been a theme central to the PT profession. The extent to which this premise has been researched in the major health professions was unclear, as was the role of reflection on outcomes in decision-making.

The Assess-Monitor-Evaluate process of the OMRU guides us to reflect not only on the findings, limited as they may be, from the Chapter 6 environmental scan, but also on the events leading to the changes in PEARS and the collapse of the study as an important result. Such reflection may lead one to ponder many questions, but one of relevance here is the extent to which the implementation of standardized questionnaires in clinical practice has demonstrated an impact on clinical reasoning and decision-making. This is represented in the OMRU framework with a Reassess box (Figure 1-2) but conceptually represents the start of another cycle of the Assess phase.

The proposed intervention study was intended to apply the remaining elements of the OMRU.
The Transfer Strategies were to be identified from the integration of the environmental scan of PEARS stakeholders including the clinicians with the findings from the other parts of the Assess element. The adoption decision was to be determined from a review of clinical records before, during, and after the application of the selected transfer strategies looking for the evidence of use, and change in use of the battery of disability questionnaires. These two phases would have represented the Monitor process. Finally, the Evaluate process would have provided   22 o an evaluation of the outcomes for healthcare workers participating in the PEARS programs from data collected on the four questionnaires o outcome profiles reporting indices of change and completion proportion for each clinician in comparison to the group totals o and from follow-up interviews, a qualitative review of the impact of the study on the PEARS program. This final step may have offered some insight to the outcome of the study. In the event some clinicians had adopted the questionnaires, what changed in their opinions and attitudes to their use, and what transfer strategies did they find helpful in making the change? In the case where the clinicians did not adopt the questionnaires, why did they not do so?  1.5  SCOPE OF THE DISSERTATION This dissertation includes five research studies followed by a general discussion in Chapter 7 which provides a summary and synthesis of the research chapters and a discussion of the significance of these findings and future directions. Chapter 2 provides a comprehensive literature review of intervention studies using reflection on outcomes to facilitate changes in clinical decision-making in healthcare practice. Literature on this main theme was sparse, but studies on sub-themes of interventions to develop reflective abilities, use of reflection to change specific practice abilities, and implementation studies of standardized measures are reviewed. Chapter 3 provides a review of the regulatory documents that define current PT practice in the English-speaking provinces of Canada, revealing   23 inconsistencies between theory, policy, and practice. Not only did we find few provinces that include outcome evaluation as part of PT practice, we identified variations in both the elements of practice and the terminology used to describe them amongst provincial regulations. Chapter 4 explores the degree to which a single motivated practitioner can implement such a change in practice independently. This outcome evaluation study of a physical therapist working in an orthopedic outpatient department of a large multi-center hospital in the United States demonstrates that individual practitioners can, with very limited guidance and support, implement a change in their clinical practices to adopt an outcome evaluation system. Given that a motivated practitioner can adopt an outcome evaluation system consistent with that promoted by the profession, there remains a question: what is needed to promote such a change to a small population of physical therapists who work in a defined program? Chapter 5 describes a basis for constructing a clinical decision-making model that explicitly incorporates an outcome evaluation system. The clinical decision- making model is constructed from a review of practice models and practice standards in addition to filling the gaps and terminology deficiencies identified in the policy review in Chapter 3. 
Variations in terminology are addressed in the clinical decision-making model by integrating components of the International Classification of Functioning, Disability, and Health (ICF), and by compiling a glossary in which we adopt terms and definitions where sufficient ones are available, and recommend new terminology and/or draft definitions for evolving constructs.

Chapter 6 describes the methods and results of the preliminary phase of my proposed study. Physical therapists practicing in the PEARS programs operating in BC health authorities were surveyed on their outcome measurement practices. The PEARS programs offered a combination of primary prevention (injury prevention through workplace ergonomic modification) and secondary prevention (disability prevention through PT) strategies to healthcare workers with, or at risk for, disability from work due to musculoskeletal injuries. The PEARS program was available to the healthcare workforce in four health authorities. After successful pilot testing, the PEARS program was adopted by these health authorities with one exception: the outcome evaluation system for the secondary prevention component of the program, which was based on collecting scores from standardized disability questionnaires at multiple time points, was abandoned. During development of our research proposal, PEARS had expanded to employ almost 30 physical therapists. To promote adoption of a similar system, we planned to survey and interview physical therapists practicing in PEARS programs and to interview representatives from four key stakeholder groups, namely PEARS program coordinators, union and health authority management representatives, and healthcare workers who had received PEARS services. The survey explored the attitudes, use, barriers, and facilitators to use of the outcome evaluation components by physical therapists practicing in PEARS programs. These elements and the therapists' perceptions of the proposed implementation study were to be examined in greater depth through interviews. Stakeholder perceptions, which could provide insight into interpersonal and organizational influences on adoption barriers and facilitators, were to be gathered through interviews. However, due to administrative changes in the PEARS program prior to initiating this study phase, part of this phase and the subsequent knowledge translation intervention implementation and follow-up phases could not be implemented.

Finally, Chapter 7 provides a general discussion and conclusion to summarize the key findings of this program of research. As this dissertation is predominantly exploratory, we provide recommendations for further research, and reflections on my experience. The appendices provide supporting resources and documents for the dissertation, including copies of the ethics review certificates.

1.6  OBJECTIVES OF THE DISSERTATION

The studies included in this dissertation are all exploratory in nature. As such, this research program was not based on hypothesis testing. Within each chapter and in the general discussion, recommendations are made from which hypotheses may be generated to suit the needs of the researchers who choose to further advance this work. The objectives of each study are:

Study 1. To describe the literature on intervention studies using reflection on outcomes to facilitate clinical decision-making in four healthcare professions, specifically medicine, nursing, physical therapy, and occupational therapy.
The secondary objective is to describe the studies that facilitate development of reflective abilities, use reflection to facilitate change in other clinical abilities, or   26 describe a change in practice through implementation of outcome measures in a clinical setting.  Study 2. To describe differences between elements of PT practice, including those relating to outcome evaluation, as defined in regulatory and professional documents versus the ideal promoted by the profession. Study 3. To describe an evaluation process using four standardized self-report questionnaires, and the resulting clinical outcomes, as implemented by a motivated physical therapist, for a consecutive sample of clients. Study 4. To describe development of a PT clinical decision-making model that integrates familiar elements of practice with those relating to outcome evaluation and an internationally recognized model of health. Study 5. To describe the results of a survey of physical therapists practicing in the PEARS program on attitudes, practices, barriers, and facilitators to outcome measurement.     27 Figure 1-1. The Ottawa Model of Research Use (OMRU).  Practice Environment o structural o social o patients o other Potential Adopters o knowledge o attitudes o skills  Evidence- based Innovation o translation process o innovation Transfer Strategies o diffusion o dissemination o implementation Adoption o decision o use Outcomes o patient o practitioner o economic Assess               +              Monitor              +                 Evaluate    28 Figure 1-2.  Dissertation chapters mapped to the Ottawa Model of Research Use.   Practice Environment  o Chapter 3 o Chapter 6 Potential Adopters  o Chapter 6  Evidence- based Innovation  o Chapter 5 Transfer Strategies  o Chapter 4 Adoption  o Chapter 4  Outcomes  o Chapter 4  Assess               +              Monitor              +                 Evaluate  Re-Assess  o Chapter 2   29 1.7 REFERENCES 1. Cole B, Finch E, C G, Mayo N. Physical rehabilitation outcome measures. Toronto: Canadian Physiotherapy Association; 1994. 2. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. Second ed. Hamilton: BC Decker; 2002. 3. Jette DU, Halbert J, Iverson C, Miceli E, Shah P. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89(2):125-135. 4. Abrams D, Davidson M, Harrick J, Harcourt P, Zylinski M, J. C. Monitoring the change: current trends in outcome measure usage in physiotherapy. Man Ther. 2006;11(1):46-53. 5. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257. 6. Torenbeek M, Caulfield B, Garrett M, Van Harten W. Current use of outcome measures for stroke and low back pain rehabilitation in five European countries: first results of the ACROSS project. Int J Rehabil Res. 2001;24(2):95-101. 7. The Canadian Institutes of Health Research. Knowledge translation web page. Available at: http://www.cihr-irsc.gc.ca/e/26574.html. Accessed April 13, 2010. 8. Health and Welfare Canada. Toward Assessment of Quality of Care in Physiotherapy. Ottawa 1980.   30 9. Health and Welfare Canada. Toward Assessment of Quality of Care in Physiotherapy II: Instruments to Measure Health Status of Patients Receiving Physiotherapy. Ottawa 1981. 10. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 
2001;53(4):268-275. 11. Gross DP. Evaluation of a knowledge translation initiative for physical therapists treating patients with work disability. Disabil Rehabil. 2008;1(9):1-8. 12. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003. 13. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kryiakidou O. Diffusion of innovations in service organizations. Systematic review and recommendations. Milbank Q. 2004;82:581-629. 14. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8, Suppl 2):II-2-II-45. 15. Mayo NE. Outcome measures or measuring outcome. Physiother Can. 1994;46(3):145-148. 16. Thomas J, Miller P, Silaj A, King ML. Application of physiotherapy outcomes to the managed care model. Physiother Can. 1994;46(4). 17. Loomis J. Rehabilitation outcomes: the clinician's perspective. Can J Rehab. 1994;7(3):165-170. 18. May L. The challenge of measuring change: responsiveness of outcome measurements. Can J Rehab. 1997;10(1):1997.   31 19. Lavis JM. Informing policy-making with research findings. Can J Rehab. 1997;11(1):8. 20. Renwick R. Quality of life: linking research and policy. Can J Rehab. 1997;11(1):8-9. 21. Law M. Integrating outcomes research findings into rehabilitation practice. Can J Rehab. 1997;11(1):16-17. 22. American Physical Therapy Association. APTA Hooked on evidence website. Available at: http://www.hookedonevidence.com/. Accessed June 29, 2010. 23. Centre for Evidence-Based Physiotherapy (CEBP). Physiotherapy evidence database (PEDro) website. Available at: http://www.pedro.org.au/. Accessed June 14, 2007. 24. Chartered Society of Physiotherapists. Chartered Society of Physiotherapists outcomes website. Available at: http://www.csp.org.uk/director/members/practice/clinicalresources/outcomem easures.cfm. Accessed June 30, 2009. 25. Canadian Physiotherapy Association. CPA outcomes database: client-specific reports guide. Toronto 1997. 26. Guide to physical therapist practice: Part 3: specific tests used in physical therapist practice CD-ROM [computer program]. Version. Alexandria: American Physical Therapy Association; 2003. 27. American Physical Therapy Association. Guide to physical therapist practice. Second ed. Alexandria, VA: American Physical Therapy Association; 2003.   32 28. Physiotherapy Association of British Columbia (PABC). PABC Home Page. Available at: http://www.bcphysio.org/app/index.cfm?fuseaction=pabc.home. Accessed July 14, 2009. 29. Huijbregts MP, Myers AM, Kay TM, Gavin TS. Systematic outcome measurement in clinical practice: challenges experienced by physiotherapists. Physiother Can. 2002;54(1):25-31, 36. 30. Stevenson K, Lewis M, Hay E. Does physiotherapy management of low back pain change as a result of an evidence-based education program? J Eval Clin Pract. 2006;12(3):365-375. 31. Scott J, Huskisson EC. Graphic representation of pain. Pain. 1976;2:175-184. 32. Bolton JE, Wilkinson RC. Responsiveness of pain scales: a comparison of three intensity measures in chiropractic patients. J Manipulative Physiol Ther. 1998;21:1-7. 33. Fairbank J, Couper J, Davies J, O'Brien J. The Oswestry low back pain questionnaire. Physiotherapy. 1980;66:271-272. 34. Fairbank J, Pynsent P. The Oswestry Disability Index. Spine. 2000;25:2940- 2953. 35. Roland M, Fairbank J. The Roland-Morris disability questionnaire and the Oswestry disability questionnaire. Spine. 2000;24:3115-3124. 36. Oxman AD, Thompson MA, Davis DA, Haynes RB. 
No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153(10):1423-1431.   33 37. Frank JW, Brooker A-S, DeMaio SE, et al. Disability resulting from occupational low back pain: part II: what do we know about secondary prevention? A review of the scientific evidence on prevention after disability begins. Spine. 1996;21(24):2918-2929. 38. Frank JW, Kerr MS, Brooker A-S, et al. Disability resulting from occupational low back pain: part I: what do we know about primary prevention? A review of the scientific evidence on prevention before disability begins. Spine. 1996;21(24):2908-2917. 39. Davis MP, Badii M, Yassi A. Preventing disability from occupational musculoskeletal injuries in an urban, acute and tertiary care hospital: results from a Prevention and Early Active Return-to-Work Safely program. J Occup Environ Med. 2004;46(12):1253-1262. 40. Kozlowski AJ, Yassi A. Pain and disability outcomes from a Prevention and Early Active Return-to-work Safely (PEARS) program. Paper presented at: World Confederation for Physical Therapy (WCPT), 2007; Vancouver, Canada. 41. Kirby M. The Health of Canadians - The federal role: final report. Ottawa 2002. 42. Canada S. Canada's changing labour force, 2006 census: Highlights web page. 43. Romanow R. Building on values: the future of health care in Canada - final report. Ottawa 2002.   34 44. Allard ED, Delbos R, Erickson JB, Banks SM. Associations between employees' work schedules and the vocational consequences of workplace Injuries. J Occup Rehabil. 2007;17(4):641-651. 45. Yassi A, Ostry A, Spiegel J. Injury prevention and return to work: breaking down the solitudes. In: Sullivan T, Frank J, eds. Preventing and managing injury and disability at work. London: Taylor & Francis; 2003:75-86. 46. Frank J, Sinclair S, Hogg-Johnson S, et al. Preventing disability from work- related low-back pain. Can Med Assoc J. 1998;158(12):1625-1631. 47. Vernon H, Mior S. The Neck Disability Index: a study of reliability and validity. J Manipulative Physiol Ther. Sep 1991;14(7):409-415. 48. Stratford PW, Riddle DL, Binkley JM, Spadoni G, Westaway M, Padfield B. Using the Neck Disability Index to make decisions concerning individual patients. Physiother Can. 1999;51:107-112, 119. 49. McConnell S, Beaton D, Bombardier C. The DASH outcome measure user's manual: Institute for Work and Health; 1999. 50. Beaton DE, Katz JN, Fossel AH, Wright JG, Tarasuk V, Bombardier C. Measuring the whole or the parts? Validity, reliability, and responsiveness of the Disabilities of the Arm, Shoulder and Hand outcome measure in different regions of the upper extremity. J Hand Ther. Apr-Jun 2001;14(2):128-146. 51. Binkley J, Stratford P, Lott S, et al. The Lower Extremity Functional Scale (LEFS): scale development measurement properties and clinical application. Phys Ther. 1999;79:371-383.   35 52. Stratford PW, Binkley JM, Watson J, Heath-Jones T. Validation of the LEFS on patients with total joint arthroplasty. Physiother Can. 2000;52:97-105. 53. Loisel P, Buchbinder R, Hazard R, et al. Prevention of work disability due to musculoskeletal disorders: the challenge of implementing evidence. J Occup Rehabil. Dec 2005;15(4):507-524. 54. Franche RL, Baril R, Shaw W, Nicholas M, Loisel P. Workplace-based return- to-work interventions: optimizing the role of stakeholders in implementation and research. J Occup Rehabil. Dec 2005;15(4):525-542. 55. Occupational Health and Safety Agency for Healthcare. OHSAH PEARS Website. 
Available at: http://www.ohsah.bc.ca/EN/affiliate_ohs_services/. Accessed June 30, 2009. 56. Baldwin ML, Johnson WG, Butler RJ. The error of using returns-to-work to measure the outcomes of healthcare. Am J Ind Med. 1996;29:632-641. 57. Badii M, Keen D, Yu S, Yassi A. Evaluation of a comprehensive integrated workplace based program to reduce occupational musculoskeletal injury and its associated morbidity in a large hospital. J Occup Environ Med. 2006;48:1159-1165. 58. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13-24. 59. Estabrooks CA, Thompson DA, Lovely JE, Hofmeyer A. A guide to knowledge translation theory. J Contin Educ Health Prof. 2006;26:25-36.    36 60. MacDermid JC, Solomon P, Law M, Russel J, Stratford PW. Defining the effect and mediators of two knowledge translation strategies designed to alter knowledge, intent and clinical utilization of rehabilitation outcome measures: a study protocol. Implementation Science. 2006;1(1):14. 61. Grol R, Grimshaw JM. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225-1230. 62. Grol R, Wensing M, Eccles MP. Improving patient care: the implementation of change in clinical practice. Edinburgh: Elselvier Butterworth Heinemann; 2005. 63. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH. Planning health promotion programs: an Intervention Mapping approach. Second ed. San Francisco: Josey-Bass; 2006. 64. Grol R, Bosch MC, Hulscher M, Eccles MP, Wensing M. Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. 2007;85(1):93-138. 65. Kok G, Schaalma H, C. RRA, van Emplin P. Intervention Mapping: a protocol for applying health psychology theory to prevention programs. J Health Psychol. 2004;9(1):85-98. 66. Schaalma H, Kok G. A school HIV-prevention program in the Netherlands. In: Bartholomew LK, Parcel GS, Kok G, Gottlieb NH, eds. Planning health promotion programs: an Intervention Mapping approach. Second ed. San Francisco: Josey-Bass; 2006.   37 67. Markham C, Tyrrell S, Shegog R, Fernandez M, Bartholomew LK. Asthma Management for inner-city children. In: Bartholomew LK, Parcel GS, Kok G, Gottlieb NH, eds. Planning health promotion programs: An Intervention Mapping approach. Second ed. San Francisco: Josey-Bass; 2006. 68. van Oostrom SH, Anema JR, Terluin B, Venema A, de Vet HCW, van Mechelen W. Development of a workplace intervention for sick-listed employees with stress-related mental health disorders: Intervention Mapping as a useful tool. BMC Health Serv Res. 2007;7(127). 69. Logan J, Graham ID. Toward a comprehensive interdisciplinary model of health care research use. Sci Commun. 1998;20(2):227-246. 70. Lomas J. Making clinical policy explicit. Legislative policy making and lessons for developing practice guidelines. Int J Technol Assess Health Care. 1993;9(1):11-25. 71. Fleuren M, Wiefferink K, Paulussen T. Determinants of innovation within health care organizations: Literature review and Delphi study. Int J Qual Health Care. 2004;16(2):107-123. 72. Logan J, Harrison MB, Graham ID, Dunn K, Bissonnette J. Evidence-based pressure-ulcer practice: the Ottawa model of research use. Can J Nurs Res. 1999;31(1):37-52.    38 CHAPTER 2. 
REVIEW OF INTERVENTIONS TO PROMOTE CHANGES IN CLINICAL REFLECTION AND DECISION-MAKING IN HEALTHCARE PROFESSIONS WITH SPECIAL REFERENCE TO STANDARDIZED OUTCOME MEASURES*

* A version of this chapter will be submitted for publication. Kozlowski AJ. Review of interventions to promote changes in clinical reflection and decision-making in healthcare professions with special reference to standardized outcome measures.

2.1 BACKGROUND

Research into professional practice development and behavioral change among health care professionals is a relatively new area of scientific inquiry. Systematic reviews of knowledge translation interventions to change specific practices among physicians, such as prescribing patterns, found that single interventions have small to modest effect sizes.1,2 The tailoring of multi-faceted interventions to improve the fit of intervention strategies to potential adopters and the environments in which they practice may be necessary to elicit larger-magnitude changes in practice behaviors.1-3

In physical therapy, a change of practice has been promoted by the profession in the adoption of standardized self-report questionnaires to measure and evaluate outcome.4-7 Finch and colleagues proposed that standardized self-report questionnaires offered a means of evaluating change in patient status with physical therapy intervention superior to traditionally used measures of impairment.5 The rationale offered was that data from standardized self-report measures can be aggregated and interpreted for individuals and groups, whereas interpretation of measures of impairment is relative only to the individual to whom they are applied.5 However, it appears little change in the adoption of such measurement practices has been demonstrated for nearly 20 years.4,5,8,9

Enhanced clinical decision-making is one rationale put forth in support of adoption of self-report questionnaires to measure and evaluate outcomes.5 Physical therapy models and theories are based on sound application of clinical reasoning and decision-making.10-13 Decision-making is, in turn, a complex process that integrates knowledge and experience to select a deliberate course of action towards an anticipated outcome, using reasoning processes.14,15 Although not explicitly incorporated into definitions, reflection has been described as integral to the thought processes of experts16 and hence is an element of decision-making.17 Reflection has been studied extensively in nursing,18,19 and to a lesser degree in the health professions of medicine,20 physical therapy, and occupational therapy.17,21

Schön described variations of reflection as elements in a framework for the development of abilities considered necessary in lifelong learning and professional growth.22,23 Two of these variants were reflection on action, described as a retrospective appraisal of events or experiences, and reflection in action, described as a way of thinking about a situation while engaged in it.22 Recently, Wainwright and colleagues conducted a qualitative study that explored the use of reflection in clinical decision-making of novice and expert physical therapists.17 They used the framework proposed by Schön,22 identifying that both novices and experts employed reflection on action but only the experienced clinicians demonstrated reflection in action.17 Reflection on action was demonstrated in two domains: of specific action, where reflection was on decisions for specific patient interactions, and of professional experience, where reflection
was more broadly applied to the therapist’s collective experience of many patient and professional interactions.17 Wainwright and colleagues argued that a gap exists with respect to our understanding of how clinical decision-making abilities evolve from novice to intermediate to expert practice, and that explicitly teaching the application of these various forms of reasoning can facilitate this evolution.17 The premises that decision-making abilities can be learned, that the processes of reasoning and reflection are employed in decision-making, and that decision- making is a deliberate application of forethought to effect a desired outcome are not new. However, investigations of specific interventions to promote changes in the ways clinicians employ reasoning, reflection, and decision-making processes have not been widely studied. If, as Wainwright and colleagues propose, that reflection and decision-making are skills that can be taught, knowledge of the factors that inhibit and facilitate the acquisition of these processes, and the contexts in which they operate, is necessary for developing professional curricula at entry and continuing education levels. The use of standardized questionnaires to support decision-making is not specific to physical therapy practice. Measures such as the SF-3624,25 and the EuroQOL-5D26 are employed in medicine and other disciplines as indicators of health-related quality of life, and condition-specific measures have been applied in multi-disciplinary settings as seen with the use of the WOMAC27,28 in rheumatology. Additionally, much the study of reflective practice has focused on nursing   41 practice.19,23 Thus, the primary purpose of this study was to conduct a comprehensive review of literature to determine the nature and scope of intervention studies promoting changes in clinicians’ abilities of reflection and decision-making in the health professions of medicine, nursing, physical therapy, and occupational therapy. The secondary purpose was to identify specifically those studies that incorporated the use of standardized self-report questionnaires to measure outcome as a means to facilitate reflective practice and/or improve decision-making skills.  2.2 METHODS Literature was searched first through two databases (MEDLINE and CINAHL) commonly used in research in healthcare, and second through snowball search by backward and forward review of reference lists. To guide the search strategy, we adopted the framework proposed by Wainwright and colleagues (Figure 2-1) in which reflection in action and reflection on action integrate with prior experience to influence clinical decision-making abilities, which in turn leads to effective patient management.17  Thus we included search terms related to outcome measures, reflection, and clinical decision-making. Although constructs such as clinical reasoning and critical thinking are also related to decision-making,15,29 we excluded them as they were not part of Wainwright’s conceptual framework. The search was limited to articles published between January 1990 and June 2010 on the assumption that this time frame paralleled the promotion of use of standardized self- report questionnaires in healthcare practice. Articles published on reflection and   42 decision-making in the healthcare professions of medicine, nursing, physical therapy, and occupational therapy were reviewed. Key words and medical subject headings or MeSH terms were derived by searching the MeSH lists in the MEDLINE and CINAHL databases. 
The database search strategies have been provided (Figure 2-2). The cumulated list of citations retrieved from each database search were considered for inclusion or exclusion based on review of the abstracts. Studies were included if they described an intervention to facilitate acquisition of reflective skills in students or practitioners, used reflection as a strategy to change decision making in practice, or used any strategy to promote use of an outcome measure. Of specific interest were studies using reflection on outcome measures to change decision-making. Articles were excluded if they described an application of reflection but did not describe a resulting change in clinical decision-making. Articles that were included were reviewed in their entirety. The forward (citing) and backward (cited) review of articles was started with the article by Wainwright and colleagues17 and the studies included from the database searches. Only peer-reviewed articles published between January 1990 and June 2010 were included in the forward search, but publications of any date were included in the backward search as they may lead to other recent publications. The forward (citing article) search was conducted using Web of Science®.30 This search strategy was continued for each additional included article. Those articles that were considered relevant were then categorized based on the methodology employed as an indication of the strength of evidence for the   43 interventions based on the levels of evidence as defined by Sackett31 (Table 2-1). It was anticipated that interventions for reflective practice would be reported as qualitative studies or descriptive reviews however these were not rated for level of evidence.  2.3 RESULTS 2.3.1 Search Results The three search strategies yielded 12 articles; one from MEDLINE,32 one from CINAHL,33 nine from the snowball search,34-42 and one by chance.43 The chance finding resulted from a cursory review of a new issue of the journal received by the researcher in the mail. The MEDLINE search retrieved 95 references of which four were older than 1990, three were in languages other than English, 50 which used synonyms for reflection or other contexts for decision-making (e.g., assumed impact of study findings on decision-making in conclusion), and 37 which offered commentary, described models or frameworks for reflection or decision-making, or described applications of reflection without description of changes in decision- making, or reported survey results. The CINAHL search retrieved 26 references, of which one was in a language other than English, one reported on a profession outside the scope of this review (speech-language pathology), and 23 of which offered commentary, described models,  described applications of reflection without statement of change, or reported survey results. In the snowball search (Figure 2-3), one article42 was found in a forward search from an article44 cited by Stevens and Beurskens43 (Haight and   44 colleagues44 was not included in the study). Articles by Green36 and Bellman34 were cited by Donaghy and Morss.32  Forward search from Donaghy and Morss located the article by Roche and Coote,39 which in turn cited Carr.38 Carr was cited by Burnett and colleagues,40 Peden-McAlpine and colleagues,37 and Toy and colleagues.41 Toy cited the study by Sobral.35 The studies were grouped into four themes for the content review. 
Studies of students were split into two groups: those exploring the acquisition of reflective and associated clinical abilities, and those using reflection as a means to improve a specific clinical practice behavior. Studies of practicing clinicians were grouped into those that used reflection as a means to improve clinical practice abilities and those that used reflection to influence use of a standardized outcome measure.  2.3.2 Studies of Students Developing Reflective Abilities Four studies reported on the development of reflective skills in courses designed for this purpose32,35,38,39 (Table 2.2). Two of these used content analysis of the transcribed audio-recordings of focus groups to evaluate the change in reflective ability of students in undergraduate university physical therapy programs in the United Kingdom.32,39 Although these courses were required, participation in the study was voluntary. Both studies used focus groups randomly selected from those participants who volunteered. One of the studies used a pre- and post-course evaluation for 3rd year students and one-year post-course evaluation for 4th year students.39   45 Both studies reported increased levels of reflective ability from the focus groups and awareness of how reflection can facilitate decision-making. Roche and Coote reported changes in students’ perceptions of reflection from a negative to positive, with increases in confidence, and self reported instances of reflection-in-action.39 Donaghy and Morss reported on benefits to reflective abilities due to increased personal insight (e.g., use of stereotyping and value judgments of patients), acceptance of discomfort with disclosure, and improved ability to self-identify strengths and weaknesses.32 Both studies reported improved awareness of the link between reflection and integrating evidence with decision-making in physical therapy practice. Donaghy and Morss also reported on strengths and weaknesses of the framework they used to support the course on reflection.32 Both studies identified the limitations of their qualitative methodologies, but recognized their contributions to this sparse literature.32,39  An evidence level was not assigned given the qualitative methodology. The third study used a pre-and post validated questionnaire for medical students taking an elective course in reflective practice at a university in Brazil.35 A non-random control group was comprised of about one-third of the student cohort who did not take the elective course but completed the pre- and post course questionnaires. The questionnaire was comprised of scales to measure self- reflection in learning, self-perceived confidence for self-regulated learning, and meaningfulness of the learning experience. Although no differences were found on demographic and academic factors for students taking the course versus those in   46 the control group, the groups were not equivalent on numbers of subjects (>2/3 intervention and <1/3 control). Although a randomized control design was not feasible in the Sobral study a cohort study with control group was implemented that included all 198 students in that year of the medical program. Perhaps consent and voluntary participation were not required in Brazil at the time of the study, but neither was reported in the article. Of the 103 course participants, 81% scored higher on the reflection-in-learning scale following the course. 
The small mean increase represented a significant difference from the control group which showed no mean group change.35 Differences were more substantial on association with competence on self-regulated learning scores, meaningfulness of the experience, and academic performance with  stratification of reflection in learning scores based on magnitude and direction of change.35 An evidence level of 2B (cohort study) was assigned to the study. The fourth study used written narratives of medical students in a rotation on obstetrics and gynecology to examine their use of reflection.38 Out of 187 students from two cohorts in an Australian medical school, 149 consented to participate. Reflections were submitted at mid-term and final evaluations and were analyzed with thematic coding by one blinded researcher. Frequency of reflective themes was also recorded. Sixteen themes were identified, of which clinical reasoning, knowledge, personal/professional development, and cultural influences were endorsed most frequently. Four levels of critical reflection were described. These were listing, describing, applying, and integrating elements of the clinical experience, where listing was considered the least developed and integration the most advanced   47 strategy. Most students (46%) demonstrated application and 16% demonstrated use of integration strategies. These levels were thought to be appropriate for the student level and indicative that development of reflective abilities requires time and experience.38 An evidence level was not assigned given the qualitative nature of the study. The use of two researchers to encode the themes and the use of validation methods as reported in other studies would have strengthened the methodology.  2.3.3 Studies of Students Using Reflection to Develop Clinical Behaviors Two studies were retrieved describing the use of reflection to promote a change in specific clinical skills or behaviors (Table 2.3). Green described the use of reflection by nursing students on their skills for moving and handling patients in light of national legislation that required use of safe procedures in the United Kingdom.36 Twenty-five nursing students  completed a module of 15 hours of classroom instruction in lifting and handling which included 5 hours of time for reflection. Each provided brief written reflective accounts to an open-ended prompt regarding the usefulness of reflection for the moving and handling of patients.36 All students stated the reflection was useful in preparing for future practice situations that included encountering clinicians using unsafe practices that were contrary to institutional policy. The reflection process was reported as helpful in developing strategies to deal with such difficult situations. Researchers concluded that all students demonstrated some reflective abilities, and some had demonstrated more advanced skills, but very few demonstrated conceptual and theoretical reflectivity.36   48 The Green study was based on Schön’s framework for reflection22 and Freire’s sociopolitical framework for education.45 Although an acceptable method of data collection was used, the qualitative analysis of the written text was not described, and no resources in qualitative methodology were cited. It is not clear how the student’s accounts were transformed into themes on reflection. An evidence level was not assigned given the qualitative methodology. 
Burnett and Phillips described a reflective intervention with fourth-year medical students in the United Kingdom aimed at promoting hand hygiene in an infection control program.40 Forty-four students each submitted three reflective accounts over the duration of the program. These were evaluated using a reflective ability assessment instrument. Each account was rated by two of three trained raters. Rater agreement, assessed with the kappa statistic, ranged from moderate to substantial. Although the reliability of the reflective ability assessment tool was reported, the change in hand hygiene practice of the medical students was not reported. An evidence level of 4 was assigned as this was reported as a cohort study, but the analysis of data was qualitative in nature.

2.3.4 Studies of Clinicians Using Reflection to Improve Aspects of Clinical Practice

Four studies that described the use of reflection to facilitate changes in aspects of clinical practice were included (Table 2.4). Bellman reported on the use of reflection in action research to facilitate the adoption of a nursing model to effect change in the quality of care on the nurses' ward in a United Kingdom hospital.34 The interactive research process was implemented on a 22-bed ward with 12 nurses who became co-researchers under the study methodology. The study consisted of two phases that covered a 15-month time frame. The first phase identified problems through a rating of nurses' perceptions of the nursing model, semi-structured patient interviews, and a care plan analysis tool administered to patients. The second phase developed a plan using audio-recorded discussions and focused reflections. Reflective journals were also to be recorded by the co-researchers, but non-compliance was a problem. The study described an implementation phase but did not report on measurement of the changes planned to improve quality of care. An evidence level of 5 was assigned. Although application of action research and the nursing model were well described, the implementation phase was not reported, thus the impact of the study cannot clearly be understood.

Auburn,33 a trainee emergency department nurse practitioner, described a single case study of uncertainty about a diagnosis of a scaphoid fracture in a 12-year-old boy in the United Kingdom. Her interaction with the boy, his mother, and the radiologist spurred her to reflect on her knowledge, decisions, and actions in the interaction, and to develop an action plan to address deficiencies in her clinical skills. She integrated current evidence for this diagnosis in children from the literature with an appraisal of her communication with the clients and other clinicians to improve her clinical ability to address this type of case, and her confidence in doing so. An evidence level of 5 was assigned for this single case study.

Peden-McAlpine and colleagues reported on a reflective practice intervention in a phenomenological study to facilitate incorporation of family intervention into the practice of pediatric critical care nurses in two United States facilities.37 Eight nurses participated in the intervention. Course content was provided using narrative, role modeling, and reflective practice. Interviews were recorded, transcribed, and coded for thematic analysis. The qualitative methodology was described and referenced.
Three themes were identified, relating to recognizing and reframing preconceptions about family, the meaning of family stress, and incorporating the family into nursing practice. Narratives identified self-reports of the nurses' experiences with reflection and how their practices changed. An evidence level was not assigned to this study given its qualitative nature.

Toy and colleagues reported on the use of reflection to facilitate achievement of rotation goals for residents in a four-year obstetrics and gynecology program in the United States.41 Sixteen residents participated in the study, which compared attainment of residents' goals and changes in their practice over two six-month rotations in the obstetrics and gynecology program. Residents used a questionnaire to measure monthly levels of attainment of goals set at the start of the rotation. They reported better attainment in the second rotation, which they attributed to better specification in defining their rotational goals. Significant increases in numbers of clinical procedures were also logged, indicating improvements in practice. Residents viewed the reflection exercises as valuable in defining explicit goals and in communicating with team members. An evidence level of 4 was assigned to this cohort study due to the small number of subjects.

2.3.5 Studies of Clinicians Implementing Standardized Outcome Measures in Practice

Two studies were retrieved describing implementation of standardized measures of outcome (Table 2.5). Colquhoun and colleagues described a cohort study to promote the implementation of the Canadian Occupational Performance Measure46 as part of routine practice in an inpatient geriatric unit in Toronto, Canada.42 The study included three occupational therapists working with a cohort of 45 clients whose stay on the unit was 2 weeks or longer. The value of the Canadian Occupational Performance Measure was identified as a change in scores on the Functional Independence Measure47 from admission to discharge. Results were compared to Functional Independence Measure scores for a group of clients discharged from the unit prior to the study. Analysis included regression analysis, but the study was insufficiently powered to find significant differences between the experimental cohort and the usual care group. The authors highlighted important factors to consider regarding the inherent value of the Canadian Occupational Performance Measure and the burden of implementing it routinely in a particular clinical setting.42 An evidence level of 4 was assigned to this study as it represented a poor cohort study due to low power and/or unrealized expectations in the magnitude of expected change.

The second study, by Stevens and Beurskens, reported on an implementation study to promote the use of two standardized measures commonly recommended by Dutch clinical practice guidelines for use in private physical therapy practice in the Netherlands.43 The two measures were the Patient-Specific Complaints instrument,48 which is a Dutch analogue of the Patient Specific Functional Scale,49 and the Six-Minute Walk test.50 The former is a standardized measure tailored to activities of preference to the individual patient, and the latter is a standardized test of demonstrated performance in walking tolerance.
The implementation strategy was based on a framework proposed by Grol and colleagues,3 which was consistent with current evidence in implementation science.2,51,52 Study implementation consisted of five steps that included a problem analysis with a search of the literature and interactive involvement of a group of potential adopters through semi-structured interview, consultation throughout development of the proposal, and participation in the first phase of pilot testing. The process also included development of a self-analysis list to provide potential adopters with self-awareness of barriers and phases of change, which may function as a facilitator for reflection, and use of a sounding board meeting of the researchers and participants. The development phase included interviews of 13 physical therapists, less than half of whom reported using the measures. The intervention was tested with this first group and modified based on its feedback before being tested with a second independent group of physical therapists.

The authors reported successful implementation with a recommendation for implementation of a similar process nationally. However, the reported outcome with respect to change in use of the two outcome measures was curiously vague, stated as "at the last meeting, most physical therapists indicated that they actually used both instruments."43 No measure of self-reported or demonstrated use such as a chart audit was reported, which is of interest as the authors also stated that in a group discussion, some physical therapists admitted to over-reporting their past use of the two measurement instruments.43 An evidence level was not assigned to this study due to the exploratory and qualitative nature of the design.

2.4 DISCUSSION

In this study we attempted to conduct a comprehensive review of literature describing interventions to facilitate development of reflective skills, to use reflection as a means to improve decision-making skills, to promote use of outcome measures, or to use reflection on outcomes as a means to improve decision-making in four healthcare professions. We succeeded in this regard in that the 12 articles retrieved covered three of these specific objectives and represented all four professions. We may not have succeeded in that most of the studies were retrieved by the snowball search; thus it is likely that others exist but were not retrieved.

The quality of most of these studies was rated at level 4 according to the criteria described by Sackett.31 This is not surprising given the exploratory nature of this literature and the limited scope in which these questions have been studied. Despite the level of evidence, these studies do offer insight into the processes of reflection and outcome measurement in light of clinical decision-making. This insight may be considered in the development of models to facilitate development and enhancement of decision-making skills for the purpose of improving health outcomes.

The twelve studies equally represented medicine, nursing, and the rehabilitation sciences, with geographic representation spanning three continents. Where used, practice models tended to be selected from within the researchers' disciplines,32,34,37,38 yet most of the studies on reflection cited the seminal work of Schön.32,35-37,39-41 Perhaps this is an indication that a broader perspective on modeling reflection in practice is viable.
In this light, Wainwright’s revised conceptual framework proposing that reflection-in-action and reflection-on-action influence decision-making to promote effective patient management17 warrants review. First, this literature review found no intervention studies in which reflection was used to directly influence clinical decision-making abilities. The four studies that sought to change aspects of clinical practice33,34,37,41 did not attend explicitly to decision-making processes. Second, although Wainwright’s framework might facilitate depth of understanding of the role of reflection in development of experience in physical therapy, this ‘part’ should mesh with a larger more general model of practice that incorporates other elements of decision-making such as reasoning. Findings supported by this literature review include the potential to develop reflective abilities and to change practice abilities using reflection with select subjects. The extent to which this can be done on a larger scale remains uncertain. Voluntary participation, consent requirements, and the limitations of exploratory methods may limit changes of practice to those who are willing and able to implement reflection in their practices. The strongest evidence we found was in Sobral’s study of medical students. Where consent was not a limit to inclusion, one third of students elected to not enroll in the reflection module, and of those enrolled, a range of changes from large positive to negative changes was seen.35 Thus,   55 reflective practice may not be effective for all practitioners, and identification of facilitating and inhibiting factors and alternative strategies to influence changes in practice may be necessary. Another finding of interest is that represented by what was not found. In addition to finding no studies that attempted to change decision-making directly, no studies reporting on implementation of self-report questionnaires were located. Thus we have no additional insight into the link between outcome measurement and decision-making or the function of reflection to facilitate this link. Colquhoun and colleagues reported on an effort to implement the Canadian Occupational Performance Measure, however, this instrument was applied not as an outcome measure but as a planning and management tool. Further, they did not explore the perceptions of the clinicians in the function, value, burden, or balance of these factors in the routine use of the measure. In Stevens and colleagues implementation study, they used two measures that have utility in application to individual clients, but are limited in ability to aggregate data, which has been offered as an alternate way to understand practice.53 Also both measures require the clinician to interview (patient-specific functional scale) or instruct and observe (six-minute walk test). Self- report questionnaires can be administered by means other than the clinician thus reducing burden. This area of research continues to offer opportunities for exploration.      56 2.4.1 Limitations This study is limited by a number of factors. With the exception of assistance from a librarian in structuring and implementing the database searches, a single researcher conducted the search strategy and literature review. Independent search and review by a second researcher would bolster the methodology and add confidence to the findings. Consequently, the potential exists that relevant studies were not retrieved. 
This limitation may be supported by the fact that ten of the 12 studies retrieved were located through the snowball search. A more extensive database search strategy might retrieve more records including those found by the snowball search. A hand-search of selected journals was considered but abandoned due to the apparent scarcity of relevant studies and the preponderance of journals over the time frame of the study. Retrieval of additional studies might add depth to the findings, but would not likely change the finding that the literature on interventions involving reflection and outcome measurement to promote changes in decision-making is sparse and disconnected by professional boundaries. Qualitative studies were not rated for levels of evidence. Additionally, the levels of evidence that were assigned reflect the rudimentary nature of research into interventions to change practice behaviors more than the methods of the researchers. Exploratory study is necessary to facilitate hypothesis generation before hypothesis testing methods like randomized control trials can be implemented. Further, the nature of professional behavior and complexity of the healthcare systems may provide ethical and practical challenges to the use of experimental designs and random assignment of subjects to groups.   57 2.4.2 Future Research Extension of this study and the contributing literature include investigation of the individual components of clinical decision-making, and integration of these parts to develop a comprehensive understanding of the whole. Disciplinary differences may be warranted for some aspects of the decision-making process, but much of it may reflect common attributes of human behavior and organizational culture. Developing a comprehensive model of decision-making in healthcare systems could provide a framework for developing and implementing the findings of implementation studies. Within such a framework, understanding which components of decision- making can be changed and the circumstances that inhibit or facilitate that change may lead to more effective interventions to promote changes in practice behaviors within complex systems.  2.4.3 Conclusion This study attempted to comprehensively review the literature on interventions to facilitate reflection, use reflection to influence clinical decision-making, promote use of standardized outcome measures, and to use reflection on outcome data to enhance clinical decision-making. Although the extent to which the review represents a comprehensive assessment of this literature is debatable, the study provided insight on a number of points. First, although most of the studies were methodologically weak, there was evidence that some students and clinicians can develop reflective abilities. The extent to which this is possible has not been demonstrated. Second, the extent to   58 which implementation interventions can increase use of standardized outcome measures remains unclear, as does the balance of value to burden necessary for potential adopters to consider specific measures as being sufficiently valuable for their clinical settings. Third, the literature in this area is sparse and there is room to improve decision making models to adequately reflect the components of reflection, reasoning, tacit and explicit knowledge as they apply to various professional groups, and healthcare system as a whole. 
The gaps in this literature, and the minimal interest in the subjects of reflection, decision-making, and outcome measurement, appear to span the health professions of medicine, nursing, physical therapy, and occupational therapy and to cross international boundaries.

Figure 2-1. Wainwright’s revised conceptual framework for the use of reflection to inform the clinical decision-making process. Adapted from Wainwright et al.17 [Flow diagram: prior experience and the development of skills and abilities inform clinical decision-making abilities through reflection-in-action, reflection-on-action, reflection-on-specific-action, and reflection-on-professional-experience, across novice, intermediate, and experienced levels, leading to effective patient management.]

Figure 2-2. Database Search Strategies

MEDLINE Search Strategy
1. reflect*.mp.
2. Decision Making/ or Decision Mak*.ti,ab.
3. Professional Practice/
4. "Outcome Assessment (Health Care)"/
5. occupational therapy/ or "physical therapy (specialty)"/
6. Nursing/
7. 3 or 5 or 6
8. 1 and 2
9. 7 and 8
10. 2 and 4
11. 1 and 10
12. 11 or 9

CINAHL Search Strategy
S1. (MH "Reflection")
S2. (MH "Decision Making, Clinical")
S3. (MH "Professional Practice") or (MH "Medical Practice") or (MH "Nursing Practice") or (MH "Occupational Therapy Practice") or (MH "Physical Therapy Practice")
S4. (MH "Outcome Assessment")
S5. S1 and S3
S6. S2 and S5
S7. S4 and S5
S8. S6 or S7

Figure 2-3. Snowball Search Results [Flow diagram tracing backward searches (cited references) and forward searches (citing references) from the database-retrieved and chance-find records to the included studies: Donaghy, Auburn, Stevens, Haigh, Colquhoun, Bellman, Green, Roche, Carr, Peden-McAlpine, Sobral, Toy, and Burnett.]

Table 2-1. Levels of Evidence as described by Sackett.31
Evidence Level | Study Types
1A | Systematic review or meta-analyses of randomized clinical trials
1B | Randomized clinical trials with narrow confidence intervals
1C | All-or-none case series
2A | Systematic review of cohort studies
2B | Cohort study or low-quality randomized clinical trial
2C | Outcomes research
3A | Systematic review of case-controlled studies
3B | Case-controlled study
4 | Case series, poor cohort study, or case-controlled study
5 | Expert opinion

Table 2-2. Search results for studies evaluating the acquisition of reflective skills by students
First Author (Year) | Journal, Volume(Issue):Pages | Study Type (Level of Evidence) | Comments
Donaghy (2007) | Physiother Theory Pract, 23(2):83-94 | Focus groups, post-module evaluation | Physical therapy; 5 groups, total of 43 students.
Roche (2008) | Med Educ, 42(11):1064-1070 | Focus groups, pre- and post-evaluation | Physical therapy; 2 groups, total of 20 students. One group pre- and post-module (3rd year); the other group one year post-module only (4th year).
Sobral (2000) | Med Educ, 34(3):182-187 | Pre- and post-elective course evaluation with non-random control, using a validated questionnaire and inferential statistics to evaluate differences (2B) | Medicine; 103 students. Reflection-in-learning scores increased for 81% of students, and were associated with self-regulated learning, learning experience meaning, grade point average, and diagnostic ability.
Carr (2006) | Med Educ, 40(8):768-774 | Mid-term and final written student narratives were thematically coded by one blinded researcher; frequency of four levels of reflection and other descriptive statistics were reported. | Medicine; 149 of 187 students from 2 cohorts in an obstetrics and gynecology rotation consented to participate.
16 themes were extracted from the data. Most students demonstrated description and application of reflections, but few demonstrated integration.

Table 2-3. Search results for studies evaluating the use of reflection to develop clinical skills by students
First Author (Year) | Journal, Volume(Issue):Pages | Study Type (Level of Evidence) | Comments
Green (2002) | Nurse Educ Pract, 2(1):4-12 | Post-module evaluation, open-ended text; analysis methodology not described | Nursing; 25 student responses. Reported increases in confidence, clinical reasoning ability, and professional development.
Burnett (2008) | Med Teach, 30(6):157-160 | Cohort study with qualitative analysis of reflective ability (4) | Medicine; 132 reflective accounts regarding a program to promote infection control through hand washing were submitted by 44 final-year medical students.

Table 2-4. Search results for studies evaluating the use of reflection to change aspects of clinical practice
First Author (Year) | Journal, Volume(Issue):Pages | Study Type (Level of Evidence) | Comments
Bellman (1996) | J Adv Nurs, 24(1):129-138 | Action research | Nursing; 12 nurses on a 22-bed ward engaged as co-researchers to develop a nursing model-based quality improvement strategy. Two of three phases are reported but results of the initiative are not included.
Auburn (2007) | Emerg Nurse, 40(8):768-774 | Single case study (5) | Nursing; one emergency room nurse practitioner describes a missed diagnosis of a wrist fracture, reflective insights, and an action plan to remediate deficiencies in clinical knowledge and skills.
Peden-McAlpine (2005) | J Adv Nurs, 49(5):494-501 | Phenomenology study of reflection to change practice | Nursing; 8 pediatric critical care nurses consented to participate. Three themes are derived from narrative accounts relating to the experience of reflecting on the meaning of family intervention in practice.
Toy (2009) | Teach Learn Med, 21(1):15-19 | Cohort study with a same-group pre-intervention control (4) | Medicine; 16 obstetrics and gynecology residents in the United States promoted change in ability to define and attain rotation goals through reflection.

Table 2-5. Search results for studies evaluating the implementation of standardized outcome measures in practice
First Author (Year) | Journal, Volume(Issue):Pages | Study Type (Level of Evidence) | Comments
Colquhoun (2010) | Aust Occup Ther J, 57:111-117 | Cohort study of 45 clients with a historical comparison group of 58 previously discharged clients (4) | Occupational therapy; 3 clinicians routinely used the Canadian Occupational Performance Measure to facilitate gains in functional independence. The study was underpowered to detect a change in the outcome measure. Findings are limited to awareness of the complexities of implementation research.
Stevens (2010) | Phys Ther, 90(6):953-961 | Implementation study | Physical therapy; 13 physical therapists engaged in development and pilot testing of a multifaceted intervention to promote use of two standardized measures in Dutch private physical therapy. An important outcome, the magnitude of change in use, was reported vaguely as a self-reported increase.

2.5 REFERENCES
1. Oxman AD, Thompson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153(10):1423-1431.
2. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8, Suppl 2):II-2-II-45.
3. Grol R, Wensing M, Eccles MP.
Improving patient care: the implementation of change in clinical practice. Edinburgh: Elsevier Butterworth Heinemann; 2005.
4. Cole B, Finch E, Gowland C, Mayo N. Physical rehabilitation outcome measures. Toronto: Canadian Physiotherapy Association; 1994.
5. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. 2nd ed. Hamilton: BC Decker; 2002.
6. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 2001;53(4):268-275.
7. Huijbregts MP, Myers AM, Kay TM, Gavin TS. Systematic outcome measurement in clinical practice: challenges experienced by physiotherapists. Physiother Can. 2002;54(1):25-31, 36.
8. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257.
9. Jette DU, Halbert J, Iverson C, Miceli E, Shah P. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89(2):125-135.
10. Darrah J, Loomis J, Manns P, Norton B, May L. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice. Physiother Theory Pract. 2006;22(5):239-250.
11. Wessel J, Williams R, Cole B. Physical therapy students' application of a clinical decision-making model. The Internet Journal of Allied Health Sciences and Practice. 2006;4(3):1-11.
12. American Physical Therapy Association. Guide to physical therapist practice. 2nd ed. Alexandria, VA: American Physical Therapy Association; 2003.
13. Rothstein JM, Echternach JL, Riddle DL. The hypothesis-oriented algorithm for clinicians II (HOAC II): a guide for patient management. Phys Ther. 2003;83:455-470.
14. Mattingly C, Fleming MH. Clinical reasoning: forms of inquiry in a therapeutic practice. Philadelphia, PA: FA Davis Co.; 1994.
15. Higgs J, Jones M. Clinical reasoning in the health professions. 2nd ed. Oxford: Butterworth Heinemann; 2000.
16. Elstein AS, Shulman LA, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.
17. Wainwright SF, Shepard KF, Harman LB, Stephens J. Novice and experienced physical therapist clinicians: a comparison of how reflection is used to inform the clinical decision-making process. Phys Ther. 2010;90(1):75-88.
18. Atkins S, Murphy K. Reflection: a review of the literature. J Adv Nurs. 1993;18:1188-1192.
19. Johns C. Becoming a reflective practitioner. 3rd ed. Oxford, UK: Wiley-Blackwell; 2009.
20. Kidd J, Nestel D. Facilitating reflection in an undergraduate medical curriculum. Med Teach. 2004;26(5):481-483.
21. Donaghy M, Morss K. Guided reflection: a framework to facilitate and assess reflective practice within the discipline of physiotherapy. Physiother Theory Pract. 2000;16:3-14.
22. Schon D. The reflective practitioner: how professionals think in action. San Francisco, CA: Jossey-Bass Inc Publishers; 1983.
23. Schon D. Educating the reflective practitioner. San Francisco, CA: Jossey-Bass Inc Publishers; 1987.
24. Ware JE. SF-36 Health Survey update. Spine. 2000;25(24):3130-3139.
25. Ware JEJ, Snow KK, Kosinski M, Gandek B. SF-36 Health Survey: manual and interpretation guide. Boston, MA: The Health Institute, New England Medical Center; 1993.
26. The EuroQol Group. EuroQol - a new facility for the measurement of health-related quality of life. Health Policy. 1990;16:199-208.
27.
Bellamy N, Buchanan WW. A preliminary evaluation of the dimensionality and clinical importance of pain and disability in osteoarthritis of the hip and knee. Clin Rheumatol. 1986;5:231-241.
28. Bellamy N, Buchanan WW, Goldsmith CH, Campbell J, Stitt LW. Validation study of the WOMAC: a health status instrument for measuring clinically important patient relevant outcomes to anti-rheumatic drug therapy in patients with osteoarthritis of the hip or knee. J Rheumatol. 1988;15:1833-1840.
29. Edwards I, Jones M, Carr J, Braunack-Mayar A, Jensen G. Clinical reasoning strategies in physical therapy. Phys Ther. 2004;84:312-335.
30. Thomson Reuters. Web of Science Search Engine, ISI Web of Knowledge webpage; date viewed: June 29, 2010.
31. Sackett DL. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone; 2000.
32. Donaghy M, Morss K. An evaluation of a framework for facilitating and assessing physiotherapy students' reflection on practice. Physiother Theory Pract. 2007;23(2):83-94.
33. Auburn T, Bethel J. Hand injuries in children: a reflective case study. Emerg Nurse. 2007;40(8):768-774.
34. Bellman LM. Changing nursing practice through reflection on the Roper, Logan and Tierney model: the enhanced approach to action research. J Adv Nurs. 1996;24(1):129-138.
35. Sobral DT. An appraisal of medical students' reflection-in-learning. Med Educ. 2000;34(3):182-187.
36. Green CA. Reflecting on reflection: students' evaluation of their moving and handling education. Nurse Educ Pract. 2002;2(1):4-12.
37. Peden-McAlpine C, Tomlinson PS, Forneris SG, Genck G, Maiers SJ. Evaluation of a reflective practice intervention to enhance family care. J Adv Nurs. 2005;49(5):494-501.
38. Carr S, Carmody D. Experiential learning in women's health: medical student reflections. Med Educ. 2006;40(8):768-774.
39. Roche A, Coote S. Focus group study of student physiotherapists' perceptions of reflection. Med Educ. 2008;42(11):1064-1070.
40. Burnett E, Phillips G, Ker JS. From theory to practice in learning about healthcare associated infections: reliable assessment of final year medical students' ability to reflect. Med Teach. 2008;30(6):157-160.
41. Toy EC, Harms KP, Morris JRK, Simmons JR, Kaplan AL. The effect of monthly resident reflection on achieving rotational goals. Teach Learn Med. 2009;21(1):15-19.
42. Colquhoun H, Letts L, Law M, MacDermid JC, Edwards M. Routine administration of the Canadian Occupational Performance Measure: effect on functional outcome. Aust Occup Ther J. 2010;57:111-117.
43. Stevens JGA, Beurskens AJMH. Implementation of measurement instruments in physical therapist practice: development of a tailored strategy. Phys Ther. 2010;90(6):953-961.
44. Haigh R, Tennant A, Biering-Sorensen F, et al. The use of outcome measures in physical medicine and rehabilitation within Europe. J Rehabil Med. 2001;33:273-278.
45. Freire P. Pedagogy of the oppressed. London, UK: Penguin Books; 1972.
46. Law M, Baum C, Dunn W, eds. Measuring occupational performance: supporting best practice in occupational therapy. 2nd ed. Thorofare, NJ: Slack Incorporated; 2005.
47. Hamilton B, Granger C, Sherwin F, Zuielezny M, Tashman JS. A uniform national data system for medical rehabilitation. In: Fuhrer MJ, ed. Rehabilitation Outcomes: Analysis and Measurement. Baltimore, MD: Brooks; 1987:59-74.
48. Beurskens AJMH, de Vet HC, Koke AJ. Assessing disability of functional status in low back pain: a comparison of different instruments. Pain. 1996;65:71-76.
49.
Stratford PW, Gill C, Westaway MD, Binkley JM. Assessing disability and change on individual patients: a report of a patient-specific measure. Physiother Can. 1995;47:258-263.
50. Butland RJ, Pang J, Gross ER, Woodcock AA, Geddes DM. Two-, six-, and 12-minute walking tests in respiratory disease. Br Med J (Clin Res Ed). 1982;284:1607-1608.
51. Grol R, Grimshaw JM. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225-1230.
52. Grol R, Bosch MC, Hulscher M, Eccles MP, Wensing M. Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. 2007;85(1):93-138.
53. Resnik L, Hart D. Using clinical outcomes to identify expert physical therapists. Phys Ther. 2003;83(11):990-1002.

CHAPTER 3. STANDARDS OF PHYSICAL THERAPY PRACTICE RELATED TO OUTCOMES, MEASUREMENT, AND EVALUATION IN ENGLISH-SPEAKING CANADA: A REVIEW OF REGULATORY AND RESOURCE DOCUMENTS†

† A version of this chapter will be submitted for publication. Kozlowski, A.J. Standards of Physical Therapy Practice Related to Outcomes, Measurement, and Evaluation in English-speaking Canada: A Review of Regulatory and Resource Documents.

3.1 INTRODUCTION
Over the past 20 years, standardized methods to measure the outcomes of physical therapy (PT) have been increasingly advocated within the profession.1,2 Evidence, however, indicates that little change has occurred in systematically incorporating the use of such measures into clinical practice by physical therapists.3-5 Given the challenges of promoting change in professional practice,6,7 including the uptake and use of standardized methods of outcome measurement,3-5 this may be indicative of a failure in knowledge translation within the profession. Literature on the use of standardized measures to evaluate outcome in PT suggests that Canadian physical therapists have not systematically incorporated outcome measures into routine practice despite their awareness of these tools.1,3,4,8
Knowledge translation theory may be useful in examining the uptake and integration of standardized outcome measures into PT practice. In general, little is known about the characteristics of successful knowledge translation interventions to promote change in health professional practice behaviors.6,7 The most effective knowledge translation strategies that have been reported include a coordinated mandate for change from professional bodies,7,9 resources dedicated to the implementation of change by practitioners,7,9 including provision of effective training and support strategies7,9 that match the ecological level, type, and content of barriers10 and that are provided over a sufficient time frame.11 In addition, to be successfully adopted, behavioral change needs to be both meaningful to and valued by the practitioner as well as other stakeholders including organizational superiors.9,12,13 Failure to promote adoption of outcome measurement and evaluation practices as a desired behavioral change could be a result of one or more of these factors.
The Canadian Physiotherapy Association (CPA) has promoted adoption of standardized outcome measurement and has provided resources to guide outcome measurement practices.
The lack of adoption despite promotion and provision of resources could reflect deficiencies in practitioner motivation which may be influenced by individual and organizational inhibiting factors; in the mandate of professional bodies to institute change; in the availability of a generally accepted practice model based on outcome evaluation; or in the strategies to support such changes in practice. This chapter addresses the mandate for the use of standardized outcome measurement in PT practice, the meaning of outcome measurement, and the implications of adoption of outcome measurement and evaluation in routine PT practice.    73 3.1.1 Mandate The mandate for PT practice is defined and directed in Canada by provincial regulatory boards. This mandate is defined in various regulatory documents, including PT acts, regulations, bylaws, practice standards, codes of ethics, and advisory or practice statements, among others. The application of this mandate to practice may be further described to practitioners through other resources and means of support. As the profession is regulated provincially these colleges are not mandated to shape national practice standards and policy. Although the regulatory boards are members of a national agency, the Canadian Alliance of Physiotherapy Regulators (the Alliance), this body is tasked primarily with setting entry level standards of practice and managing the development and administration of the national physiotherapy competency examination.14 Another national agency positioned to promote advances in clinical practice is the CPA. Through its national body15 and provincial branches,16-22 the CPA has the ability to promote a nationally unified position but as a voluntary member body it is not mandated to enforce such practice. That said, many of the resources currently available to guide the measurement and evaluation of clinical outcomes have been developed and/or made available by the CPA and its provincial branches, some in collaboration with the colleges, the Alliance, and other organizations.  3.1.2 Meaning: The Physical Therapy Practice Model Rogers defines the meaning of an innovation as “the subjectively and frequently unconscious perception of an innovation by members of a social   74 system.”13 Meaning may be interpreted through perceived attributes of the innovation such as relative advantage, compatibility, complexity, trialability, observability.13 Meaning may also be influenced by environmental factors like the organizational or professional culture.12,23 This meaning has a powerful influence on the adoption decision.9 Since the concept of outcome measurement and evaluation is integral to the clinical decision-making process, we need to look more broadly at our clinical practice model. In Canada, we do not currently have one practice model that has been adopted and promoted across jurisdictions. The International Classification of Functioning, Disability and Health, also known as the ICF, includes a health framework that has been promoted as a framework for PT practice.24 In the ICF framework, health status is viewed as reflecting the three component levels of functioning, namely, body functions and structure, activity, and participation. In turn, these components are modified by environmental factors, personal factors, and changes in health due to diseases or other conditions. 
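As a purely illustrative aside, the brief sketch below shows one way a clinician might organize findings for an individual client under the ICF components named above. The client details are invented for illustration and are not drawn from the ICF or from the documents reviewed in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ICFProfile:
    """Hypothetical grouping of client findings by ICF component."""
    health_condition: str
    body_functions_structures: List[str] = field(default_factory=list)  # impairments
    activities: List[str] = field(default_factory=list)                 # activity limitations
    participation: List[str] = field(default_factory=list)              # participation restrictions
    environmental_factors: List[str] = field(default_factory=list)
    personal_factors: List[str] = field(default_factory=list)

# Invented example client, for illustration only.
client = ICFProfile(
    health_condition="rotator cuff tendinopathy",
    body_functions_structures=["painful arc of shoulder motion", "reduced abduction strength"],
    activities=["difficulty reaching overhead", "difficulty dressing"],
    participation=["off work as a painter", "stopped recreational swimming"],
    environmental_factors=["physically demanding job", "supportive employer"],
    personal_factors=["age 45", "goal of returning to work within 8 weeks"],
)
```

Organizing a client's presentation this way makes explicit which component level a given outcome measure addresses.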
Although the ICF framework provides clarity and context to the constructs of function and disability, it does not inherently provide a comprehensive framework for the integration of these concepts into PT practice. Canadian physical therapists have proposed a number of practice models.25-28 In 1985, Dean put forth the Psychobiological Adaptation Model, which held that treatment outcome was dependent on the interaction of primary and secondary factors relating to both the patient and the physical therapist.25 Ten years later, Cott and colleagues proposed that movement across multiple levels, from micro to macro, forms the central theme of PT practice in the Movement Continuum Theory.26   75 In 2006, Darrah and colleagues and Wessel and colleagues both published models integrating constructs of the ICF framework with PT practice. The CORxE model of best practice was developed to integrate the constructs that theory, research and clinical practice are interdependent, that client-centered and evidence-based practices are paramount, and that the terminology and philosophy of the ICF framework are highly consistent with contemporary definitions of PT practice.27 The CORxE conceptual model was developed with a partner clinical decision-making model to guide curriculum planning in entry-level education at the University of Alberta.27 Wessel described the model in use at McMaster University28 which was based on the earlier work of Rothstein and Echternach.29 Although the models proposed by Dean and Cott predate the ICF, they share common elements with it. And although these models could provide a sound basis for integrating ICF constructs and terminology into PT practice, none has been adopted widely. This may be a consequence of having used simple diffusion methods like publication to promote the models, which alone are unlikely to change practice.7,9 Lacking a sound rationale to select one of these models as a criterion for this review, we looked further abroad. Outside Canada, the American Physical Therapy Association (APTA) has published a practice model that parallels common practice in Canada.30 We selected this model for use in our review for the following reasons. Its elements are analogous to those commonly found in Canadian practice, and it provides an externally defined criterion on which to assess Canadian practice. The APTA represents the largest national population of physical therapists in the world.   76 3.1.3 Key Resources in Outcome Measurement and Evaluation In addition to regulatory documents, two key resources have been published and broadly circulated to the PT community in Canada; they provide guidance on the integration of outcome measurement and evaluation methods. 
These are the Physical Rehabilitation Outcome Measures, Second Edition: A Guide to Enhanced Clinical Decision-Making2 (the handbook) and the Essential Competency Profile for Physiotherapists in Canada31 (the profile). These two resources addressed gaps in outcome measurement and evaluation in practice that existed at their respective publication times.
The handbook was published in 2002 as an update to the first edition1 based on research indicating that, although physical therapists demonstrated greater awareness of outcome measurement issues, their integration of measurement and evaluation practices into practice remained deficient.2,8 Barriers to adoption of outcome measurement practices had changed somewhat and included lack of information on how to find, select, and use measures and how to interpret the results of treatment.2,8 The handbook was published as a guide to advanced clinical decision-making, apparently on the assumption that it would motivate clinicians to adopt measurement and evaluation practices. The handbook was organized in two parts. Part I, consisting of six chapters, provided the history and rationale for measurement and evaluation. Topics included the selection of measures that are relevant to the client and those that are relevant to outcomes; a review of measurement properties; a methodology for evaluation and interpretation of outcomes to enhance clinical decision-making for individual clients; and a comparable methodology for decision-making regarding rehabilitation programs.2 Part II provided a bank of 74 standardized measures available at the time of the handbook’s publication. These measures were listed alphabetically and categorized by construct and by area of practice. These included mostly self-report and performance measures of activity-level function (i.e., consistent with the language of the ICF framework) and quality-of-life constructs covering many areas of PT practice. A description of the development and application was provided for each measure, along with a literature-based summary of reliability, validity, and measurement properties like detectable change indices. Also included were a glossary of terms and a CD-ROM with portable document format versions of selected self-report questionnaires.2
The profile was developed to describe essential competencies that physical therapists must demonstrate upon entry to the profession and should maintain throughout their careers. Its intended uses span individuals and groups of stakeholders both internal and external to the PT profession.31 Development was led by a National Physiotherapy Advisory Group in collaboration with the National Accreditation Council for Physiotherapy Academic Programs, the Alliance, the CPA, and the Canadian Universities Physical Therapy Academic Council.31 Construction of the profile was based on a functional job analysis model, which was informed by literature review and stakeholder consultation. The profile represents a three-part model incorporating aspects of a professional development continuum (novice to expert) and dimensions of competence and context of practice.31 Of particular relevance to this chapter are three of the seven dimensions of competencies related to the practice model.
These are the dimensions of Client Assessment, Physiotherapy Diagnosis/Clinical Impression and Intervention Planning, and Implementation and Evaluation of Physiotherapy Intervention.31 These three competencies include seven elements akin to those of the APTA practice model, namely assessment (or examination), physiotherapy diagnosis, clinical impression and evaluation, prognosis, intervention planning (or plan of care), implementation (or intervention), and outcome. The profile incorporates ICF constructs and terminology. The profile also provides a glossary including terms like diagnosis, outcome, and outcome measure. Missing, however, are terms such as function, disability, prognosis, planning, and outcome evaluation.

3.1.4 Purpose
The primary purpose of this chapter was to compare the PT standards of practice in English-speaking provinces in Canada by reviewing the mandate to practice as defined by provincial regulatory boards. The secondary purpose was to review these documents and supplemental resources designed to support the integration of outcome measurement and evaluation methods, regarding their meaning in the context of a clinical practice model. The tertiary purpose was to identify gaps in mandate and meaning and to make recommendations.

3.2 METHODS
To address our purposes, we developed a framework for evaluation based on the APTA practice model augmented with constructs from the ICF and those relevant to outcome measurement and evaluation. We then compared regulatory documents of the provincial PT regulatory boards of the English-speaking provinces with the elements of our evaluation framework. Finally, we reviewed the websites of the colleges and of the CPA and its provincial branches for additional outcome evaluation resources.

3.2.1 Review Framework
The review framework included 15 concepts and constructs. Of these, one was the definition of PT practice, two were from the ICF framework (function and disability),32 and seven were drawn from the elements of the APTA practice model (examination, evaluation, diagnosis, prognosis, treatment plan, intervention, and outcome).30 Four additional concepts were included because of their relevance to outcome measurement and evaluation (goals, outcome measure, outcome measurement, and outcome evaluation). Each concept was searched for usage and definition. Coding was ordinal with three levels: defined, used but not defined, and not found. This framework was used to review the regulatory documents, the two key resources, and related supplemental resources from the website search.

3.2.2 College Regulatory Document Review
Physical therapy regulatory documents from the nine English-speaking Canadian provinces were searched for the 15 concepts and constructs described in the review framework. Regulatory documents included provincial acts, regulations, rules, bylaws, codes of ethics (where the code was published separately from regulations or bylaws), advisory statements, and other documents drafted by a college as a directive or expectation of practice.

3.2.3 College and Canadian Physiotherapy Association Website Review
During the regulatory review, additional web pages or documents posted or linked to the college websites that appeared to have content related to outcome measurement or evaluation were listed. Websites for the CPA and its provincial branches were searched similarly.
Given their extent and variety, documents were selected for review based on having content relevant to outcome measurement and evaluation practices.  3.3  RESULTS 3.3.1  Regulatory Review Regulatory documents were found on the websites for PT colleges of eight of the nine English-speaking provinces.33-40 As the regulatory board for Newfoundland and Labrador does not have a website, their regulatory documents were found elsewhere.41,42  3.3.1.1 Definition of Physical Therapy Practice All nine English-speaking provinces defined the practice of PT in either their acts,41,43-48 regulations,49 or bylaws50,51 (Table 3-1). The terms physiotherapy/physical therapy and physiotherapist/physical therapist generally appeared to be considered synonymous, with regulation in some provinces   81 protecting multiple terms.43,46 Although these definitions varied, with some being more extensive than others, each definition contained statements of what constituted PT practice, the objectives of practice, and the methods of practice. All included general statements about PT methodology and most listed specific techniques. Although most provinces included assessment and treatment of the human body, there was a split between those that had more technical definitions41,44,45,49 specifying ‘by physical or mechanical means,’ versus those that had more conceptual definitions43,46-48,50,51 specifying ‘the application of professional, knowledge, skill, judgment, and ethical conduct.’ Although the construct of function appeared in all provincial definitions, wide variations in terminology were found, including the use of qualifiers like optimal, physical, independence, performance, and converse terms like dysfunction, disabilities, handicaps, and impairments. There was also wide variation in terms describing the object of PT with regard to function, including identification, alleviation, and prevention (of dysfunction), or to obtain, regain, and retain (function). How these changes in function are to be achieved was defined generally with statements like plan, administer, and evaluate a course of PT that includes, for instance, education, ergonomics, and interventions; or the art and science of therapeutic movement of the human body. Specific techniques were listed in eight definitions. These were manipulation (n=7);41,43-47,49 exercise, massage, electrotherapy (n=6);43,44,46-49 heat or radiant energy, 43,44,46-48 hydrotherapy44-49 (n=5); mechanical energy43,44,46,47 (n=4); mobilization,41,44,47 acupuncture,44,46,47 tracheal suctioning44,45,47 (n=3), application of bandages49 or taping47 (n=2), laser,48   82 administration of PT-related medications,47 bracing or splinting,47 mobility aids,47 ergonomic evaluation and modification,47 proprioceptive neuromuscular facilitation and muscle energy techniques,47 and physical agents,41 (n=1).  3.3.1.2  ICF Constructs Although the constructs of function, and to a lesser extent disability, appeared throughout the regulatory documents, they were not clearly defined, with one exception (Tables 3-2 and 3-3). The Alberta Practice Standards for Physical Therapists   defined disability as “A restriction or inability to perform an activity in the manner or within the range considered normal for a human being, mostly resulting from impairment.”52 The term function was used in the regulatory documents of all provinces but many synonyms were found with varied usage within and between provinces. 
Most of these variations were found in the PT definitions of practice and in the regulatory documents describing clinical record requirements. 3.3.1.3  American Physical Therapy Association (APTA) Practice Model Concepts Of the seven concepts drawn from the APTA practice model, three were defined by one province, two were used in all provinces, and one was not found in the regulatory documents of seven provinces (Tables 3-2 and 3-3). The Alberta Practice Standards for Physical Therapists defined evaluation, diagnosis, and planning.52 The concepts of examination and intervention were used in the documents of all nine provinces, and evaluation, diagnosis, plan, and outcome were   83 used in about two-thirds of jurisdictions, but prognosis was used in only two. Again, there were many variations in terminology with assessment and treatment being the most widely used.  3.3.1.4  Outcome Measurement and Evaluation Concepts Of the remaining four concepts, outcome measure was defined by the Alberta College and used in Ontario, outcome was used in six provinces, and goals, outcome measurement, and outcome evaluation were used in less than half the provinces (Tables 3-2 and 3-3).  3.3.2 Regulatory and Professional Website Search Search of the provincial PT regulatory board websites yielded 12 documents from four provinces (Table 3-4). The Alberta College provided most of these in a range of resources. In addition to the profile were: a discussion paper on primary healthcare, guides for disability management of injured workers and reporting for automobile insurance claims, a web page listing a wide variety of standardized measures, and the college’s Practice Standards for Physical Therapists Search of the CPA websites yielded 15 resources (Table 3-5). Four of these were found on the national website including the CPA code of ethics, the profile, web  (2005) document. Manitoba provided a second discussion paper on primary healthcare, Ontario posted documentation from the development of this college’s quality management program and the Nova Scotia College posted its continuing competence guidelines in addition to the profile.   84 pages for best-practice/outcome measures and Health Information Sheets, and results of a recent professional development survey. Search of the nine CPA provincial branch websites yielded another four resources including a scope of practice review from the Ontario Physiotherapy Association website, and two practice guidelines and an online outcome database from the Physiotherapy Association of British Columbia website. This database, however, is accessible only to members of this Association.  3.3.3  Select Resource Review The resources selected for this review represent a range of professional documents developed for various purposes. Thus, the process used to review them was qualitative and focused on terminology issues and representation of concepts and constructs related to the ICF framework, the APTA practice model, and outcome measurement and evaluation processes. We looked for use and definitions of ICF constructs and terminology. We also looked for evidence of practice model elements from the regulatory review, and the context in which they related to the body function and structure, activity, and participation levels of the ICF.  3.3.3.1 Physical Rehabilitation Outcome Measures, Second Edition Review of the handbook focused on the outcome measurement and evaluation process described in the six chapters of its Part I. 
In Chapter 1, the authors provided a brief history of outcome measurement in rehabilitation to date as a rationale for their work. In a 1998 survey, they noted that although there was awareness of the availability of standardized measures and importance was placed by the profession on their use in outcome measurement and evaluation, incorporation into practice remained largely unchanged. Lack of time and knowledge remained the most commonly reported barriers. Respondents indicated that adoption would be facilitated by having resources that guide decisions about selecting from a range of measures, applying them to client populations, and interpreting the results to inform client care and overall program planning. This chapter of the handbook also stated that the terminology of its constructs was based on the ICF framework.2
Chapter 2 provided a discussion of some paradigms of measurement, including quality of life and the health utility index, in relation to the ICF constructs. Also, a framework was provided for matching the ICF constructs to levels of clinical application. In this framework, strategies represent approaches or techniques relative to the body functions and structures level of the ICF framework, interventions are composed of strategies and relate to the activity level, and programs are comprised of multiple interventions relating to the participation level or health-related constructs such as quality of life. Outcome was defined as “a characteristic or construct that is expected to change owing to the strategy, intervention, or program that is offered, and least affected by outside influences.”
Chapter 3 provided guidance on the selection of measures related to the constructs of function, disability, quality of life, or another relevant outcome, including how to select from a range of available measures based on the construct of interest, the purpose of measurement, and the parameters of the client population. Types of measures were categorized as generic versus specific, and performance versus self-report. Other issues to consider included review of measurement properties, feasibility of administration (e.g., cost, time, and respondent burden), and matching the research population studied to the client population of interest. Following selection of appropriate measures and prior to implementation of the outcome measurement process in clinical practice, pilot testing and development of an outcome measurement plan were recommended.
Chapter 4 provided a comprehensive review of the properties of outcome measures including scaling issues, reliability and validity, and coefficients of change. Change indices were discussed, including the effect size and the standardized response mean for evaluating change in a single group over time, and receiver operating characteristic curves for assessing change in multiple groups (such as treatment and control groups). Two types of individual change coefficients were also discussed, namely, the minimal detectable change and the minimal clinically important difference.
Chapter 5 provided a framework for evaluating clinical outcome and making decisions about individual clients. This was based primarily on the use of activity-level measures and, to a lesser degree, measures of participation. The authors acknowledged that scores from such measures often do not have meaning to clinicians.
However, Finch and colleagues challenge the reliance that clinicians place upon measures of impairment, as their measurement properties tend to be less discernable, which creates a risk of misinterpretation. This risk of misinterpretation, including in the evaluation of change over time, exists largely because impairment measures relate only to the individual client being assessed; they correlate poorly with activity and participation, and often have undocumented measurement properties. The measurement and evaluation framework was illustrated in the context of a clinical scenario described in two parts. The initial assessment was framed by questions such as: what is the client’s status today, when will you reassess, and what will influence the reassessment interval? The follow-up assessment was framed by questions such as: has the client changed, and was the change important?
Chapter 6 provided a framework for measurement, evaluation, and decision-making about rehabilitation programs and included five steps for focusing the evaluation, selecting evaluation methods, choosing methods, gathering and analyzing data, and making decisions. Again, a scenario was used to depict the process, in this case an outpatient pulmonary rehabilitation program.
The handbook provided a comprehensive description of the rationale and methods for outcome measurement and evaluation in PT, but it has several limitations. Despite having based their constructs on the ICF framework and having included definitions in the glossary, the authors at times used the terms function and disability to refer specifically to the activity level. A hierarchy organizing clinical activities was provided in terms of strategies, interventions, and programs, with processes described for measurement and evaluation of an intervention with a single client or a structured program with multiple clients. A process to aggregate data of individual clients with scores from the same measure, perhaps the most common application for clinicians, was not provided. Although more informative than “the result of client management,”30 the definition of the term outcome was confusing. It appeared to exclude the end results following intervention or client management, did not indicate the timeframes over which change might be expected, and did not guide how to evaluate more than two time points. In addition, the requirement that the outcome be “least affected by external influences” is challenging considering that clinically we do not know the actual extent to which a PT intervention influences the end results.
The indices described in Chapter 4 have been used to evaluate change in research studies. Although useful, they may not be the best indices to evaluate grouped data of individual changes; however, another index, the reliable change index, has been reported to do just that.53 Further, the group change indices can only evaluate change over two time points based on a difference score. This can be problematic if change is nonlinear, as the magnitude and possibly the direction of change depend on the segment of the recovery path included between the two time points.54,55 Alternative approaches that view change as a nonlinear function recommend evaluation over multiple time points.54,55 This approach has been applied to evaluating outcome paths for workers with low back pain.56
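To make these individual-level change indices concrete, the following minimal sketch shows how the standard error of measurement (SEM), the 95% minimal detectable change (MDC95), and a reliable change index might be computed for a single client's pre- and post-intervention scores. The formulas are the commonly cited ones (SEM = SD × √(1 − r); MDC95 = 1.96 × √2 × SEM; reliable change index as described by Jacobson and Truax), not calculations taken from the handbook or the studies cited, and the reliability coefficient and scores are hypothetical values chosen only for illustration.

```python
import math

def sem(sd_baseline: float, reliability: float) -> float:
    """Standard error of measurement from a baseline SD and a test-retest reliability coefficient."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc95(sem_value: float) -> float:
    """95% minimal detectable change: the smallest change likely to exceed measurement error."""
    return 1.96 * math.sqrt(2.0) * sem_value

def reliable_change_index(pre: float, post: float, sem_value: float) -> float:
    """Reliable change index; |RCI| > 1.96 suggests change beyond measurement error."""
    return (post - pre) / (math.sqrt(2.0) * sem_value)

# Hypothetical example: a 0-50 self-report disability questionnaire with
# an assumed baseline SD of 8 points and test-retest reliability of 0.90.
s = sem(8.0, 0.90)                                        # about 2.5 points
print(f"SEM   = {s:.1f}")
print(f"MDC95 = {mdc95(s):.1f}")                          # about 7.0 points
print(f"RCI   = {reliable_change_index(30, 18, s):.2f}")  # pre = 30, post = 18
```

Classifying each client as improved, unchanged, or worsened against the MDC95 in this way is one route to the aggregated view of individual changes that the two-time-point group indices described above do not provide.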
In terms of measurement and evaluation processes, a trade-off has been accepted within the handbook that may have sacrificed conceptual utility for meaning with respect to its target audience. The authors chose to focus on measures of activity, participation, and quality of life constructs. Clinicians, however, continued to report difficulty interpreting scores from such measures. This may represent an instance of a theory-practice gap, where the distance between current and desired (theory-based) practice was too great.12 One alternative would be to bundle complementary measures of impairments, activity limitations, and participation restrictions in a system of measurement. This could provide a common basis for meaning while moderating the administrative burden.

3.3.3.2 Essential Competency Profile for Physiotherapists in Canada
In the three dimensions of the profile reviewed, terminology did not reflect that of the ICF framework. The constructs of function and disability were represented in the terms physical performance, physical functioning, functional abilities, functional needs, impairments, disabilities, limits to participation, and client abilities. Disability appeared to represent activity-level deficits specifically. However, three other constructs of the ICF framework were represented: health status, personal factors, and environmental factors. Some terms were defined in the glossary, most of which were sourced from the handbook (outcome, outcome measure), some of which were new (physiotherapy diagnosis/clinical impression), while others remained undefined (prognosis, intervention). Despite these inconsistencies in definitions and use, all elements of the practice model were represented within the three competency dimensions.
The practice element of examination was described in dimension four, where both assessment and examination were used interchangeably. This process was described as a gathering of information on the client’s health status, multiple levels of function, and relevant personal and environmental factors that could affect function or expected outcome. Evaluation was described under dimension five as the analysis of assessment findings to determine client abilities, functional needs, and potential outcomes. Two steps were defined: the identification of multiple levels of function and of personal and environmental factors, and the prediction of expected changes and progress toward realistic outcomes. The physiotherapy diagnosis/clinical impression was defined as a “conclusion about physical function based on the subjective and objective assessment and analysis by physiotherapists to investigate the cause or nature or client’s condition or problem.”31 Dimension five states that the diagnosis should be relevant to commonly used diagnostic and classification systems such as the ICF, and identify both the need for and potential value of physiotherapy intervention.31 Prognosis and intervention planning were also represented under dimension five. Prognosis was represented as establishing and prioritizing, with the client, expected health outcomes based on the client’s goals, functional potential, environmental demands, and prognostic indicators based on the best evidence.31 Planning was described as establishing and prioritizing a general intervention strategy and selected interventions, consistent with the client’s needs, goals, and physiotherapy resources.31 Intervention and evaluation were described under dimension six.
Intervention was termed implementation, which includes performing selected physiotherapy interventions and making adjustments based on the client’s response.31 Evaluation was described as monitoring client responses and changes in status during the interventions; evaluating effectiveness of the intervention strategy on an ongoing basis using valid measures; and, in consultation with the client, redefining goals and modifying intervention strategies as necessary or discontinuing interventions that are no longer necessary or effective.

3.3.3.3 Review of Supplemental Resources and Supporting Material
Brief commentary is provided for five of the additional resources reviewed from the websites of the colleges and the CPA (Tables 3-4 and 3-5): Disability Management of Injured Workers: A Best-Practices Resource Guide for Physical Therapists and its accompanying appendix,57 the Alberta Practice Standards for Physical Therapists,52 the CPA Professional Development Survey Results,58 and the Physiotherapy Association of British Columbia’s outcome database.16,59
In 2005, the Alberta college published its Practice Standards for Physical Therapists in a formal document.52 Although ICF terminology appeared in the document, it was not consistent with the ICF definitions. For instance, function and disability were not used as umbrella terms, but were used in parallel with health and physical performance, and impairment and disease, respectively. A glossary is provided citing the ICF32 for definitions of disability and impairment, and Finch and colleagues2 for definitions of outcome measure and standardized measure. Definitions for the terms assessment and analysis, intervention, and planning were provided but not referenced. The elements of the practice model are represented in Standards 4-9, under the headings of Assessment, Physical Therapy Diagnosis and Treatment Planning, and Implementation and Evaluation. Examination and assessment are used interchangeably. Although “assessment and analysis” is defined in the glossary, the Assessment section opens with “physical therapists understand what constitutes an appropriate assessment,” and although analysis was defined in conjunction with assessment, the term is not used in the text for Standard 4. The terms diagnosis, prognosis, treatment plan, goals, baseline outcome measures, and expected outcome are used in Standards 5-7. Implementation is used as a higher-order term than intervention and was defined as the performance of necessary and appropriate interventions to achieve the desired benefit for the patient with minimal risk. Interventions, on the other hand, are defined as being either direct (e.g., manual techniques and exercise programs) or indirect in nature (e.g., injury prevention education and the prescription of assistive devices). Standard 9 describes the outcome evaluation process, both during the episode of care and at discharge, in which discharge status is to be compared with baseline values recorded at the initial assessment using standardized measures.
In addition, the Alberta college published two documents specifically for work-related cases in 2006, titled Disability Management of Injured Workers: A Best-Practices Resource Guide for Physical Therapists, and its accompanying appendix.57 Although descriptive and based on research evidence in work disability prevention, these documents are similar to the Alberta standards in that all elements of the practice model are represented, but ICF terminology is not used. Although the elements of an outcome measurement and evaluation process are present, and a bank of relevant standardized measures is described, the process of outcome measurement and evaluation is not clearly defined.
In May 2008, the CPA conducted a professional development survey. Results from more than 1200 respondents were presented on the CPA website in PowerPoint slide format.58 There are two slides of note. With respect to professional development, members believed that existing courses would continue to meet their educational or clinical needs; 26 respondents indicated evidence-based practice and 18 indicated best practices. When asked to list specific courses, the topic of outcome measures was selected second to their last choice of geriatrics by 21 respondents.58
In 2005, the Physiotherapy Association of British Columbia developed an outcome database for its members. After a test period, the database was launched in September 2006 with an announcement and instructions in its August 2006 PABC Directions newsletter. Members were encouraged to enter their client data for the Neck Disability Index,60 which was the first measure available in the database, in response to the need for measured outcomes along with the Whiplash Associated Disorders clinical practice guideline. The database won early acclaim for this association in 2005 with the “Above and Beyond ACE” award from the British Columbia chapter of the Canadian Society of Association Executives. The database was also highlighted in a poster at the 2007 Congress of the World Confederation for Physical Therapy,59 which stated that its multiple purposes were:
• to allow PABC members to track their client outcomes,
• to assemble clinic-level aggregate descriptive statistics for anonymized client data from participating members, and
• to assemble provincial-level aggregate descriptive statistics for anonymized client data from participating members to be used for planning and fee negotiation.
Additional promotion in the Directions newsletters and by email to the membership list resulted in a spike in the frequency of new client records, but this tapered off after April 2006. A representative of the Physiotherapy Association of British Columbia reported in an e-mail dated January 11, 2009 that a total of 168 client records had been entered into the database over the eight-month period.61 Several members reported that they did not see eligible clients at that time and so were not able to participate. A pre-post Whiplash Associated Disorders Clinical Practice Guideline survey was conducted by the association to evaluate changes in PT practice, but the response was not sufficient for evaluation.

3.4 DISCUSSION
Based on this review, neither a clear mandate nor a meaningful, nationally accepted model of PT practice exists in Canada. Despite the profession’s apparent commitment to outcome measurement, this absence may have limited the incorporation of outcome measurement and evaluation into clinical practice. To develop a meaningful model that supports a regulatory definition of PT, physical therapists need to address the lack of congruence within and between regulation, theoretical practice, and professional development. This necessitates examination of regulatory issues, adoption of classification systems and standards for terminology and definitions, and a national consensus on the use of these resources for professional development.
3.4.1 Mandate
Descriptions of PT practice in provincial acts and regulations may be expected to be outdated given that legislative change requires time and political will. This, however, neither explains the inconsistent use of terminology nor the absence of outcome measurement and evaluation from standards and advisory statements. Standards and advisory statements are typically drafted and adopted for use by the professional colleges and can be revised more frequently than legislation. Furthermore, the resources provided by the profession are not subject to government approval and thus should represent the profession's most current view of practice.

The profile was developed in collaboration with the Alliance with an intended use for regulators "in the development of … standards of practice…" (page 2).31 Minimally, we might expect it to be disseminated to college registrants as an independent resource as well as guiding the development of practice standards. However, it was posted on only two college websites.34,40 Although regulatory definitions of PT may take time to change, a collaborative process like that used to develop the profile could be enacted to develop a prototypic national standard definition. In doing so, representatives from across jurisdictions and practice areas could deliberate the elements and content of the definition. This would provide a basis for the development of other regulatory documents.

In terms of the development of practice standards, the Alberta college has the most advanced and easily accessed collection of resources. The array of documents, from the practice standards and best-practice guides to the bank of outcome measures that are freely available from its website, provides information useful to practitioners across practice areas.

3.4.2 Meaning
3.4.2.1 The International Classification of Functioning, Disability and Health (ICF)
Despite the promotion of the ICF by the World Confederation for Physical Therapy, a representative of the CPA reported by e-mail dated March 12, 2009 that the ICF framework had not been officially endorsed and adopted by the CPA,62 and the ICF constructs and terminology had not been consistently incorporated in the documents reviewed. Even the handbook, in which constructs were based on the ICF, did not consistently adhere to the ICF definitions for function and disability. Confounding of terms related to function and disability was noted in most regulatory documents, including the Alberta college practice standards and the profile.31,52 The American Physical Therapy Association has recently adopted the ICF,63 which has implications for review and revision of their print and web resources, including their Guide to Physical Therapist Practice.

The current version of the ICF has been available since 2001, yet citations to the ICF terms in the handbook, the profile, and the Alberta Practice Standards document do not consistently reflect the 2001 ICF framework. Specifically, the terms function and disability have been used with numerous connotations in both regulatory and resource documents. Altman described this problem in the context of disablement models, and recommended clearly defining the terms and the context in which they are used. The principal value of the ICF's use of the terms function and disability is that they are defined as umbrella terms with reciprocal meaning, permitting their interchangeable use. However, this demands specific definition of the level and nature of function or disability.
One needs to define whether impairments, limitations, and/or restrictions are relevant to the specific client and his or her life situation, as well as the nature of those impairments, limitations, and restrictions, and how they relate to the client's health state, other contextual factors (personal and environmental), and the client's aspirations. Darrah proposed that, in doing so, we must consider the Activity and Participation levels first, as these are most relevant to the client and his or her ability to function in life.27

Although outcome was advocated as the object of PT almost 25 years ago,25 there remains no established definition of outcome that clearly addresses the multiple levels of function and disability relevant to PT practice. Given the move to adopt the ICF framework into PT practice, definitions of outcome and related terms are needed that recognize the distinctions of the terms function and disability at all three levels of the ICF framework, across the client's episode of care, and beyond. Thus, outcomes can be defined for impairments (at the body function and structure level), limitations (at the activity level), and restrictions (at the participation level) over multiple time points from assessment to discharge, and at post-discharge intervals. Interim reassessments can evaluate an immediate response to a specific treatment regimen, or early response to intervention after days, visits, or weeks, with multiple time points to map the outcome path. Measurement post discharge can evaluate the durability of the levels of function attained at completion of the PT intervention, and can evaluate preventive interventions by monitoring for (non)occurrence of the undesired events.

3.4.2.2 Elements of Practice
From the regulatory perspective, our model of practice in Canada is 'front-loaded', with the concepts of examination and intervention frequently used but not clearly defined. The concepts of evaluation (i.e., an analytical reflection on the findings from the examination process), prognosis, goal setting, and planning are not well described, and the concepts of outcome measurement and evaluation are often not mentioned in the regulatory milieu. Although found variably in resource documents provided both by colleges and the CPA, these concepts are not described within a common framework that can be easily understood and recognized by practitioners across the country. The lack of consistent definition and use of concepts in professional jargon may explain why outcome measurement and evaluation have lagged in terms of being integrated into PT practice.

3.4.2.3 System of Outcome Measurement and Evaluation
The profession has defined the key terms of outcome, outcome measure, and outcome evaluation in the handbook and the profile. These may not be sufficient, however, to describe a broader conceptualization of outcome or to guide the evolution of practice. Outcome has been defined as "a characteristic or construct that is expected to change as a result of the provision of a strategy, intervention, or program.
A successful outcome includes improved or maintained physical function when possible, the slowing of functional decline where the status quo cannot be maintained, and/or the outcome is considered meaningful to the client."2,31 An outcome measure has been defined as "a measurement tool (e.g., instrument, questionnaire, or rating form) used to document change in one or more constructs over time."2,31 Outcome evaluation has been defined as "the systematic evaluation of the impact of a program or intervention to determine whether it meets its objectives."

Before these definitions are addressed, the concept of outcome warrants attention with respect to the multiple dimensions and applications that are relevant across PT practice areas. Like other constructs, an outcome is not a concrete entity that can be directly measured.2,64 Rather, the characteristics or constructs that may change as a consequence of a PT intervention can provide indicators of outcome. Characteristics like joint range of motion can be measured with goniometry, blood pressure can provide an indirect indicator of cardiovascular function, and a six-minute walk test can provide an indication of whole-body mobility. Further, a work status of full duties and full hours does not provide a comprehensive representation of how a worker is managing the multitude of changing demands of his or her job, not to mention other aspects of life. The constructs of validity and reliability can be differentiated into variants such as criterion and construct, and internal consistency and test-retest, respectively.2,64

Outcome paths can be easily defined for the ICF constructs, quality of life measures, and other constructs, or for segments of the episode of care or the natural history of a given health condition. For example, the outcome of a spinal manipulation may be assessed immediately as reduced pain or a gain in segmental mobility. A self-report questionnaire may be administered at intake and after two weeks to assess early response at the activity level, and again at discharge to evaluate the overall change in status on a standardized measure. Level of work may be monitored during intervention, at discharge, and months afterwards as a participation-level indicator of the success and durability of gains associated with an episode of PT care. However, the outcome is not specific to any of these intervals, but rather represents the paths of change delineated for the characteristics and constructs relevant to the client and the PT intervention over the entire episode of care and beyond discharge.

The risks associated with selecting any one characteristic or construct over any two time-point intervals include misinterpreting cyclical instantaneous improvement and inter-visit regression as cumulative improvement, and early rapid change on an activity-level scale as indicative of further linear gains to discharge. Additionally, the selection of one 'best' indicator, such as early return to full work duties and hours, as 'the outcome' may be misleading, as other characteristics or constructs may recover at different rates or be remediated more quickly or more slowly. Thus, an outcome may be realized at one level of functioning (e.g., participation) where other levels (impairment or limitation) may still be deficient. Any new definitions related to outcome must address these issues and risks.

The demand for rigor in PT practice and related research has necessitated scrutiny of measurement terminology, including the term outcome measure itself.
We argue that this term is a misnomer and syntactically questionable. Although definitions of outcome measures have specified 'the documentation of change over time,'2,31 this represents an attribute of the application of such a measure but not the measure itself. Labeling measures as such risks the inference that evaluation at one point in time produces a measured outcome. Measures can be validated for discriminative, prognostic, or evaluative functions;2,64 describing measures by their attributes may be more appropriate.

If the construct of outcome is redefined as a complex integration of indicators of multiple levels of characteristics and constructs of function and health, then outcome evaluation requires redefinition as being more than the impact of a program or intervention relative to its objectives. Although this definition does not specify the comparison of a simple difference score between two points (i.e., assessment and discharge) with the expected change, this may well be the predominant interpretation. Researchers in the area of psychological measurement have debated the limitations of the difference score54,55 and have recommended viewing change in characteristics or constructs of interest as a growth curve.54,55 Statistically, growth curve methods fit a linear regression, or trajectory, to the data points for each client on a given measure, and then model the slopes and intercepts as variables across the group data. This method has relevance to outcome evaluation because the individual variation that is otherwise lost in the error terms of grouped comparisons (e.g., effect size) becomes the variable of interest.54,55 Such approaches have recently been applied to differentiate recovery paths for workers with low back pain,56 and to map recovery curves in populations with hip and knee joint replacements.65 Conceptually, however, we may want to consider the 'outcome' as being represented not just by such a trajectory, but rather by the paths mapped out by multiple scores gathered over time, for indicators at multiple relevant levels of functioning and health. We believe that such a conceptualization of an outcome path versus an outcome measure is both more accurate and more defensible.

Outcome evaluation can provide a systematic method to compare the paths of indicators of function and health predicted by the prognosis with respect to the indicators attained. Such critical appraisals can be made at points along each path, without losing sight of either a given path or those representing other levels of function. For example, are the paths unfolding as predicted, or are they exceeding or not meeting expectations? Are there conflicting findings, with change on one path (e.g., reduced impairment) that did not translate into a gain in higher-level function (e.g., an increase in level of work status)? Evaluation of an immediate response in one visit can also be viewed as an interval on that path and in the context of higher-level paths. The durability of the levels of function and health attained post discharge can be evaluated. Evaluation can also incorporate the measurement of outcome paths for preventive interventions.

Resources found on both the websites of colleges and the CPA provided guidance in the outcome measurement and evaluation processes. However, they do not appear to provide clear criteria for what to measure, at what time points, or how to interpret change over multiple time points.
Nor do the regulatory documents provide a clear mandate whereby these methods can be integrated into PT practice, or identify them as an integral part of a practice model. Without a nationally accepted practice model, both interpretation and implementation of outcome measurement and evaluation become the practitioner's responsibility. This creates additional complexity and a potential barrier to measurement and evaluation of outcome by the practitioner, rather than facilitating the adoption of systematic measurement practices.

3.4.2.4 Evolution of Practice
Insight into the current meaning of measurement and evaluation processes to PT practitioners was found in two places. Responses from the CPA Canada-wide survey indicated how relatively low outcome measurement and evaluation rank in terms of professional development. Both outcome measurement and evidence-based practice ranked low compared with clinical skill development. Of note was that courses in geriatrics, which can be expected to be the most in-demand area of practice within a few decades, ranked even lower than outcome measurement. The other indicator of the meaning of outcome measurement and evaluation to practitioners was the apparent abandonment of the Physiotherapy Association of British Columbia's outcome database.

Developing a vision for Canadian PT practice will require the collaborative efforts of the colleges, the CPA, university programs, clinicians, researchers, academics, clients, and other stakeholders. A national standard practice model that cohesively ties the familiar elements of practice together with the recent advances in defining functioning and health, and in outcome measurement and evaluation, could serve as a vehicle to unify our perspectives of practice. That vehicle, however, will have to be driven by coordinated integration of those changes into regulation, entry-level and continuing education, and research to influence practice. Ongoing training and support will be necessary to change practice behaviors in addition to knowledge and awareness. These resources and supports could take advantage of web-based technologies to ensure that further evolution takes place on a planned schedule rather than as a delayed, unstructured reaction.

3.4.3 Limitations
The primary limitation of this study is that one examiner performed data extraction. This precluded an assessment of reliability. Further, because regulatory documents are complex and diverse, there is the potential for error or omission of concepts or terms. Because of the interpretative nature of both the data extraction from the regulatory documents and the qualitative review of resource documents, the potential exists for misinterpretation, error, or omission. Also, as we did not review regulatory documents for the province of Quebec or the three territories, this study is not fully representative of PT practice in Canada. Quebec does not have a member branch of the CPA, and the territories are represented by two CPA councils. Regulation in the territories also differs. In 2007, the Yukon territory regulated physical therapists under their Health Professions Act,66 but Nunavut and the Northwest Territories have not passed professional regulations for PT, and none has an established PT college. Despite these limitations, the results of this study will best serve as a basis for professional debate and discussion.
3.5 CONCLUSIONS AND CONSIDERATIONS To the best of our knowledge, no study has previously compared the PT regulatory documents from Canadian provinces in a systematic manner. This review has identified strengths and limitations in the primary resources available to guide physical therapists with respect to the use of standardized methods for measurement and evaluation of outcomes in their practices. Two primary resources that have been endorsed by the CPA, namely the profile and the handbook,   105 demonstrate collaborative efforts of the profession to advance the knowledge and awareness of concepts of the elements of PT practice and of outcome measurement and evaluation, thus defining them as expectations of competent practice. Both documents are positioned to influence entry-level curricula and future generations of practitioners. However, until integrated into provincial regulation, these practice recommendations are not enforceable. Given what is known about effective knowledge translation methods, these resources may not be well positioned to influence change in practice behavior for practitioners despite their knowledge and awareness about outcome measures. With respect to the documents which have traditionally been provided in hard copy, they run the risk of containing information that is outdated by the time they are published. They cannot keep pace with advances and be updated until the next cycle of revision. Finally, neither of the two primary documents provided a graphic representation of a practice model nor of the outcome measurement and evaluation component. These limitations warrant scrutiny for any future efforts to overhaul a PT practice model or to integrate outcome measurement and evaluation methods into it. To integrate measurement and evaluation of clinical outcomes into practice, the PT profession in Canada needs to address deficiencies in its mandate and in the clarity and consistency of meaning of outcome measures and of measurement and evaluation processes and outputs. This needs to be done in anticipation of revising key resources and developing strategies to support the systematic implementation of outcomes measures. This is likely to be best achieved through a unified multi- pronged approach engaging the CPA and its provincial branches, regulatory bodies,   106 and stakeholders including clinicians, researchers, educators and administrators. To stimulate discussion and debate, we propose the following activities be considered to advance the development of standards of outcome measure use in the profession based on this preliminary review: 1. Convene a national advisory group to reach agreement on a standard definition of practice and a practice model 2. Establish consensus on terminology including outcome measure, outcome measurement, and other related terms. Consistent with advocacy for the ICF by the World Confederation for Physical Therapy, the CPA should consider formal adoption and integration of the ICF  into the Canadian practice model. In this context, the term outcome warrants clarification given it applies to multiple levels within the ICF framework (i.e., body function and structure, activity and participation) and over time (immediate, intermediate, discharge and periodic follow-up). 3. Review and revise terminology in regulatory and resource documents to align the mandate to practice and facilitate consistent understanding and communication within the profession. 4. 
Engage external stakeholders such as clients, other health professionals, and third-party payers to take advantage of their perspectives and acknowledge their interests in refining a PT practice model. 5. Revise the Essential Competency Profile to a. incorporate the ICF and unified practice terminology and constructs (yet to be adopted and defined)   107 b. incorporate a national practice model (yet to be adopted) c. establish a process for regular periodic review and update of the profile and supplemental documents to support on-going development and advancement of PT practice 6. Review and revise the Physical Rehabilitation Outcome Measures handbook in a third edition to a. incorporate ICF constructs and terminology b. align with the national practice model (yet to be adopted) c. integrate a plan for regular periodic review to accommodate advances in practice d. convert to a web-based resource to facilitate updating and revision and thus facilitate its accessibility. To the best of our knowledge, this is the first proposal within the PT community in Canada to promote current and systematic alignment of regulatory resources and professional position statements with respect to an agreed upon conceptual framework and terminology. Implementation of the activities proposed above for consideration could facilitate the development and dissemination of practice guidelines and regulatory policies informed by a unified practice model. This could facilitate the implementation of a national strategy for effective knowledge translation to integrate outcome measurement and evaluation processes into PT practice in Canada. Such an initiative could benefit clinical practice, professional education, and research by unifying terminology and common concepts and constructs.    108 Table 3-1.  Definition of Physiotherapy and/or Physical Therapy in English-Speaking Canadian Provincial Legislation  Province Definition  British Columbia  “physical therapy” means the treatment of the human body by physical or mechanical means, by manipulation, massage, exercise, the application of bandages, hydrotherapy and medical electricity, for the therapeutic purpose of maintaining or restoring function that has been impaired by injury or disease.   Alberta  “physical therapy” means the application of professional physical therapy knowledge in the assessment and treatment of the human body in order to obtain, regain and maintain optimal function by the use of any suitable medium of therapeutic exercise, massage and manipulation or by radiant, mechanical and electric energy, but does not include an assessment or treatment that is outside the scope of section 87 of the Medical Profession Act.   Saskatchewan  The practice of physical therapy is the use, by a physical therapist, of specific knowledge, skills and professional judgment to improve clients’ functional independence and physical performance, manage physical impairments, disabilities and handicaps, and promote health and fitness.   Manitoba  The practice of physiotherapy is the assessment and treatment of the body by physical or mechanical means for the purpose of restoring, maintaining or promoting physical function, mobility or health, or to relieve pain. 
Subject to the regulations, in the course of engaging in the practice of physiotherapy, a physiotherapist may plan, administer and evaluate a physiotherapy program that includes, but is not limited to, education, ergonomics and interventions such as exercise, massage, articular and soft tissue mobilizations and manipulations, acupuncture, hydrotherapy, tracheal suctioning, and the use of radiant, mechanical and electrical energy.   Ontario  The practice of physiotherapy is the assessment of physical function and the treatment, rehabilitation and prevention of physical dysfunction, injury or pain, to develop, maintain, rehabilitate or augment function or to relieve pain. Authorized acts: In the course of engaging in the practice of physiotherapy, a member is authorized, subject to the terms, conditions and limitations imposed on his or her certificate of registration, to perform the following: 1. Moving the joints of the spine beyond a person’s usual physiological range of motion using a fast, low amplitude thrust. 2. Tracheal suctioning.     109 Table 3-1 (continued).   Definition of Physiotherapy and/or Physical Therapy in English-Speaking Canadian Provincial Legislation  Province Definition  New Brunswick  “physiotherapy” and “physical therapy”, the terms being synonymous, means (a) the assessment, (b) the identification, (c) the alleviation, and (d) the prevention of physical dysfunction or pain based on the art and science of therapeutic movement of the human body which may be supplemented by exercise, massage, manipulations or the selective application of such physical mediums as:  hydrotherapy; or radiant, mechanical, or electrical energy, including acupuncture; and the use of such means in the restoration and maintenance of optimal functions and includes, without limiting the generality of the foregoing: (i) the planning, administration and evaluation of physiotherapy remedial, preventive and health maintenance programs, and (ii) the provision of consultative, educational, advisory, research and other physiotherapy professional services.   Nova Scotia  “physiotherapy” or “physical therapy” means the application of professional physiotherapy knowledge, skills and judgement by a physiotherapist to obtain, regain or maintain optimal health and functional performance and includes, but is not limited to,  (i) assessment of neuromusculoskeletal and cardiorespiratory systems and establishment of a physiotherapy diagnosis,  (ii) development, progression, implementation and evaluation of therapeutic exercise programs, (iii) education of clients, caregivers, students and other health service providers, (iv) manual therapy treatment techniques including, but not limited to, massage, proprioceptive neuromuscular facilitation and muscle energy techniques,  (v) spinal and peripheral joint manipulation,  (vi) spinal and peripheral joint mobilization,  (vii) pain relief, including invasive acupuncture,  (viii) administration of physical therapy related medications as prescribed by a physician,  (ix) prescription, manufacture, modification and application of braces, splints, taping, mobility aids or seating equipment,  (x) hydrotherapy, electrotherapy and the use of mechanical, radiant or thermal energy,  (xi) ergonomic evaluation, modification, education and practiced,  (xii) tracheal suctioning, and  (xiii) such other aspects of physiotherapy as may be prescribed in regulations approved by the Governor in Council.   
Prince Edward Island  “physiotherapy” means physical therapy practiced in a continuing way to remove, alleviate or prevent movement dysfunction or pain, in a manner that requires the practitioner’s independent exercise of professional knowledge, skill, judgment, and ethical conduct, and includes diagnostic assessment, design and conduct of treatment involving exercise, massage, hydrotherapy, heat, sonic, laser and electrical techniques, evaluation of progress, patient instruction, research and educational or preventative measures.   Newfoundland and Labrador  “physiotherapy” means the application of professional physical therapy in the assessment and treatment of the human body in order to obtain, regain or maintain optimal function by the use of suitable therapeutic methods, including mobilization, manipulation and the use of physical agents.    110 Table 3-2.  Definition and Use of Terms Descriptive of Concepts in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation (Western Provinces)  Concepts British Columbia1-3,5,6 Alberta 1-3,5,6 Saskatchewan1,3 Physiotherapy (PT) Not found, but physiotherapist used2 Synonymous to physical therapy/ist1 Not found, but physiotherapist used1 Physical therapy (PT) Defined2  Defined with physical therapist1 Defined3 & physical therapist used1 Function Used2 Used with ability, physical functioning, functional ability, & functional status5,6 Functional independence, physical performance, health, & fitness used3 Disability Mental disability used6 Defined with impairment,5 & used with impairment, disease, & dysfunction5 Used with physical impairments and handicaps Examination  Physical examination3 & assessment3,5,6 used Assessment defined5 & used1,2,5,6 with clinical examination5 Used with assessment & evaluation3 Evaluation  Used with conclusions drawn from assessment5 Analysis defined5 and functional analysis5 & treatment rationale6 used Not found Diagnosis Used5 Defined as outcome of assessment,5 & used5,6 Clinical diagnosis used3 Prognosis Used5 and expected outcome used Used5,6 and expected outcome used Not found Plan  Used3,5 Planning defined5 and used2,5,6 with intervention plan5 Used 3 Goals Used5 Used2,6 Not found Intervention Treatment used2,5,6 Used5 with treatment1,2,6 & plan implementation Treatment used3 Outcome(s) Expected outcome & Client’s progress used3,5,6 Used with expected outcome5,6 Not found Outcome Measure Not found Defined and used with standardized measure5 Not found Outcome Measurement Reassessment,3 re- evaluate,3 & change in status5 used Standardized measures to compare discharge status with baseline values5 used Not found Outcome Evaluation Used3 Evaluate response to treatment,5  evaluating effectiveness,6 & reflect on outcomes5 used Not found Physical therapy regulatory documents reviewed: 1=Act, 2=Regulations, 3=Bylaws, 4=Code of Ethics (independent), 5=Practice Standards, 6=Advisory or Position Statements, respectively published as separate documents. Note: some provinces imbed Standards or Codes of Ethics in other statutes such as Regulations of Bylaws.     111 Table 3-2 (continued).   
Definition and Use of Terms Descriptive of Concepts in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation (Central Provinces)  Concepts Manitoba1-4,6 Ontario1,5 New Brunswick1,2 Physiotherapy Defined1,2,4 Defined1 Defined as synonymous terms1 Physical therapy Used2 Used1 Function Physical functioning used1,4 Used1,5 with dysfunction 1,5 Used with dysfunction1 Disability Not found Not found Used with sick and infirm1 Examination  Used2 with assessment1, 2 Assessment used1,5 Assessment used1,2 Evaluation  Used1, 2, 4 Used5 Used 1,2 Diagnosis Used2 Used5 Used1 Prognosis Not found Not found Not found Plan  Used1,2,4  Used5 with planning & plan of care5 Used 2 with planning1 Goals Used2 Used5 Not found Intervention  Used1,2,4 Treatment used1,5 Alleviation, prevention, restoration, & maintenance used1 Outcome Used2,4 with outcomes attained4 Used5 Not found Outcome Measure Not found Reassessment used5 Not found Outcome Measurement Not found Used5 with Reassessment5 Not found Outcome Evaluation Re-evaluate & ongoing evaluation used2 Not found Not found Physical therapy regulatory documents reviewed: 1=Act, 2=Regulations, 3=Bylaws, 4=Code of Ethics (independent), 5=Practice Standards, 6=Advisory or Position Statements, respectively published as separate documents. Note: some provinces imbed Standards or Codes of Ethics in other statutes such as Regulations of Bylaws.     112 Table 3-2 (continued).   Definition and Use of Terms Descriptive of Concepts in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation (Eastern Provinces)  Concepts Nova Scotia1,2,4,5 Prince Edward  Island 1,2,5,6 Newfoundland & Labrador1, 2 Physiotherapy Defined1 Defined1 Defined1 Physical therapy Defined1 Used1 Used1 Function Functional performance used1 Dysfunction used1,2 Used1 Disability Not found Not found Not found Examination  Used5, Assessment1 Assessment used1 Assessment used1 Evaluation  Used1 Used1,2  Not found Diagnosis Used1,5 Not found Not found Prognosis Not found Not found Not found Plan  Treatment Plan used5 Planning for treatment or care used5,6 Not found Goals Not found Not found Not found Intervention  Used5, and Treatment used1 Treatment1,6 & care5 used Treatment1 used Outcome Not found Patient responses to treatment used5,6 Not found Outcome Measure Not found Not found Not found Outcome Measurement Not found Not found Not found Outcome Evaluation Not found Evaluation of progress1 & progress according to the plan5,6 used Not found Physical therapy regulatory documents reviewed: 1=Act, 2=Regulations, 3=Bylaws, 4=Code of Ethics (independent), 5=Practice Standards, 6=Advisory or Position Statements, respectively published as separate documents. Note: some provinces imbed Standards or Codes of Ethics in other statutes such as Regulations of Bylaws.    113 Table 3-3.   
Summary of Terminology Definition and Usage in Physical Therapy Practice in English-Speaking Canadian Provincial Legislation  Concepts Defined Used Not Found Variations Function  9 Ability, physical functioning, functional ability, & functional status, functional independence, functional performance, dysfunction Disability 1 1 7 Impairment, handicap, disease, dysfunction, sick and infirm Examination  9  Assessment (predominant term) Evaluation 1 6 2 Conclusions drawn from assessment, analysis, functional analysis, and treatment rationale Diagnosis 1 6 2 Physical therapy diagnosis, outcome of assessment, and clinical diagnosis Prognosis  2 7 Expected outcome Plan 1 7 1 Planning, intervention plan, plan of care, and planning for treatment Goals  4 5 Intervention  9  Treatment, plan implementation, alleviation, prevention,  restoration, and maintenance Outcome  5 4 Client’s progress and outcomes attained Outcome Measure 1 1 7 Standardized measure Outcome Measurement  3 6 Reassessment, re-evaluate, & change in status, and use standardized measures to compare discharge status with baseline values Outcome Evaluation  4 5 Evaluate response to treatment, evaluating effectiveness, reflect on outcomes, re- evaluate, ongoing evaluation, evaluation of progress and progress according to the plan   114 Table 3-4.  Additional Documents and Resources Found on College Websites with Content Relating to Outcome, Outcome Measurement, or Outcome Evaluation  Province Document or Resource British Columbia None found Alberta • Essential Competency Profile for Physiotherapists in Canada (2004) • Primary Healthcare and Physical Therapists: Moving the Profession’s Agenda Forward (2006) • Automobile Insurance in Alberta: a Reporting Guide for Physical Therapists (2002) • Disability Management of Injured Workers: A best practices resource guide for physical therapists and Appendix (2006) • Outcome Measures web page (web address: http://www.cpta.ab.ca/resources/publications_disabilitymanagement_out comemeasures.shtml) • Practice Standards for Physical Therapists (2005) Saskatchewan None found Manitoba • Physiotherapy and Primary Health Care: Evolving Opportunities (2005) Ontario • Onsite Assessment Pilot Test: Report (2006) • Quality Management Program Evaluation Report (2003) • Practice Review for the Physiotherapy Management of Soft-tissue Disorders of the Shoulder (2001) New Brunswick None found Nova Scotia • Essential Competency Profile for Physiotherapists in Canada (2004) • Continuing Competency Guidelines for Professional Portfolio (2006) Prince Edward Island None found Newfoundland and Labrador No website     115 Table 3-5.  
Documents and Resources Found on Canadian Physiotherapy Association (CPA) National and Provincial Branch Websites with Content Relating to Outcome, Outcome Measurement, or Outcome Evaluation  Province Document or Resource CPA National Site • CPA Code of Ethics • Physical Rehabilitation Outcome Measures, Second Edition: A Guide to Advanced Clinical Decision-Making (2002) available for purchase • Essential Competency Profile for Physiotherapists in Canada (2004) • Best Practice/Outcome Measures web page • Professional Development Survey Results (PowerPoint slides) • Health Information Sheets web page British Columbia • Whiplash Associated Disorders Guidelines (2004) • Physiotherapy Low Back Strain Model of Care (2007) • Online outcome database, available to Physiotherapy Association of British Columbia (available to members only) Alberta None Found Saskatchewan None Found Manitoba • CPA Health Information Sheets (see National Site) Ontario • Physiotherapy Scope of Practice Review (2008) Atlantic Provinces • Primary Healthcare and Physical Therapists: Moving the Profession’s Agenda Forward (2006) • Physiotherapy and Primary Health Care: Evolving Opportunities (2005)      116 3.6 REFERENCES 1. Cole B, Finch E, C G, Mayo N. Physical rehabilitation outcome measures. Toronto: Canadian Physiotherapy Association; 1994. 2. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. Second ed. Hamilton: BC Decker; 2002. 3. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257. 4. Gross DP. Evaluation of a knowledge translation initiative for physical therapists treating patients with work disability. Disabil Rehabil. 2008;1(9):1-8. 5. Jette DU, Halbert J, Iverson C, et al. Use of standardized outcome measures in physical therapist practice: Perceptions and Applications. Phys Ther. 2009;89(2):125-135. 6. Oxman AD, Thompson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153(10):1423-1431. 7. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8, Suppl 2):II-2-II-45. 8. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 2001;53(4):268-275.   117 9. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations. Systematic review and recommendations. Milbank Q. 2004;82:581-629. 10. Bosch MC, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract. 2006;13:161-168. 11. McCluskey A, Lovarini M. Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005;5:40. 12. Timmons S. How does professional culture influence the success or failure of IT implementation in health services? In: Ashburner L, ed. Organisational Behaviour and Organisational Studies in Health Care: Reflections on the Future. Palgrave: Basingstoke; 2001. 13. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003. 14. The  Canadian Alliance of Physiotherapy Regulators. The Alliance Home page. Available at: http://www.alliancept.org/. Accessed July 14, 2009. 15. Canadian Physiotherapy Association (CPA). CPA Home Page. 
Available at: http://www.physiotherapy.ca/. Accessed July 14, 2009. 16. Physiotherapy Association of British Columbia (PABC). PABC Home Page. Available at: http://www.bcphysio.org/app/index.cfm?fuseaction=pabc.home. Accessed July 14, 2009. 17. Alberta Physiotherapy Association (APA). APA Home Page. Available at: http://www.albertaphysio.org/. Accessed July 14, 2009.   118 18. Saskatchewan Physiotherapy Association. Saskatchewan Physiotherapy Association Home Page. Available at: http://www.saskphysio.org/. Accessed July 14, 2009. 19. Manitoba Branch of the Canadian Physiotherapy Association. Manitoba Branch of the Canadian Physiotherapy Association Home Page. Available at: http://www.mbphysio.org/. Accessed July 14, 2009. 20. Ontario  Physiotherapy Association (OPA). OPA Home Page. Available at: http://www.opa.on.ca/. Accessed July 14, 2009. 21. The Nova Scotia Physiotherapy Association. The Nova Scotia Physiotherapy Association Home Page. Available at: http://www.physiotherapyns.ca/. Accessed July 14, 2009. 22. Physiotherapy Associations of New Brunswick PEI, and Newfoundland and Labrador. Atlantic Provinces' Physiotherapy Associations Home Page. Available at: http://www.physiotherapynb.ca/en.php. Accessed July 14, 2009. 23. Scott-Findlay S, Golden-Biddle K. Understanding How organizational culture shapes research use. J Nurs Adm. 2005;35(7/8):359-365. 24. Van der Wees P, Hendriks E, Mead J, Rebbeck T. WCPT: International collaboration in clinical guideline development and implementation. Paper presented at: 15th International Congress of the World Confederation for Physical Therapy, 2007; Vancouver, Canada. 25. Dean E. Psychobiological adaptation model for physical therapy Practice. Phys Ther. 1985;65(7):1061-1068.   119 26. Cott C, Finch E, Gasner D, et al. The movement continuum theory of physical therapy. Physiother Can. 1995;47(2):87-95. 27. Darrah J, Loomis J, Manns P, et al. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice. Physiother Theory Pract. 2006;22(5):239-250. 28. Wessel J, Williams R, Cole B. Physical therapy students' application of a clinical decision-making model. The Internet Journal of Allied Health Sciences and Practice. 2006;4(3):1-11. 29. Rothstein JM, Echternbach JL. Hypothesis-orient algorithm for clinicians: a method for evaluation and treatment planning. Phys Ther. 1986;66:1388-1394. 30. American Physical Therapy Association. Guide to physical therapist practice. Second ed. Alexandria, VA: American Physical Therapy Association; 2003. 31. National Physiotherapy Advisory Group, Accreditation Council of Canadian Physiotherapy Academic Programs, Canadian Alliance of Physiotherapy Regulators, et al. Essential competency profile for physiotherapists in Canada 2004. 32. World Health Organization. International classification of functioning, disability and health: ICF. Geneva: World Health Organization; 2001. 33. College of Physical Therapists of British Columbia (CPTBC). CPTBC Home Page. Available at: http://www.cptbc.org/. Accessed July 14, 2009. 34. College of Physical Therapists of Alberta (CPTA). CPTA Home Page. Available at: http://www.cpta.ab.ca/. Accessed July 14, 2009.   120 35. Saskatchewan College of Physical Therapists (SCPT). SCPT Home Page. Available at: http://www.scpt.org/. Accessed July 14, 2009. 36. College of Physiotherapists of Manitoba (CPM). CPM Home Page. Available at: http://www.manitobaphysio.com/. Accessed July 14, 2009. 37. 
College of Physiotherapists of Ontario (CPO). CPO Home Page. Available at: http://www.collegept.org/. Accessed July 14, 2009. 38. College of Physiotherapists of New Brunswick (CPNB). CPNB Home Page. Available at: http://www.cptnb.ca/publicE.html. Accessed July 14, 2009. 39. Prince Edward Island College of Physiotherapists (PEICP). PEICP Home Page. Available at: http://www.physiotherapynb.ca/english/College.htm. Accessed July 14, 2009. 40. Nova Scotia College of Physiotherapists (NSCP). NSCP Home Page. Available at: http://nsphysio.com/. Accessed July 14, 2009. 41. Government of  Newfoundland and Labrador QsP. Physiotherapy Act of Newfoundland and Labrador. Available at: http://canlii.org/nl/laws/sta/p- 13.1/20080715/whole.html. Accessed July 14, 2009. 42. Government of  Newfoundland and Labrador QsP. Physiotherapy Regulations of Newfoundland and Labrador. Available at: http://www.canlii.org/nl/laws/regu/c2007r.60/20090324/whole.html. Accessed July 14, 2009.     121 43. Government of Alberta QsP. Alberta Physical Therapy Profession Act and General Regulation. Available at: http://www.qp.alberta.ca/570.cfm?frm_isbn=9780779737802&search_by=link. Accessed July 14, 2009. 44. Government of  Manitoba QsP. The  Physiotherapists Act of Manitoba. Available at: http://web2.gov.mb.ca/laws/statutes/ccsm/p065e.php. Accessed July 14, 2009. 45. Government of  Ontario QsP. Physiotherapy Act, 1991 of Ontario. Available at: http://www.e- laws.gov.on.ca/html/statutes/english/elaws_statutes_91p37_e.htm. Accessed July 14, 2009. 46. College of Physiotherapists of New Brunswick (CPNB). Physiotherapists Act of New Brunswick. Available at: http://www.cptnb.ca/physioactE.html. Accessed July 14, 2009. 47. Government of  Nova Scotia QsP. Physiotherapy Act of  Nova Scotia. Available at: http://www.gov.ns.ca/legislature/legc/. Accessed July 14, 2009. 48. Government of  Prince Edward Island QsP. Physiotherapy Act of  Prince Edward Island. Available at: http://www.physiotherapynb.ca/english/documents/PhysiotherapyAct.pdf. Accessed July 14, 2009.     122 49. College of Physical Therapists of British Columbia (CPTBC). CPTBC Regulations. Available at: http://www.qp.gov.bc.ca/statreg/reg/H/HealthProf/288_2008.htm. Accessed July 14, 2009. 50. Saskatchewan College of Physical Therapists (SCPT). SCPT Administrative Bylaws. Available at: http://www.scpt.org/docs/pdf/adminbylaws.pdf. Accessed July 14, 2009. 51. Saskatchewan College of Physical Therapists (SCPT). SCPT Regulatory Bylaws. Available at: http://www.scpt.org/docs/pdf/RegBylawsSept23.pdf. Accessed July 14, 2009. 52. College of Physical Therapists of Alberta (CPTA). CPTA Practice Standards for Physical Therapists. Available at: http://www.cpta.ab.ca/resources/guidelines_CPS.pdf. Accessed July 14, 2009. 53. Schmitt JS, Di Fabio RP. Reliable change and minimum important difference (MID) proportions facilitated group responsiveness comparisons using individual threshold criteria. J Clin Epidemiol. Oct 2004;57(10):1008-1018. 54. Rogosa DR, Brandt D, Zimowski M. A growth curve approach to the measurement of change. Psychol Bull. 1982;92:726-748. 55. Zumbo BD. The simple difference score as an inherently poor measure of change: Some reality, much mythology. In: Thompson B, ed. Advances in Social Science Methodology. Vol 5: JAI Press; 1999:269-304.   123 56. Chen C, Hogg-Johnson S, Smith P. The recovery patterns of back pain among workers with compensated occupational back injuries. Occup Environ Med. 2007;64:534-540. 57. College of Physical Therapists of Alberta (CPTA). 
CPTA Disability Management of Injured Workers Resources. Available at: http://www.cpta.ab.ca/resources/publications_disabilitymanagement_main.shtml. Accessed July 14, 2009.
58. Canadian Physiotherapy Association (CPA). CPA Professional Development Survey Results. Available at: http://www.physiotherapy.ca/public.asp?WCE=C=47|K=229503|RefreshT=229495|RefreshS=LeftNav|RefreshD=2294957. Accessed July 14, 2009.
59. Sran M. The PABC web-based outcome measures system: promoting the use of outcome measures and province-wide data collection by members. Paper presented at: World Confederation for Physical Therapy Congress, 2007; Vancouver, Canada.
60. Vernon H, Mior S. The Neck Disability Index: a study of reliability and validity. J Manipulative Physiol Ther. Sep 1991;14(7):409-415.
61. Tunnacliffe R. PABC outcome database utilization. In: Kozlowski AJ, ed; 2009: Report of records submitted to PABC Outcome Database.
62. Mousseau M. Inquiry - CPA and ICF. In: Kozlowski AJ, ed; 2009: At the present time CPA does not have a position statement on the ICF model of health.
63. American Physical Therapy Association. APTA Webpage for the International Classification of Functioning, Disability, and Health (ICF). Available at: http://www.apta.org/AM/Template.cfm?Section=Info_for_Clinicians&TEMPLATE=/CM/ContentDisplay.cfm&CONTENTID=51425. Accessed March 19, 2009.
64. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. Second ed. Oxford: Oxford University Press; 1995.
65. MacAuley C. TBA: Recovery curves in arthritic population. In preparation. 2009.
66. Government of Yukon DoCR. Physiotherapy Regulation Web Page. Available at: http://www.community.gov.yk.ca/consumer/physioreg.html. Accessed July 14, 2009.

CHAPTER 4. OUTCOME EVALUATION IN ORTHOPEDIC PHYSICAL THERAPY: APPLICATION OF AND REFLECTION ON A SIMPLE METHOD TO QUANTIFY CLINICAL PRACTICE‡
‡ A version of this chapter will be submitted for publication. Kozlowski, A.J., Dean, E., Horner, S. Outcome Evaluation in Orthopedic Physical Therapy: Application of and Reflection on a Simple Method to Quantify Clinical Practice.

4.1 BACKGROUND
By knowing the outcomes of their interventions, clinicians including physical therapists can better inform their clinical reasoning. The use of a system of outcome measurement and evaluation can facilitate this process1 by transforming a practitioner's vague recollection and perceptions of past patient treatment responses into concrete quantities similar to those used by a researcher. Reflection on clinical outcomes can facilitate decision making, professional development, and communication with peers and other stakeholders.

Literature on outcome measurement in rehabilitation has grown over the past few decades. Standardized disability questionnaires have been designed and validated to evaluate functioning at multiple levels of the International Classification of Functioning, Disability, and Health (ICF). In orthopedic physical therapy (PT) practice, measures are available that relate primarily to activity limitations resulting from conditions of the lumbar spine,2,3 cervical spine,4,5 and the upper6,7 and lower8,9 extremities. Recently, stakeholders have demonstrated increasing awareness of the value of using standardized questionnaires to measure outcome by mandating documentation changes requiring the reporting of scores from assessments of functioning and by using outcome data to inform reimbursement methods.10,11 Professional associations have developed a variety of resources to support implementation of outcome measurement and evaluation processes in clinical practice.12-15 Although these resources can inform the selection of measures appropriate to clinical populations, they do not necessarily guide the administrative or evaluative procedures that are needed to fully benefit from adoption of an outcome measurement and evaluation system in practice.

Outcome evaluation has been defined as "the systematic evaluation of the impact of a program or intervention to determine whether it meets its objectives."12 In the United States, the Guide to Physical Therapist Practice, Second Edition13 advocates that physical therapists measure and evaluate their clinical outcomes,13 but the description of this process is limited primarily to measurement. Specifically, the iterative process of outcome evaluation, interpretation, and application of the results in clinical decision making is not clearly described. Recent literature has provided some guidance with respect to these procedures.16-18 Although methods and results of outcome evaluation studies have been reported,19,20 these may be insufficient to guide the individual practitioner to fully exploit outcome measurement in practice,21,22 and to reflect on the effectiveness of interventions.1 Another option is to subscribe to a private database service for collection of outcome data, evaluation, and reporting of results.23 Although evaluations conducted by service providers can provide robust analysis and cross-facility comparison by applying complex statistical and risk-adjustment methods, reporting of outcomes at this level may not help the clinician examine his or her outcomes and reflect on an individual patient or groups of patients.

The primary purpose of this study was to describe a method to operationalize an iterative process of outcome evaluation, interpretation, and application of results in clinical decision making described in the literature,12,13,16 within the context of the practice of an individual practitioner. We define the processes involved with selection and application of outcome measures and evaluation of the results as applied to a sample of patients, and discuss the value and utility of the process as a whole in light of clinical reasoning and decision-making. The secondary purpose was to describe the experience of the practitioner in implementing a change in practice to measure and evaluate clinical outcomes. Descriptions are provided of the circumstances under which the practitioner elected to pursue methodically measuring and evaluating her clinical outcomes and of the methods of implementing the measurement process. The results provide a description of the impact of implementing outcome measurement and evaluation processes on her practice clinically and professionally.

Terminology relating to elements of PT practice is based on the Guide, and terminology relating to functioning is based on the ICF, with the exception of instances where the proper names of questionnaires include the term disability (e.g., Neck Disability Index). The application of these questionnaires, however, is described using the relevant ICF terminology of activity or activity limitations.
This is based on the assumption that, although these scales include items relating to impairments of Body Structures and Functions or to Participation, the majority of items are at the Activity level of the ICF.

4.2 METHODS
This study describes the rationale and experience, measurement process, and evaluation process applied to clinical outcome data collected on a sample of consecutive patients seen by one of the investigators (SH, a physical therapist with 10 years of experience) in an outpatient department of a large multi-site hospital system. The institutional ethical review committee and the University of British Columbia Behavioural Review Board provided approval for this study (Appendix C), which met the Health Insurance Portability and Accountability Act requirements for disclosure of protected health information. Signed consent to treatment was required from patients prior to the initiation of PT services. To protect the privacy of patients, no personal identifiers were included in the analysis of demographic or intervention information, or the resulting outcome data. Patients in the subject's practice were classified with a range of disorders of the back, neck, and upper and lower extremities, many of which could be assessed with a small battery of standardized questionnaires.

4.2.1 Practitioner's Rationale for Adoption Decision
The clinician reported being motivated to pursue the adoption of standardized questionnaires to measure clinical outcome by two converging circumstances. She reported a growing curiosity to know how she was performing professionally and to self-assess her strengths and weaknesses. Such self-assessment could guide decisions to improve her clinical skills, but required an understanding of change in her patients' status during their Episodes of Care on more than an intuitive level. Concurrently, the facility in which she practiced was implementing a quality improvement initiative. Adopting questionnaires to measure outcome appeared to be consistent with the philosophy of this quality initiative, but was still new in the professional literature. Despite this apparent congruence, organizational barriers existed. These included the lack of standards to guide selection of measures, collection of data, and risk-adjustment methods to address variability in case-mix. Without such standards, the potential for error or misinterpretation, such as false-negative results, was seen to outweigh the potential benefits to the organization. The clinician also reported beliefs expressed by colleagues that "every patient had a good outcome" and that "no one else was doing it" as reasons not to pursue outcome measurement. Consequently, implementation of this process within the facility was based on the clinician's personal motivation, perseverance, and desire to make a contribution to professional knowledge.

4.2.2 Measurement Process
The administrative process for data collection was implemented by a physical therapist, one of the authors (SH), who was motivated to do so independently rather than as a requirement of institutional policy. She selected valid and reliable standardized questionnaires based on a review of the literature. She defined a range of outcome and independent variables and created a database for data collection and analysis.
Four standardized questionnaires were selected from the literature based on evidence of reliability and validity to measure change in activity limitations over time, prevalence of use, and apparent applicability to the subject's patient population. They were the Neck Disability Index (NDI),4,5 the Revised Oswestry Disability Index (ODI),2,3 the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire,6,7 and the Lower Extremity Functional Scale (LEFS).8,9

A database was created using Microsoft Access software (Microsoft Access 2002, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052-6399) that included 12 variables, two of which were derived by calculation from collected variables. Demographic data included the age and sex of each patient. Utilization data were collected for Examination date, Discharge date, and Number of visits. Duration of the Episode of Care was calculated in days from the Examination and Discharge dates. Two variables were collected that related to each patient's status: Stage of healing, and Reason for discharge or discontinuance. Three collected variables related to the questionnaires. These were Name of the questionnaire, Initial score, and Discharge score. Change scores were calculated from the Initial and Discharge scores. Operational definitions of these variables are provided in Appendix A-1.

4.2.3 Evaluation Process
The subsequent evaluation process, which was developed and applied independently by one of the authors (AK), was influenced by the defined variables, the data collected, and the availability of change indices and statistics from the literature. The purpose was to provide an approach to evaluation of outcome data collected clinically using appropriate methods derived from research. To this end, analytical methods reported in research studies and applicable at the individual practitioner level were selected.

The evaluation process was developed by partitioning the data into logically and clinically meaningful subsets. Logically, the data were collected using four measures with non-equal metrics, thus partitioning by questionnaire was necessary. Scores from cases that presented with ceiling or floor effects may represent invalid application of the questionnaire for those patients, where change scores cannot be effectively calculated.12 Clinically, the data set was partitioned based on whether data were available for the Examination only or for both Examination and Intervention elements of the Episode of Care, representing the different decision-making processes inherent in the point-in-time diagnostic analysis and the evaluation of change over time. We also looked for differences in the subsets of patients who did or did not respond to the Intervention. The resulting evaluation process included six steps, each disclosing information that could inform the clinician about the nature of subsets of the sample of patients and provoke reflection on the impact of decisions made at junctures in the Episodes of Care on the clinical outcome (Figure 4-1). This outcome evaluation process is intended for use by physical therapists in any clinical practice setting in which similar processes for measurement and data collection have been implemented at two time points such as admission and discharge.
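To illustrate the partitioning logic just described, a minimal sketch in Python follows. It assumes a simplified, hypothetical record structure; the field names, the single-visit criterion for Examination-Only records, the MDC values, and the convention that scores are expressed so that lower values indicate less activity limitation are illustrative assumptions rather than the study's actual data or thresholds.

```python
# Minimal sketch of the data-partitioning steps (hypothetical records and thresholds).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    questionnaire: str          # "NDI", "ODI", "DASH", or "LEFS"
    visits: int                 # number of visits in the Episode of Care
    initial: Optional[float]    # score at Examination; lower = less limitation (assumed)
    discharge: Optional[float]  # score at Discharge, if collected

MDC = {"NDI": 10.0, "ODI": 10.0, "DASH": 10.0, "LEFS": 9.0}  # placeholder MDC values

def partition(records):
    """Split records into the clinically meaningful subsets used in the evaluation."""
    exam_only = [r for r in records if r.visits <= 1]        # Examination-Only records
    intervention = [r for r in records if r.visits > 1]
    complete = [r for r in intervention
                if r.initial is not None and r.discharge is not None]
    incomplete = [r for r in intervention
                  if r.initial is None or r.discharge is None]
    # Ceiling effect: initial score already within one MDC of the best possible score,
    # so measurable improvement cannot be demonstrated on this questionnaire.
    ceiling = [r for r in complete if r.initial < MDC[r.questionnaire]]
    case_valid = [r for r in complete if r.initial >= MDC[r.questionnaire]]
    by_questionnaire = {}
    for r in case_valid:                                     # partition by measure
        by_questionnaire.setdefault(r.questionnaire, []).append(r)
    return exam_only, incomplete, ceiling, by_questionnaire

records = [
    Record("NDI", 1, 38, None), Record("NDI", 8, 44, 16), Record("ODI", 6, 8, 4),
    Record("LEFS", 10, 52, 20), Record("DASH", 5, 30, None), Record("ODI", 7, 46, 22),
]
exam_only, incomplete, ceiling, by_q = partition(records)
print(len(exam_only), len(incomplete), len(ceiling), {k: len(v) for k, v in by_q.items()})
```

In practice, the same subsets would be drawn from the clinician's database export rather than hard-coded records, and the Examination-Only, incomplete, and ceiling subsets would be compared with the case-valid subset on selected variables before any change indices were computed.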
A physical therapist would have to accumulate data from a sufficient number of patients to evaluate outcome for groups of individuals; a larger pool of   132 data would permit drilling down to smaller subsets, such as condition-specific groups. This process should be applicable to data from any standardized questionnaire for which a minimum change threshold has been reported, e.g., minimal detectable change (MDC) or minimal clinically important difference.16 Although we feel this process provides a meaningful way of systematic analysis of outcome data, it is one of many ways of examining such data.  4.2.3.1  Post-Diagnostic Analysis Patient records were first categorized as Examination-Only or Intervention. Records representing Examination-Only visits are conceptually and practically different than those with multiple visits. Conceptually, the analysis of Examination findings conducted in the Evaluation element of practice determines whether or not clinical evidence indicates PT Intervention is warranted. Intervention may not proceed, for example, in the presence of red flags, with inappropriate referral diagnoses, or in circumstances of high risk for poor outcome. A single visit may also represent a successful intervention that may not require follow-up (e.g., self- management education). Practically, a patient presenting as a good candidate for Intervention may choose not to proceed or may be limited by geographical or financial barriers. And, since questionnaire data for single-visit records cannot be assessed for change over time, post-diagnostic analysis was identified as an important point to partition, compare, and reflect on the data.     133 4.2.3.2  Data Integrity Next, records were classified as being complete or incomplete, based on whether Initial and Discharge scores were present or not, as both are required to calculate a change score. Incomplete records may indicate patients who discontinued treatment, failure of administration of the questionnaire at either Examination or Discharge visits, or other circumstance. Complete and incomplete subsets were compared for differences on selected variables.  4.2.3.3  Case Validity Although outcome data may be complete, some cases may represent invalid application of a standardized questionnaire to an individual.24 The presence of ceiling effects is problematic specifically where the prognosis is for measurable improvement, as are floor effects where the prognosis is for deterioration.12 A ceiling effect was defined for records where the initial score was less than the MDC value for the questionnaire from the highest-functioning end of the scale. Although other methods exist to accurately estimate ceiling effects this approach can be easily applied by any clinician. A floor effect was not defined as all patients in the sample were expected to improve. Where a clinician’s case-mix includes patients where the prognosis is for deterioration and the purpose of Intervention is to prevent or mitigate deterioration, the outcome data would also need to be partitioned as such.      134 4.2.3.4 Body Region/Questionnaire At this step, remaining data were partitioned by standardized questionnaire (NDI, ODI, DASH, or LEFS). Table 4-1 summarizes their measurement properties. These questionnaires estimate Activity limitations related to conditions of four discrete body regions of the neck, back, upper extremity, and lower extremity, respectively. 
Although three of the scales range from 0 to 100, they have not been validated as being equivalent, and thus required partitioning of data by body region/questionnaire. Further partitioning could be done by joint or diagnostic category if sufficient data were available. Use of other standardized questionnaires that are more specific in application may require further partitioning if, for example, the standardized questionnaire is specific to functioning associated with specific joints, such as the hip, knee, shoulder, or wrist, or to a condition like arthritis or stroke. Three variables were associated with questionnaire scores: the score at Examination, the score at Discharge, and the amount of change derived from their difference. The outcome completion proportion (OCP) for each questionnaire was also calculated. This proportion represents the number of records with complete change scores divided by the total number of case-valid records for that questionnaire. We feel this value aids interpretation in a number of ways. It may provide an indication of the risk for selection bias in the sample where either the best or worst outcomes are systematically excluded from evaluation. Obtaining discharge outcome scores on all patients has been a challenge,19,25,26 and failure to do so risks selection bias and misinterpretation of the outcome evaluation. It provides an indication of confidence in the extent to which the change indices reflect the clinician's practice. Where the OCP is high, the change indices will reflect the clinician's practice with that subset of patients. Also, it may indicate the effectiveness of administrative processes. For instance, a low OCP may help to identify where the planned timing of questionnaire administration has failed to result in completed questionnaires.

4.2.3.5 Change Indices
This step involved determining mean change, effect size (ES),12,27 and reliable change proportion (RCP)16 for subsets of outcome data grouped by the four regional questionnaires. Effect size is an index of group change, defined as the mean change of the group divided by the standard deviation of the baseline score.27 Group change indices like ES combine patient scores that improved, worsened, or did not change, which may mute the magnitude of change, yielding the modest ES values seen in previous studies.19,20,26 This ES statistic was selected based on analysis of among-patient change in a heterogeneous patient sample.28 ES values of approximately 0.20, 0.50, and 0.80 have been interpreted as small, medium, and large, respectively;29,30 however, this statistically derived index does not typically incorporate a threshold of clinical importance for non-randomized study designs.30 The RCP represents the proportion of complete records with change scores exceeding the MDC for that questionnaire,3 which has been described as a clinically intuitive index of responsiveness.16 This individual change index differentiates patients who improved a measurable amount from those who did not. The MDC provides the best likelihood of correctly classifying a patient as having a true positive change on a given outcome measure.16 MDC values reflect both the characteristics of subjects included in a study group and the confidence interval from which they were calculated. We selected moderate MDC values based on study groups with mixed orthopedic conditions and 90% confidence intervals to balance the risks of misclassifying patients as having changed or not.
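The three indices, together with the ceiling-effect screen described under Case Validity, reduce to a few lines of arithmetic. The following is a minimal sketch assuming change scores have already been oriented so that positive values indicate improvement; the MDC90 and scale-ceiling constants echo Table 4-1 and Figure 4-1 but should be treated as illustrative:

```python
from statistics import mean, stdev

# Illustrative constants (see Table 4-1); treat these values as assumptions of this sketch.
MDC90 = {"NDI": 9.0, "ODI": 10.5, "DASH": 10.7, "LEFS": 9.0}
SCALE_MAX = {"NDI": 100, "ODI": 100, "DASH": 100, "LEFS": 80}


def ceiling_effect(questionnaire: str, initial_score: float) -> bool:
    """Initial score lies within one MDC of the highest-functioning end of the scale."""
    return initial_score > SCALE_MAX[questionnaire] - MDC90[questionnaire]


def change_indices(questionnaire, initial_scores, change_scores, n_case_valid):
    """OCP, mean change, ES, and RCP for one questionnaire's case-valid records."""
    complete = [(i, c) for i, c in zip(initial_scores, change_scores) if c is not None]
    ocp = 100.0 * len(complete) / n_case_valid              # outcome completion proportion
    baselines = [i for i, _ in complete]
    changes = [c for _, c in complete]
    es = mean(changes) / stdev(baselines)                   # mean change / SD of baseline scores
    rcp = 100.0 * sum(c >= MDC90[questionnaire] for c in changes) / len(changes)
    return {"OCP": ocp, "mean_change": mean(changes), "ES": es, "RCP": rcp}
```

For example, 62 complete DASH records with a mean change of 20.7 points against a baseline standard deviation of roughly 20 points would reproduce an ES of about 1.0, consistent with the values reported below.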
The RCP represents a ratio of patients who have demonstrated a true improvement (responders) to those who have not (non-responders). Interpretation criteria were arbitrarily set as follows: acceptable >50%, satisfactory >66%, above satisfactory >75%, and outstanding >90%. This was based on assumptions that more than half of Intervention patients who are expected to improve should demonstrate measurable improvement, that questionnaires have varying degrees of validity and reliability to measure change with any individual, and that normative standards to validate these thresholds have not yet been established.

4.2.3.6 Response Comparison
This step compared differences between responders and non-responders on selected variables. Independent variables were defined for age, sex, Episode of Care duration, Number of visits, Stage of healing, and Reason for discharge/discontinuance (Appendix A-1). Descriptive statistics were reported for all variables (mean, standard deviation, median, and/or frequency). After partitioning patient records into responders and non-responders, differences between these groups were examined using t-test or chi-square statistics, with an alpha value of 0.01. A Bonferroni correction yielded an alpha value of 0.001, which was considered too stringent a safeguard against a Type I error, unduly increasing the risk of a Type II error given the exploratory design of this evaluation process.

4.3 RESULTS
This evaluation process provided a systematic means of partitioning the data on these patients' responses to treatment into logical, clinically meaningful subsets (Figure 4-1). Information generated at each step can provide the clinician with opportunities for reflection. A sample of cues to reflection has been included in Appendix A-2. Patient identifiers were removed from the records prior to exporting the data file to SPSS Version 12.0 for Windows,** which was used to conduct the analyses, although the process was developed for use with Microsoft Excel® software.†† Linear regression-based methods including analysis of variance are appropriate for evaluating between-group differences and interactions. Such methods may not be readily available or familiar to practitioners. Functions for the t-test and chi-square test of independence, however, are available and can be easily applied by practitioners with access to Excel® software and possibly with other spreadsheet applications.
** Statistical Package for Social Sciences Version 12.0.0, SPSS Inc., 233 S. Wacker Drive, Chicago, IL 60606-6307
†† Microsoft Excel 2002, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052-6399
Data were collected on all patients seen by one of the authors (SH) from July 2002 to April 2004. Patients were referred by physicians and completed the relevant standardized questionnaire for the diagnosis on their initial visits. The sample included 296 individuals, of whom 58.8% were women, with a mean (standard deviation) age of 44.2 (18.1) years. Most patients were in the subacute and chronic stages of healing (Table 4-2, column 2). Discharge/discontinuance reason data were missing for 23 records, as this variable was added subsequent to initiating data collection. The manner in which the data were partitioned at each step of the outcome evaluation process appears in Figure 4-1.

4.3.1 Post-Diagnostic Assessment
First, records were partitioned into Examination-Only and Intervention based on results from the clinical evaluation.
Of the total, 8% of patients were seen for an Examination Only, of whom 80.9% were discharged for clinical reasons determined by the subject (Table 4-2, column 2). For all 271 Intervention patients, the Episode of Care had a mean (standard deviation) duration of 32.6 (26.5) days and 11.3 (9.0) visits. Both distributions were positively skewed, with maximum values of 151 days and 68 visits, respectively.

4.3.2 Data Integrity
Next, Intervention records were partitioned as having complete or incomplete outcome data. The 61 patients with incomplete records were younger (39.1 vs. 45.8 years, p=.010), had shorter duration (24.8 vs. 35.0 days, p=.007), and fewer visits (8.5 vs. 12.2 visits, p=.004) than those with complete records, but did not differ for sex (p=.10) or stage of healing (p=.50) (Table 4-2, column 4). Discharge reasons were coded for 43 of the incomplete records. Of the complete records, 94% were for discharged patients, of whom 61.7% had met their goals (Table 4-2, column 5). Two records had discharge scores but were labeled with discontinuance rather than discharge reasons, suggesting a coding error.

4.3.3 Case Validity
Four records were partitioned from the data set due to ceiling effects, all on the LEFS.

4.3.4 Body Region/Questionnaire
Complete records on the four questionnaires ranged from 26 (NDI) to 62 (DASH). The OCP ranged from 65.5 (LEFS) to 92.3 (NDI) (Figure 4-1). No further partitioning of data was done by joint, diagnosis, or other type of classification due to the small number of records.

4.3.5 Change Indices
The 26 NDI records had a mean improvement of 13.1 and a standard deviation of 10.4 NDI points (Table 4-3). The median value of 23 indicated that Duration was positively skewed. Fewer patients presented as acute (19.2%). The magnitude of group change was large (ES=1.1) and individual change was above satisfactory (RCP=76.9%). The 62 cases with DASH records had a mean improvement of 20.7 and a standard deviation of 21.5 DASH points (Table 4-4). Distributions for change score, duration, and visits appeared positively skewed while age appeared negatively skewed, based on mean/median differences. Most patients presented as subacute (48.4%). The magnitude of group change was large (ES=1.03) and individual change was acceptable (RCP=62.9%). The 61 cases with ODI records had a mean improvement of 16.7 and a standard deviation of 16.7 ODI points (Table 4-5). All distributions appeared normal based on mean/median comparison. Most patients presented as chronic (41.0%). The magnitude of group change was large (ES=1.21) and individual change was above satisfactory (RCP=82.0%). The 58 cases with LEFS records had a mean improvement of 19.2 and a standard deviation of 14.9 LEFS points (Table 4-6). Duration appeared to be positively skewed. Most patients presented as subacute (43.1%). The magnitude of group change was large (ES=1.19) and individual change was satisfactory (RCP=72.4%).

4.3.6 Response Comparison
When comparing responders and non-responders on the NDI, no differences in patient characteristics were found; however, this may be due to the small sample size. Responders on the DASH reported greater initial levels of activity limitation (54.1 vs. 72.0 DASH points, p<.0005) and were more acute (33% vs. 13%, p=.007). They had interventions with longer duration (42.1 vs. 25.3 days, p=.005) and more visits (15.1 vs. 8.9, p=.008). Responders and non-responders on the ODI did not differ on patient characteristics.
Responders on the LEFS had lower initial activity scores (38.6 vs. 52.6 LEFS points, p=.002).   141 4.3.7 Impact on the Clinician The clinician reported having experienced different impacts from the experience of implementing the measurement process and from of the evaluation described in this study. The evaluation data provided in this study offered a perspective similar to that of a research study, and individual evaluation of the raw data as it was collected offered a perspective at the patient-level. At the aggregate level, the clinician reported dissatisfaction with her outcomes for patients with back- related disability, the larger of her clinical populations, despite the apparently reasonable values for the change indices. This dissonance led her to delve deeper into the data and led her to make changes to her approach to diagnostic analysis and intervention planning and treatment for patients with low back conditions. Despite a mean change of almost 17 ODI points with an ES of 1.2 and a RCP of 82%, the clinician calculated a mean discharge score of 37.4 ODI points (mean discharge scores were not provided), and felt that represented a substantially higher than satisfactory level of disability for patients who were ending Intervention. Values reported by Childs and colleagues in validating a clinical prediction rule for manipulation to treat low back pain conditions showed reductions in ODI mean scores from 40 to about 15 points following 1 or two treatments over one week.31 Regardless of case-mix differences, this discrepancy spurred the clinician to re-think what she was doing across the Episode of Care for this patient population. The clinician’s dissatisfaction with these discharge outcomes led her to collect additional data on units of treatment intervention for patients with low back pain conditions. With this she confirmed the high frequency at which she was using   142 procedures such as hot packs, cold packs, and traction, and that she was using little to no manual therapy where it might be indicated. This awareness arose from review of the growing evidence supporting a treatment based classification system for low back pain and self-reflection on how this evidence related to her practice. This, in conjunction with a critical appraisal of her diagnosis analysis process, choices in Intervention strategies and tactics (type, frequency, and timing of use), and other decisions across the Episodes of Care, may have contributed to attaining less than optimal outcomes for this patient subset. Integration of evidence on classification and using strategies and tactics supported by this evidence might result in such patients being discharged with ODI scores in the 12-15 point range. The outcome evaluation led her to dig deeper to better understand the physical therapy experience that led to the aggregate outcomes. This created an opportunity for self-reflection and professional growth with improvement of the patients’ experience and outcomes being the primary focus. She integrated a treatment based classification system for low back pain and a clinical prediction rule for manipulation into practice, pursued continuing education on lumbar manipulation, developed confidence in understanding and applying statistical information from literature like sensitivity, specificity and likelihood ratios into decisions on Intervention strategies.  
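The likelihood-ratio reasoning she describes can be illustrated with the standard conversion from pre-test to post-test probability; the numbers below are purely hypothetical and are not drawn from her data or from the cited prediction rule:

```python
def positive_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)


def post_test_probability(pre_test_probability: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_odds = pre_test_probability / (1.0 - pre_test_probability)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)


# Illustrative only: a finding with 80% sensitivity and 80% specificity (LR+ = 4.0)
# raises a 45% pre-test probability to roughly 77%.
lr = positive_likelihood_ratio(0.80, 0.80)
print(round(post_test_probability(0.45, lr), 2))
```

Calculations of this kind are one way a clinician can weigh how strongly a positive examination finding should shift a decision toward a particular Intervention strategy.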
The process of outcome evaluation required a substantial investment of time and effort to capture data for a sufficient number of patients within each body region, and the results of the outcome evaluation were provided to the clinician after an additional period of months. However, the clinician developed a different perspective   143 in the value of the use of outcome measurement tools following review of the evaluation. Immediate analysis of the individual-level data provided a better understanding of the patients’ status at admission and over time. She developed an ability to relate the scores on the questionnaires to the patients’ impairments, demonstrated Activity, and reported Participation status. She reviewed other variables such as the number of visits, initial, discharge, and change scores, and consciously thought about how the values related to the patients’ self-reported ability status and her perception of change over the Episodes of Care. She compared change scores to MDC and clinically important difference thresholds, but also looked at the discharge scores as indicators of residual disability. She related the Intervention strategies and tactics provided and the patients’ responses throughout the Episodes of Care in order to generate meaning of the questionnaire scores. The clinician reported a number of byproducts of this critical thinking exercise. She noted having improved ability compare and contrast her patients’ outcomes in metrics comparable to those reported in peer-reviewed literature, thus providing a benchmark to evaluate her performance and make decisions about implementing published practice guidelines and study recommendations. Her ability to predict duration, number of visits, and thus make cost estimates of for services improved. She used the information to answer patient questions on expectations. She would reflect on why she chose interventions and developed consistency in making choices.  She could identify cases where she made convenient decisions based on familiar circumstances, and developed confidence to move outside her comfort zone to implementing evidence. She reports feeling that she makes decisions to treat/not   144 treat more quickly given a better grasp on how patients tend to respond with Intervention. She also noted having gained comfort with uncertainty and evolution in her practice. She stated, “I'm not sure, but it seems that the tools also help improve the relationship with the patient. The connection I am able to make with them improves, which may indirectly or directly improve their outcomes.” Essentially she appears to have developed an ability to relate the point-in-time and change scores to the patients’ demonstrated performance, and self-reports of how they function in their life roles and environments as a basis to make decisions on what to offer to improve their ability to function in their lives. Generally she reported having developed confidence during interactions with patients while integrating new interventions or research recommendations while accepting uncertainty. She described this as a very humbling yet gratifying experience. Professionally, the clinician made choices in continuing education based on review of outcome data and implemented evidence for Examination and Intervention of patient with low back pain conditions. Conversely, awareness of less satisfactory outcomes with patients presenting with upper extremity conditions did not readily lead to as quick and gratifying a change in practice. 
Because the outcome evaluation aggregated data for patients with all upper extremity conditions, she reported being less able to critically appraise her clinical approach for specific subsets of patients, such as those with shoulder conditions. The evaluation indicated change indices for the DASH questionnaire to be acceptable for the RCP and high for mean change and ES. However, when she looked at scores for patients with shoulder conditions, she noted less satisfactory DASH scores. This discrepancy raised additional questions. She wondered whether the reasons some patients were referred were related to their lack of response to treatment, whether patients referred after shoulder surgery respond to a level at which a minimal clinical difference is achieved, and whether categorizing DASH records into subsets specific to joint or diagnostic categories might provide better opportunities for self-reflection. Thus, the clinician decided more information was needed and began to collect data for patients with shoulder problems, incorporating a variation of the classification system described by Millar.32 She also became active in an online community and met colleagues with similar interests in outcome evaluation. She presented her findings at a professional conference33 and continues to respond to email requests for information on understanding outcomes. She has had the opportunity to trial an online tool for predicting and measuring change developed by a colleague.34 Many of the changes in practice reported by this clinician could have developed without the adoption of standardized questionnaires. Thoughtful implementation of research findings into practice is a cornerstone of evidence-based practice. And much of the insight she reported having gained is likely to have developed by routinely engaging in reflective activities. However, this clinician has reported a practical application of the interpretation of data from standardized questionnaires to link both evidence-based and reflective practice.

4.4 DISCUSSION
The outcome evaluation process reported in this analysis of a sample of patient data from a single physical therapist's practice can be used to guide clinical decision-making by providing a framework for reflection on outcome data for groupings of individual patients. Examining subsets of data methodically partitioned within the sample enabled the subject to identify areas of service excellence and opportunities for improvement. Reflection on both the outcomes and the administrative processes used across Episodes of Care enabled the subject to differentiate processes and decisions resulting in optimal and less optimal service provision. This further enabled the subject to provide more effective and efficient services, facilitated communication with stakeholders, and guided choices in professional development such as conferences and continuing education. Analysis of this type, however, is done long after the discharge of the patients whose data are included. Review of and reflection on individual cases throughout the Episode of Care provides a more immediate but less comprehensive perspective on dimensions of the clinician's practice. Examination of the data by the physical therapist guided reflection on some critical decision-making points in the Episode of Care.
Review of Examination-Only patients identified that most (81%) were discharged by the subject for clinically relevant reasons such as identification of pathology requiring a physician’s attention or attainment of an Intervention goal within the single visit. Discontinuance represented 19% of the reasons to end of the Episodes of Care, half of which were coded as insurance and attendance issues. Data integrity evaluation identified an overall OCP of 77.5% which exceeded that of some large studies.19,25,26 Comparison   147 of complete and incomplete records indicated differences which may represent a selection bias. For example, patients who were younger and/or female were more likely to discontinue before the planned discharge. Interventions may have been too long for some responders, or non-responders may not have valued the service, perceived a discrepancy between their goals and the subject’s clinical objectives, decided further treatment was not necessary, or that the patient-therapist relationship was unsatisfactory. Incomplete records with discharge reasons and complete records with discontinuance reasons could indicate records with coding errors. Reflection can be done at the group and individual patient levels with this type of evaluation. Between the questionnaire subsets, a lower OCP value was seen for the LEFS, whereas lower mean change, ES, and RCP values were seen for the DASH subset. The clinician reported having not administered the LEFS at discharge with some patients who had ‘clinically obvious’ outcomes; some who had attained high levels of demonstrated Activity and others for whom this appeared to have not changed or worsened. The risks of using a questionnaire with selected patients only to validate the clinician’s perception include introducing selection bias, decreasing the magnitude of OCP and change indices, and altering the response comparison findings by reducing both the number and variation of patient characteristics in those records. Lower change indices for the DASH subset might be indicative of a subset of patients with less potential for recovery, poorer outcome than was possible, generally poorer prognoses for upper extremity injuries, or heterogeneity of the   148 sample. Application of classification systems for low back conditions has been reported to improve matching interventions, thus outcomes.31,35,36 Advances in the clinically relevant stratification of upper extremity conditions could have improved our ability to interpret outcome data. Likewise, establishing profession-wide norms by standardizing processes for outcome evaluation could enhance interpretation. The response comparison found non-responders had lower initial activity limitation scores, longer duration/more visits, and more chronic conditions. This result might be due to relative ceiling effects, early identification of non-response, or less potential for improvement. Awareness of such factors by the clinician could contribute to attainment of greater gains in change or earlier decisions regarding discharge for future patients. Examining individual records of non-responders, in particular those with longest duration Episodes of Care, might also help the clinician identify opportunities for self- improvement. For instance, if the DASH non-responders had similar diagnoses, the clinician could review current best-practices for that condition. 
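One concrete way to act on that suggestion — sketched here with pandas and with hypothetical column names matching the record layout shown earlier, not the clinician's actual export — is to list the non-responders with the longest Episodes of Care for a given questionnaire and pull those charts for review:

```python
import pandas as pd


def flag_for_chart_review(records: pd.DataFrame, questionnaire: str,
                          mdc: float, top_n: int = 5) -> pd.DataFrame:
    """Return the non-responder records with the longest Episodes of Care."""
    subset = records[
        (records["questionnaire"] == questionnaire)
        & records["change_score"].notna()
        & (records["change_score"] < mdc)        # improved by less than the MDC
    ]
    return subset.sort_values("duration_days", ascending=False).head(top_n)


# e.g., the five longest-duration DASH non-responders, assuming an MDC90 of 10.7 points:
# flag_for_chart_review(df, "DASH", mdc=10.7)
```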
Clinical prediction rules37 and clinical practice guidelines38-40 can provide best-practice evidence for development of prognoses and plans for groups of patients with similar diagnoses. However, prognoses and planning for an individual must be adjusted based initially on awareness of a wide variety of characteristics that could influence outcome, and subsequently on the patient’s response to intervention strategies. Tailoring of initial and ongoing prediction and planning may be facilitating through reflection. A clinician’s awareness of his or her patient’s outcomes may also facilitate communication with stakeholders and decisions on professional development   149 strategies. Communication with physicians or insurers on whether or how to proceed with a patient can be supported by the clinician’s past performance. Previous success with patients with poor prognoses can provide a rationale for proceeding with intervention. For patients with poor prognoses, lack of detectable change on a questionnaire supported by previous outcome evaluations may provide a rationale to consider discharge and/or other options. Through reflection on long-duration Episodes of Care, our subject identified some patients where a physician persistently referred patients back to PT despite lack of progress and others who demonstrated gradual improvement and eventual goal attainment. In both cases, our subject reported anecdotally that stakeholder agreement was facilitated by the use of questionnaire scores and MDC values to support recommendations for continued intervention or discharge. Reflection on outcomes can also guide choices for professional development. Our subject reported having made choices about continuing education courses and conference sessions that addressed examination and intervention upper extremity conditions based on our findings from this outcome evaluation process. The outcome evaluation process provided a basis for our subject to methodically partition information from almost 300 patients into subsets that guided reflection and influenced reasoning and clinical decision. Ideally, aggregation of data from all patients would provide a representation of the cumulative clinical experience of the physical therapist on one dimension of outcome based on the selected standardized questionnaires. This study established a baseline from which this   150 clinician could benchmark evaluation of future patient samples and a basis for clinical reflection. However, much of the reflective activity reported by the clinician was facilitated by the experience of implementing the measurement process. This evaluation was provided over two years after discharge of the first patients and many months after discharge of the last patients whose records were included in the analysis. She reported having engaged in three types of reflection described by Wainwright and colleagues: reflection in action, or in-the-moment self-awareness of patient interactions; reflection on specific action, or retrospective critical review of specific patient interactions; and reflection on professional experience. The evaluation did facilitate a change in approach to care of patients with low back pain conditions, but much of the value of the experience was not captured by this study. It is likely that both the processes of implementing questionnaires and of methodically partitioning data into meaningful subsets can facilitate reflection. 
However, the attributes of the clinician in terms of motivation, capacity for reflection, and willingness to change practice behaviors are important and perhaps necessary conditions.

4.4.1 Limitations
Limitations of this study relate to the scope, design, and analytic methods applied to the data. The scope was limited to a sample of patient records collected by one clinician, who was motivated to independently adopt the use of standardized questionnaires to measure clinical outcome. Some of those records were incomplete. The design was exploratory, and the findings can only be interpreted in light of this sample of patients and this clinician. Lack of comparison groups and external benchmarks (i.e., population norms or risk-adjustment standards) limits the degree to which the effectiveness of intervention can be interpreted. Incomplete records and missing data present a risk of selection bias. Use of t-test and chi-square statistics rather than multiple linear regression methods can only identify differences on individual variables independently. Differences due to chance, shared variance, interaction, or suppression cannot be readily extracted, leading to potential misinterpretation. Such comparisons can only be viewed as exploratory in nature, providing more information than an isolated anecdote rather than solid evidence of differences. Collection of standardized questionnaire scores at only two time points (Examination and Discharge) permits calculation of only a simple difference score. This may mask different paths of outcome, limiting the clinician's ability to interpret differences in individual responses to Interventions. Bias may also exist due to measure selection, administration, scoring, and interpretation by the clinician. Independent variables of importance may not have been captured, such as pain, fear, motivation, beliefs, and healthy or unhealthy behaviors.

4.4.2 Future Research
Standardization of outcome measurement and evaluation processes covering the depth and breadth of clinical PT practice would provide a sound basis for individual and group comparisons at the clinician, management, and systems levels. Establishing normative benchmarks for change indices would provide clinicians with an external standard. Developing methods to risk-adjust records to a "standard patient record" could reduce the issue of non-comparability of patients. Refinement of clinical reflection algorithms based on outcome evaluation could facilitate efficient yet systematic examination of the clinician's patients and their practices. Promoting administration of standardized questionnaires at multiple time points would enhance clinical decision-making during Intervention and subsequently in outcome evaluation. Administering questionnaires over the course of a patient's Intervention could identify early deviations from the prognosis (better or worse) and provide additional time points, yielding a more accurate representation of the outcome path. Post-discharge administrations could permit evaluation of preventive Interventions and durability of outcome, and data collected at three or more time points would permit mapping of outcome paths as curves rather than as simple difference scores, which would also permit use of latent growth curve analysis. Combined, these outcome evaluation alternatives could provide complementary prognostic and evaluative tools to enhance clinical decision-making throughout the Episodes of Care.
Additionally, such repeated administration could facilitate more immediate reflection on the path of outcome individual patients take, however further research on the process of reflection and the role of outcome measurement in facilitating reflective practice is necessary.  4.4.3 Conclusion This study described a process for outcome evaluation applied to patient data collected by one orthopedic physical therapist.  Documenting her reflections of implementing self-report questionnaires to measure and evaluate outcomes   153 provided insight to the impact of the experience on her clinical practice. The clinician’s perspective was that the systematic use and implementation of the questionnaires to measure outcome in her practice enabled her to streamline her clinical decision-making processes and identify areas on which to focus her professional development and continuing education efforts. Whether patient outcomes actually improve with such systematic use of outcome measures, however, warrants further study. Another challenge for future research will be to find effective means to integrate advances in analysis with effective implementation of knowledge translation research to promote adoption of outcome measurement and evaluation as a routine element of clinical practice, and to explore the role of these processes on reflective practice.    154 Figure 4-1. Outcome Evaluation Process     Examination Only n=25 Intervention n=271 Outcome Records Incomplete n=61 Outcome Records Complete n=210 Selected Questionnaires n=206  Ceiling Effects  n=4 (LEFS) All Patients n=296 Post-Diagnostic Evaluation Data Integrity Case Validity Body Region/ Questionnaire  Change Indices Response Comparison NDI n=26   DASH n=62   ODI n=61   LEFS n=57   Mean ES RCP  13.1 1.12 76.9  20.7 1.03 62.9  16.7 1.21 82.0  19.2 1.19 72.4  Differences between responders & non- responders on Initial Score, Age, Sex, Visits, Duration, & Stage of Healing  Table 3 Table 4 Table 5 Table 6 OCP  92.3  65.5  80.2  86.1    155 Table 4-1. Summary of Measurement Properties for Standardized Outcome Measures  Measure Items (n) Item Range Cumulative Scale Internal Consistency (coefficient alpha) Test-retest Reliability (intraclass correlation coefficients) Change Threshold (MDC) NDI 10 0 – 5  0 – 100* .80 - .87(5, 6) .89 - .94(5, 6)  9.0(6) DASH 30 1 – 5  0 – 100* .96 - .97(7, 8) .92 - .96(7, 8) 10.7(8) ODI 10 0 – 5  0 – 100* .82 - .90(3, 4)  .88 - .94 (3, 4) 10.5(4) LEFS 20 0 – 4 0 -   80   .93 - .96(9, 10)    .85 - .94(9, 10)    9.0(10) *NDI, DASH, and ODI scales were transformed to range from 0= high level of activity limitation to 100= no activity limitation with positive change scores indicating reduction of activity limitation.   Table 4-2. Patient Data by Partitioned for Post-Diagnostic Evaluation and Data Integrity (Steps 1 and 2)   Intervention with Outcome Data  All Cases Exam Only Incomplete Complete  N (%) 296 (100) 25 (8.4) 61 (21.6) 210 (70.9) Sex (F)  174 (58.8)  20 (80.0) 40 (65.6)a 114 (54.3) Age (yrs) *    44.2 (18.1) 46.5 45.0 (16.1) 44.0 39.9 (19.4) 40.0b 45.4 (17.8) 47.0 Duration (days) *  - 0.7 (1.1) 0.0 25.2 (29.5) 15.0c 34.7 (25.2) 28.0 Visits (no.) 
* - 1.4 (0.5) 1.0 8.7 (9.0) 6.0d 12.1 (8.9) 10.0  Stage of Healing  296 (100) 25 (100)  61 (100)e 210 (100) Acute 69 (23.3) 5 (20.0) 13 (21.3) 51 (24.3) Subacute 122 (41.2) 9 (36.0) 23 (37.7) 90 (42.9) Chronic 105 (35.5) 11 (44.0) 25 (41.0) 69 (32.9)  Discharge/ Discontinuance 273 (100)  21 (100) 59 (100) 193 (100) Discharge Reasons 141 (88.3) 17 (80.9) 43 (72.9) 176 (93.9) Goals met 135 (49.5) 4 (19.0) 14 (23.7) 117 (60.7) Minimal progress 42 (15.4) -  5 (8.5) 37 (19.2) Refer to physician 52 (19.0) 8 (38.1) 19 (32.2) 25 (13.0) Service inappropriate 12 (4.4) 5 (23.8) 5 (8.5) 2 (1.0) Discontinuance Reasons 32 (11.7) 4 (19.1) 16 (27.1) 12 (6.1) Did not return 15 (5.5) 2 (9.5) 7 (11.8) 6 (3.1) Insurance issues 4 (1.5) 1 (4.8) 1 (1.7) 2 (1.0) Moved 2 (0.7)  -  - 2 (1.0) Attendance issues 11 (4.0) 1 (4.8) 8 (13.6) 2 (1.0) Missing 23 (N/A) 4 (N/A) 2 (N/A) 17 (N/A) Age, Duration, and Visits reported as Mean (Standard Deviation) Median, all others as Frequency (Proportion) Intervention differences: aχ2=2.644 p=.104, bt=2.602 p=.01, ct=2.705 p=.007, dt=2.869 p=.004, eχ2=1.379 p=.502   156 Table 4-3. Response Comparison for Neck Disability Index (NDI) Data  Group  All Responder* Non- Responder t χ 2 df p N   26 20 6 - - - - NDI Initial Score§ Mean 68.6 69.7 65.0 1.835  -  24  .079 SD 11.6 11.8 11.2 Med 71.0 71.0 70.0 NDI Change Score§ Mean 13.1 16.8 0.7 -  -  -  - SD 10.4 8.7 3.5 Med 10.0 13.0 2.0 Age (years) Mean 53.2 54.0 50.5 0.806  -  24  .428 SD 13.3 12.5 16.9 Med 51.0 51.5 46.0 Sex (Female) n  15.0 12.0 3.0 - 0.189 1 .664 %  57.7 60.0 50.0 Duration (days)  Mean 28.2 28.7 26.8 -0.830  -  24  .415 SD 23.3 26.3 8.5 Med 23.0 20.0 25.5 Visits (n) Mean 9.5 10.0 8.2 -0.766  -  24  .451 SD 6.3 7.1 2.8 Med 7.5 7.5 7.5 Stage of Healing (n) Acute 5 4 1 -  0.445  2  .800 Subacute 11 9 2 Chronic 10 7 3 *Responders were defined as patients having improved by at least the MDC90=9.0 NDI points. §NDI scale was inverted to range from 0= high level of activity limitation to 100= no activity limitation with positive change scores indicating reduction of activity limitation.    157  Table 4-4. Response Comparison for Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire Data  Group  All Responder* Non- Responder* t χ 2 df p N   62 39 23 - - - - DASH Initial Score§ Mean 60.7 54.1 72.0 3.375  -  24  .0005 SD 20.1 18.9 16.9 Med 59.0 57.0 77.0 DASH Change Score§ Mean 20.7 32.7 0.2 -  -  -  - SD 21.5 17.5 8.1 Med 16.0 29.0 3.0 Age (years) Mean 42.9 41.3 45.8 -0.966  -  24  .428 SD 17.8 19.2 15.2 Med 47.0 43.0 47.0 Sex (Female) n  25 17 8 - 0.466 1 .495 %  40.3 43.6 34.8 Duration (days) Mean 35.8 42.1 25.3 2.403  -  24  .005 SD 27.6 31.9 13.2 Med 27.5 33.0 23.0 Visits (n) Mean 12.8 15.1 8.9 2.239  -  24  .008 SD 10.8 12.7 4.0 Med 9.0 12.0 8.0 Stage of Healing (n) Acute 16 13 3 -  9.985  2  .007 Subacute 30 21 9 Chronic 16 5 11 *Responders were defined as patients having improved by at least the MDC90=10.7 DASH points. §DASH scale was inverted to range from 0= high level of activity limitation to 100= no activity limitation with positive change scores indicating reduction of activity limitation.    158 Table 4-5. 
Response Comparison for Oswestry Disability Index (ODI) Data  Group  All Responder* Non- Responder t χ 2 df p N   61 50 11 - - - - ODI Initial Score§ Mean 54.3 53.0 60.2 0.454  -  59  .642 SD 13.8 14.0 12.2 Med 56.0 54.0 62.0 ODI Change Score§ Mean 16.7 21.8 -6.6 -  -  -  - SD 16.7 13.6 7.1 Med 18.0 20.0 -6.0 Age (years) Mean 45.7 46.0 44.1 -0.388  -  59  .700 SD 16.3 17.1 12.4 Med 44.0 44.0 47.0 Sex (Female) n 36 30 6 - 0.111 1 .739 %  59.0 60.0 54.5 Duration (days) Mean 31.6 32.2 29.3 0.740  -  59  .462 SD 19.6 20.1 18.1 Med 29.0 29.5 28.0 Visits (n) Mean 10.9 11.2 9.7 0.862  -  59  .392 SD 6.9 6.9 7.1 Med 10.0 10.0 8.0 Stage of Healing (n) Acute 14 13 1 -  3.091  2  .213 Subacute 22 19 3 Chronic 25 18 7 *Responders were defined as patients having improved by at least the MDC90=10.5 ODI points. §ODI scale was inverted to range from 0= high level of activity limitation to 100= no activity limitation with positive change scores indicating reduction of activity limitation.   159 Table 4-6. Response Comparison for Lower Extremity Functional Scale (LEFS) Data  Group   All Responder* Non- Responder t χ 2 df p N   58 42 16 - - - - LEFS Initial Score§ Mean 42.4 38.6 52.6 -3.183  -  56  .002 SD 16.1 14.5 16.1 Med 42.5 38.5 58.5 LEFS Change Score§ Mean 19.2 26.0 1.4 -  -  -  - SD 14.9 11.5 4.3 Med 19.0 24.0 2.0 Age (years) Mean 45.6 44.2 49.3 -0.869  -  56  .389 SD 19.8 19.9 19.6 Med 48.0 47.5 49.0 Sex (Female) n  36 27 9 - 0.318 1 .573 %  62.1 64.3 56.3 Duration (days) Mean 40.7 44.1 31.9 1.497  -  56  .140 SD 27.8 29.8 20.1 Med 36.0 39.5 28.0 Visits (n) Mean 14.1 15.1 11.4 1.358  -  56  .180 SD 9.5 10.2 6.9 Med 12.5 13.5 11.5 Stage of Healing (n) Acute 16 14 2 -  3.397  2  .183 Subacute 25 18 7 Chronic 17 10 7 *Responders were defined as patients having improved by at least the MDC90=9.0 LEFS points. §LEFS scale ranges from 0= low level of activity to 100= high level of activity with positive change scores indicating improvement in activity.      160 4.5 REFERENCES 1. Higgs J, Jones M. Clinical reasoning in the health professions. 2nd ed. Oxford: Butterworth Heinemann; 2000. 2. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81(2):776-788. 3. Davidson M, Keating JL. A comparison of five low back disability questionnaires: reliability and responsiveness. Phys Ther. 2002;82(1):8-24. 4. Vernon H, Mior S. The Neck Disability Index: a study of reliability and validity. J Manipulative Physiol Ther. Sep 1991;14(7):409-415. 5. Stratford PW, Riddle DL, Binkley JM, Spadoni G, Westaway M, Padfield B. Using the Neck Disability Index to make decisions concerning individual patients. Physiother Can. 1999;51:107-112, 119. 6. Beaton DE, Katz JN, Fossel AH, Wright JG, Tarasuk V, Bombardier C. Measuring the whole or the parts? Validity, reliability, and responsiveness of the Disabilities of the Arm, Shoulder and Hand outcome measure in different regions of the upper extremity. J Hand Ther. Apr-Jun 2001;14(2):128-146. 7. McConnell S, Beaton D, Bombardier C. The DASH outcome measure user's manual: Institute for Work and Health; 1999. 8. Stratford PW, Binkley JM, Watson J, Heath-Jones T. Validation of the LEFS on patients with total joint arthroplasty. Physiother Can. 2000;52:97-105.   161 9. Binkley J, Stratford P, Lott S, Riddle D, al e. The Lower Extremity Functional Scale (LEFS): scale development measurement properties and clinical application. Phys Ther. 1999;79:371-383. 10. 
Centers for Medicare and Medicaid Services. Medicare and Medicaid Services Manual System, Pub 100-02 Medicare Benefit Policy Transmittal 63 December 29, 2006. 2006. 11. Hart DL, Connolly JB. Pay-for-performance for physical therapy and occupational therapy: Medicare Part B Services. Knoxville, TN: Focus On Therapeutic Outcomes, Inc.; 2006. 12. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. Second ed. Hamilton: BC Decker; 2002. 13. American Physical Therapy Association. Guide to physical therapist practice. Second ed. Alexandria, VA: American Physical Therapy Association; 2003. 14. Centre for Evidence-Based Physiotherapy (CEBP). Physiotherapy evidence database (PEDro) website. Available at: http://www.pedro.org.au/. Accessed June 14, 2007. 15. American Physical Therapy Association. APTA Hooked on evidence website. Available at: http://www.hookedonevidence.com/. Accessed June 29, 2010. 16. Schmitt JS, Di Fabio RP. Reliable change and minimum important difference (MID) proportions facilitated group responsiveness comparisons using individual threshold criteria. J Clin Epidemiol. Oct 2004;57(10):1008-1018. 17. Haley SM, Fragala-Pinkham MA. Interpreting change scores of tests and measures used in physical therapy. Phys Ther. 2006;86(5):735-743.   162 18. Childs JD, Cleland JA. Development and application of clinical prediction rules to improve decision making in physical therapist practice. Phys Ther. 2006;86(1):122-131. 19. Resnik L, Hart D. Using clinical outcomes to identify expert physical therapists. Phys Ther. 2003;83(11):990-1002. 20. Jette D, Jette A. Physical therapy and health outcomes in patients with spinal impairments. Phys Ther. 1996;76(9):930-941, 942-945. 21. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257. 22. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 2001;53(4):268-275. 23. Focus on Therapeutic Outcomes Inc. FOTO Website. Available at: http://www.fotoinc.com/.  Accessed on June 14, 2007. 24. Messick S. Validity of test interpretation and use. Vol 90. Princeton, NJ: Educational Testing Service; 1990. 25. Jette DU, Halbert J, Iverson C, Miceli E, Shah P. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89(2):125-135. 26. Jette D, Jette A. Physical therapy and health outcomes in patients with knee impairments. Phys Ther. 1996;76(11):1178-1187.   163 27. Kazis LE, Anderson JJ, Meenan RF. Effect sizes for interpreting change in health status. Med Care. 1989;27(Suppl 3):S178-S189. 28. Stratford PW, Riddle DL. Assessing sensitivity to change: choosing the appropriate change coefficient. Health Qual Life Outcomes. 2005;3(1):23-37. 29. Cohen J, Cohen P. Applied multiple regression analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum; 1983. 30. Husted JA, Cook RJ, Farewell VT, Gladman DD. Methods for assessing responsiveness: a critical review and recommendations. J Clin Epidemiol. 2000;53:459-468. 31. Childs JD, Fritz  JM, Flynn TW, et al. A clinical prediction rule to identify patients with low back pain most likely to benefit from spinal manipulation: a validation study. Ann Intern Med. 2004;141:920-928. 32. Millar LA, Jasheway PA, Eaton W, Christensen F. A Retrospective, Descriptive study of shoulder outcomes in outpatient physical therapy. 
J Orthop Sports Phys Ther. 2006;36(6):403-414.
33. Horner S. You don't know what you don't know! How to utilize evidence-based patient reported outcome measures to improve care and market your practice. Paper presented at: Combined Sections Meeting, American Physical Therapy Association, 2009; Las Vegas, NV.
34. Advise Rehab. Advise Rehab website. Available at: http://www.adviserehab.com/. Accessed June 29, 2010.
35. Werneke MW, Hart DL. Categorizing patients with occupational low back pain by use of the Quebec Task Force Classification System versus pain pattern classification procedures: discriminant and predictive validity. Phys Ther. 2004;84(3):243-254.
36. Fritz JM, George S. The use of a classification approach to identify subgroups of patients with acute low back pain. Interrater reliability and short-term treatment outcomes. Spine. 2000;25(1):106-114.
37. Beneciuk JM, Bishop MD, George SZ. Clinical prediction rules for physical therapy interventions: a systematic review. Phys Ther. 2009;89(2):114-124.
38. Childs JD, Cleland JA, Elliott JM, et al. Neck pain: clinical practice guidelines linked to the International Classification of Functioning, Disability, and Health from the Orthopaedic Section of the American Physical Therapy Association. J Orthop Sports Phys Ther. 2008;38(9):A1-A34.
39. Fritz JM, Delitto A, Erhard RE. Comparison of classification-based physical therapy with therapy based on clinical practice guidelines for patients with acute low back pain: a randomized clinical trial. Spine. 2003;28(13):1363-1371.
40. McPoil TG, Martin RL, Cornwall MW, Wukich DK, Irrgang JJ, Godges JJ. Heel pain—plantar fasciitis: clinical practice guidelines linked to the International Classification of Function, Disability, and Health from the Orthopaedic Section of the American Physical Therapy Association. J Orthop Sports Phys Ther. 2008;38(4):A1-A18.

CHAPTER 5: INTEGRATING THE INTERNATIONAL CLASSIFICATION OF FUNCTIONING (ICF), CLINICAL DECISION-MAKING AND OUTCOME ASSESSMENT INTO PHYSICAL THERAPY PRACTICE: A PROPOSED FRAMEWORK‡‡

5.1 BACKGROUND AND PURPOSE
Clinical professions can be defined conceptually through theories of practice,1-3 legally through regulations and practice standards,4 and practically through the clinical skills and decision-making5,6 applied by practitioners. Physical therapy (PT) practice has been informed by various conceptual models,1-3 clinical decision-making models (CDMMs),1,4,7-9 and health and disablement models.10,11 No single framework, however, has been universally accepted. The International Classification of Functioning, Disability, and Health (ICF) provides a framework for functioning and disability with respect to health, and a classification system that has been internationally accepted. The ICF provides a multilevel structure to code the functional consequences of health conditions in a way that is intended to cross disciplines, cultures, and environments. In this way, it provides a common language for research and practice with respect to individual
‡‡ A version of this chapter will be submitted for publication. Kozlowski, A. J., MacDermid, J. C., Solomon, P.
Integrating the International Classification of Functioning (ICF), Clinical Decision-Making, and Outcome Assessment into Physical Therapy Practice: A Proposed Framework   166  clients, health care services and health policy.11,12 and PT.13 The American Physical Therapy Association (APTA) recently adopted the ICF framework14 as a framework for clinical practice guidelines.15,16 The ICF has also been incorporated into PT conceptual models1 and CDMMs.1,7 Increasingly, the ICF is being used to establish classification standards for assessing and reporting function and health12 by defining core sets for different health conditions,17-19 by linking to validated measures used in evaluating outcome,20,21 in the definition of constructs of outcome measurement and evaluation§§  In entry-level PT education, CDMMs are commonly used to frame the processes by which physical therapists make decisions about optimizing client functioning at all levels1,7 and can guide development of entry-level and continuing education curricula.1,4 While outcome evaluation is considered integral to clinical practice, few frameworks explicitly integrate outcome evaluation processes. Of those that have,1,7,8 none has clearly integrated ICF components and an outcome evaluation process throughout the commonly accepted elements of PT practice. Development of a CDMM that explicitly integrates ICF components with a clearly defined outcome evaluation process across practice elements could provide a practical tool to guide professional education, clinical practice, and research.  and as a framework for managing individual clients.23    §§ The term outcome evaluation is used to implicitly include both the measurement and evaluation components except where specifically referred to as either the measurement or evaluation component.   167  5.1.1 The International Classification of Functioning, Disability, and Health (ICF) The World Health Organization (WHO) developed the ICF classification system and model to allow systematic coding of health-related functioning states for comparison across countries and professions and to improve communication through a common language.11 The ICF framework includes three functioning components which interact with two contextual factors and the individual’s health condition (Figure 5-1). Within the ICF, disability and functioning are viewed as the interaction between health conditions and contextual factors. The functioning components in the model (Figure 5-1) are 1. Body Functions (b) and Structures (s), 2. Activities (A), and 3. Participation (P), and the contextual factors are Personal Factors (p)*** and Environmental Factors (e). Coding under the ICF, however, is divided into four constructs of Body Functions (f), Body Structures (s), Activities and Participation (d), and Environmental Factors (e). The domains of the ICF are then arranged in a coding hierarchy with more detailed classification at each of the successive four sublevels. The Health condition includes disorders or diseases which are categorized extensively in the partner classification system International Classification of Diseases, Tenth Revision (ICD-10).22 Or, as stated by the World Health Organization “In short, ICD-10 is mainly used to classify causes of death, but ICF classifies Health”. (See the beginners training guide at http://www.who.int/classifications/icf/training/icfbeginnersguide.pdf)  *** Our model includes the abbreviation p, but under the ICF personal factors are not coded.   
168  Physical therapy has traditionally used impairment measures like goniometry and manual muscle testing but recently emphasis has been placed on Activity and Participation measures. Activity-level measures include standardized valid and reliable self-report questionnaires23 and measures of demonstrated capacity or performance††† Health professions including PT are increasingly incorporating the ICF components into education,1,7 clinical decision-making,1,4,7 and clinical practice guidelines.15, 16. Literature has also reported on application of the ICF in guiding clinical practice,25, 26 in validation of core sets of the classification system,17-19 and in linking of validated measures to the ICF.20, 21 The World Confederation for Physical Therapy has recommended adoption of the ICF framework and the first two classification levels into clinical practice,13 and the ICF construction of functioning  such as standardized performance-based tests (e.g., dexterity tests and the six minute walk test). Participation-level measures include self-report questionnaires like the Craig Handicap Assessment and Reporting Technique24 and return–to-work status. Self report measures often mix items that address Body Functions and Structures, Activity, and Participation with one of the levels predominantly represented. Interest in standardized measurement of Activity and Participation has grown as these levels of functioning tend to have more meaning to clients as well as other stakeholders.23 and may provide better indicators of milestones such as return to work.  ††† Finch and colleagues (2002) labelled these as performance measures, however in the ICF, performance is defined as an indicator of Participation that includes the environment and societal contexts in which a person exists. Capacity is defined as an indicator of Activity an can be thought of as an individual’s ability adjusted to a ‘uniform’ or ‘standardized’ environment.   169  and disability has been promoted in health care.27 Despite these initiatives, the ICF is not yet fully integrated into educational approaches or professional modeling of clinical decision-making and practice patterns.  5.1.2 Physical Therapy Models Conceptual PT practice models have been proposed,1-3,7 some of which incorporate the ICF components explicitly.1 Likewise, CDMMs exist,1,4,7-9,28 of which some incorporate ICF components.1,7 The McMaster CDMM7 was adapted from the Hypothesis-Oriented Algorithm for Clinicians (HOAC)9 to simplify the decision- making algorithm for entry-level education and to incorporate the ICF constructs. The APTA has defined five elements of practice (Examination, Evaluation, Diagnosis, Prognosis including Treatment Plan, and Intervention; Figure 5-2) in the Guide, providing a national standard model,4 which was adapted for entry-level education.28 The HOAC II was published to complement the Guide as a detailed decision-making tool by providing physical therapists with an algorithm for making clinical decisions while moving through the elements of practice for a given client.8 The CORxE model was also developed to guide entry-level education integrating the ICF components with the elements of clinical practice, theory and research, and outcome and evaluation.1 While these models share elements in terms of the clinical decision-making process or linkage to the ICF, none fully integrates these and none is accepted as a universal standard. 
The CORxE CDMM most explicitly incorporates the ICF framework into the decision-making elements of Assessment, Intervention, and   170  Outcome & Evaluation.1 While conceptually appealing, the presentation of decision- making and its iterative process may be insufficient for clinical hypothesis-testing. The McMaster CDMM provides a linear process incorporating some ICF components into some practice elements.7 The APTA model provides a graphic representation (Figure 5-2) with decision-making detailed in textual descriptions. However, the current version is not based on the ICF.4 The HOAC II provides a detailed algorithm,8 which may be valuable for developing or refining decision- making skills but potentially too complex for routine use. The models put forth by Cott and colleagues (1995) and Dean (1985) both provided conceptual frameworks rather than practical guides to decision-making and predate the ICF.2,3  5.1.3 Outcome Measurement and Evaluation Processes There has been increasing emphasis within the profession on the development and use of standardized measures to evaluate the outcome of interventions and on the science behind clinical measurement.23,29 While many of the outcome measurement tools available to physical therapists address elements of Body Functions and Structures, and Activity and Participation, few were developed specifically with ICF concepts in mind. Recently, the establishment of core sets17-19 and coding of evaluative measures back to the ICF20,21 has initiated the process of linking measurement with the ICF. However, this work is preliminary and few practicing clinicians or education programs include specifics about measuring clinical outcomes within the ICF framework. From the perspective of entry-level education, early introduction of the students to a clear process on clinical decision-making that   171  guides the incorporation of ICF concepts into clinical decision-making would facilitate a holistic view of clients’ problems and clinical practice guidelines that are based on the ICF framework. From the perspective of practitioners, changes need to be integrated with familiar and meaningful practice elements in their continuing education and professional development. Based on these historic considerations, the purpose of this article is to propose a CDMM that integrates the ICF framework with the core processes of clinical decision-making inherent in PT practice.  5.2 MODEL DEVELOPMENT An integrated practice model was constructed in three steps. First, selected PT models were compared with respect to their practice elements and the ICF components. The APTA model was used as the reference as it was considered the most widely used and recognized current model. The CORxE1 and McMaster7 models were selected since they incorporated ICF components. Practice elements were derived from this comparison. The HOAC II was used to validate these practice elements as it is based on the APTA model but provides a more detailed level of decision-making. Second, an outcome evaluation process was constructed based on an outcome handbook,23 the Guide,4 and relevant publications30,31  Third, the constructions of the elements and the evaluation processes were integrated with the ICF components. The authors used qualitative information from a review of 40 clinical records taken from diverse practice areas and physical locations,29 and   172  participated in consensus meetings to make revisions to the model until the current version was agreed upon.  
5.2.1 Elements of Practice Table 5-1 compares the practice elements from the APTA, CORxE, and McMaster models, none of which included all elements derived from our review. All models included an examination element comprised of a clinical record review, client interview, and physical examination. The CORxE and McMaster models also included steps to define the presenting problem and/or generate a differential diagnosis.1,7 Only the APTA model included the evaluation element, which was represented as an analysis of the examination findings. Diagnosis was an element in all models,1,4,7 but was defined in the CORxE model as an “evolving concept of the problem.”1 In the McMaster model, the diagnosis and examination elements overlapped.7 Components of the ICF were referenced explicitly in the CORxE and McMaster models with respect to establishing a PT diagnosis.1,7 Prognosis was an element in the APTA and McMaster models, and included goal-setting with the client. Planning was included as part of the prognosis element in the APTA model,4 however, the McMaster model defined planning of the evaluation methods and planning the treatment and methods separately.7 Collaborative goal setting for the presenting problem was included as central to the CORxE model.1 Discharge was defined in the text but not represented graphically as a part of the APTA model.4   173  The intervention element was represented in all three models with different terminology (intervention, implementation, and/or treatment). The CORXE referenced multiple levels of functioning based on the ICF components.1 Four objectives of intervention were described: recovery or remediation,1,4,7 adaptation,1,4,7 prevention,1,4,7 and maintenance.4 Outcome appeared in all models, each including some form of evaluative component. The APTA model provided a definition and detailed description in which the outcome evolves over the episode of care and requires reflection of actual results in contrast to anticipated goals and expected outcomes.4 Goals and expected outcomes may be established and evaluated at numerous levels including pathology and pathophysiology; impairments; Activity limitations; Participation restrictions, risk reduction and prevention; health, wellness, and fitness; societal resources; and client satisfaction.4 The McMaster model included statements about discontinuing interventions that were not effective or necessary, and all except the CORxE model included discharge, discontinuance, or both as end points, but not as distinct elements.1,4,7 From our review seven practice elements were derived. They included Examination, Analysis (or Evaluation), Diagnosis, Prognosis, Planning, Intervention, and Discharge and Follow-up (Figure 5-3). Although critically important, outcome was not included as an element, but was integrated as the outcome evaluation process, consistent with the APTA definition of outcome.4 The Evaluation element was renamed Analysis to differentiate the pre-intervention reasoning process to reduce examination findings to a set of PT diagnoses from the evaluative process   174  employed during intervention, at discharge, and in follow-up to infer outcome from scores on key indicators. Follow-up was incorporated to reflect the concept that outcomes continue to evolve, along non-linear paths beyond discharge or discontinuance.  
5.2.2 Diagnostic and Outcome Evaluation Processes
Underlying the elements of practice derived from the APTA, CORxE, and McMaster models are two related but distinct measurement and evaluation processes. Measures can be validated for three functions: discrimination, prediction, and evaluation.23,32 Diagnostic measures discriminate between states at a point in time. Prognostic measures predict a future state of one construct or characteristic based on the current value of another construct or characteristic. Evaluative measures determine change over time on a construct or characteristic. Measures may be validated as having multiple properties, and thus can function across an episode of care. We have differentiated the diagnostic function since it represents a discriminative function used to identify which of the constructs and characteristics presenting for a given client fall within the PT scope of practice and are amenable or relevant to treatment, as well as those that may require referral to another practitioner. We have grouped the prognostic and evaluative functions into the outcome evaluation process, as prognosis represents an a priori prediction of the outcome expected for a given client. The outcome evaluation process begins with the physical therapist determining the diagnoses, and extends through planning by establishing prognoses, through intervention with iterative reassessment of change, to determining final status at discharge and on follow-up.
We describe a four-step process that may facilitate documentation of the most relevant and informative measurements to support clinical decisions across stages of an episode of care. The steps are:
1. Identify, from the characteristics and constructs that are relevant to the client problem, those that provide the best indication of outcome for each ICF functioning level (key indicators);4,23
2. Select reliable and valid outcome measures for the key indicators, administer them, and plan relevant time points for re-administration (to serve as outcome measures for monitoring status);4,23
3. Map the prognoses as paths of change by predicting point estimates or curves for values of the key indicators over the anticipated timeframes of intervention to the planned discharge date (this could be done graphically, as a data table, or in a calendar); and
4. Re-administer measures,4,23 evaluate actual change on key indicators as point estimates relative to the prognoses, and re-estimate outcome paths where they vary from the prognoses.
Evaluation is incorporated throughout the clinical intervention and informs the selection, application, and modification of prognosis and treatment strategies. In the model, we suggest that key indicators of prospective change should be selected for both Body Functions/Structures and Activity/Participation. Impairment measures such as joint range of motion may be relevant in rehabilitation following total knee or hip replacement or knee ligament reconstruction surgeries. Activity/Participation may be measured quantitatively with one standardized self-report questionnaire, depending on the composition of its items. Participation milestones such as level of work ability (often referred to as return-to-work status) may also be important to monitor due to their relevance to stakeholders such as third-party payers and employers. Performance measures such as balance tests and timed walk tests may provide valuable in-clinic indicators of Activity.
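As a concrete, purely hypothetical illustration of steps 2 through 4, the sketch below shows how a simple flow sheet or spreadsheet-style tool might record planned and observed scores for one key indicator and compare observed change against a minimal detectable change boundary (discussed further below). The measure, scores, time points, and measurement properties are invented for illustration; this is one possible implementation aid, not part of the model itself.

    from math import sqrt

    # Hypothetical example: one key indicator (an Activity-level self-report
    # questionnaire scored 0-100, higher scores = better function). All values
    # below are invented for illustration.

    def standard_error_of_measurement(sd_baseline, reliability):
        # SEM from a baseline standard deviation and a test-retest
        # reliability coefficient (e.g., an ICC).
        return sd_baseline * sqrt(1 - reliability)

    def mdc95(sd_baseline, reliability):
        # Minimal detectable change at the 95% confidence level.
        return 1.96 * sqrt(2) * standard_error_of_measurement(sd_baseline, reliability)

    # Step 3: prognosis mapped as predicted scores at planned time points (weeks).
    predicted_path = {0: 40, 4: 55, 8: 70, 12: 80}

    # Step 4: scores recorded as the measure is re-administered.
    observed = {0: 40, 4: 50, 8: 58}

    threshold = mdc95(sd_baseline=16, reliability=0.90)  # illustrative values
    baseline = observed[0]

    for week in sorted(observed):
        score = observed[week]
        change = score - baseline
        gap = score - predicted_path.get(week, score)
        flag = "exceeds" if abs(change) > threshold else "within"
        print(f"week {week:>2}: score {score}, change {change:+} ({flag} MDC95 of "
              f"{threshold:.1f}), {gap:+} vs predicted path")

In this invented example, the week-8 score has improved beyond the detectable-change boundary but sits 12 points below the predicted path, which under step 4 would prompt the clinician to re-estimate the outcome path or revise the intervention plan.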
Paths are predicted as the prognosis for each measure by estimating scores at relevant points in time. Time points can be selected based on expectations for clinically important change and may vary across outcome measures. For example, joint range of motion after a fracture can change quickly upon cast removal but Activity-level function evolves more slowly. Therefore, initially, it might be appropriate to measure range of motion weekly and re-administer self-reported outcome questionnaire scales after four weeks. As range of motion approaches target levels, frequency of measurement would decline and as Activity gains accelerate, questionnaires might be administered every week or two. Combined with clinical experience, knowledge of the literature on outcomes following therapeutic interventions can be instrumental in setting targets for point estimates for the prognoses. Boundaries for evaluating clinically important change could be established from known properties such as a minimal detectable change value. These values, however, have typically been determined as change over two time points with wide margins of error, often about 10% of their reference scales. Re-administration of outcome measures at planned intervals provides the clinician with an opportunity to re-evaluate prognosis and the effectiveness of the current intervention strategies. Clinicians typically monitor trends for some outcome measures on a repeated basis before making a decision, thus increasing their confidence in the direction and nature of the change. Thus a trend of improvement over multiple points may indicate   177  a true change where the point-to-point changes may not exceed the minimal detectable change value. Flow sheets or software used to cumulatively monitor outcome measure results can facilitate this process. Ultimately, statistical evaluation of multiple time point data could simplify this process by generating prognosis curves or point-estimates with more accurate boundaries of detectable change based on the key indicator and relevant factors (e.g., client age and fitness level).  5.2.3 Integration of Practice Elements, Measurement Processes, and ICF Components The final model (Figure 5-4) was created by integrating the seven practice elements and two measurement processes with the ICF components. The practice elements overlie vertical arrows representing the three central ICF functioning components (b/s, A, P, d) This represents explicit consideration of the three levels of ICF functioning (positive outlook rather than disability) across practice elements. While some outcome measures integrate items of Activity and Participation (under the umbrella term function or disability) into composite indices, we have used separate but overlapping arrows in the diagram recognizing that at some points in clinical decision-making they are separated and at others intertwined. Since Participation focuses on performing one’s usual roles and Activity focuses on specific tasks, it is often essential to understand whether a specific task can be performed to understand Participation. Conceptually Activity is congruent with the planning and execution of a PT approach which commonly seeks to maximize a client's ability to perform functional   178  tasks separate from their environmental context and social roles. In many instances, physical therapists accomplish this in part by addressing the underlying impairments of Body Functions and Structures. 
Ensuring the integration of improvements in body functions and structures by assessing the capacity to perform whole-body tasks in exclusion of environmental contexts in clinical settings reflects an assessment Activity. Additionally, client centered goals commonly focus on restoring clients to a previous or required level of Participation. Thus, physical therapists develop PT plans by ‘zooming in’ to examine, treat, and measure impairments and ‘zooming out’ to ensure the larger picture of restoring Participation with Activity being the primary level of focus. The ICF contextual factors are linked with relevant elements by solid arrows. They indicate that gathering client information initially includes the history of the current condition, general health, personal characteristics, and environmental factors that will impact the potential for recovery of impairments, Activity limitations, and Participation restrictions. These may include both modifiable and non-modifiable factors which impact the clinician’s determination of Diagnoses and Prognoses, or the client’s response to specific intervention strategies. The Examination and Analysis elements are connected with arrows to indicate the iterative process used to rule in or out competing problems to formulate diagnoses for each functioning level. Information used to select the key indicators, thus measures, comes from the clinician integrating information about the client from the examination/analysis cycle with prior knowledge of diagnostic classification,   179  prognostication, and planning based on best available evidence and clinical judgement.33 The diagnostic and evaluation processes underlie all aspects of the episode of care from diagnosis to follow up. Systematic measurement is applied throughout the assessment using discriminative measures to test diagnostic hypotheses, and prognostic measures to predict future events. Measures that are responsive to detecting clinical change are incorporated to assess specific constructs over the course of the episode of care and treated as “outcome measures”. Outcomes that reflect important constructs from the Body Functions/Structure and Activity/Participation components are required to provide a comprehensive view of the health impacts of physical therapy intervention. A box representing the evidence supporting practice runs parallel to the right, with input arrows from discharge and follow-up to the box representing the contribution of clinical experience gained from the clinician’s reflection on outcome paths of previous clients, and output arrows representing the integration of research findings, clinical experience, and client preferences into the elements of assessment through intervention of current clients. These arrows indicate that that the three core elements of evidence-based practice (best research evidence, clinical expertise and client values/preferences) inform all aspects of the episode of care. Furthermore, the observations made by the clinician about responses to clinical decisions constitute an important feedback loop, a fundamental step in the evidence-based process to evaluate the outcomes of clinical decisions.   180  To validate our model, we compared it to the HOAC II.8 Part 1 of the HOAC II describes detailed decision-making steps on the examination, analysis, diagnosis, and prognosis elements including both hypothesis-testing and predictive decisions, which is consistent with our steps of diagnosis, prognosis and planning. 
Terminology for intervention, strategies, and tactics (i.e., the specific treatment parameters) in our model were adopted from the HOAC II.8 Part 2 of the HOAC II details reassessment of existing and anticipated problems. These represent distinct decision paths for interventions of remediation and adaptation versus prevention.8 Detail is provided in the description of the evaluation of goal attainment, with stepwise review of the viability of tactics and strategies implemented and goals defined based on re- measurement of testing or predictive criteria.8 Like prevention, Intervention for the purpose of maintenance would be aimed at preventing deterioration of relevant levels of functioning which can be measured as the key indicators of those constructs or characteristics. Although we believe the HOAC II is compatible with our model, it is based on the APTA (2003) and Nagi (1964)10 models. Minor redesign of the HOAC II to incorporate ICF terminology and critical alignment of decision-making steps of the algorithm with the practice elements and measurement systems, however, would align it with our model.  5.3 DISCUSSION Review of the APTA model and recently published PT CDMMs provides valuable insight into how the ICF components and outcome evaluation processes have been integrated into PT practice to date. The existence of these variants   181  supports that no one model has resonated sufficiently with practitioners to be adopted widely, thus, constitutes a substantial gap that needs to be addressed for further evolution of practice. The APTA model includes most of the relevant elements of practice but was not based on the ICF framework and terminology. The CORxE and McMaster models integrate the ICF components and provide graphics, but omit some PT practice elements that are key to clinical decision-making. These CDMMs do not clearly incorporate the concepts that prognosis and outcome are non-linear and evolving, that physical therapists formulate judgments about these from the first encounter with a client, and that physical therapists continually modify these based on ongoing evaluation. Thus, these CDMMs lack clarity and/or depth of outcome evaluation processes that is critical to attaining optimal outcomes at all ICF levels. Our intent was to develop an integrative CDMM that provides a comprehensive yet concise integration of current diagnostic and evaluation concepts with elements of practice that are familiar to clinicians and is consistent with the ICF framework, thus providing a bridge from past to future practice. Qualitative studies of the use of the McMaster CDMM by first-year physical therapy students in a problem-based program revealed that they found it challenging to apply ICF concepts during their clinical placements.7 This may reflect difficulty that PT students experience in using frameworks to inform their thinking, or a lack of clarity in the model itself as to how and when to incorporate the ICF components, which are specific to only two of 14 steps. Without specific guidance to conceptually integrate the ICF components throughout a episode of care, and a model to practically implement this, students may not be prepared to do so in clinical practice.   
182  A strength of the CORxE model is the conceptual integration of theory and research with assessment, intervention, and outcome & evaluation with ICF components in client-centered practice.1 Another is the explicit attention given to each ICF functioning level from diagnoses determination through assessment, intervention, and outcome & evaluation. However, the CORxE CDMM has two apparent deficiencies. Four practice elements (evaluation, diagnosis, prognosis, and plan) were not described or graphically represented. The cyclic graphic representation of the model itself may also present a challenge in terms of applying the framework. Overlying arrows provide unidirectional links from what the client and physical therapist bring to the intervention (under a box labelled ‘start here’), while bidirectional arrows link assessment indirectly through goal setting and hypothesis generation. Research also has unidirectional links to intervention and outcome & evaluation, but intervention is not connected directly to outcome & evaluation.1 Since there is no specific qualitative research defining how clinicians have used this model in practice, its value is speculative. However, our experience in teaching clinical decision-making within entry-level practice programs suggests that students favour frameworks that have steps that are clear and consistent with a familiar clinical process. The APTA model differs from the proposed model in that levels of functioning and the iterative analytic process used to formulate the diagnosis are not explicit, and distinct elements like prognosis and the plan of care are combined. The detail of the elements in the APTA model is provided in text descriptions and definitions however, some of these are complex. For example, the definition of diagnosis,   183  represents both a decision-making process and the end-result of evaluating examination data.4 Others, such as discharge, are defined in the text but not included in the graphic model.4 Concrete applications of the ICF have been described for statistical, clinical, research, social policy development, and educational domains in rehabilitation.25 Availability of ICF classification code core sets,17-19 ICF-based clinical practice guidelines,15,16 and ICF-linked self-report questionnaires20,21 now provide a comprehensive battery of tools to assist with diagnosis, prognosis, and evaluation. Combined with information technologies, much of the data collection and analysis could be automated. But, this will require application of the ICF framework across PT practice elements. Application of this integration has been demonstrated in clinical practice. Penney and colleagues demonstrated the application of the ICF in neurological rehabilitation, They described the elements of the APTA model, i.e., all ICF components with respect to stroke rehabilitation, and reported on the application of a battery of standardized measures to establish diagnoses and evaluate change before and after an Intervention.26 Rauch and colleagues also described an application of the ICF and the APTA elements of practice in a case report of a client with a spinal cord injury. They applied an ICF-based documentation template, summarized relevant information for the PT intervention and complemented the PT role in the context of the multidisciplinary setting. 
Further, in an orthopedic context, Rundell and colleagues described a case report demonstrating the application of the ICF framework in clinical reasoning and PT management of acute and chronic low   184  back pain.34 In this case, the ICF framework was reported to provide an effective framework in which to understand the client’s disablement experience, guide treatment selection, and identify barriers. These cases exemplify how integration of the ICF components can support formulation of the diagnosis and prognosis, intervention planning, re-measurement, and reflection on goal achievement across levels. As papers appear in the literature on applying the ICF to the management of clients with various conditions, there will be exemplars of the practical application that are consistent with the model we describe.  5.3.1 Limitations We recognize that developing models for practice ideally needs to engage as many stakeholders as possible. We relied extensively on the published literature, qualitative assessment of clinical records, our understanding of the ICF, and our experiences in clinical education, research, curriculum development, and knowledge of educational approaches across institutions. While we attempted to develop an integrated model that embraces both traditional PT frameworks and the ICF into a process-oriented model that could inform practice, we lacked the advantage of incorporating broad professional involvement in the development process of this working model. Thus, in this respect, it shares similar limitations of previously published models: lack of professional unity and perceived or actual increase burden. We believe that a clinical decision-making model that fully integrates the ICF is timely for physical therapists, and that the model we describe provides a start point for broader consultation and adaptation. It is only through applying this   185  framework to clinical education and practice that it can be refined and its value maximized.  5.3.2 Conclusion We propose this model as a foundation for discussion and action to develop a clinical decision-making model that embraces evidence-based practice, the ICF and existing approaches to clinical decision-making model within the PT profession.    186   Table 5-1.  Mapping of Elements of Practice from Two Practice Models to the APTA Practice Model.  APTA Model CORxE Model McMaster Model Examination  Collaborative goal setting for presenting problem, influenced by External Context Evolving concept of problem Assessment What are (+) and (-) influences on presenting problem. Consider BFS, ACT, PAR, PR, EF‡‡‡ 1. Collect initial data (history, chart, etc)  2. Generate differential diagnosis and Client’s problem statement 3. Frame presenting problem (I,A,P,E)§§§ 4. Examination - establish specific movement problem  Evaluation Diagnosis   5. Revise Diagnosis/problem list on basis of findings 6. Determine functional status (I,A,P,E) Prognosis       And Plan of Care Collaborative goal setting for presenting problem 7. Determine movement Prognosis (given diagnosis, natural history, age, etc): 8. Select from: • Refer for investigation or treatment by other professional • Establish functional goals with client 9. Plan evaluation methods 10. Plan treatment approach and methods (prevent, remediate, adapt) Intervention  Intervention Adaptation, recovery, and/or prevention? At what ICF component? BFS, ACT, PAR, PF, EF 11. Ensure client understands and consents 12. Implement Treatment 13. 
Charting
Outcome  Outcome & Evaluation  How is it measured?  At what ICF component?  BFS, ACT, PAR, PF, EF  Was goal achieved? • Yes: continuum of strategies • No: re-enter and re-evaluate  Reassess client. If goals met - discharge; if not met - review: • treatment methods • treatment alternatives • hypotheses • revised goals

‡‡‡ The CORxE model uses ICF constructs, where BFS=Body Functions and Structures, ACT=Activity, PAR=Participation, PF=Personal Factors, and EF=Environmental Factors.
§§§ The McMaster model uses ICF constructs, where I=Impairment, A=Activity, P=Participation, and E=Environment.

Figure 5-1. The International Classification of Functioning, Disability, and Health (ICF) Framework. [Figure: the Health Condition (disease or disorder) interacting with Body Functions and Structures, Activity, and Participation, and with the contextual Environmental Factors and Personal Factors.]

Figure 5-2. The APTA Elements of Physical Therapy Practice. [Figure: Examination, Evaluation, Diagnosis, Prognosis with Plan, Intervention, Outcome.]

Figure 5-3. Practice elements derived from model comparison. [Figure: Examination, Analysis, Diagnosis, Prognosis, Planning, Intervention, and Discharge and Follow-up, underlain by Diagnostic Testing and Measurement and by Outcome Evaluation.]

Figure 5-4. ICF-Integrated Physical Therapy Clinical Decision-Making Model. [Figure: the practice elements overlying the ICF components (Health Condition coded in the ICD; b/s, A(d), P(d); personal and environmental factors), flanked by Evidence in Practice (research + clinical expertise + client preferences) and by two underlying processes: Diagnostic Testing (diagnostic - discriminative; +/- prognostic - predictive; +/- evaluative (Δ/t)) and Systematic Outcome Evaluation (identify relevant characteristics/constructs; select key indicators and evaluative measures; administer initial and repeat measures at time-points; interpret outcome paths (Δ/t) and attainment to date). Element boxes: Examination - review clinical record and other documentation, interview client (history [HD, PF, EF]), physical examination; Analysis - critical synthesis of all information, generate hypotheses; Diagnoses - differential diagnosis reduction, create problem list as ICF codes (b/s/d/e); Prognoses - identify outcome mediators, select outcome measures, predict outcome paths (Δ/t), plan reassessment time-points; Planning - define objectives (remediate, adapt, prevent, maintain), plan intervention strategies and tactics, set client-centered SMART goals, refine outcome evaluation plan; Intervention - implement strategies and tactics, monitor, reassess and revise plan; Discharge and Follow-Up - reassess and evaluate outcome paths, interpret actual vs. predicted paths, reflect on this client and future clients. *LEGEND on reverse. © Kozlowski AJ, MacDermid JC, Solomon P. Version 2.2.2, 2009 09 03.]

ICF - Integrated Clinical Decision Making Model

LEGEND:
ICF  International Classification of Functioning, Disability, and Health. Items can be coded in the ICF classification system as follows:
b/s  Body Functions and Structures
b  Body Functions
s  Body Structures
A  Activity
P  Participation
d  Disability; the construct under which Activity and Participation are jointly coded
HC  Health Condition
ICD  International Classification of Diseases, Version 10,
under which Diseases or Disorders are coded
e  Environmental factors
p  Personal factors, which are not coded in the ICF classification system but are important in physical therapy
Δ/t  Change over time
SMART  Specific, Measurable, Attainable, Relevant, and Time-framed
Intervention strategy  a category of physical therapy intervention, such as education, exercise, or electrotherapeutic modalities, believed to be needed to alleviate one or more problems
Tactic  a specific type of intervention strategy prescribed with dosage parameters of type, frequency, intensity, duration, etc. Where the intervention strategy is exercise, the tactics are the specific exercises and dosages prescribed.

5.4 REFERENCES
1. Darrah J, Loomis J, Manns P, Norton B, May L. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice. Physiother Theory Pract. 2006;22(5):239-250.
2. Cott C, Finch E, Gasner D, Yoshida K, Thomas S, Verrier M. The movement continuum theory of physical therapy. Physiother Can. 1995;47(2):87-95.
3. Dean E. Psychobiological adaptation model for physical therapy practice. Phys Ther. 1985;65(7):1061-1068.
4. American Physical Therapy Association. Guide to physical therapist practice. 2nd ed. Alexandria, VA: American Physical Therapy Association; 2003.
5. Higgs J, Jones M. Clinical reasoning in the health professions. 2nd ed. Oxford: Butterworth Heinemann; 2000.
6. Edwards I, Jones M, Carr J, Braunack-Mayer A, Jensen G. Clinical reasoning strategies in physical therapy. Phys Ther. 2004;84:312-335.
7. Wessel J, Williams R, Cole B. Physical therapy students' application of a clinical decision-making model. The Internet Journal of Allied Health Sciences and Practice. 2006;4(3):1-11.
8. Rothstein JM, Echternach JL, Riddle DL. The hypothesis-oriented algorithm for clinicians II (HOAC II): a guide for patient management. Phys Ther. 2003;83:455-470.
9. Rothstein JM, Echternach JL. Hypothesis-oriented algorithm for clinicians: a method for evaluation and treatment planning. Phys Ther. 1986;66:1388-1394.
10. Nagi SZ. A study in the evaluation of disability and rehabilitation potential: concepts, methods, and procedures. Am J Public Health Nations Health. 1964;54:1568-1579.
11. World Health Organization. International Classification of Functioning, Disability and Health: ICF. Geneva: World Health Organization; 2001.
12. Stucki G, Grimby G. Applying the ICF in medicine. J Rehabil Med. 2004;36(44 Suppl):5-6.
13. Van der Wees P, Hendriks E, Mead J, Rebbeck T. WCPT: international collaboration in clinical guideline development and implementation. Paper presented at: 15th International Congress of the World Confederation for Physical Therapy; 2007; Vancouver, Canada.
14. American Physical Therapy Association. APTA webpage for the International Classification of Functioning, Disability, and Health (ICF). Available at: http://www.apta.org/AM/Template.cfm?Section=Clinician_Resources_NEW&Template=/CM/ContentDisplay.cfm&CONTENTID=51425. Accessed July 14, 2009.
15. Childs JD, Cleland JA, Elliott JM, et al. Neck pain: clinical practice guidelines linked to the International Classification of Functioning, Disability, and Health from the Orthopaedic Section of the American Physical Therapy Association. J Orthop Sports Phys Ther. 2008;38(9):A1-A34.
Heel pain—plantar fasciitis: clinical practice guidelines linked to the International Classification of Function, Disability, and Health from the Orthopaedic Section of the American Physical Therapy Association. J Orthop Sports Phys Ther. 2008;38(4):A1-A18. 17. Cieza A, Stucki G, Weigl M, et al. ICF core sets for low back pain. J Rehabil Med. 2004;36(Suppl. 44):69-74. 18. Geyh S, Cieza A, Schouten J, et al. ICF core sets for stroke. J Rehabil Med. 2004;36(Suppl. 44):135-141. 19. Stucki G, Cieza A, Geyh S, et al. ICF core sets for rheumatoid arthritis. J Rehabil Med. 2004;36(Suppl. 44):87-93. 20. Cieza A, brockow T, Ewert T, et al. Linking health-status measurements to the International Classification of Functioning, Disability and Health. J Rehabil Med. 2002;34(5):205-210. 21. Weigl M, Cieza A, Harder M, et al. Linking osteoarthritis -specific health status measures to the International Classification of Functioning, Disability, and Health (ICF). Osteoarthritis Cartilage. 2003;11(7):519-523. 22. World Health Organization. International Classification of Diseases web page. Available at: http://www.who.int/classifications/icd/en/. Accessed July 14, 2009. 23. Finch E, Brooks D, Stratford P, Mayo N. Physical Rehabilitation Outcome Measures. Second ed. Hamilton: BC Decker; 2002.   195 24. Hall KM, Dijkers M, Whiteneck G, et al. The Craig Handicap Assessment and Reporting Technique (CHART): metric properties and scoring. Top Spinal Cord Injury Rehabil. 1998;4:16-30. 25. Ustun TB, Chatterji S, Bickenbach J, Kostanjsek N, Schneider M. The International Classification of Functioning, Disability and Health: a new tool for understanding disability and health. Disabil Rehabil. 2003;25((11-12)):565-571. 26. Penney J, MacKay-Lyons M, McDonald A. Evidence-based stroke rehabilitation: case analysis using the International Classification of Functioning, Disability, and Health framework. Physiother Can. 2007;29(1):36. 27. Leonardi M, Bickenbach J, Ustun TB, Kostanjsek N, Chatterji S, MHADIE Consortium. The definition of disability: what is in a name? Lancet. 2006;368:1219- 1221. 28. Kisner C, Colby LA. Therapeutic exercise. 5th ed. Philadelphia, PA: F. A. Davis Company; 2007. 29. MacDermid JC, Grewal R, Macintyre NJ. Using an evidence-based approach to measure outcomes in clinical practice. Hand Clin. 2009;25(1):97-111. 30. Rogosa DR, Brandt D, Zimowski M. A growth curve approach to the measurement of change. Psychol Bull. 1982;92:726-748. 31. Zumbo BD. The simple difference score as an inherently poor measure of change: Some reality, much mythology. In: Thompson B, ed. Advances in Social Science Methodology. Vol 5: JAI Press; 1999:269-304. 32. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. Second ed. Oxford: Oxford University Press; 1995.   196 33. Sackett DL. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone; 2000. 34. Rundell SD, Davenport TE, Wagner T. Physical therapist management of acute and chronic low back pain using the World Health Organization's International Classification of Functioning, Disability and Health. Phys Ther. 2009;89(1):82-90.   197 CHAPTER 6. 
OPINIONS OF PHYSICAL THERAPISTS ON OUTCOME MEASUREMENT IN A WORK DISABILITY PREVENTION PROGRAM FOR HEALTHCARE WORKERS: PILOT DATA TO INFORM A KNOWLEDGE TRANSLATION INTERVENTION****

6.1 BACKGROUND AND PURPOSE
Outcome measurement has been promoted in physical therapy (PT) to facilitate clinical decision-making.1-3 Standardized, reliable, and validated measures designed for research applications have been promoted for clinical applications.1-3 Many of these tools are self-report disability questionnaires designed to measure activity-level functioning, which may provide more meaningful indicators of change for individuals and groups than measures of impairment.2 Physical therapists appear to have mixed views about the use of such tools in clinical practice, acknowledging their importance2-5 yet not widely adopting them.3 Where adoption of self-report questionnaires had been demonstrated with a social marketing campaign, attitude towards outcome measurement was thought to be important but did not change.6

**** A version of this chapter will be submitted for publication. Kozlowski, A. J. Opinions of Physical Therapists on Outcome Measurement in a Work Disability Prevention Program for Healthcare Workers: Pilot Data to Inform a Knowledge Translation Intervention.
Despite efforts of the PT profession to promote adoption of outcome evaluation practices,1,2,4 this decision was not surprising when viewed in the context of the knowledge translation literature on promoting professional change: that dissemination methods can improve knowledge but are unlikely to change behavior.8,9   199 In contrast, however, a process has been demonstrated with which a motivated practitioner, despite organizational barriers, successfully implemented a similar measurement process.10 Our initial research study proposed to employ a health promotion and planning process called intervention mapping to identify the best- evidenced methods of knowledge translation from the research literature.11 In conjunction with an environmental scan of the target group, knowledge translation strategies can be selected based on matching the specific needs and preferences of that target group within a relevant theoretical framework.11 Although intervention mapping had not previously been applied to the development of a knowledge translation intervention, recommendations had been made to employ methods to strategically tailor such interventions to target barriers based not only on content, but also strategy type and ecological levels (e.g., individual, interpersonal, organizational, community, and societal11) specific to the target group.12 The purpose of this study was to describe the preliminary environmental scan planned to gather information from the physical therapists practicing in the PEARS program and four key stakeholder groups to facilitate development of the knowledge translation intervention through the intervention mapping process. The opinions and perspectives gathered from representatives of key stakeholder groups was then integrated with the evidence gathered through the theoretical mapping steps to develop the intervention map for the knowledge translation intervention tailored to implement the outcome evaluation system in the PEARS program. We planned to survey physical therapists about their current outcome measurement practice, attitudes towards use of standardized measures, and   200 barriers and facilitators to their use. In-depth information on the affective component of attitudes, barriers, and facilitators would be gathered through semi-structured interviews. We planned to gather perspectives of key stakeholders through interviews to ensure that barriers and facilitators associated with interpersonal, organizational, and community ecological levels were identified and addressed in the intervention map. This was critical given the complexity of the working behavioral change model developed through the intervention mapping process, which incorporated 80 determinants of professional behavior change spanning multiple ecological levels and phases of adoption or change.13  6.1.1 Environmental Scan Structure of the PEARS program (Figure 6-1) was unique in that both initial development and ongoing management had been guided by a collaboration of healthcare labor and management with a common objective of safe and healthy workers in a safe and healthy workplace. The administrative structure was defined by a provincial steering committee which had equal representation of union and management representatives, and was responsible for high-level program decisions. Regional committees were responsible for operational management of eight PEARS program sites in four regional health authorities. 
Each site employed a team with a program coordinator to provide day-to-day administrative management, a workplace assessor to address risk factors for ongoing or future injuries through assessment and intervention at the workplace (primary prevention), and a physical therapist to provide rehabilitation services to address the healthcare worker participant’s   201 impairments and activity limitations due to an injury (secondary prevention). These services were provided to participants based on priority and resource availability. The PEARS program was structured differently based on regional circumstances (Figure 6-2). For instance, Vancouver Coastal Health had one program with six sites in three Health Service Delivery Areas, Fraser Health had one program with one site at Royal Columbian Hospital, and Interior Health and the Vancouver Island Health Authority each had one program for each of three health service delivery areas, each with multiple sites. Programs structured their primary and secondary prevention services differently based on proximity to their participant populations. For example, urban facilities like Vancouver General Hospital and Royal Columbian Hospital served large localized populations, while the Port Hardy Hospital and Royal Inland Hospital served smaller facilities across large geographic areas in addition to their sites. Although healthcare workers with recent workplace musculoskeletal injuries received the highest priority for admission to the program, others with longer standing impairments and activity limitations that impacted ability to work, whether work-related or not, were also eligible depending on resource availability. Another objective was to facilitate early, safe return-to-work or stay-at-work by collaborating with the healthcare worker and his or her supervisor, along with other occupational health and disability management representatives. The key stakeholders identified for consultation in this study were union and management representatives responsible for regional administration, the PEARS coordinators responsible for direct program management, healthcare worker   202 participants served by the PEARS program, and the physical therapists who provided clinical rehabilitation services. Since the intervention phase focused on promoting a change in the clinical practice of physical therapists, we identified them as subjects rather than stakeholders, although they had a stake in evaluating PEARS program outcomes. Exploring perceptions of physical therapists practicing in the PEARS program was central to the environmental scan. Due to the potential number of physical therapists (n=26 as of January 2008), interviewing all subjects was not feasible, so most information would be gathered by survey. Purposive sampling based on survey responses would be used to select a subset of subjects for semi-structured interviews to gather detailed information on affective components of attitudes, barriers, and facilitators to adoption of the outcome evaluation system. Prior to drafting applications for the health authorities’ research ethics boards, Interior Health announced a departmental restructuring and cancellation of its PEARS program. Fraser Health, Vancouver Coastal Health, and Vancouver Island Health Authority had indicated support for the study, which would have provided access to 19 physical therapists in 17 sites (Figure 6-2). 
Exploring perceptions of stakeholder representatives was also important to the environmental scan, to provide peripheral perspectives. Interviewing the PEARS coordinators would provide insight into interpersonal and organizational influences on physical therapist practice. Interviewing the union and employer representatives would provide insight into organizational-level influences, and interviews of healthcare workers who participated in the PEARS programs would provide insight   203 into interpersonal and community-level influences. Collectively, these interviews might identify barriers and facilitators relevant to, but outside the realm of the physical therapists, that are matched to the equivalent behavior change determinants at organizational or other ecological levels  6.2 METHODS The study was approved by the research ethics boards for the University of British Columbia (Behavioral), Vancouver Coastal Health, Fraser Health and the Vancouver Island Health Authority (Appendix D). All subjects provided informed consent. A questionnaire was used to measure self-reported use of measures to evaluate clinical outcome, and barriers, and facilitators to use of self-report questionnaires. The physical therapists’ attitudes towards outcome measurement were also explored. The beliefs component of attitude towards outcome measurement was measured with a validated 10-item scale.6 The affective component of attitude was explored through three open-ended survey questions followed up with semi-structured interviews of a sample of subjects selected through purposive sampling.  6.2.1 Subject Survey All physical therapists practicing in PEARS programs were invited to complete the survey regardless of eligibility or intent to participate in the intervention phase. The objective was to solicit opinions from the entire population of the PEARS physical therapists. This approach would provide baseline information for those who   204 chose to volunteer in the intervention phase and comparison data for those who did not. This would also help to reduce the potential for selection bias if only those who volunteered for the intervention phase had completed the survey/interview process. The survey (Appendix B-1) contained five sections: a 10-item attitude beliefs scale, 10 items on current use, 14 items on barriers, three qualitative questions, and six demographic data items. Qualitative questions explored the affective component of attitudes, facilitative program changes to address organizational barriers, and facilitative resources and supports to address individual barriers. Some respondents were contacted to further explore their qualitative responses on the survey. Subject survey data were summarized with descriptive statistics and qualitative analysis. The attitudes-beliefs scale was scored according to Abrams et al with reporting of summary scores.6 Free-text responses from the three qualitative questions were transcribed for thematic analysis. Responses to questions on attitudes-affect and facilitators were used to guide purposive sampling of subjects for more in-depth interviews.  6.2.2 Subject Interviews Telephone interviews were planned based on survey responses. Comprehensive views of the affective component of attitude towards outcome measurement, barriers, and facilitators to its adoption were considered to be important. 
We wanted detailed opinions from at least one respondent reporting various affective descriptors in response to “how do you feel about measuring your   205 outcomes” (e.g., happy, sad, anxious, upset). In addition, we wanted responses from at least one respondent about each organizational- and individual-level facilitator. Additional probing of reported barriers would explore cognitive dissonance, which may be apparent from discrepancies between attitudinal beliefs, use, and affective responses. For instance, a respondent may report high importance of measuring outcomes but low or no use of standardized measures and/or having anxiety or fear about doing so.  6.2.3 Stakeholder Interviews Exploring the perceptions of stakeholders about the PEARS program outcomes and methods used to evaluate and report on those outcomes was important to the intervention mapping process. Changes to philosophy or process are unlikely if not supported by the organization or by the individual players within the organization.14-16 However, consulting all stakeholders is not feasible. Semi-structured interviews were planned to explore the perceptions of representatives from four stakeholder groups at three ecological levels (interpersonal, organizational, and community) who were thought to have the most influence on program services and decisions. Healthcare unions and the health authorities represented two key organization-level groups having both philosophical and administrative influences on the program. The PEARS coordinators represented the interpersonal level given their frequent communication with physical therapists. Healthcare worker participants represented both interpersonal (as individuals interacting with the physical therapists) and community levels (as members of the broader population receiving service from the program).   206 Consulting these four groups would provide a rich perspective on their perceptions of outcomes and use of measures. Questions were planned to explore their perceptions of the value of measured and reported outcomes for the PEARS programs in meeting program objectives, to what extent this was done, and the nature of barriers, facilitators and potential benefits, and enhancing the outcome measurement and evaluation system. The objective was to obtain a comprehensive qualitative view of the perceptions of the stakeholders who were most influential in implementing changes in the PEARS program. To manage resource and time limitations, a small sample of at least eight stakeholder interviews was planned. This would provide a convenient sample with representation of the four stakeholder groups across the three HAs, and both positive and negative views towards the current state of PEARS program evaluation. Purposive sampling would identify individuals to ensure this representation. Thematic analysis based on ongoing comparison throughout the interview process would provide an indication of saturation. In the event that this representation had not been achieved or perspectives had not been sufficiently explored with the initial interviewees, additional interviews would be added. Interview scripts with prompts appear in Appendix B-2. However, due to unforeseen circumstances this step was not conducted and no stakeholders were recruited and no data were collected.  6.3 RESULTS During the ethics application process, information became available on upcoming changes to the PEARS program in each of the health authorities.   
Changes were announced by each late in the summer of 2008, which precluded implementation of the intervention phase, but the preliminary survey and interview phase remained viable. In two health authorities, competing demands took precedence over our study, to such a degree in one that we abandoned attempts to initiate the survey. The in-house PT component of the PEARS program was cancelled in the remaining HA. Surveys were sent by mail to 10 physical therapists in two health authorities. Four surveys were returned completed, three respondents declined to participate, and three surveys were not returned. One physical therapist agreed to the interview. Due to the low response rate and the changes in the PEARS programs, no attempt was made to recruit representatives of the stakeholder groups. Results from the survey have been summarized with descriptive statistics: the belief component of attitudes towards outcome measurement in Table 6-1, current outcome measurement practices in Table 6-2, and barriers to outcome measurement in Table 6-3. Since only one interview was conducted, results were integrated with the qualitative questions without direct quotations to retain confidentiality of the respondent.

6.3.1 Attitudes-Beliefs
Results of the attitudes towards outcome measurement (beliefs) scale are summarized in Table 6-1. Questions have been reordered from highest to lowest scoring. Items were scored on a five-point scale from Strongly Disagree (0 points) to Strongly Agree (5 points). The four respondents scored similarly, with standard deviations of 0.6 or less and only 1 point separating minimum and maximum scores on each of the items. The mean (standard deviation) summary score was 30.3 (0.6) out of a possible 50 points. Of the ten questions, four clustered at the ceiling, three in the middle, and three at the floor of the scale.

6.3.2 Current Outcome Measurement Practices
Results of the current outcome measurement practices items are summarized in Table 6-2. Questions were reordered from most to least used types of standardized and non-standardized measures. Standardized impairment measures were reported as virtually always used, whereas self-report measures were almost never used. Among the modes of non-standardized measures, measures of physical performance and self-report ranked highest. Respondents reported almost always administering both standardized and non-standardized measures at intake and discharge assessments.

6.3.3 Barriers to Outcome Measurement
Results of the survey items on barriers to outcome measurement are summarized in Table 6-3. Questions have been reordered from most to least extreme. Of the barrier categories, lack of time ranked highest, lack of knowledge second, and the various others third. Individually, the highest-ranked barriers were lack of time to search the literature and/or learn about measures; lack of time to administer measures to clients, score, and interpret them; lack of equipment and resources; and lack of administrative support; mean scores for these items were 3.2/5 points. Next highest were lack of knowledge about measurement properties and lack of availability of or access to measures, with a mean score of 3.0/5 points. The least significant barriers reported were lack of support from my employer/manager, lack of support from the profession, and lack of personal interest, which were all rated as less than 2/5 points.
Responses related to barriers were more variable than responses to the current practice items. The lack of administrative support barrier and the lack of time category had standard deviations ranging from 1.3 to 1.7. However, there was little variation in the barriers of lack of support from the profession and lack of personal interest, each with a standard deviation of 0.6.

6.3.4 Qualitative Responses
Results of the three qualitative questions and the semi-structured interview were not tabulated due to the low response rate. Responses to these questions could be grouped as emotional responses, preferred measures, rationales for limited use of standardized self-report questionnaires, and barriers and facilitators to their use. Affective responses were few and included hesitance to use standardized measures with all clients, and lack of confidence in selection of intervention strategies based on reliable and reproducible measures. Preferred measures included pain scales, impairment measures, and physical performance measures, which were generally considered to provide all the clinical information necessary to make sound decisions. Self-report questionnaires were reported as useful for clients presenting with red flags, injury recurrences, prognosis for prolonged work time-loss, or barriers to progress. However, questionnaires were not reported as suitable for all clients due to the time required to administer and score them (a burden on both client and physical therapist) and the lack of clinical information above that gained from other assessment methods. In addition, some respondents thought non-standardized measures such as joint mobility tests should be considered valid as they are extensively taught in courses and used clinically. Lack of equipment due to limited funding was also reported as a barrier; however, this is more likely to relate to standardized demonstrative measures such as functional capacity evaluation systems, given that many self-report questionnaires are available for use without cost.

Administrative support was the most commonly reported facilitator, along with measurement protocols, professional collaboration, and criteria to simplify the selection of appropriate measures. Administrative support included three types: office management (e.g., client scheduling and photocopying), measurement support (e.g., tool selection, administration, scoring, and interpretation), and follow-up support. Collaboration between physical therapists was reported as important to identify and promote best practices, establish prognoses, select intervention strategies, and apply treatment protocols. Availability of measurement protocols could also facilitate use by providing expectations based on initial scores and guidance on when to administer measures across the episode of care. Having a bank of measures available with criteria to help balance objectivity and administrative burden was also reported as useful.

6.4 DISCUSSION
Despite the low response rate to the survey questionnaire, this preliminary study generated some interesting findings. Although we could not confirm whether the respondents were representative of the larger physical therapist population, the four completed surveys provide insight into the perspectives of the respondents in addition to identifying some limitations of the survey instrument. Likewise, the interview provided insight into the opinions of the responding physical therapist.
Although generalizations cannot be made from these results, they do provide insights that have not been documented previously. Our response rate (40%) was similar to that reported for previous surveys.1,3,4 A potential response bias would likely overestimate positive responses, as non-respondents might be more likely to be physical therapists who do not use standardized questionnaires. In addition to measuring self-reported use, we strove to identify the different types of measures reportedly used and the extent to which respondents collect scores from at least two time-points. The latter represents the minimum requirement to calculate a difference score and evaluate change over time. A novel insight reported by Jette and colleagues was that 49% of their respondents reported having no intention to use standardized questionnaires in the future.3 However, this was reported subsequent to our data collection. Despite the small sample, our findings appear consistent with previous studies.

6.4.1 Attitudes-Beliefs
The scores on the attitudes (beliefs) scale provide insight into the beliefs of our respondents and into the attitude scale itself. The four physical therapists from two health authorities answered almost identically on all questions. From these responses, scores on the ten items were not distributed evenly across the scale, as might be desirable,17 with seven items rated near the upper and lower anchors. Although the high-scoring questions were all worded positively (1, 4, 6 and 8) and the mid to lower-scoring questions all had negative phrasing, they do not necessarily represent desirable or undesirable beliefs. Abrams and colleagues6 did not provide a rule for rescoring questions to align them as "positive" or "negative" attitudes, thus making interpretation of the summary score questionable. Transformation by inverting the scores of the six negatively phrased items resulted in an increase of the summary score to 36.5/50. Although this is a modest increase, greater disparity in item scoring could result in substantial differences in summary scores and misinterpretation. In addition, the very high and very low-scored items may exhibit a social response bias, or the overriding influence of the profession. The scaling issues may be artifacts of the small sample, and the homogeneity of responses may reflect commonality of the physical therapists' practice areas. However, items like "health professionals should measure their outcome" and "it is not necessary to measure functional outcomes" may always be endorsed near the anchors, reflecting the social influence of the profession rather than the beliefs of the respondent. This scale may offer a valuable tool to assess the status of health professionals in the development and implementation of interventions to promote adoption of standardized outcome measurement methods. However, it may require redevelopment with the addition of a subscale to address affective components of attitude, further validation, and instruction on score transformation and interpretation prior to use on a larger scale.
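The transformation issue described above can be made concrete with a short sketch. It uses the item means from Table 6-1 and assumes that the six negatively phrased items are items 2, 3, 5, 7, 9, and 10 (consistent with the positively worded items identified above). Working from item means rather than the raw respondent-level data is an approximation, which is why the result is close to, but not exactly, the 36.5/50 reported.

    # Minimal sketch: reverse-scoring the negatively phrased attitude items
    # before summing. Item means are taken from Table 6-1; items were anchored
    # at Strongly Disagree (0) and Strongly Agree (5), so a reversed score is
    # 5 - x. Which items are "negative" is assumed from the item wording.
    MAX_SCORE = 5
    item_means = {1: 4.8, 2: 1.8, 3: 1.2, 4: 4.8, 5: 1.7,
                  6: 4.8, 7: 2.2, 8: 4.2, 9: 2.5, 10: 2.5}
    negative_items = {2, 3, 5, 7, 9, 10}

    raw_total = sum(item_means.values())
    transformed_total = sum(MAX_SCORE - mean if item in negative_items else mean
                            for item, mean in item_means.items())

    print(f"Raw summary score:         {raw_total:.1f} / 50")          # 30.5
    print(f"Transformed summary score: {transformed_total:.1f} / 50")  # 36.7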
6.4.2 Current Outcome Measurement Practices
These findings provide interesting insight into the measurement practices of a small sample of physical therapists. It is not surprising that impairment measures of any type are used more than standardized self-report questionnaires. Kirkness and Korner-Bitensky reported that pain scales were found in a random sample of clinical records far more frequently than questionnaires,18 and a survey of members of the American Physical Therapy Association (APTA) found that more than half of respondents reported not using standardized questionnaires.3 Both of these studies had substantial non-response rates, which may mean their results overestimate measured or reported use of standardized questionnaires. The finding that respondents reported selecting items from questionnaires was also consistent with other surveys, in which 18-22% of respondents reported using "home-grown" measures.3,4 Also of interest was the extent to which our respondents reported collecting both admission and discharge scores on all types of measures. A record review in another study found that only 8% of records that had a standardized pain scale had scores from both admission and discharge,18 and self-reports of almost 25% of respondents indicated that questionnaires are often not completed at discharge.3 Presumably, however, our reported high level of repeated measures applies primarily to the most-used measures, including impairment measures, pain scales, and performance measures. Further, outcome evaluation of pilot data from the PEARS programs found scores from admission and discharge administrations for less than 40% of clients on one pain scale and four self-report questionnaires.7 This is in sharp contrast to the outcome evaluation process in which a single practitioner captured data at both time-points for more than 75% of clients.10

6.4.3 Barriers to Outcome Measurement
Also not surprising was that lack of time, knowledge, and administrative support were reported as substantial barriers to using standardized measures, as this was consistent with findings from previous surveys.1,3,4 Lack of time to learn about measures and to administer, score, and interpret them was ranked highest. However, time as a barrier may be problematic in that it is not only dependent on professional requirements and organizational expectations but also subject to personal preferences in performing job tasks. Professionally, physical therapists have regulations that define what they must do in their practice, but there is room within those boundaries for individual determination of which elements of practice they emphasize (e.g., intervention over outcome measurement) and the specific methods they select. Likewise, the employer may specify requirements of the job, but there will likely be room for clinicians to apply their professional judgment. Thus, lack of time may reflect many things that are not captured in the items within our survey and those used in other studies, some of which may be affective in nature. Lack of knowledge about measurement properties and lack of administrative support also ranked high in previous surveys.1,4 Different from those studies, however, was the low rating given to lack of support from my employer/manager, lack of support from the profession, and lack of personal interest.1,4 Akin to lack of time, lack of knowledge may represent a socially acceptable proxy for more complex sentiments. As knowledge is relatively easy to change and practice behaviors are not, knowledge may offer a more socially acceptable response category. Conversely, ability to change practice behaviors has not been included as a barrier category, so physical therapists may not have thought to report it as such.
Lack of administrative support may represent a combination of these items, as support personnel could both free up time for physical therapists to perform other duties and take on duties that therapists have not adapted their practice to include. The lower rating of personal interest may also represent a social response bias. The lower ranking of support from the employer and the profession may represent local differences in the perceived management support in the PEARS programs and awareness of the efforts of the profession to promote the use of standardized outcome measurement.

6.4.4 Qualitative Responses
Qualitative responses were generally consistent with findings from previous studies on use, barriers, and facilitators,1,3,4 but provided interesting insights into the rationales reported by physical therapists for use and non-use, what constitutes validity, and what constitutes an outcome. Like many physical therapists, respondents tended to prefer impairment and demonstrative measures over self-report questionnaires due to their perceived relevance to clinical problems. This perceived relevance appeared to be associated with a belief that the face validity of impairment and demonstrative measures is sufficient to make clinical decisions. The belief that we can not only see and feel subtle differences in movement at the body structure and activity levels, but also evaluate change based on those sensations, may represent an undisclosed barrier for those professionals who do not currently use, nor plan to adopt, questionnaires to evaluate outcome.3 This belief was supported by a statement that the profession would neither use nor teach the use of an invalid measure, and that we should therefore accept such measures as valid. This reasoning is arguable in that we already accept that impairment measures, palpation skills, and observation skills have face validity. That, however, does not equate to validation of the discriminant, predictive, or evaluative capabilities of such measures. Even if such skills could provide a modestly reliable interpretation of immediate change, assessment of such change over a period of a day or longer remains suspect. Although entry-level PT education programs may be equipped to instruct on the practical limitations of assessments of impairment, observation, and palpation, this belief may continue to be propagated to the larger population of seasoned practitioners through clinical education courses, which may be less subject to curriculum review and oversight. Challenging this belief may be necessary to address a leading barrier to not only the use of standardized measures but also the use of the information they provide over successive administrations.
Relative to observation and palpation tests, respondents reported the ability to sense a change with an intervention tactic such as a manipulation or intra-muscular stimulation (dry-needling) treatment. Although absence of established evidence for validity to assess change over time for such methods does not constitute evidence of invalidation of the method, this ability may be limited to the immediate reassessment of that body structure. In other words, the concept of outcome may be context-sensitive in a manner similar to that of reliability and validity. Confusion about what constitutes an outcome may be another significant barrier to adoption of standardized questionnaires, and may have contributed to the over-reporting of ‘use of outcome measures’ in earlier surveys.1,4 There was a discrepancy between the high positive ratings given to attitudes supporting the importance of standardized outcome measurement with the low rating of use of self-report questionnaires. This difference may suggest that physical therapists who have not adopted the use of self-report questionnaires to measure outcome experience some level of cognitive dissonance. This discrepancy between   218 beliefs and practice may warrant exploration as it may offer an avenue to facilitate adoption of self-report questionnaires to measure outcomes.  6.4.5 Intervention Mapping The data gathered in this environmental scan was insufficient to complete the intervention mapping process in part due to the low response rate of physical therapists, but more so due to the absence of stakeholder input. Completing the intervention mapping process may seem moot given the administrative changes to the PEARS program but there may be benefit from completing the exercise. Since intervention mapping has not previously been applied to the development of a knowledge translation intervention, completing the exercise with incomplete local perspectives could provide at least a proof-of-concept for such an application of intervention mapping. Completing this exercise would include the development of a working model of behavioral change with respect to adopting an outcome evaluation system, developing program objectives for the individual (physical therapist) and program (PEARS) levels, and matching knowledge translation strategies to adoption barriers at both levels.  6.4.6 Limitations The primary limitation to this study is the low response rate from a small number of physical therapists that was diminished further by administrative changes prior to implementation of the study. This precludes making any generalization from the findings. However, similarities of findings from other studies indicate our novel   219 findings and interpretations warrant broader investigation. A second limitation is the absence of interview data from key stakeholders, which was critical to evaluate perceptions, barriers, and facilitators at the interpersonal, organizational, and community levels. Thus, we can only interpret these findings for these four physical therapists, and are likely missing important connections or discrepancies among the perceptions of other stakeholders across ecological levels. Nonetheless, these limitations constitute important findings in themselves. Failure to fully implement this and subsequent phases of the study constitutes a barrier to the study of health practitioner behaviors in British Columbia. 
The ecological milieu in which physical therapists practice may substantially influence practitioner behaviors, which have not been considered in previous surveys on outcome measurement practices in PT.1,3,4 Professional culture may promote resistance to implementing new practice methodologies,16 and a wide range of supra-individual determinants can influence adoption decisions in healthcare.15,19 These and other ecological influences may impact professional practice behaviors and thus potential for change. Further examination of systemic and practitioner factors that influence adoption and use of processes to measure and evaluate outcome is a priority in the era of clinically- and cost-effective PT management practice demanded as part of evidence-based practice.  6.4.7 Conclusion In preparation to implement an intervention to promote the adoption of outcome measurement and evaluation system based on use of self-report questionnaires by   220 physical therapists practicing in the PEARS programs, we conducted a pilot survey and interview of our target group regarding their beliefs and practices related to outcome measurement. We conducted a survey and interview of physical therapists from the PEARS program as an environmental scan for a preliminary phase of a knowledge translation intervention study to promote adoption of an outcome measurement and evaluation system. Due to changes in the PEARS program structure during the course of study, the intervention study was no longer viable, and the preliminary phase was not implemented in one health authority. Response to both the survey and interview of physical therapists may also have been impacted by the program changes, and consultation of four key stakeholder groups was not done. Despite the small survey and interview sample, we have drawn conclusions from our findings based on the data analysis and from the incomplete implementation. From the data, although attitudes to outcome measurement are likely an important factor in its adoption, the attitudes scale6 published by Abrams and colleagues may have limitations with range, scaling, and directionality of items. Regarding range, beliefs are included but affective items are not. The items appear to cluster closer to the ends of the scale, and to derive a summary score, some item scores may require transformation. Thus, interpretation of the raw summary score is unclear. From the qualitative data, the underlying reasons clinicians choose not to adopt outcome measurement and evaluation processes warrant exploration, particularly in light of the large proportion of physical therapists who have reported no intention to do so. Although the commonly reported barriers of lack of time,   221 support, and knowledge were reported by our respondents, these may represent proxy responses for other barriers. Respondents may experience cognitive dissonance given the discrepancy observed between their self-reported beliefs about and current use of self-report questionnaires. Exposing such barriers and identifying facilitators will be necessary to develop effective knowledge translation interventions tailored to the individual and collective needs of physical therapists with respect to promoting adoption of outcome evaluation. Other professions and other knowledge translation applications could also benefit from such inquiry.   222 Figure 6-1.  
PEARS Program Organizational Management Hierarchy

Healthcare Unions*: BCNU, HEU, HSA
Health Authorities: Interior, Fraser, Vancouver Coastal, Vancouver Island
OHSAH Board & Program
PEARS Provincial Steering Committee; Regional Committees (8)
PEARS Team: Coordinator; Occupational Therapist or Kinesiologist; Physical Therapist
HCW Participants: Nurses, Support Staff, Other Professionals, Managers
HCW's Workplace: Manager, Coworkers, Work Site, Work Practice
Occupational Health (OH): Manager, OH Physician, OH Nurse
HCW's General/Family Practice Physician

*BCNU = British Columbia Nurses Union, HEU = Health Employees Union, HSA = Health Services Association, HCW = Healthcare Worker

Figure 6-2. PEARS Program Regional Organization

Health Authority | Program Level (#) | HSDA* | Hospital Sites**
Interior | HSDA (3) | Okanagan | Kelowna GH; Vernon Jubilee H; Penticton RH
Interior | | Thompson-Cariboo | Royal Inland H (Kamloops)
Interior | | Kootenay-Boundary | Kootenay-Boundary RH (Trail); Castlegar & District H
Fraser | HA (1) | All | Royal Columbian H (New Westminster)
Vancouver Coastal | HA (1) | Vancouver | Vancouver GH
Vancouver Coastal | | Richmond | Richmond H
Vancouver Coastal | | North Shore/Coast Garibaldi | Lions Gate H; Squamish GH; St Mary's H (Sechelt); Powell River GH
Vancouver Island | HSDA (3) | South Island | Victoria GH; Royal Jubilee H (Victoria); Saltspring Island HU
Vancouver Island | | Central Island | West Coast GH (Port Alberni); Nanaimo RGH; Tofino GH
Vancouver Island | | Central Island | Campbell River & District GH; St. Joseph's GH (Comox); Port Hardy H; Gold River HC

* HSDA = Health Service Delivery Area
** H = Hospital, R = Regional, G = General, HU = Health Unit, HC = Health Centre

Table 6-1. Attitudes to Outcome Measurement

Variable | N | Min/Max | Mean (SD)
1. Health professionals should measure the outcomes of their treatment. | 4 | 4/5 | 4.8 (.5)
4. The use of validated outcome measures is clinically helpful in an increasingly medico-legal environment. | 4 | 4/5 | 4.8 (.5)
6. Health professionals should monitor patient progress using reliable and valid tools. | 4 | 4/5 | 4.8 (.6)
8. Validated outcome measures can encourage a focus on functional outcomes. | 4 | 4/5 | 4.2 (.5)
9. Available tests are inappropriate for the type of patients that I treat. | 4 | 2/3 | 2.5 (.6)
10. I do not think it is appropriate for third-party insurers or payers to tell me what to measure and how to report patient status. | 4 | 2/3 | 2.5 (.6)
7. I do not think it is appropriate for the regulatory board or professional association to tell me what to measure and how to report patient status. | 4 | 2/3 | 2.2 (.5)
2. Functional outcome tests and measures are unpopular with clients. | 4 | 1/2 | 1.8 (.5)
5. There is no need to change from the ways that we have always used to assess patients. | 3 | 1/2 | 1.7 (.6)
3. It is not necessary to measure functional outcomes. | 4 | 1/2 | 1.2 (.5)
Summary Score | 4 | 30/31 | 30.3 (.6)

Table 6-2. Current Use of Measures for Outcome Measurement, Rated from Never (1) to Always (5)

In my practice, I use Standardized … | N | Min/Max | Mean (SD)
2. impairment measures (e.g., Oxford Manual Muscle Test, or dynamometers for strength, or goniometer for range of motion with protocol such as average of 3 tries) | 4 | 4/5 | 4.5 (.6)
1. pain scales (e.g., Numeric Pain Rating Scale or McGill Pain Questionnaire) | 4 | 1/5 | 3.8 (1.9)
3. physical performance measures (e.g., 6 minute walk, submaximal treadmill, or Berg Balance Tests) | 4 | 3/4 | 3.5 (.6)
4. self-report questionnaires (e.g., Oswestry Disability Index or Disabilities of the Arm, Shoulder & Hand (DASH) questionnaire for disability, or SF-36 or EuroQOL 5D for health status or quality of life) | 4 | 1/3 | 2.0 (.8)
5. measures (from questions 1-4) at admission and discharge to measure the outcome of my treatment | 4 | 4/5 | 4.2 (.5)

To measure outcomes of client treatment in my practice, I use Unstandardized … | N | Min/Max | Mean (SD)
8. physical performance measures (e.g., individualized tests of lifting or other job demands) | 4 | 3/5 | 4.0 (.8)
9. self-report (e.g., verbal report of pain, selected items from or modified versions of questionnaires) | 4 | 3/5 | 4.0 (.8)
6. pain reports (e.g., verbal scale, or pain descriptors) | 4 | 2/5 | 3.8 (1.5)
7. impairment measures (e.g., strength test or range of motion without protocol) | 4 | 2/5 | 3.3 (1.5)
10. measures (from questions 5-9) at admission and discharge to measure the outcome of my treatment | 4 | 3/5 | 4.2 (1.0)

Table 6-3. Barriers to Use of Outcome Measures in PEARS Clinical Practice, Rated from No Barrier (1) to Extreme Barrier (5)

Lack of TIME to … | N | Min/Max | Mean (SD)
4. search literature and/or learn about measures | 4 | 2/5 | 3.2 (1.3)
5. administer with clients, score, and interpret | 4 | 2/5 | 3.2 (1.5)
6. discuss results with clients, colleagues, and/or other stakeholders | 4 | 2/5 | 2.8 (1.5)

Lack of KNOWLEDGE about … | N | Min/Max | Mean (SD)
2. measurement properties (e.g., reliability & validity, floor & ceiling effects, or detectable & important change) | 4 | 2/4 | 3.0 (.8)
3. procedures (e.g., administering, scoring, or interpreting) | 4 | 2/4 | 2.8 (1.0)
1. measures (e.g., variety of measures available, or which measure to use for specific cases) | 4 | 2/3 | 2.2 (.5)

Lack of … | N | Min/Max | Mean (SD)
10. equipment or resources | 4 | 3/4 | 3.2 (.5)
11. administrative support | 4 | 1/5 | 3.2 (1.7)
7. availability of, or accessibility to, measures | 4 | 2/4 | 3.0 (.8)
9. consensus on what measures to use | 4 | 2/4 | 2.8 (1.0)
8. compatibility with client needs | 4 | 1/4 | 2.2 (1.3)
12. support from my employer/manager | 4 | 1/3 | 1.8 (1.0)
13. support from the profession | 4 | 1/2 | 1.5 (.6)
14. personal interest | 4 | 1/2 | 1.5 (.6)

6.5 REFERENCES
1. Cole B, Finch E, Gowland C, Mayo N. Physical rehabilitation outcome measures. Toronto: Canadian Physiotherapy Association; 1994.
2. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. Second ed. Hamilton: BC Decker; 2002.
3. Jette DU, Halbert J, Iverson C, et al. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89(2):125-135.
4. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 2001;53(4):268-275.
5. Huijbregts MP, Myers AM, Kay TM, Gavin TS. Systematic outcome measurement in clinical practice: challenges experienced by physiotherapists. Physiother Can. 2002;54(1):25-31, 36.
6. Abrams D, Davidson M, Harrick J, et al. Monitoring the change: current trends in outcome measure usage in physiotherapy. Man Ther. 2006;11(1):46-53.
7. Kozlowski AJ, Yassi A. Pain and disability outcomes from a Prevention and Early Active Return-to-work Safely (PEARS) Program. Paper presented at: World Confederation for Physical Therapy Congress, 2007; Vancouver, Canada.
8. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8, Suppl 2):II-2-II-45.
9. McCluskey A, Lovarini M.
Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005;5:40. 10. Kozlowski AJ, Horner S, Dean E. Outcome Evaluation in Orthopedic Physical Therapy: Application of and Reflection on a Simple Method to Quantify Clinical Practice. In Preparation. 2010. 11. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH. Planning health promotion programs: an Intervention Mapping approach. Second ed. San Francisco: Josey-Bass; 2006. 12. Bosch MC, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract. 2006;13:161-168. 13. Kozlowski AJ, van Oostrom SH, Anema JR. Intervention Mapping to develop a knowledge translation intervention to promote adoption of a system of outcome measurement and evaluation into a work disability prevention program. In Preparation. 2010. 14. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003. 15. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations. Systematic review and recommendations. Milbank Q. 2004;82:581-629.     229 16. Timmons S. How does professional culture influence the success or failure of IT implementation in health services? In: Ashburner L, ed. Organisational Behaviour and organisational studies in health care: reflections on the future. Palgrave: Basingstoke; 2001. 17. Henerson ME, Morris LL, Fitz-Gibbon CT. How to measure attitudes. Newbury Park: SAGE Publications; 1987. 18. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257. 19. Fleuren M, Wiefferink K, Paulussen T. Determinants of innovation within health care organizations: literature review and Delphi study. Int J Qual Health Care. 2004;16(2):107-123.    230 CHAPTER 7. GENERAL DISCUSSION AND CONCLUSION  7.1  GENERAL DISCUSSION Although translating research findings into practice has long been acknowledged as fundamental to advancing healthcare, research focused on this objective has not advanced in parallel. Recognition that practitioners do not spontaneously adopt new research in the 1990’s spurred the development of a branch of scientific inquiry known as knowledge translation or implementation science. Of special interest to the physical therapy (PT) profession is the integration of formal evaluation methods, predominantly self-report standardized disability questionnaires, to measure clinical outcomes.1-3 Many such questionnaires have been developed and tested for reliability and validity, yet use of such outcome measurement and evaluation tools by physical therapists has changed little in the past 15 years.1, 3-7 This apparent failure in knowledge translation warranted investigation. In this dissertation research program, we planned to develop and implement a tailored multifaceted knowledge translation intervention based on theoretical8, 9 and procedural8, 10 models which would facilitate the uptake11, 12 of outcome evaluation processes by physical therapists in their practices.1-3 The proposal was developed under the operational framework of the Ottawa Model of Research Use (OMRU).13 However, due to changes in the organizational demands of the supporting health authorities, the viability of this project deteriorated before completion of the ethics review process. As a consequence, only the preliminary phase, an environmental   231 scan of the project, was implemented. 
Under the OMRU framework, that consequence, like any planned or unplanned event, required evaluation of the circumstances and options in order to complete this dissertation. A question raised in a later stage of this exploration led to the addition of a new chapter and a reframing of the dissertation. Deliberation on the issue of promoting the use of standardized questionnaires in physical therapy practice prompted the query of whether interventions using reflection on outcomes to influence clinical decision-making had been studied. This resulted in the literature review presented in Chapter 2. The review of practice standards for PT in the English-speaking provinces of Canada presented in Chapter 3 identified a gap between documented regulations and PT clinical decision-making models regarding outcome evaluation. Despite this gap, we demonstrated in Chapter 4 that a motivated practitioner could find value in the use of standardized questionnaires, and offered some of the clinician's insights in this regard. A review of published models14-18 identified gaps in terminology and content regarding outcome evaluation methods2 and the model of health and functioning proposed within the International Classification of Functioning, Disability, and Health (ICF).19 In Chapter 5 we proposed a model to address these gaps. Development and implementation of the proposed knowledge translation intervention was to be conducted with stakeholder collaboration, which is considered to be an important component of implementation studies.10, 20-26 However, changes in stakeholder priorities limited the viability of the study and only the preliminary phase presented in Chapter 6 was completed.

The OMRU (Figure 1-1) was used initially to develop the proposed implementation study, and thus offers a framework for this dissertation. The review of literature on reflection and implementation interventions is presented first to demonstrate the iterative nature of the framework, and to underscore the importance of retaining a critical perspective of what has been done in order to inform future directions. Chapter 2 represents a reframing of the question to explore a process that we, and possibly many other researchers, have taken for granted. In the second iteration of the OMRU, the extent to which clinical decision-making abilities can be changed for all but the most motivated practitioners becomes important. Chapters 3 to 6 contribute to the Assessment phase; however, the context has changed. Advancing this research would now require a reassessment of any local context in which the research is to be conducted, and an examination of the reflective and decision-making abilities of potential adopters, together with the receptivity of those adopters and the organizational structure to change, becomes an important consideration. In this chapter, we review the key findings from each study comprising the dissertation, provide a synthesis of these findings, and make recommendations for future research.

7.1.1 Review of interventions to promote changes in clinical reflection and decision-making in healthcare professions with special reference to standardized outcome measures

7.1.1.1 Summary of Findings
Literature on interventions to influence reflection and clinical decision-making with attention to the implementation of outcome measures was searched through two databases and a snowball strategy. Twelve articles were located and reviewed.
Although it is possible that all the relevant literature was not retrieved, two studies were found on the implementation of standardized measures into healthcare practice. Colquhoun and colleagues found no measureable benefit to the routine use of the Canadian Occupational Performance Measure in functional outcomes measured with the Functional Independence Measure27 in a geriatric unit in Canada, but documented the circumstances that may have confounded their implementation study.28 The second reported on successful development and implementation of a pilot study to promote the use of two standardized measures in the Netherlands, but did not provide a measure of success.29 The remaining studies retrieved explored the development of reflective skills in students,30-33 or the use of reflection to develop clinical abilites.34-39 Collectively these studies demonstrated the dearth of evidence on the processes of reflection in clinical decision-making and changing practice behaviors. It would appear that some students can develop reflective skills30-33 and that reflection can influence some practice behaviors,38, 39 but the evidence is weak.   234 Although reflective ability may be malleable for medical students, the magnitude of change may be small, and may be so only for those receptive to the process.32  7.1.1.2 Contributions To our knowledge, this study is the first review of intervention studies on reflection and clinical decision-making with respect to implementation of outcome measures. The study described the limited scope of research in this area, and draws attention to the need to research these aspects of healthcare practice. The study identified that change in reflection and skill development is possible, but that little is known about the individual and environmental factors that would influence success in cases outside of those studied. The review also brings attention to the assumptions that may be inherent in models like that put forth by Wainwright.40 Changes in reflective ability are thought to enhance clinical decision-making ability, but evidence to support this assumption was not found. Likewise, evidence to support the assumption that adoption of outcome measures in practice will enhance decision-making was not found. Studies on implementation of measures that were found highlight the importance of understanding the value, and hence the meaning, that prospective adopters (individual and organizational) attribute to the proposed measures. In contrast, researchers also need to understand the burden, and the complexities of the environment, and inhibiting factors to implementation in order to determine whether attempting an implementation is warranted.    235 7.1.2 Standards of Physical Therapy Practice Related to Outcomes, Measurement, and Evaluation in English-speaking Canada: A Review of Regulatory and Resource Documents  7.1.2.1 Summary of Findings In this study, we reviewed the regulatory and professional documents accessible through the internet that outlined the meaning of accepted elements of practice, constructs of function and disability, and constructs of outcome measurement and evaluation. We found that the practice elements of examination and intervention were well represented in the regulatory definitions, but others like diagnosis were less well represented. Terms associated with planning including prognosis and goal setting were rarely found and terms associated with outcome, including measurement and evaluation were absent from regulatory documents. 
Of the provincial regulatory boards, the Alberta College of Physical Therapists had print and web-based resources41-44 that reflected the most comprehensive definitions and elements of practice and constructs of outcome evaluation. The constructs of the ICF were not well represented in either the regulatory documents or professional resources. No evidence was found that the Canadian Physiotherapy Association had officially endorsed or adopted the ICF model of health as a framework for PT practice, whereas the World Confederation for Physical Therapy, an international organization of which the Canadian Physiotherapy Association is a member, has endorsed the ICF and has promoted integration of this classification system into practice.45 Resource documents   236 including the handbook on Physical Rehabilitation Outcome Measures, Second Edition,2 and the Essential Competency Profile for Physiotherapists in Canada46 both advocated the ICF as a framework for practice, but the terminology used within these professional documents did not consistently reflect this.2, 46 Where used in these two documents, the meanings of terms relating to constructs of outcome measurement and evaluation were neither clearly defined nor consistently used.46 Although certain terms were present and defined, their definitions did not appear to represent an accepted standard, as they have not permeated PT practice based on our review of regulatory and professional documents. Further, these definitions do not to reflect a concept that emerged from the psychology literature more than 15 years ago; specifically, outcome is not simply a point-in-time occurrence to be measured by a difference score, but rather a non- linear path which requires measurement of multiple time points in order to infer a trajectory.47, 48 In PT, such paths can be represented by indicators of recovery of functioning at the levels of body functions and structure, activity, and participation. The implications of the gaps between regulatory documents, elements of current practice, contemporary meanings of function and disability, and outcome measurement and evaluation, and the ideal applications of these concepts and constructs are significant. With neither a clear mandate nor meaningful guidelines regarding outcome evaluation, there is little need or responsibility for physical therapists to integrate such processes into their daily practices. These factors, may explain why outcome measurement and evaluation practices in PT in Canada have   237 not changed substantially over the past 15 years despite attempts by our professional bodies to promote such practices.1-3  7.1.2.2 Contributions To our knowledge, this study is the first descriptive review of regulatory and professional documents in the PT profession in Canada with respect to definition and use of terminology associated with the elements of practice, ICF components, and constructs of outcome measurement and evaluation. The contribution this study makes to PT practice is to identify the gaps that exist among regulation, promotion, practice, and theory within the profession. To translate evidence into practice, the gaps between the theoretical and empirical basis for changing what we do requires the support of the regulatory and professional bodies. Without such, the decision to incorporate outcome measurement and evaluation into practice is left to the discretion of individual practitioners. 
Given that a profession is by definition largely self-regulating, one can argue that outcome measurement and evaluation are fundamental not only to practice but to the profession as a whole. Conversely, although a mandated change may produce change, it may not be a desirable one because it may be implemented mechanistically without critical thought about the validity and meaningfulness of the process for specific applications (i.e., client situations). Knowledge of these gaps is pivotal in developing a collaborative and comprehensive practice model for the future.

7.1.3 Outcome Evaluation in Orthopedic Physical Therapy: Application of and Reflection on a Simple Method to Quantify Clinical Practice

7.1.3.1 Summary of Findings
This study described the evaluation of outcome data collected by a physical therapist practicing in an outpatient clinic of a large hospital and described insights gained by the clinician through the implementation and arising from the evaluation. Data were collected on four standardized questionnaires49-53 that contained items primarily at the activity level of the ICF, as well as on client characteristics and services provided. The physical therapist identified two primary barriers to her use of outcomes in her practice, namely, lack of knowledge and time. To address her lack of knowledge about relevant outcomes, she searched the literature to identify a suitable battery of measures. With respect to time constraints, she devised an administrative process, compatible with her practice pattern, for successful completion of the outcome tools by clients. The physical therapist also addressed organizational barriers inherent to her practice setting, such as the rate of client scheduling and limited resources in the facility. Following implementation of the measurement procedure, the therapist collected complete outcome data on a consecutive sample of 296 clients, with completion ratios of 65% to 92% for the four measures.

We partitioned the client outcome data into meaningful subsets for analysis. In one step, indices of change reported in the literature2, 54-57 were used to evaluate the magnitude of change over time for the four measures that were used. Indices of group change (mean change and effect size) and of individual change (reliable change proportion) were calculated for the data representing the complete and valid records for each measure. Further analysis compared the magnitude of change for differences between responders and non-responders on a series of client characteristic and service provision variables. Although this is a basic method of exploring differences, any clinician with access to a computer with spreadsheet software would be able to use it (an illustrative sketch of these indices appears below).

Reflection by the clinician highlighted two separate processes through which she gained insight into aspects of her practice. She reported developing a sense of the meaning of scores from the four questionnaires by relating point-in-time and change scores to the subjective reports of her patients' experiences outside the clinic, and to her observation of their performance within the clinic. This description fits definitions of reflection in practice.40, 58 Insight gained from the evaluation related more to her general practice. Awareness of and dissatisfaction with the lack of outcomes for patients with back and shoulder conditions provided incentives to implement published classification strategies59-61 into her diagnostic process and to pursue continuing education to develop skills in manipulation.
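The following minimal sketch makes the indices named above concrete. It is written in Python rather than a spreadsheet, uses hypothetical admission and discharge scores and an assumed reliability coefficient (none of these values come from the study), and applies one common formulation of the reliable-change criterion: a difference score compared against 1.96 times the standard error of the difference.

    # Illustrative sketch (hypothetical data): group-level change (mean change,
    # effect size standardized by the baseline SD) and individual-level change
    # (proportion of clients whose change exceeds a reliable-change threshold).
    import math
    import statistics

    admission = [42, 55, 60, 38, 47, 51, 66, 44]   # hypothetical questionnaire scores
    discharge = [20, 30, 58, 22, 25, 40, 35, 41]   # the same clients at discharge
    r_xx = 0.90                                    # assumed test-retest reliability

    changes = [d - a for a, d in zip(admission, discharge)]
    mean_change = statistics.mean(changes)
    sd_baseline = statistics.stdev(admission)
    effect_size = mean_change / sd_baseline        # one common within-group variant

    sem = sd_baseline * math.sqrt(1 - r_xx)        # standard error of measurement
    rc_threshold = 1.96 * math.sqrt(2) * sem       # reliable-change threshold for a difference score
    reliable_proportion = sum(abs(c) > rc_threshold for c in changes) / len(changes)

    print(f"Mean change: {mean_change:.1f} points")
    print(f"Effect size: {effect_size:.2f}")
    print(f"Reliable change threshold: {rc_threshold:.1f} points")
    print(f"Proportion with reliable individual change: {reliable_proportion:.0%}")

The same quantities can be produced in a spreadsheet from a column of difference scores and the published reliability of each measure.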
She reported changing her interventions to use fewer modalities and add manipulation where indicated by evidence, gaining confidence with uncertainty, and adapting how she interacted with patients. This description fits the definition of reflection on practice58 or more specifically reflection on professional experience.40     240 7.1.3.2 Contributions This study contributes to the outcome measurement and evaluation and reflection literature in a number of ways. Regarding outcome evaluation, it demonstrates a methodology that any clinician with a basic understanding of descriptive and simple inferential statistics can use to make clinical interpretations about clusters of his or her clients’ outcome data. Further, it demonstrates in addition to valid standardized measures that are suitable for a wide range of PT clinical settings,2 that simple analytic methods are available. This study also demonstrated a case where a clinician reported making the link between individual scores and aggregated data from outcomes and clinical decision- making. The clinician reported experiences consistent with two types of reflection that are thought to facilitate clinical decision-making40, 58 and making changes in her practice at the client level and in her general approach to practice. What we do not know from this study is the extent to which other practitioners might experience similar insights, or the factors that might inhibit or facilitate the connection between outcome data and clinical decision-making. This limitation may provide support for the contention of clinicians that outcome evaluation of this nature does not provide them with additional information for clinical decision-making beyond what they have gathered subjectively.4, 62 Although an outcome evaluation of such data, as we have shown, can be used to inform decisions for future clients, the opportunity to use measurement and evaluation processes to inform clinical decisions for the discharged cohorts is lacking. The effort and time required to change a pattern of practice in order to administer   241 questionnaires pre- and post-intervention may exceed the limited clinical information gained about those clients at discharge. Given clinicians will have a subjective sense of the clients’ outcomes, regardless of potential bias, clinicians may believe they know their clients outcomes. Therefore, although clinicians may be able to address barriers to outcome evaluation in PT, adoption of such methods is not likely to be effected on a large scale unless the value of the process is perceived to be greater than the perceived effort of changing practice patterns.  7.1.4 Integrating the International Classification of Functioning (ICF), Clinical Decision-Making, and Outcome Assessment into Physical Therapy Practice: A Proposed Framework  7.1.4.1 Summary of Findings The objective of this study was to address the gaps identified in the review of regulatory and professional documents. In addition to the information gained from that review, we conducted a review of PT practice models with respect to their elements of practice, ICF terminology and components, and outcome measurement and evaluation constructs. Of importance was that none of the models reviewed provided a comprehensive integration of all these parts. Based on these observations, we developed a working model to integrate the elements of practice with the ICF components and outcome evaluation constructs.     
242 7.1.4.2 Contributions The contribution this study makes to the profession is a novel practice model that explicitly integrates constructs that have not been consistently and comprehensively integrated into practice yet whose individual elements are familiar to physical therapists. Despite promotion of the ICF model by the profession,14, 45, 63 its terminology has not been integrated into the regulatory and professional documents. Despite promotion, outcome evaluation practices that employ standardized questionnaires have not been adopted into routine practice. We suspect that one barrier to the integration of these constructs into practice may be the lack of a meaningful model that subsumes these constructs. We believe that if adopted by a collaboration of professional bodies in Canada that includes the regulatory boards, professional associations, and entry-level educational institutions,46 this model may provide a foundation to facilitate communication within the profession and with external stakeholders, and facilitate common understanding of outcome evaluation constructs, and provide a basis for a common vision of planning future practice.        243 7.1.5 Opinions of Physical Therapists on Outcome Measurement in a Work Disability Prevention Program for Healthcare Workers: Pilot Data to Inform a Knowledge Translation Intervention  7.1.5.1 Summary of Findings This study provided two levels of important findings. First are the limited but interesting findings of the survey and interview and second is consequential to the failure of the proposed implementation phase of the main study. Although the subject pool for this study included ten prospective subjects of whom four responded to the survey and one to the interview, some insight was gained regarding the survey instrument and regarding barriers reported by respondents. The survey instrument included a scale to measure the beliefs component of attitude towards outcome measurement,6 and sections on measurement practices, and barriers and facilitators to the use of standardized measures. The attitude scale was adapted from Abrams and colleagues which they had used in a marketing study to promote adoption of standardized questionnaires to measure PT outcome.6 Given the low response rate, we found that scores on the items of this scale clustered near the ends of the item scales and yielded homogenous responses. Although this might reflect a homogeneous group of subjects with respect to the construct of attitude to outcome measurement, other explanations include a social response bias and problems with item construction and selection. The latter is likely an issue particularly with interpretation of the summary score for the scale given some items   244 appeared to have opposing directions in response categories. The investigators did not provide a rule pertaining to which items needed to have their scores inverted.6 Our findings were similar to other studies with respect to practitioners’ measurement practices and their reported barriers and facilitators to their use.1-3 Current practice did not include routine administration of standardized questionnaires, and non-standardized methods were preferred as they were perceived to immediately provide more clinically useful information. Issues of unreliability, invalidity, and bias in interpretation were viewed as less important relative to the subjective information they provided. These comments provide insight into the issue of meaning addressed in previous chapters. 
Regarding barriers and facilitators, we gained insight into possible undertones associated with meaning of outcome evaluation in our respondents. Although lack of time and lack of knowledge have been commonly reported as barriers,1-4, 62 the perceived lack of clinical utility relative to non-standardized tests appeared to be of greater concern to clinicians. Given that many barriers associated with lack of time and knowledge can be addressed using technological solutions, clinical utility and the meaning that outcome evaluation holds for clinicians warrant more attention. With respect to this study, we attempted to integrate the recommendations from the knowledge translation literature including the use of a theoretical model of behavioral change8 to inform development and implementation of the multifaceted knowledge translation intervention.11, 12 We planned to use an evidence-based process called Intervention Mapping8 to tailor the selection and implementation of the knowledge translation strategies based on supporting evidence from the   245 literature in conjunction with the perspectives of the target physical therapist subjects and important key stakeholders. We planned to use the Ottawa Model for Research Use13 as an implementation model for the knowledge translation study in which stakeholder consultation is a key component, and is thought to be critical in development and implementation.10, 20-26 Despite this, changes were made to the rehabilitation program we had anticipated studying which rendered the implementation phase nonviable.  7.1.5.1 Contributions The contributions this study makes to the literature rest less in the findings themselves but more in their implications. We may need to explore the concept of barriers further at multiple levels. Reported barriers such as lack of time and knowledge may not be as concrete and uniform as their simple labels suggest, but may be socially acceptable surrogates for something more complex. Barriers have been tied to meaning64, 65 and technological solutions based on evidence and theory have been abandoned.66 Individually and collectively practitioners have reasons for explicitly or subversively resisting change, and organizationally there are many factors that can facilitate such resistance. Additionally, stakeholder needs, preferences, and resources may change without notice. Researchers are advised to remain aware of the potential of such adverse events and to plan for contingencies to moderate the impact of such change.     246 7.2 SYNTHESIS OF RESEARCH FINDINGS This dissertation describes a program of research that has explored circumstances and processes thought to influence implementation of standardized questionnaires by physical therapists. We examined some of the ecological levels at which important influences to changes in practice of healthcare professionals exist. Chapters 4 and 6 focused on practitioners. In one study we demonstrated that a motivated physical therapist could overcome personal and organizational barriers to successfully implement an outcome evaluation system in her practice and reported on her reflections, insights, and consequent changes she made to her practice. The other study we identified that the barriers reported by some physical therapists not using standardized disability questionnaires might be more complex than presented in the literature. The latter study, with changes in stakeholder priorities, highlighted a risk that researchers face in conducting implementation studies. 
In Chapter 3, we explored barriers related to the mandate and meaning of outcome measurement and evaluation as part of regulation and professional promotion. We identified discrepancies between the terminology and constructs found within those documents and the literature on outcome evaluation. Representative of the ecological level of community, this gap may represent an important barrier to the systematic adoption of outcome measures into practice. In response to these multi-level gaps, we proposed a clinical decision-making model in which we integrated constructs of measurement and evaluation with an internationally accepted model of functioning and health, i.e., the ICF. This model is presented in a format familiar to physical therapists in that it builds on accepted elements of practice and a widely used PT practice framework, that used by the American Physical Therapy Association. Our model explicitly incorporates elements of outcome measurement and evaluation with components of the ICF. We also acknowledge that for this model to facilitate change in practice it must be widely adopted, and that consensus from a collaboration of the agencies that define, oversee, promote, and teach PT may facilitate adoption. Without broad appeal, this model is unlikely to be used beyond the local level, as appears to be the case for models published by Canadian authors.14, 15 With broader acceptance of a model, agencies can align their documentation and methods of communication, promotion, and education across system levels. Regulatory boards can, over time, revise their acts, regulations, bylaws, and other supporting documents using common terminology that comprehensively defines practice. The variations that exist because of provincial-level regulation could be reduced or eliminated. The national professional association and its provincial branches could likewise revise their resources to align with regulation. Entry-level and continuing education providers could adapt their course materials to provide a common framework and language for new and experienced practitioners. Researchers could likewise adapt their study proposals, resulting publications, and knowledge translation initiatives to align with the same framework used throughout the profession. Ultimately, however, the success of any model will likely depend on its having meaning to physical therapists individually and collectively, in conjunction with a professional mandate to adopt it.

Bridging levels of practice requires collaboration with other stakeholders, including health authorities, public and private healthcare providers, and perhaps other health professions. In planning and implementing the proposed study, we experienced some of the challenges that exist within the health authorities with respect to implementation of health research. Despite agreement and common interest between the researchers and key stakeholder representatives, stakeholders' needs, preferences, and resources may change without notice. While researchers work on timelines defined by compilation of supporting evidence through literature review and pilot testing, funding competition cycles, ethics review processes, and protocols defined in research proposals, healthcare administrators operate on yearly and quarterly budget cycles. Research methods should be selected to facilitate the alignment of the stakeholder's need to show results with the researcher's need for rigor and independence.
Despite the change in stakeholder priorities, we completed a portion of the preliminary phase of the implementation study. In Chapter 6 we described the use, barriers, facilitators, and the beliefs component of attitudes to outcome measurement in a small sample of physical therapists. This study identified some issues with the interpretation of the attitudes scale which, if addressed, could make it an important component of future studies of the implementation of outcome measures. These studies addressed aspects of the Practice Environment, Potential Adopters, and Evidence-Based Innovation components of the Assessment phase of the Ottawa Model for Research Use. Although the implementation did not proceed as planned, the model offered a path to reassess based on the knowledge and experience gained thus far. In doing so, we recognized that assumptions about the links between outcome measurement and clinical decision-making warrant attention. In Chapter 2, we found through a literature review that the role of reflection as a facilitator of the decision-making process has not been clearly established. In addition, the evidence for the effectiveness of implementation studies to promote adoption of outcome measures remains sparse and weak. This study represented a second iteration of the Evidence-Based Innovation component of the Ottawa Model of Research Use by disclosing a gap in evidence for the value of standardized measures. The assumption that the use of such measures can facilitate decision-making in rehabilitation2 has not been substantiated empirically.28, 29 Future research will not only need to assess the local context at the individual, organizational, and possibly system levels; a deeper and broader understanding of the processes of decision-making, reflection, and reasoning, and of how these relate to outcome measurement, may also be necessary.

7.3 STRENGTHS OF THE DISSERTATION RESEARCH

The strength of this dissertation research is inherent in the depth and breadth of the five aspects of outcome measurement in health care that were studied. Our examination incorporated disparate lines of inquiry at different ecological levels using a variety of methods. We examined interventions for reflection, decision-making, and implementation of standardized measures across four health professions. We identified gaps with respect to both the mandate and the meaning of outcome evaluation as an integral part of contemporary PT practice. At the practitioner level we described a process by which practitioners could evaluate and interpret individual and grouped outcome data from their clients. We described the insights of a motivated clinician who used reflection to independently adopt and integrate outcome measures into her practice. We also described the use of, and attitudes to the use of, outcome measures by a small sample of physical therapists and identified some issues affecting the questionnaire used to collect data. We constructed a novel model of practice that integrated these constructs with elements of practice that are well recognized by practitioners and an internationally accepted model of functioning and health, the ICF. Although a model of practice needs to be broadly accepted to facilitate change in practice, and that process will take time, our proposed model can be used to guide practice by individuals and groups.
We examined knowledge translation challenges from a variety of perspectives and incorporated behavioral theory and a knowledge translation implementation model as the basis for an intervention study. The findings from the preliminary phase provided insight into the complexity of barriers such as lack of time and knowledge to implement change, and into systemic barriers that can impact implementation of knowledge translation research. The failed implementation of the knowledge translation intervention led us to better understand the challenges faced by researchers and healthcare stakeholders, particularly with respect to cross-jurisdictional studies in British Columbia. In the process, this research used exploratory methods to reframe lines of inquiry with respect to outcome evaluation and knowledge translation. In summary, we have proposed solutions in the form of an outcome evaluation process and a novel PT practice model.

7.4 LIMITATIONS OF THE DISSERTATION RESEARCH

The exploratory nature of the descriptive, qualitative, and non-experimental designs that we used does not provide a basis for generalization. For instance, we do not know whether the search strategy used in the Chapter 2 literature review retrieved studies that comprehensively represent the literature on reflection, decision-making, and outcomes. We also do not know whether the clinical outcomes described for the motivated physical therapist in Chapter 4 represent better, equivocal, or worse outcomes than the professional norm or in comparison to other clinicians. Nor do we know the extent to which other clinicians can successfully integrate adoption of a measurement process with the ability to reflect on outcomes to enhance aspects of their clinical practice. We also do not know whether the perspectives of the respondents to the survey and interview in Chapter 6 are representative of their respective populations. Thus, this contribution is largely descriptive and designed to inform future investigations and studies.

7.5 FUTURE RESEARCH DIRECTIONS

The findings from this dissertation pose important questions and ideas for future studies. Further exploration of the nature of reflection and its role in clinical decision-making across health professions and amongst individual clinicians is indicated. Successful implementation of outcome measures may depend on a better understanding of these clinical processes. Implementation will also depend on a better understanding of the value of outcome data to clinicians. It is apparent that value is not an attribute inherent to the measure or the process. Understanding and integrating that which clinicians find meaningful into measurement processes may facilitate implementation. Success with outcome measurement could inform implementation research in other areas of healthcare. Methods for aligning the development of professional standards in regulation, promotion, and entry-level and continuing education may be valuable in providing the PT profession with a commonly accepted practice model. Facilitating collaborations among the professions and the healthcare agencies that employ health professionals (e.g., the health authorities and the insurers that pay for their services) may advance implementation science.
Within this complex milieu, how an understanding of the interactions and conflicts between mandated change and individual autonomy might guide the development of the training and support necessary to promote rapid adoption of new clinical methods is a compelling question, as is the methodology by which such adoption may be implemented. Regarding outcome evaluation methods, research is needed with respect to both effectiveness and implementation. Efforts to improve the reliability of measures are underway, but methods to improve the meaning of outcome evaluations are also warranted. For example, collecting multi-wave data and using longitudinal growth and change models to map outcome paths and trajectories may provide a meaningful representation of outcome to clinicians, which could foster adoption of measurement processes. Regardless of the methods, to be successful, efforts to promote adoption will require technological solutions that do not increase the actual or perceived burden on the clinician to enact a change in practice.

Regarding exploration of barriers to outcome evaluation, the preliminary survey and interview (Chapter 6) provided two important insights. Although attitudes to outcome measurement may be an important barrier to address, the instrument we used warrants further refinement and testing. Also, barriers commonly identified in knowledge translation research, such as lack of time and knowledge, may be more complex than they appear. These may relate more to the meaning an innovation has to health professionals, and meaning may be layered among the multiple ecological levels of the environment in which professionals practice. The issue of defining the meaning of an innovation, in this case outcome measurement and evaluation as a part of PT practice, warrants exploration.

With respect to the field of knowledge translation, research into process evaluation methods and the determinants of effective collaboration warrants further study. One of the challenges to be confronted in knowledge translation research is the burden it places on healthcare organizations and practitioners within the healthcare system. Resource demands may relegate knowledge translation research to a low priority if it is perceived as burdensome. Methods that gather rich information with little or no burden will likely be necessary for effective knowledge translation research, again favoring technological solutions. One plausible solution is the integration of knowledge translation implementation, clinical outcome evaluation, and knowledge translation process and outcome evaluation with the information technologies that support daily clinical decision-making.

7.6 POTENTIAL APPLICATIONS OF RESEARCH

This dissertation research provides a number of useful applications. The literature review in Chapter 2 identified a gap in research on interventions to facilitate reflection, decision-making, and implementation of outcome measures. The regulatory review (Chapter 3), in conjunction with the PT practice model (Chapter 5), provides a foundation for collaborative revision and alignment of policies for regulation, promotion, and education related to outcome evaluation. The outcome evaluation method and the reflections of a motivated physical therapist presented in Chapter 4 provide a practical and possibly meaningful method of evaluating outcome data.
Although this is a simple method in comparison to the computer adaptive technologies available in some regions, it can serve clinicians who lack access to leading-edge methods until those methods become generally available.

7.7 REFLECTION ON THE DISSERTATION

This dissertation represents the outcome of a long and challenging endeavor. What is not apparent in the research chapters are the insights I gained through the experience. Since I entered this graduate program, my understanding of the complexities of the systems in which health practitioners operate has both expanded and diminished. In absolute terms my knowledge, skills, and abilities in many areas have expanded. In relative terms, however, I have become increasingly aware of the expanse of that which I do not know.

I entered this program with the belief that I had a perspective on the application of standardized measures that would be meaningful to other clinicians, would demonstrate that adoption of measures could enhance clinical decision-making, and could consequently improve practice. My understanding of the complexities of promoting change in practice behaviors expanded my appreciation for why everyone doesn't "just get it." I began to understand that, even if a practitioner wanted to change something, the organizational structure around them, including financial incentives from payment structures and the influences of peers, might inhibit change. Awareness of behavioral change models suggested that change is possible, but that it might depend on the stage the individual is in, or that attitudes or self-efficacy may be the key factor. Influencing one of these aspects, or tailoring the attributes of the innovation, might be critical to address an individual circumstance. But since individuals function within complex environments, understanding the complexity is important. Interactions between the individual and their environment, then, must be of central importance to the promotion of change. Recently, I have come to see reflection as a means to understanding individuals in systems, and the importance of recognizing the different perspectives those individuals exhibit within the system.

Reflection has been defined as a critical and self-reflexive process of self-inquiry and transformation of being and becoming the practitioner you desire to be.67 Active intellectual engagement, exploration of experiences, and a change in perspective are elements of reflection,58 and reflection-in-practice and reflection-on-practice are two means by which reflection may be engaged.40, 58, 67 The change in my perspective on the use of outcome measures in PT practice through the intellectual exercise that has been my doctoral program appears to fit two of these elements. It is likely that I have engaged in reflective activities at points throughout my program, and I have done so retrospectively while drafting and redrafting this dissertation. Following are some of the insights I have gained through this experience.

The study presented in Chapter 4 was started in the first years of my program. An earlier version of the outcome evaluation was submitted for peer-reviewed publication, conditionally accepted, and subsequently rejected. I had thought that meaning could be exposed by finding informative ways of partitioning and interpreting outcome data. A recommendation to add the insights of the clinician who collected the data for Chapter 4 resulted in discussions about her experience.
From that discussion I gained an appreciation of the extent to which her experience, and her reflection-in-practice as she interpreted the results of measures for individual patients, played a role in facilitating her changes to practice. The outcome evaluation did offer her additional opportunity for reflection-on-practice, but did not expose the potential for outcome data to facilitate immediate decision-making. Consequently I now see that meaning is not inherent in the data, but in the perspective of the practitioner. In revisiting the clinician's motivation for implementing the measures and collecting the data, I recognized that meaning is generated by the individual. The challenge, then, is to discover the attributes of clinicians that will stimulate them to find meaning, with appreciation for their individual differences and the organizational influences to which they are subject, in order to match transfer strategies. It is not simply an exercise in mining data.

Chapter 6 represents the portion of the proposed study that I was able to complete, but the insight I gained from the experience is not reflected in the chapter. Much of this experience related not to the use of outcome measures, but to the way I adapted my decision-making processes to accommodate the demands and expectations of the individuals and agencies with whom I was partnered. Through this experience I also gained an appreciation for the practical value of applying an operational framework such as the OMRU, in realizing that I did not take advantage of its most valuable component. One of the valuable tools I gained in my undergraduate PT program was the perspective that critical appraisal could be applied to all aspects of my practice, an approach that ultimately led me to this graduate program. However, somewhere along the way I stopped reassessing my motivation and my path through my doctoral program. Instead I relied on the recommendations of others in remapping the path forward when I was faced with changes in my program. Had I applied the iterative feedback component of the OMRU to my personal circumstances, and not just to my research project, I might have identified other options for completing my program. In reflecting on this difficult part of my experience I am certain that, while in the moment, I had inadequately reflected on the critical factors facing me. This may have left me at risk of making other decisions in my program based on incomplete information. The insight that I gained through completing Chapter 6 was not new. The experience has, however, refocused my awareness of the value of the critical self-inquiry component of reflection, which may improve my ability to engage in reflection-in-practice in the future.

The findings reported in Chapter 3 were not surprising to me, as I was previously familiar with the gap between the PT regulations in British Columbia and the ideal of practice that integrates use of outcome measures to facilitate decision-making. Reflection on this chapter has, however, raised my awareness of the potential for reciprocal influences between slowly evolving regulatory standards of practice and the ideal practice defined by a rapidly evolving literature. Professional regulations define minimum standards of practice and must be considerate of entry-level clinicians and those approaching retirement alike. Conversely, the literature reports advances in practice as they occur, without regard for long-term efficacy, effectiveness, efficiency, or risk-benefit balance.
In the gap between these extremes, clinicians will, consciously or tacitly, find meaning in their way of approaching their clinical practice. Understanding aspects of this meaning, individually and collectively, may be another important factor to address in promoting change.

The clinical decision-making model developed in Chapter 5 has also undergone substantial revision to reach its current version. In reflecting on this chapter, I have gained insight primarily through discussions with the co-authors and others on the relative merits of the model. Its development has benefitted from the experience of researchers, clinicians, educators, and students. Yet I am more acutely aware now that for this or any practice model to be useful, it must be validated. If meaning is an attribute of an individual's experience with an event or object, a model must resonate with the majority of the members of a requisite network in order to be considered to represent the practice of that group. The purpose of publishing the model is to offer it up so that its worthiness can be tested, rather than to suggest that it is a suitable model for others.

Chapter 2 represents the result of a difficult juncture in my program. I deliberated with myself and others on how to proceed with what I saw as a search for a proverbial needle in a haystack. The literature on the constructs of reflection, reasoning, and clinical decision-making in rehabilitation, nursing, and medicine is extensive, yet I have not yet found an article on the use of clinical outcome data collected with self-report questionnaires to facilitate decision-making. I am not confident that my search was sufficiently comprehensive to suggest such a study has not yet been reported. However, I am now confident in my view that the premise that measuring outcome facilitates clinical decision-making has not been adequately substantiated, and remains insufficiently understood. I will be exploring models of decision-making in the health professions for those that more comprehensively integrate these decision-making components in order to inform further research endeavors in this arena. In addition, the exercise of reviewing studies on reflection in health care practice, and of completing this dissertation, has prompted efforts to better understand my own ways of reflecting. Journaling and written narratives were methods reported in many of the studies reviewed.30, 31, 33-35, 38 Had I the foresight to journal my experience through my doctoral program, I would have been better situated to assess my reflective abilities. However, I have come to realize that journaling is not likely to be a suitable process for me. Typically I have "aha" moments by distancing myself from a subject after having consciously thought it over, often spontaneously while cycling or hiking. As physical therapists often report lack of time as a barrier to using outcome measures,1-3 I suspect narrative writing would not be a suitable means to facilitate reflection on their adoption choice. Thus, appreciating individual differences will be valuable in exploring how clinicians use reflection in decision-making.

My understanding of and perspectives on the use of outcome measures in clinical practice evolved over the clinical and managerial phases of my PT career. This evolution has continued over my graduate program in ways I had not predicted.
I presumed I would explore methods to gather and drill deeper into outcome data to expose meaning in the use of standardized self-report questionnaires as a means to inform clinical decision-making, which would lead to changes in the ways clinicians practice. Rather, I now see clinical and professional decision-making as interactive processes applied by individuals and influenced by cognitive abilities like reflection,40, 58 reasoning,68 and knowledge,68, 69 and by a host of organizational,8 social,65 governmental, professional, and other factors that cross ecological levels. Promoting change in clinical practice is difficult,11, 12 and matching interventions to individuals in their organizational settings is thought to offer larger magnitudes of change.8, 23, 70, 71 Small changes in the use of self-report questionnaires in PT practice have been demonstrated over short terms.6 However, it remains unclear how constructs like reflection and meaning relate to the adoption decision, whether these vary if the decision is motivated by the individual or by an external mandate, what strategies are necessary to facilitate adoption, and under what circumstances adoption relates to changes in practice elements like clinical decision-making. Further, even if these conditions can be met, we do not know the extent to which changes will endure in practice, nor what resources, such as changes to the practice environment, may be necessary to facilitate retention of the change or even to promote a culture of ongoing change.

7.8 CONCLUSION

This dissertation represents a novel contribution to understanding some aspects of the implementation of outcome measures in physical therapy. Specifically, it investigated outcome measurement and evaluation in healthcare with special reference to adoption by physical therapists. The multi-factorial nature of the barriers and facilitators to implementation of a professional practice change, like adoption of a system of outcome evaluation in PT, was demonstrated. At the professional community level, gaps between PT regulatory and professional resources and the literature on outcome evaluation were observed. These gaps may contribute to clinicians' perceptions that outcome evaluation is not necessary in their practices. Despite the lack of a regulatory mandate and defined process to evaluate outcome, we described a simple process to evaluate two-wave data at the level of the individual clinician. To fill the gap between regulation and practice, we constructed a clinical decision-making model integrating familiar PT practice elements with ICF components and an outcome evaluation process. If adopted collaboratively by PT regulatory, professional, and academic organizations, such a model could help bridge the gap between the professional community and individual practitioner levels. In an effort to do this, we implemented the preliminary phase of a knowledge translation intervention study. This phase identified barriers reported by physical therapists that were similar to those reported previously1-4, 62 but disclosed underlying complexities that warrant further exploration. However, a primary objective of this phase, to gather stakeholder perspectives on barriers and facilitators to PT outcome evaluation in a work disability prevention program, was not met. Failure to meet this objective, and of the subsequent knowledge translation intervention phase, disclosed additional barriers systemic to the healthcare research system.
Specifically, collaboration with stakeholders (a necessary condition in knowledge translation research) can be compromised by imbalances in the availability of resources and demands for research, and processes such as ethics review can create delays or prevent the implementation of research. The success of knowledge translation research in healthcare depends on awareness and management of this multi-factorial environment. This dissertation provides support for the complexity of the milieu in which knowledge translation initiatives operate and offers insight into the challenges inherent in promoting professional change in physical therapy.

7.9 REFERENCES

1. Cole B, Finch E, Gowland C, Mayo N. Physical rehabilitation outcome measures. Toronto: Canadian Physiotherapy Association; 1994.
2. Finch E, Brooks D, Stratford P, Mayo N. Physical rehabilitation outcome measures. Second ed. Hamilton: BC Decker; 2002.
3. Jette DU, Halbert J, Iverson C, Miceli E, Shah P. Use of standardized outcome measures in physical therapist practice: perceptions and applications. Phys Ther. 2009;89(2):125-135.
4. Kay T, Myers A, Huijbregts M. How far have we come since 1992? A comparative survey of physiotherapists' use of outcome measures. Physiother Can. 2001;53(4):268-275.
5. Kirkness C, Korner-Bitensky N. Prevalence of outcome measure use by physiotherapists in the management of low back pain. Physiother Can. 2002;53(4):249-257.
6. Abrams D, Davidson M, Harrick J, Harcourt P, Zylinski M, J. C. Monitoring the change: current trends in outcome measure usage in physiotherapy. Man Ther. 2006;11(1):46-53.
7. Gross DP. Evaluation of a knowledge translation initiative for physical therapists treating patients with work disability. Disabil Rehabil. 2008;1(9):1-8.
8. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH. Planning health promotion programs: an Intervention Mapping approach. Second ed. San Francisco: Jossey-Bass; 2006.
9. Grol R, Bosch MC, Hulscher M, Eccles MP, Wensing M. Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. 2007;85(1):93-138.
10. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13-24.
11. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8, Suppl 2):II-2-II-45.
12. Oxman AD, Thompson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153(10):1423-1431.
13. Logan J, Harrison MB, Graham ID, Dunn K, Bissonnette J. Evidence-based pressure-ulcer practice: the Ottawa model of research use. Can J Nurs Res. 1999;31(1):37-52.
14. Darrah J, Loomis J, Manns P, Norton B, May L. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice. Physiother Theory Pract. 2006;22(5):239-250.
15. Wessel J, Williams R, Cole B. Physical therapy students' application of a clinical decision-making model. The Internet Journal of Allied Health Sciences and Practice. 2006;4(3):1-11.
16. American Physical Therapy Association. Guide to physical therapist practice. Second ed. Alexandria, VA: American Physical Therapy Association; 2003.
17. Rothstein JM, Echternach JL. Hypothesis-oriented algorithm for clinicians: a method for evaluation and treatment planning. Phys Ther. 1986;66:1388-1394.
18. Rothstein JM, Echternach JL, Riddle DL.
The hypothesis-oriented algorithm for clinicians II (HOAC II): a guide for patient management. Phys Ther. 2003;83:455-470.
19. World Health Organization. International Classification of Functioning, Disability and Health: ICF. Geneva: World Health Organization; 2001.
20. Guzman J, Yassi A, Baril R, Loisel P. Decreasing occupational injury and disability: the convergence of systems theory, knowledge transfer and action research. Work. 2008;30(3):229-239.
21. Jensen CB. Sociology, systems and (patient) safety: knowledge translations in healthcare policy. Sociol Health Illn. 2008;30(2):309-324.
22. Keown K, Van Eerd D, Irvin E. Stakeholder engagement opportunities in systematic reviews: knowledge transfer for policy and practice. J Contin Educ Health Prof. 2008;28(2):67-72.
23. Grol R, Grimshaw JM. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362:1225-1230.
24. Grol R, Wensing M, Eccles MP. Improving patient care: the implementation of change in clinical practice. Edinburgh: Elsevier Butterworth Heinemann; 2005.
25. Lavis JM. Informing policy-making with research findings. Can J Rehab. 1997;11(1):8.
26. Law M. Integrating outcomes research findings into rehabilitation practice. Can J Rehab. 1997;11(1):16-17.
27. Hamilton B, Granger C, Sherwin F, Zuielezny M, Tashman JS. A uniform national data system for medical rehabilitation. In: Fuhrer MJ, ed. Rehabilitation outcomes: analysis and measurement. Baltimore, MD: Brooks; 1987:59-74.
28. Colquhoun H, Letts L, Law M, MacDermid JC, Edwards M. Routine administration of the Canadian Occupational Performance Measure: effect on functional outcome. Aust Occup Ther J. 2010;57:111-117.
29. Stevens JGA, Beurskens AJMH. Implementation of measurement instruments in physical therapist practice: development of a tailored strategy. Phys Ther. 2010;90(6):953-961.
30. Donaghy M, Morss K. An evaluation of a framework for facilitating and assessing physiotherapy students' reflection on practice. Physiother Theory Pract. 2007;23(2):83-94.
31. Roche A, Coote S. Focus group study of student physiotherapists' perceptions of reflection. Med Educ. 2008;42(11):1064-1070.
32. Sobral DT. An appraisal of medical students' reflection-in-learning. Med Educ. 2000;34(3):182-187.
33. Carr S, Carmody D. Experiential learning in women's health: medical student reflections. Med Educ. 2006;40(8):768.
34. Green CA. Reflecting on reflection: students' evaluation of their moving and handling education. Nurse Educ Pract. 2002;2(1):4-12.
35. Burnett E, Phillips G, Ker JS. From theory to practice in learning about healthcare associated infections: reliable assessment of final year medical students' ability to reflect. Med Teach. 2008;30(6):157-160.
36. Bellman LM. Changing nursing practice through reflection on the Roper, Logan and Tierney model: the enhanced approach to action research. J Adv Nurs. 1996;24(1):129-138.
37. Auburn T, Bethel J. Hand injuries in children: a reflective case study. Emerg Nurse. 2007;40(8):768-774.
38. Peden-McAlpine C, Tomlinson PS, Forneris SG, Genck G, Maiers SJ. Evaluation of a reflective practice intervention to enhance family care. J Adv Nurs. 2005;49(5):494-501.
39. Toy EC, Harms KP, Morris JRK, Simmons JR, Kaplan AL. The effect of monthly resident reflection on achieving rotational goals. Teach Learn Med. 2009;21(1):15-19.
40. Wainwright SF, Shepard KF, Harman LB, Stephens J.
Novice and experienced physical therapist clinicians: a comparison of how reflection is used to inform the clinical decision-making process. Phys Ther. 2010;90(1):75-88.
41. College of Physical Therapists of Alberta (CPTA). CPTA practice standards for physical therapists. PDF available at: http://xur.liquidweb.com/~cptaab/sites/default/files/Practice_standards.pdf. Accessed July 14, 2009.
42. College of Physical Therapists of Alberta (CPTA). CPTA disability management of injured workers: a best practices resource guide for physical therapists. PDF available at: http://xur.liquidweb.com/~cptaab/sites/default/files/disabilitymanagement_web.pdf. Accessed July 14, 2009.
43. College of Physical Therapists of Alberta (CPTA). CPTA outcome measurement resources web page. Available at: http://www.cpta.ab.ca/outcome%20measures. Accessed July 14, 2009.
44. College of Physical Therapists of Alberta (CPTA). CPTA automobile insurance in Alberta: a reporting guide for physical therapists. PDF available at: http://xur.liquidweb.com/~cptaab/sites/default/files/Auto_insurance_reporting_web.pdf. Accessed July 14, 2009.
45. Van der Wees P, Hendriks E, Mead J, Rebbeck T. WCPT: international collaboration in clinical guideline development and implementation. Paper presented at: 15th International Congress of the World Confederation for Physical Therapy, 2007; Vancouver, Canada.
46. National Physiotherapy Advisory Group, Accreditation Council of Canadian Physiotherapy Academic Programs, Canadian Alliance of Physiotherapy Regulators, Canadian Physiotherapy Association, Canadian Universities Physical Therapy Academic Council. Essential competency profile for physiotherapists in Canada. 2004.
47. Rogosa DR, Brandt D, Zimowski M. A growth curve approach to the measurement of change. Psychol Bull. 1982;92:726-748.
48. Zumbo BD. The simple difference score as an inherently poor measure of change: some reality, much mythology. In: Thompson B, ed. Advances in social science methodology. Vol 5. JAI Press; 1999:269-304.
49. Vernon H, Mior S. The Neck Disability Index: a study of reliability and validity. J Manipulative Physiol Ther. 1991;14(7):409-415.
50. Fairbank J, Couper J, Davies J, O'Brien J. The Oswestry low back pain questionnaire. Physiotherapy. 1980;66:271-272.
51. Fairbank J, Pynsent P. The Oswestry Disability Index. Spine. 2000;25:2940-2953.
52. Binkley J, Stratford P, Lott S, Riddle D, et al. The Lower Extremity Functional Scale (LEFS): scale development, measurement properties and clinical application. Phys Ther. 1999;79:371-383.
53. Beaton DE, Katz JN, Fossel AH, Wright JG, Tarasuk V, Bombardier C. Measuring the whole or the parts? Validity, reliability, and responsiveness of the Disabilities of the Arm, Shoulder and Hand outcome measure in different regions of the upper extremity. J Hand Ther. 2001;14(2):128-146.
54. Stratford PW, Riddle DL. Assessing sensitivity to change: choosing the appropriate change coefficient. Health Qual Life Outcomes. 2005;3(1):23-37.
55. Schmitt JS, Di Fabio RP. Reliable change and minimum important difference (MID) proportions facilitated group responsiveness comparisons using individual threshold criteria. J Clin Epidemiol. 2004;57(10):1008-1018.
56. Husted JA, Cook RJ, Farewell VT, Gladman DD. Methods for assessing responsiveness: a critical review and recommendations. J Clin Epidemiol. 2000;53:459-468.
57. Kazis LE, Anderson JJ, Meenan RF. Effect sizes for interpreting change in health status. Med Care. 1989;27(Suppl 3):S178-S189.
58.
Schon D. The reflective practitioner: how professionals think in action. San Francisco, CA: Jossey-Bass Inc Publishers; 1983.
59. Childs JD, Cleland JA. Development and application of clinical prediction rules to improve decision making in physical therapist practice. Phys Ther. 2006;86(1):122-131.
60. Childs JD, Fritz JM, Flynn TW, et al. A clinical prediction rule to identify patients with low back pain most likely to benefit from spinal manipulation: a validation study. Ann Intern Med. 2004;141:920-928.
61. Millar LA, Jasheway PA, Eaton W, Christensen F. A retrospective, descriptive study of shoulder outcomes in outpatient physical therapy. J Orthop Sports Phys Ther. 2006;36(6):403-414.
62. Huijbregts MP, Myers AM, Kay TM, Gavin TS. Systematic outcome measurement in clinical practice: challenges experienced by physiotherapists. Physiother Can. 2002;54(1):25-31, 36.
63. American Physical Therapy Association. APTA webpage for the International Classification of Functioning, Disability, and Health (ICF). Available at: http://www.apta.org/AM/Template.cfm?Section=Clinician_Resources_NEW&Template=/CM/ContentDisplay.cfm&CONTENTID=51425. Accessed July 14, 2009.
64. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581-629.
65. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
66. Timmons S. How does professional culture influence the success or failure of IT implementation in health services? In: Ashburner L, ed. Organisational behaviour and organisational studies in health care: reflections on the future. Basingstoke: Palgrave; 2001.
67. Johns C. Becoming a reflective practitioner. Third ed. Oxford, UK: Wiley-Blackwell; 2009.
68. Higgs J, Jones M. Clinical reasoning in the health professions. 2nd ed. Oxford: Butterworth Heinemann; 2000.
69. Mattingly C, Fleming MH. Clinical reasoning: forms of inquiry in a therapeutic practice. Philadelphia, PA: FA Davis Co.; 1994.
70. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39:1146-1154.
71. Bosch MC, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract. 2006;13:161-168.

APPENDICES

A-1 Operational Definitions for Independent and Outcome Variables
A-2 Cues to Reflection for the Outcome Evaluation Process
B-1 Questionnaire on Attitudes, Current Practices, Barriers, and Facilitators to Measurement of Physical Therapy Outcomes
B-2 Script for Semi-Structured Interview Questions Regarding the Affective Component of Attitudes, Barriers, and Facilitators to the Use of Standardized Disability Questionnaires to Measure Physical Therapy Outcomes
C Ethics Review Certificates

Appendix A-1.
Operational Definitions for Independent and Outcome Variables

Variables are listed by category, with the variable type in parentheses, followed by the definition.

Demographic
• Age (continuous): Patient's age in years at the time of initial assessment.
• Sex (nominal): Patient's sex (male or female), reported as the proportion of females (F).

Utilization
• Date of Examination (calendar date): Date the patient was first seen by the physical therapist for that Episode of Care.
• Date of Discharge or Discontinuance (calendar date): Date the patient last attended physical therapy, and date of final outcome measure administration.
• Duration (continuous): Derived from the Examination and Discharge or Discontinuance dates; the number of calendar days in the Episode of Care, where the Date of Examination = 0 days.
• Number of Visits (continuous): Total number of visits attended by the patient in the Episode of Care, where the Examination visit = 1.

Condition
• Stage of Healing (ordinal): Category of elapsed time from onset of the condition to Initial Evaluation, where Acute < 3 weeks, Subacute > 3 weeks and < 6 months, and Chronic > 6 months.
• Discharge/Discontinuance Reason (nominal): Categorical reason for termination of the Episode of Care, based on the clinician's opinion. Discharge reasons are those where the end of the Episode of Care was determined by the PT, or in collaboration with the patient, and include Goals Met, Minimal Progress, Refer to Physician, and Service Inappropriate. Discontinuance reasons are those where only the patient decided, and may reflect the influence of other factors, including Did Not Return, Insurance Issues, Moved, and Attendance Issues (did not attend for 3 visits).

Standardized Measure
• Regional SDQ ID (nominal): Label for the ODI, NDI, DASH, or LEFS.
• Initial SDQ Score (continuous): Score at Examination, ranging from 0-100 for the ODI, NDI, and DASH, and 0-80 for the LEFS. Scores for the ODI, NDI, and DASH were inverted so that higher scores represented lower activity limitation.
• Discharge SDQ Score (continuous): Score at Discharge, ranging from 0-100 for the ODI, NDI, and DASH, and 0-80 for the LEFS. Scores for the ODI, NDI, and DASH were inverted so that higher scores represented lower activity limitation.
• SDQ Change Score (continuous): Score ranging from 0-100 (0-80 for the LEFS), derived by subtracting the score at Examination from the score at Discharge for the Regional Measure, where positive scores represent a reduction in activity limitation or an increase in activity.

Appendix A-2. Cues to Reflection in the Outcome Evaluation Process

Step 1: Post-Clinical Evaluation

• Examine reasons for discharge/discontinuance of Examination-Only patient records.
  o What proportion were discharge reasons?
    - What types of goals were met in one visit?
    - What happened to patients referred back to the physician?
      • Was the assessment accurate? Can findings be validated?
      • Did patients go on to surgery or other intervention?
      • Were patients lost to follow-up?
    - In what way were services inappropriate?
      • Were the conditions beyond the clinician's ability to treat?
      • Were alternative services appropriate, and if so, recommended?
  o What proportion were discontinuance reasons?
    - Could follow-up contact identify more specific reasons why a patient "did not return"?
    - What was the patient's perspective/belief regarding the value of the service?
    - Were there discrepancies between patient goals and clinical expectations?
    - Was there failure to establish a patient-therapist relationship?
    - To what extent do insurance issues impact continuance to intervention?
    - Were there underlying reasons for attendance issues?
      • Were patients educated and accepting of their rehabilitation role?
      • Did other life situations take priority?
      • Was the patient-therapist relationship insufficient?
      • Were there discrepancies between expectations?
  o Were there differences between Intervention patients with complete versus incomplete records? Consider the risk of a patient discontinuing when planning and implementing the Intervention and subsequent outcome measure administration.
• Higher proportions of discharge reasons, in particular goals-met and valid physician re-referral reasons, may indicate efficient service provision and effective Examination skills.

Step 2: Data Integrity

• Examine the proportion of incomplete outcome records.
  o Do patient subsets with complete and incomplete outcomes data differ?
  o Does awareness of differences guide decision-paths for future patients?
  o What discontinuance reasons are represented? Were these patients
    - Rapidly improving or independently able to implement the treatment plan?
    - Not improving or getting worse?
    - Reaching a plateau on impairment recovery and expecting more focus on capacity for activity and/or performance of participation (i.e., was the plan not responsive to the patient's higher-level functioning goals)?
    - Was the patient-therapist relationship insufficiently developed?
    - Were insurance issues a barrier to intervention?
    - Were some of these due to ceiling effects at Examination?
    - How many incomplete records had discharge reasons?
      • Was there an error in coding a discharge instead of a discontinuance reason?
      • Was an opportunity to administer the measure at Examination or Discharge missed?
      • Was time or resource availability insufficient?
• Examine the proportion of complete outcome records.
  o How many have discontinuance reasons? Should these have been coded with a discharge reason, or were scores from intermediate SDQ administrations collected prior to planned discharge?
• Higher proportions of incomplete records may result in selection bias within your data, and may influence change indices and response comparison results.

Step 3: Case Validity

• For patients expected to improve, what proportion of records had ceiling effects?
• If a subset was partitioned for patients with prognoses of deterioration, did any records have floor effects at Examination?
• Were records with floor or ceiling effects clustered to one or two measures, to specific diagnoses, or to identifiable patient groups (e.g., elite athletes)? Identifiable clusters may warrant use of other measures.

Step 4: Body Region/Selected Self-Report Questionnaires

• How are patient records distributed among SDQs?
• Are there large subsets measured with an upper extremity (i.e., shoulder joint or rotator cuff diagnoses) or lower extremity (knee joint or osteoarthritis diagnoses) SDQ that might warrant drilling down further, or use of a more specific SDQ?

Step 5: Change Indices

• What was the magnitude of change? How does it relate to the mean initial score?
• What was the effect size? What range did it fall into?
• What was the reliable change proportion?
• How do these indices compare to your intuitive assessment of outcomes for that patient sample?
• Are there any subsets with less than satisfactory recovery?

Step 6: Response Comparison

• How did the responders and non-responders compare on selected variables?
• Were distributions of any of those variables skewed?
• Explore individual records of the patients at the extremes:
  o Were reasons for extreme values apparent?
  o What was special about the patients who exceeded expectations?
  o What was notable about the patients with the worst outcomes?
  o How quickly did the clinician respond to lack of progress or signs of worsening?
• Are the variables contained in the database sufficient for the evaluation?
  o Are there other factors associated with outcome that should be added?
  o Are any variables clearly not adding to the evaluation that could be removed?

Appendix B-1. Questionnaire on Attitudes, Current Practices, Barriers, and Facilitators to Measurement of Physical Therapy Outcomes

Section 1: Attitude Towards Outcome Measurement - Beliefs Scale

Please rate your response to each question by selecting the most applicable rating, from "Disagree Strongly" to "Agree Strongly" (1 = Disagree Strongly, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Agree Strongly).

1. Health professionals should measure the outcomes of their treatment. (1 2 3 4 5)
2. Functional outcome tests and measures are unpopular with clients. (1 2 3 4 5)
3. It is not necessary to measure functional outcomes. (1 2 3 4 5)
4. The use of validated outcome measures is clinically helpful in an increasingly medico-legal environment. (1 2 3 4 5)
5. There is no need to change from the ways that we have always used to assess patients. (1 2 3 4 5)
6. Health professionals should monitor patient progress using reliable and valid tools. (1 2 3 4 5)
7. I do not think it is appropriate for the regulatory board or professional association to tell me what to measure and how to report patient status. (1 2 3 4 5)
8. Validated outcome measures can encourage a focus on functional outcomes. (1 2 3 4 5)
9. Available tests are inappropriate for the type of patients that I treat. (1 2 3 4 5)
10. I do not think it is appropriate for third-party insurers or payers to tell me what to measure and how to report patient status. (1 2 3 4 5)

Questionnaire adapted from Abrams D et al. Monitoring the change: current trends in outcome measure usage in physiotherapy. Man Ther 2006;11(1):46-53.

Section 2: Current Use

Please rate your response to each question by selecting the most applicable rating, from "Never" to "Always", based on the proportion of clients where that measure would be appropriate for use.
Response options: 1 = Never (<10%), 2 = Occasionally (10-39%), 3 = Sometimes (40-60%), 4 = Frequently (61-90%), 5 = Always (>90%).

In my practice, I use STANDARDIZED …
1. pain scales (e.g., Numeric Pain Rating Scale or McGill Pain Questionnaire) (1 2 3 4 5)
2. impairment measures (e.g., Oxford Manual Muscle Test, or dynamometers for strength, or goniometer for range of motion with protocol such as average of 3 tries) (1 2 3 4 5)
3. physical performance measures (e.g., 6 minute walk, submaximal treadmill, or Berg Balance Tests) (1 2 3 4 5)
4. self-report questionnaires (e.g., Oswestry Disability Index or Disabilities of the Arm, Shoulder & Hand (DASH) questionnaire for disability, or SF-36 or EuroQOL 5D for health status or quality of life) (1 2 3 4 5)
5. measures (from questions 1-4) at admission and discharge to measure the outcome of my treatment (1 2 3 4 5)

To measure outcomes of client treatment in my practice, I use UNstandardized …
6. pain reports (e.g., verbal scale, or pain descriptors) (1 2 3 4 5)
7. impairment measures (e.g., strength test or range of motion without protocol) (1 2 3 4 5)
8. physical performance measures (e.g., individualized tests of lifting or other job demands) (1 2 3 4 5)
9. self-report (e.g., verbal report of pain, selected items from or modified versions of questionnaires) (1 2 3 4 5)
10. measures (from questions 5-9) at admission and discharge to measure the outcome of my treatment (1 2 3 4 5)

Section 3: Barriers to Use of Outcome Measures in Your Clinical Practice

Please rate the degree each barrier presents to your use of standardized self-report questionnaires by selecting the most applicable rating, from "No Barrier" to "Extreme Barrier", based on use with appropriate clients. Response options: 1 = No Barrier, 2 = Low Barrier, 3 = Moderate Barrier, 4 = High Barrier, 5 = Extreme Barrier.

Lack of KNOWLEDGE about …
1. measures (e.g., variety of measures available, or which measure to use for specific cases) (1 2 3 4 5)
2. measurement properties (e.g., reliability & validity, floor & ceiling effects, or detectable & important change) (1 2 3 4 5)
3. procedures (e.g., administering, scoring, or interpreting) (1 2 3 4 5)

Lack of TIME to …
4. search literature and/or learn about measures (1 2 3 4 5)
5. administer with clients, score, and interpret (1 2 3 4 5)
6. discuss results with clients, colleagues, and/or other stakeholders (1 2 3 4 5)

Lack of …
7. availability of, or accessibility to, measures (1 2 3 4 5)
8. compatibility with client needs (1 2 3 4 5)
9. consensus on what measures to use (1 2 3 4 5)
10. equipment or resources (1 2 3 4 5)
11. administrative support (1 2 3 4 5)
12. support from my employer/manager (1 2 3 4 5)
13. support from the profession (1 2 3 4 5)
14. personal interest (1 2 3 4 5)

Section 4: Qualitative Questions

[Introduction] In this section a written response is required. Please respond as openly and honestly to the questions as you like. Feel free to write as much as you like. You may attach extra pages if needed.

• How do you feel (i.e., your emotional response) about using outcome measures with your patients or clients?
• Is there anything that could be changed, added to, or removed from your program that would make it easier to measure outcomes?
If yes, what kind of things? Please describe in detail, giving examples.
• Would additional training, assistance, or support make it easier for you to measure outcomes? If yes, please describe in detail, giving examples.

Section 5: Demographic Data

Name: ____________________   Program: ____________________
Age: _____   Sex: M / F
Years of Practice: _____

Education Level (check one in each column):
                                         Entry Level   Highest
o Diploma                                    ___          ___
o Bachelors                                  ___          ___
o Masters (Professional: i.e., MPT)          ___          ___
o Masters (Research: i.e., MSc)              ___          ___
o Doctorate (Professional: i.e., DPT)        ___          ___
o Doctorate (Research: i.e., PhD)            ___          ___

Appendix B-2. Script for Semi-Structured Interview Questions on the Measurement of Physical Therapy Outcomes in the Prevention and Early Active Return-to-Work Safely (PEARS) Program

[Introduction] "This interview will probably take about 20-30 minutes, and will be audio-taped and transcribed word-for-word. I am interested in finding out about outcomes of the PEARS program, and how they are measured. The questions I am going to ask you are open-ended and I would like you to feel free to talk as much or as little about this as you like. I will give you an opportunity at the end of the interview to add any additional information you want that has not been covered by the questions."

Semi-structured questions will include:
• Could you tell me about your views on outcomes for the PEARS program?
  o Prompt: Outcomes are the important results of the program. They may represent things like recovery of the part of the body that was injured, return of the ability to perform specific job tasks like lifting or patient handling, or the ability to return to work.
  o Prompt: The three examples I gave represent three different levels of health: Body Functions and Structures, whole-person Activity, and Participation in social roles. Are all of these important to your program?
  o Prompt: Which of these three levels is most commonly measured in your program?
  o Prompt: Do you know of any specific measures that are used?
  o Prompt: Do you think any gaps exist?
• Could you tell me what you think the outcomes have been for your PEARS program?
  o Prompt: What are outcomes like for individual participants?
  o Prompt: What are outcomes like for your program or across the province?
• How have you become aware of these outcomes?
  o Prompt: How are they measured?
  o Prompt: How useful are these measures as evidence of the success of the program?
  o Prompt: What do you know about the five pain and disability measures used in the PEARS pilots?
• Can you tell me about any barriers to using these measures that might exist in your program?
  o Prompt: Organizational barriers?
  o Prompt: Individual barriers?
• Do you think there would be any benefits to using standardized, validated measures in your program?
  o Prompt: For example, self-report questionnaires like the Oswestry Disability Index or the DASH questionnaire.
  o Prompt: [If so,] who might benefit?
  o Prompt: [If so,] how might they benefit?
  o Prompt: [If not,] what risks or consequences might come from using standardized measures?
• If support was provided to your program, would it make it easier to use standardized measures?
  o Prompt: What specific kinds of support would be helpful?
• What, if anything, would make using these or similar measures more desirable to you?

Appendix C. Ethics Review Certificates
