Open Collections
UBC Theses and Dissertations
Development and validation of a data abstracting tool for the evaluation of quality of care in emergency medical services
Andrusiek, Douglas Lorne
Abstract
INTRODUCTION: Peer review is common in Emergency Medical Services (EMS); however, its reliability and validity are not well studied. We sought to determine the reliability and criterion validity of EMS appropriateness and protocol-compliance evaluation performed by peer review. METHODS: Six peers retrospectively reviewed 168 patient care reports (PCRs) for severely injured trauma patients, as defined by explicit criteria. Care was rated with a tool developed through a sequential derivation process. Emergency physicians (EPs) prospectively evaluated 118 of the patients with the same tool, blind to the PCR. Inter-rater reliability was determined between paramedics for all 168 PCRs. Criterion validity was determined between the EP rating (gold standard) and the peer rating. Intra-rater reliability was determined from a repeated scoring of 50 PCRs after a washout period. The sample size was defined a priori. RESULTS: The criterion validity correlation coefficient was 0.29 (95% CI: 0.06-0.53). The inter-rater reliability kappa score was 0.35 (95% CI: 0.17-0.53). The intra-rater reliability kappa score was 0.62 (95% CI: 0.37-0.86). Three of 14 questions had very good inter-rater reliability: "Was a prehospital intervention required to manage airway?"; "Was a prehospital intervention required to cervical spine?"; and "Was a prehospital intervention required to orthopedic injuries?". Two questions had strong criterion validity: "Was a prehospital intervention required to manage airway?" and "Was a prehospital intervention required to cervical spine?". All questions asking whether the treatments provided were appropriate or compliant with written protocols had poor inter-rater reliability and criterion validity. CONCLUSION: Intra-rater reliability of retrospective peer review is good; however, both inter-rater reliability and criterion validity are frequently poor. In particular, questions rating the appropriateness or compliance of treatments have poor reliability and validity. Although more research is required to fully understand this issue, these results call into question the appropriateness of retrospective peer review of the quality of care in EMS.
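The kappa scores reported above measure agreement between raters beyond what chance alone would produce. As an illustration only (not part of the thesis, and not the authors' analysis code), Cohen's kappa for two raters over the same set of items can be computed like this, using hypothetical yes/no ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal distributions.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 patient care reports by two reviewers.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the study's inter-rater value of 0.35 is characterized as poor while the intra-rater value of 0.62 is characterized as good.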
Item Metadata
Title | Development and validation of a data abstracting tool for the evaluation of quality of care in emergency medical services
Creator | Andrusiek, Douglas Lorne
Publisher | University of British Columbia
Date Issued | 2005
Description |
INTRODUCTION: Peer review is common in Emergency Medical Services (EMS); however, its reliability and validity are not well studied. We sought to determine the reliability and criterion validity of EMS appropriateness and protocol-compliance evaluation performed by peer review. METHODS: Six peers retrospectively reviewed 168 patient care reports (PCRs) for severely injured trauma patients, as defined by explicit criteria. Care was rated with a tool developed through a sequential derivation process. Emergency physicians (EPs) prospectively evaluated 118 of the patients with the same tool, blind to the PCR. Inter-rater reliability was determined between paramedics for all 168 PCRs. Criterion validity was determined between the EP rating (gold standard) and the peer rating. Intra-rater reliability was determined from a repeated scoring of 50 PCRs after a washout period. The sample size was defined a priori. RESULTS: The criterion validity correlation coefficient was 0.29 (95% CI: 0.06-0.53). The inter-rater reliability kappa score was 0.35 (95% CI: 0.17-0.53). The intra-rater reliability kappa score was 0.62 (95% CI: 0.37-0.86). Three of 14 questions had very good inter-rater reliability: "Was a prehospital intervention required to manage airway?"; "Was a prehospital intervention required to cervical spine?"; and "Was a prehospital intervention required to orthopedic injuries?". Two questions had strong criterion validity: "Was a prehospital intervention required to manage airway?" and "Was a prehospital intervention required to cervical spine?". All questions asking whether the treatments provided were appropriate or compliant with written protocols had poor inter-rater reliability and criterion validity. CONCLUSION: Intra-rater reliability of retrospective peer review is good; however, both inter-rater reliability and criterion validity are frequently poor. In particular, questions rating the appropriateness or compliance of treatments have poor reliability and validity. Although more research is required to fully understand this issue, these results call into question the appropriateness of retrospective peer review of the quality of care in EMS.
|
Genre | |
Type | |
Language | eng
Date Available | 2009-12-11
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0092047
URI | |
Degree | |
Program | |
Affiliation | |
Degree Grantor | University of British Columbia
Graduation Date | 2005-11
Campus | |
Scholarly Level | Graduate
Aggregated Source Repository | DSpace