
UBC Faculty Research and Publications

Evidence Based Practice - Step 2 - Appraising the Evidence: Practical Session
Hoens, Alison; Leznoff, Sandy
Jan 12, 2007



EBP STEP #2 CONT'D
APPRAISING THE EVIDENCE: QUANTITATIVE ARTICLES

Alison Hoens, Clinical Assistant Professor, UBC; Clinical Coordinator, PHC
Sandy Leznoff, Clinical Instructor, UBC; Clinical Coordinator, OT, PHC

EBP - THE PROCESS
Clinical Problem: Ask, Acquire, Appraise, Apply, Act

ASK
Foreground questions - PICO:
- P - Patient/Problem
- I - Intervention
- C - Comparison
- O - Outcome

ACQUIRE
Building the search strategy:
- Identify concepts from PICO
- Decide on words (2 methods: keywords or classification)
- Boolean operators
- Truncation
- Limits: study design; age; gender; language; year of publication

APPRAISE - LEVEL OF EVIDENCE
- Meta-analyses
- Systematic reviews
- Narrative reviews
- Clinical practice guidelines
- Randomized controlled trials
- Cohort studies
- Case-control studies
- Case studies
- Expert opinion
(Modified from Cormack, 2002)

APPRAISE - STRENGTH OF EVIDENCE
- Quality of methodology
"If you are deciding whether a paper is worth reading, you should do so on the design of the methods section and not on the interest of the hypothesis, the nature or impact of the results, or the speculation in the discussion." Greenhalgh, T. (1997). How to read a paper. BMJ, 315, 243-246.

APPRAISE - PURPOSE & REVIEW OF LITERATURE
- Synthesis of previous literature (references)
- Demonstrates the 'holes'

APPRAISE - METHODS
Sample:
- Description
- Number (calculation of power)
- Randomization
- Allocation concealed

APPRAISE - METHODS (CONT'D)
Intervention:
- Detailed
- Relevant
- Trained & blinded
- Contamination & co-intervention
Outcomes:
- Detailed
- Relevant
- Reliable
- Valid

APPRAISE - STATS ANALYSIS & RESULTS
- Groups similar at baseline
- Drop-outs reported & analyzed
- Test for normality
- More than means and p values

APPRAISE - CONCLUSIONS
- Clinical implications
- Reasonable restriction of interpretation
- Limitations reported

APPLICABILITY
- Patients similar
- Benefits > harm/cost

APPRAISE - EXPLANATION
- "A mutual discussion designed to correct differences ..."
- So, now let's go practice!

WHERE ARE WE NOW?
- ASK
- ACQUIRE
- APPRAISE
- APPLY
- ACT
Pollock et al. (2000): barriers to EBP fall into 3 general categories:
- Ability
- Opportunity
- Implementation

THANK YOU!
- Charlotte Beck
- Maggie McIllwaine
- Sandy Leznoff
- Eugene Barsky
- Barbara Saint
- Jo Clark
- Marcelle Sprecher
- YOU!

STAY TUNED ...
- Session #4: Appraising Qualitative Articles (theory)
- Session #5: Appraising Qualitative Articles (practical)
- Session #6: Applying EBP - Putting It into Practice!

THE BOTTOM LINE ... WE CAN DO IT!


WORKSHEET - APPRAISAL OF A QUANTITATIVE ARTICLE

I. Study Purpose
- Clearly stated
- Phrased as a research question or hypothesis

II. Literature Review
- Provides a synthesis of appropriate previous research and the clinical importance of the topic
- Includes more primary than secondary sources
- Interprets the results of previous work
- Clearly demonstrates the 'holes' that need to be filled by this particular study, thus justifying the need for it

III. Study Design
RCT / Cohort / Single subject / Before-After / Case-Control / Cross-Sectional / Case Study

IV. Appropriateness of Design
Sample/selection bias:
- Volunteer or referral bias
- Seasonal bias
- Attention bias
Measurement/detection bias:
- Number of outcome measures
- Lack of 'masked'/'blinded' evaluation
- Recall or memory bias
Intervention/performance bias:
- Contamination
- Co-intervention
- Timing of intervention
- Site of treatment
- Different therapists

V. Sample
- Was there a detailed description (e.g. age, gender, duration of disease/disability)?
- Were the numbers reported? Were the groups equal and similar?
- Was there a description of how subjects were sampled/recruited?
- Were there appropriate inclusion/exclusion criteria?
- Was the sample size justified (calculation of power)?
- If there was more than one group, were subjects randomly allocated?
- Was allocation concealed?
- Were the ethics procedures reported?

VI. Outcomes
- Were the outcomes clearly described?
- Were they detailed sufficiently for replication?
- Was the frequency of outcome measurement described?
- Were the measures relevant to clinical outcome?
- Was reliability examined/reported and confirmed by these investigators?
- Was validity examined and reported?

VII. Intervention
- Was the intervention described in enough detail to replicate?
- Was the intervention relevant?
- Who delivered it? Were they trained? Were they blinded?
- Was the frequency appropriate?
- Was the setting appropriate?
- Was contamination/co-intervention avoided?

VIII. Stats Analysis & Results
- Was a testable hypothesis or objective stated?
- Was the sample adequately described?
- If randomized, are the baseline groups similar (table of baseline characteristics presented with statistical comparison)?
- Was there an adequate number of subjects (30+ per group)?
- Is the population normally distributed? Should a parametric or nonparametric test have been used?
- If hypothesis testing, is the result statistically significant (p < 0.05)?
- Was the change meaningful?
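The sample-size items above (justification by a power calculation, and the "30+ per group" rule of thumb) can be made concrete with a small Python sketch. It uses the standard normal-approximation formula for comparing two group means; the effect size, standard deviation, alpha and power below are hypothetical illustration values, not figures from the presentation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a mean
    difference `delta` between two groups with common standard
    deviation `sd` (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Hypothetical example: detect a 5-point difference on a scale
# where the expected standard deviation is 10 points.
print(n_per_group(delta=5, sd=10))  # 63 per group
```

At 80% power and two-sided alpha = 0.05, detecting a difference of half a standard deviation needs roughly 63 subjects per group, which is a reminder that the 30-per-group heuristic is only a rough floor, not a substitute for a reported power calculation.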
- If confidence intervals are shown, does the interval cross 0 (if presenting the mean difference between the two groups) or 1 (if showing relative risks or odds ratios)?
- Is the change what you would expect?
- Have they made too many comparisons (a fishing expedition)?
- Were the statistical methods provided in detail?
- Were the statistical methods appropriate?
- Were drop-outs reported? Were their results analyzed?

IX. Conclusions
- Were clinical implications explored?
- Were conclusions restricted to a reasonable interpretation of the results?
- Were the limitations of the study reported?

X. How can I apply the results to patient care?
- Were the study patients similar to my patient?
- Were all clinically important outcomes considered?
- Are the likely treatment benefits worth the potential harm and costs?

Adapted from:
1. Critical Review Form - Quantitative Studies, developed by the McMaster OT Evidence-Based Practice Research Group (Law et al., 1998).
2. Guyatt, G. & Rennie, D. (2002). Users' Guides to the Medical Literature. JAMA.
3. Greenhalgh, T. (1997). How to Read a Paper series. BMJ, July-September 1997.
4. Downs, SH & Black, N (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health, 52, 377-384.
5. Medlicott, M & Harris, SR (2006). A systematic review of the effectiveness of exercise, manual therapy, electrotherapy, relaxation training, and biofeedback in the management of temporomandibular disorder. Physical Therapy, 86(7), 955-973.
6. PEDro scale: http://www.pedro.fhs.usyd.edu.au/scale_item.html
7. Akobeng, AK (2005). Principles of evidence based medicine. Arch Dis Child, 90, 837-840.
8. Akobeng, AK (2005). Understanding randomized controlled trials. Arch Dis Child, 90, 840-844.
9. Akobeng, AK (2005). Understanding systematic reviews and meta-analysis. Arch Dis Child, 90, 845-848.
10. Akobeng, AK (2005). Evidence in practice. Arch Dis Child, 90, 849-852.
11. Sim, J & Reid, N (1999). Statistical inference by confidence intervals: issues of interpretation and utilization. Phys Ther, 79(2), 186-195.
12. Clancy, MJ (2002). Overview of research designs. Emerg Med J, 19, 546-549.
13. Herbert, RD (2000). How to estimate treatment effects from reports of clinical trials. I: Continuous outcomes. Aust J Physiother, 46, 229-235.
14. Herbert, RD (2000). How to estimate treatment effects from reports of clinical trials. II: Dichotomous outcomes. Aust J Physiother, 46, 303-313.

A. Hoens & P. Camp 2004 / Revised Dec 2006
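The confidence-interval item in the Stats Analysis & Results checklist above (does the interval for a mean difference cross 0?) can be illustrated with a short Python sketch. The group means, standard deviations and sample sizes are hypothetical, and the interval uses a large-sample normal approximation rather than a t-distribution, so treat it as a teaching aid, not a statistics package.

```python
from math import sqrt
from statistics import NormalDist

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, level=0.95):
    """Normal-approximation confidence interval for the
    difference in means between two independent groups."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # ~1.96 at 95%
    diff = m1 - m2
    se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)   # standard error of the difference
    return diff - z * se, diff + z * se

# Hypothetical trial: treatment mean 52 (SD 10, n 40)
# versus control mean 47 (SD 11, n 40).
lo, hi = mean_diff_ci(52, 10, 40, 47, 11, 40)
print(f"95% CI for the mean difference: ({lo:.1f}, {hi:.1f})")
print("crosses 0:", lo <= 0 <= hi)
```

Here the interval excludes 0, so the difference is statistically significant at the 5% level; whether a difference of about 5 points is clinically meaningful is a separate judgment, which is exactly the distinction the checklist draws between "statistically significant" and "meaningful".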
