QUALITY APPRAISAL: QUANTITATIVE RESEARCH WORKSHOP SYSTEMATIC REVIEWS OF QUANTITATIVE AND QUALITATIVE EVIDENCE
LEUVEN, MAY 27-28, 2015
Floryt van Wesel & Hennie Boeije
WHY APPRAISE QUALITY?
Is the evidence gathered in one research design better than in another?
Overall: in order to value the results of a paper, we should be able to judge its quality.
In meta-analysis: meta-analysis aims to increase the precision of an effect size estimate. A meta-analysis of studies with biased results gives very precise but wrong results: garbage in, garbage out.
Slide based on Appraising the quality of RCTs for a systematic review by Ruth Garside
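The garbage-in, garbage-out point can be made concrete with a small sketch using standard inverse-variance (fixed-effect) pooling. All numbers below are hypothetical, chosen only to illustrate the mechanism: one large biased study shrinks the pooled standard error while dragging the estimate away from the truth.

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three unbiased studies scattered around a true effect of 0.2 ...
effects = [0.18, 0.22, 0.20]
ses = [0.10, 0.12, 0.11]

# ... plus one large but biased study that overestimates the effect.
effects_biased = effects + [0.60]
ses_biased = ses + [0.04]

print(fixed_effect_pool(effects, ses))            # pooled ~ 0.20
print(fixed_effect_pool(effects_biased, ses_biased))  # pooled ~ 0.48, with a *smaller* SE
```

The biased pooled estimate is more precise (smaller standard error) than the unbiased one, yet further from the truth: exactly the failure mode the slide warns about.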
Centrum Brein & Leren
QUALITY MEASURES: VALIDITY
Validity: measuring what we intend to measure.

BIAS
Systematic errors in results or inferences
Methodological (fatal) flaws
Systematic over- or underestimation of the "true" effect
[Figure: target diagram contrasting the ideal with bias]
IMPRECISION
Random error in results
Varying samples
Direction of error is random
[Figure: target diagram contrasting the ideal with imprecision]
EFFECT OF BIAS AND IMPRECISION
There is clear empirical evidence that particular flaws in study design can lead to bias.
It is usually impossible to know to what extent biases have affected the results.
Key consideration: how much confidence should we have in the results?
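A quick simulation (illustrative numbers, not from the slides) separates the two error types: imprecise estimates scatter widely but average out around the truth, while biased estimates cluster tightly in the wrong place.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.5

# Imprecision: unbiased estimates that scatter widely around the truth.
imprecise = [random.gauss(TRUE_EFFECT, 0.30) for _ in range(10_000)]

# Bias: tightly clustered estimates that are systematically too high.
biased = [random.gauss(TRUE_EFFECT + 0.25, 0.05) for _ in range(10_000)]

print(statistics.mean(imprecise), statistics.stdev(imprecise))  # mean ~ 0.50, large spread
print(statistics.mean(biased), statistics.stdev(biased))        # mean ~ 0.75, small spread
```

Averaging over many imprecise studies recovers the true effect; averaging over biased studies only makes the wrong answer look more certain.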
QUALITY CHECKLISTS
There are many, many, many checklists!
Most sensible is to choose a checklist that fits the study's design. The Cochrane Collaboration provides tools.
For randomized controlled trials (RCTs): • The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011; 343. doi: http://dx.doi.org/10.1136/bmj.d5928
For all kinds of studies: • http://ohg.cochrane.org/sites/ohg.cochrane.org/files/uploads/Risk%20of%20bias% 20assessment%20tool.pdf • http://ph.cochrane.org/sites/ph.cochrane.org/files/uploads/Unit_Eight.pdf
UNIT EIGHT: PRINCIPLES OF CRITICAL APPRAISAL
For quantitative studies, the main categories are:
A. Selection bias
B. Allocation bias
C. Confounders
D. Blinding
E. Data collection methods
F. Withdrawals and drop-outs
G. Analysis
H. Intervention integrity
Note: this document provides a separate checklist for qualitative studies.
THE FORM
Each section consists of several items.
Each item has its own rating scale.
Explanations on rating are given in the 'Dictionary' (pp. 63-69).
For each section, an overall rating is given.
SELECTION BIAS
Selection bias: when the study sample does not represent the target population for whom the intervention is intended.
Are there differences in the way participants were accepted into or rejected from the trial, and in the way interventions were assigned to individuals?
Examples: filter bias, volunteer bias.
ALLOCATION BIAS
Allocation bias: the likelihood of bias due to the allocation process in an experimental study.
How are the experimental and control groups assembled? Are the groups comparable at baseline?
Ideally, allocation is assigned at random and done by someone outside the research project.
CONFOUNDERS
A confounder is a characteristic of study subjects that is a risk factor for the outcome or is associated with exposure to the putative cause (discussed and decided on a priori).
Groups have to be comparable at baseline (link with allocation bias).
The confounding factor has to be related to both the intervention and the outcome.
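To see why a factor related to both intervention and outcome distorts results, here is a toy simulation (all numbers hypothetical): the true treatment effect is zero, yet a naive group comparison shows a large difference that disappears once the confounder is held fixed.

```python
import random
import statistics

random.seed(42)
rows = []
for _ in range(20_000):
    z = random.random() < 0.5                       # confounder, e.g. baseline severity
    treated = random.random() < (0.8 if z else 0.2)  # exposure depends on the confounder ...
    outcome = random.gauss(1.0 if z else 0.0, 0.5)   # ... and so does the outcome;
    rows.append((z, treated, outcome))               # the true treatment effect is zero

def diff(subset):
    """Mean outcome of treated minus mean outcome of controls."""
    treated = [y for z, t, y in subset if t]
    control = [y for z, t, y in subset if not t]
    return statistics.mean(treated) - statistics.mean(control)

naive = diff(rows)  # spuriously large: ~0.6
adjusted = statistics.mean(
    [diff([r for r in rows if r[0] == s]) for s in (True, False)]
)                   # near zero once the confounder is held fixed
print(round(naive, 2), round(adjusted, 2))
```

Stratifying on the confounder (here, comparing groups within each level of z and averaging) removes the spurious effect; this is why baseline comparability matters.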
BLINDING
Blinding the outcome assessors (participants, caregivers, researchers) protects against detection bias.
If assessors are blind to the experimental or control status of the participant, outcomes can be viewed as more objective.
DATA COLLECTION
Data should be collected in a valid and reliable manner to ensure objective measurements.
Some variables can be measured in an objective/direct manner. If this is not the case (self-reports, observations, etc.), the validity and reliability of the measurement instruments should be reported.
WITHDRAWAL AND DROP-OUT
Differences between the experimental and control group in the number of withdrawals are called attrition bias.
Systematic differences in withdrawal/drop-out may lead to systematic differences between the groups, making them incomparable:
Experimental vs. control groups
Participants vs. drop-outs
STATISTICAL ANALYSIS
The sample size should be sufficient (statistical power). Sample size calculations (power analysis) should be performed beforehand.
Analysis techniques should fit the research question, design, and data. This is especially important for clustered data/multilevel designs.
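The power point above can be illustrated with the textbook normal-approximation formula for comparing two means, n per group ~ 2(z_{1-alpha/2} + z_{power})^2 / d^2, where d is the standardized effect size. This is a generic sketch, not part of the checklist itself:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for detecting a standardized mean
    difference d with a two-sided, two-sample test (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect: 393 per group
```

Note how the required n grows rapidly as the expected effect shrinks, which is why reviewers check that a power analysis was done before data collection rather than after.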
INTEGRITY OF INTERVENTION
This concerns the actual implementation of the intervention: is the intervention carried out according to plan/protocol?
Intervention integrity can be measured along five dimensions:
Adherence
Exposure
Quality of delivery
Participant responsiveness
Program differentiation
WORKSHOP: QUALITY ASSESSMENT
Exemplar article:
Bicanic, I., et al. (2014). Evaluation of a Cognitive Behaviour Group Therapy (STEPS) for Adolescent Girls with Rape-Related PTSD and their Parents. European Journal of Psychotraumatology, 5, 22969.
WORKSHOP: QUALITY ASSESSMENT
1. Scan the exemplar article
2. Read the methods and results sections of the exemplar article
3. Now form groups of at most 3 people
4. Use the checklist (and manual) to assess the article's quality. Each group focuses on one section
5. Individually answer the items within your assigned section
6. Discuss within your group
7. Be prepared to present your findings