I. Is OTR an Evidence-Based Practice (EBP)? Why EBPs? II. What constitutes an EBP, and what is good evidence?


I. Is OTR an Evidence-Based Practice (EBP)? Why EBPs?

II. What constitutes an EBP, and what is good evidence?

III. Current status: translating research to practice

IV. Existing dissemination & evaluative models

V. Classroom teacher’s guide: screening tool for evaluating EBPs

VI. Two EBPs for Large Group Instruction: Case Examples

“It is clear that the most powerful and efficacious interventions at our disposal will be necessary to address the unique needs of students with EBD and to offer any hope of reversing the historically poor outcomes they experience in such (critical) areas as achievement, behavior, school attendance, graduation, and postschool employment”1

1 (Cook, Landrum, Tankersley, & Kauffman, 2003, p. 346)

Defining evidence to be used in practice is problematic, and disagreement exists on important criteria for identifying effective programs and practices. Further, many special educators perceive efforts to identify and implement practices shown by research to be generally effective (i.e., evidence-based practices) as “unnecessary” and “counterintuitive”2. These difficulties in translating research knowledge into daily practice, most pronounced in the (special) education of students with EBD, obstruct efforts to improve student outcomes3.

2 (Cook, Landrum, Cook, & Tankersley, 2008)
3 (Carnine, 1997; Cook, Landrum, Tankersley, & Kauffman, 2003)

Consequently, NCLB (P.L. 107-110) and IDEA (S. 1248) contain a strong emphasis on implementing techniques shown to be effective through scientifically based research (i.e., “teaching practices that have been proven to work”)4. Changing requirements focus more than ever on accountability and the use of evidence-based practices as measured by the impact of intervention on the achievement of P-12 learners:5

• At the classroom level (e.g., RTI)
• At the school level (e.g., accreditation agencies)
• At the professional practice level (e.g., what are evidence-based practices?)

4 (U.S. Dept. of Education, 2003)
5 (Mooney, Denny, & Gunter, 2004; Gunter, 2008)

What is needed is to determine5:

I. What are Evidence-Based Practices (EBPs)?

II. How EBPs should be used and implemented; and

III. The usefulness of EBPs for improving student outcomes

5 (Cook, Tankersley, & Harjusola-Webb, 2008)

Definition: EBPs refer to “a body of scientific knowledge about treatments, prevention-intervention approaches, or service practices”; they “denote research-based, structured, and manualized practices that have been tested via randomized trials in which experimental and control groups or conditions are used to establish causation and to assess the magnitude of effects”9

9 (Hoagwood, 2003-4)

Better definition? “Practices that are informed by research, in which the characteristics and consequences of environmental variables are empirically established and the relationship directly informs what a practitioner can do to produce a desired outcome”10

10 (Dunst, Trivette, & Cutspec, 2002)

Comparison shows the distinction between efficacy and effectiveness11:

Efficacy – intervention outcomes produced by researchers under ideal conditions

Effectiveness – demonstrations of socially valid outcomes under normal conditions

11 (Walker, 2004)

(From: Lewis, Hudson, Richter, & Johnson, 2004; Cook, Landrum, Tankersley, & Kauffman, 2003)

Considering Efficacy

Experimental research is regarded as providing the most credible evidence for the efficacy of a practice12. Experimental research includes two types of methodological approaches: group experimental designs and single-subject research designs13.

12 (Cook, Tankersley, Cook, & Landrum, 2008)
13 (Odom, Brantlinger, Gersten, Horner, Thompson, & Harris, 2005)

Efficacy is examined by making comparisons between the outcomes of two groups, providing intervention to only one of the groups (in SSD, participants provide their own comparison)14. Three key components:

• Behavior assessed repeatedly using trustworthy measures
• Interventions systematically introduced and withdrawn
• Effects across baseline/intervention conditions must be analyzed for each participant

14 (Tankersley, Harjusola-Webb, & Landrum, 2008)

Using Trustworthy Measures:

Repeated (not static) measurement of operationalized behavior to provide a true representation of performance15

Observational measurement of target behaviors should include at least 316 to 517 demonstrations per condition to establish a functional relationship

Ensure that measures are trustworthy by assessing inter-observer agreement (expressed as a percentage)

15 (Kazdin, 1992)
16 (Lane et al., 2008)
17 (Horner et al., 2005)
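To make the inter-observer agreement calculation concrete, here is a minimal sketch in Python. The function name and the interval records are hypothetical, and this shows only the simple point-by-point agreement method (other IOA calculations exist):

```python
def interobserver_agreement(obs_a, obs_b):
    """Point-by-point inter-observer agreement, expressed as a percentage.

    obs_a and obs_b are parallel interval records from two independent
    observers (1 = target behavior observed in the interval, 0 = not).
    """
    if len(obs_a) != len(obs_b):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

# Two observers agree on 9 of 10 intervals -> 90.0
print(interobserver_agreement([1, 1, 0, 1, 0, 0, 1, 1, 1, 0],
                              [1, 1, 0, 1, 0, 1, 1, 1, 1, 0]))
```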

Systematized Introduction/Withdrawal of Intervention:

Establishing a causal link through purposeful & systematic manipulation is key to determining EBPs

Use of multiple conditions for comparison of performance (e.g., baseline, intervention); changes in behavior are associated with changes in intervention

Many designs are used to show a functional relationship; they differ in terms of applying, withdrawing, or altering the intervention (see handout)

Example (ABAB Reversal Design):

(From: Tankersley, Harjusola-Webb, & Landrum, 2008)

Analysis of Effects:

Changes in performance are evaluated according to the strength or magnitude of the behavior (mean, level) across conditions, and the rate of these changes (trend, latency)18

Mean – performance during the intervention condition should be meaningfully better than in baseline

Level – an abrupt change in level from the end of one condition to the beginning of the next shows a strong reaction to the intervention

Trend – an ascending trend in outcomes after implementation

Latency of Change – a shorter time frame for change indicates a clearer effect of the intervention

18 (Kazdin, 1992)
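Two of these features (mean change and level change) plus a simple trend line can be computed directly from the phase data. The sketch below uses hypothetical on-task percentages; in single-subject work these numbers supplement, not replace, visual analysis:

```python
def phase_summary(baseline, intervention):
    """Mean change across conditions and level change at the phase line."""
    mean_change = (sum(intervention) / len(intervention)
                   - sum(baseline) / len(baseline))
    # Level: jump from the last baseline point to the first intervention point
    level_change = intervention[0] - baseline[-1]
    return mean_change, level_change

def trend(series):
    """Least-squares slope across sessions (an ascending trend is > 0)."""
    n = len(series)
    x_bar = (n - 1) / 2
    y_bar = sum(series) / n
    num = sum((i - x_bar) * (y - y_bar) for i, y in enumerate(series))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den

# Hypothetical percent on-task across an A-B phase change
baseline = [20, 25, 22, 24]
intervention = [55, 60, 58, 65, 62]
mean_change, level_change = phase_summary(baseline, intervention)
print(mean_change, level_change, trend(intervention))
```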

A Word on Effect Size:

Effect size denotes the magnitude of change shown by an intervention; for a single study, it reports the standardized difference between group means19

Effect size “…enables readers to evaluate the stability of results across samples, designs, and analyses.”20

Categorizing values of effect-size estimates21:
Small effect = .20
Medium effect = .50
Large effect = .80

19 (Banda & Therrien, 2008)
20 (Wilkinson & The APA Task Force on Statistical Inference, 1999)
21 (Cohen, 1988)
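As a sketch of the single-study case described above, the standardized mean difference can be computed with a pooled standard deviation. The scores are hypothetical, and this shows only the basic Cohen's d (several effect-size variants exist):

```python
import math

def cohens_d(group1, group2):
    """Standardized difference between two group means (Cohen's d),
    using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def interpret(d):
    """Cohen's (1988) benchmarks: .20 small, .50 medium, .80 large."""
    d = abs(d)
    if d >= 0.80:
        return "large"
    if d >= 0.50:
        return "medium"
    return "small"

# Hypothetical post-test scores, intervention group vs. control group
d = cohens_d([85, 90, 88, 92, 87], [80, 82, 79, 84, 81])
print(round(d, 2), interpret(d))
```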

Use of Single-Subject Research to Establish a Practice as Evidence-Based22:

A minimum of five single-subject studies that meet minimally acceptable methodological criteria and document experimental control have been published in peer-reviewed journals;

The studies are conducted by at least three different researchers across at least three different geographical locations; and

The five or more studies include a total of at least 20 participants.

22 (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005)
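The three quantitative criteria lend themselves to a simple check. In this sketch each study is reduced to a hypothetical (researcher, location, participants) tuple; methodological quality itself is assumed to have been judged separately by the reviewer:

```python
def meets_horner_criteria(studies):
    """studies: one (researcher, location, n_participants) tuple per
    methodologically acceptable, peer-reviewed single-subject study."""
    researchers = {r for r, _, _ in studies}
    locations = {loc for _, loc, _ in studies}
    total_participants = sum(n for _, _, n in studies)
    return (len(studies) >= 5          # at least five studies
            and len(researchers) >= 3  # at least three researchers
            and len(locations) >= 3    # at least three locations
            and total_participants >= 20)

# Hypothetical evidence base: 5 studies, 3 researchers, 4 sites, 22 participants
studies = [("Smith", "Ohio", 4), ("Smith", "Kentucky", 5),
           ("Jones", "Oregon", 3), ("Lee", "Ohio", 4), ("Lee", "Texas", 6)]
print(meets_horner_criteria(studies))  # -> True
```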

Teachers often report that articles are difficult to translate into usable terms, especially given their limited time23

Further, teachers remain skeptical of scientific evidence and/or its relevance to the children in their care24

Teachers generally rated more informal sources of information (e.g., their own colleagues or workshops) as more trustworthy and usable than traditional sources of research-based information (i.e., college courses, professional journals)25

23 (Retrieved: www.cec.sped.org)
24 (Forness, 2005)
25 (Landrum, Cook, Tankersley, & Fitzgerald, 2002)

Example – the discovery of a cure for scurvy by a British Naval surgeon in the mid-1700s; over a century passed before the dietary treatment was fully adopted by fleets!26

Practitioners must become better consumers of research27

They must be able to discern fluff and feel-good approaches from practices shown to be effective by application of a valid scientific evaluation model28

26 (Walker, 2004)
27 (Lewis, Hudson, Richter, & Johnson, 2004)
28 (Cook, Tankersley, Landrum, & Kauffman, 2003)

Questions concerning EBPs in the classroom29:

• How do teachers access EBPs?
• Do teachers use the methods correctly?
• How can teachers meld EBPs and the craft of teaching?

29 (Retrieved: www.cec.sped.org)

Next Hurdle: Getting EBPs to Teachers (Access)

Teachers need to have the time, tools, and resources to implement the practices – having information available is only part of the solution. It needs to be in a format teachers can grasp quickly and easily. Teachers say they need information that tells “what practice is, for whom it is effective, how to implement the practice, and how practice is rated”30

30 (Retrieved: www.cec.sped.org)

The What Works Clearinghouse – Established by the U.S. Department of Education’s Institute of Education Sciences to provide a central, independent source of scientific evidence of what works in education.
http://ies.ed.gov/ncee/wwc/

The Promising Practices Network – Highlights programs and practices that credible research indicates are effective in improving outcomes for children, youth, and families.
http://www.promisingpractices.net/

Blueprints for Violence Prevention – National violence prevention initiative to identify programs effective in reducing adolescent violent crime, aggression, delinquency, and substance abuse.
http://www.colorado.edu/cspv/blueprints/index.html

The International Campbell Collaboration – Registry of systematic reviews of evidence on the effects of interventions in social, behavioral, and educational arenas.
http://www.campbellcollaboration.org/Fralibrary.html

Social Programs that Work – Series of papers developed by the Coalition for Evidence-Based Policy on social programs that are backed by rigorous evidence of effectiveness.
www.excelgov.org/displayContent.asp?Keyword=prppcSocial

Excellent resource for evaluating evidence of efficacious interventions!

http://www.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf

Another Hurdle: How Are EBPs Used? (Fidelity of Implementation)

Intervention approaches that are universal in nature and involve a “standard dosage” are easier to deliver and have a higher likelihood of making it into standard school practice31

Identifying effective practices is only meaningful to the extent that they are applied (with fidelity) with children and youth with disabilities32

31 (Walker, 2004)
32 (Cook & Schirmer, 2006)

(Yet) Another Hurdle: How Can Teachers Meld EBPs and the Craft of Teaching? (Social Validation)

Emphasis on improving the rigor of research practices while conducting socially valid studies in applied settings33

EBPs should interface with the professional wisdom of teachers to maximize the outcomes of students with disabilities34

This entails: (a) judiciously selecting EBPs to implement, and (b) adapting EBPs to meet the individual needs and goals of specific students35

33 (Baer, Wolf, & Risley, 1968, 1987)
34 (Whitehurst, 2002)
35 (Cook, Tankersley, & Harjusola-Webb, 2008)

The ECF>EBP Rubric: A guide to selecting and evaluating EBPs (see handout)

Choral Responding

What is it? Choral responding occurs when all students in the class verbally respond in unison to a teacher question (Heward, 1994; Heward et al., 1996). Examples of choral responding are when students simultaneously respond, “16” after the teacher asks the entire class, “What is 4 times 4?”, or when the students say, “Red” in response to the teacher’s question, “What is the color of a stop sign?”

Why does it work? During choral responding, teachers prompt students to respond in unison at a brisk pace, dramatically increasing attentiveness and the number of responses. Students are less likely to be off-task, to passively watch, or to distract their peers because all students are busy answering questions.

How to implement. In order to successfully implement choral responding, teachers should:

• Develop questions with one correct answer.
• Ask questions with short (one- to three-word) answers.
• Provide a wait time (thinking pause) of 3 seconds between asking the question and prompting the students to respond.
• Use predictable phrases or clear signals (e.g., “Get ready”) to cue students to respond.
• Present questions at a fast, lively pace.

Percentage of Nonoverlapping Data (PND)

Proportion of data points during the intervention that exceed the most extreme value during the baseline condition

Example: if 9 of 12 treatment data points exceed the highest baseline point, PND = 75%

Interpreting PND scores:
Above 90% = very convincing data
70–90% = convincing data
50–70% = questionable data
Below 50% = not convincing

(Scruggs & Mastropieri, 2001)
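A minimal sketch of the PND calculation (the function name and data are hypothetical; for behaviors expected to decrease, the comparison flips to the lowest baseline point):

```python
def pnd(baseline, intervention, expect_increase=True):
    """Percentage of nonoverlapping data.

    For a behavior expected to increase, PND is the percentage of
    intervention data points that exceed the highest baseline point.
    """
    if expect_increase:
        extreme = max(baseline)
        nonoverlap = sum(y > extreme for y in intervention)
    else:
        extreme = min(baseline)
        nonoverlap = sum(y < extreme for y in intervention)
    return 100.0 * nonoverlap / len(intervention)

# 9 of 12 intervention points exceed the highest baseline point -> 75.0
print(pnd([3, 5, 4], [6, 7, 8, 5, 9, 10, 6, 4, 7, 8, 9, 5]))
```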


The results of nine studies comparing choral vs. individual responding generally indicate a positive relationship between an increased rate of OTR using choral responding and students’ active responding, on-task behavior, and correct responses, as well as fewer disruptive behaviors.

Potential roadblocks and solutions. Some teachers may feel that choral responding is too noisy and may increase levels of excitement for students who have a great deal of energy. However, using precorrection strategies (i.e., giving reminders to remain relatively quiet before each session), having students practice and model responding in reasonably quiet “inside voices”, and using positive reinforcement (praise or token economies) are ways to address these concerns. For students who are unmotivated (who merely “mouth” answers or passively watch the class), using mixed responding is one method to increase participation.

Preschool settings: choral responding might create more excitement for some children because they have to call out answers. However, implementing classroom rules before using choral responding may increase the effectiveness of choral responding and decrease inappropriate behavior (Godfrey et al., 2003; Sainato et al., 1987).

For nonverbal students identified with autism, the use of response cards, unison hand raising, or individual responding may be the preferred method to increase participation and engagement during small group instruction (Kamps et al., 1994; McKenzie & Henry, 1979).

During large group instruction with students at risk for EBD in general education classroom settings and in special education classrooms for students with EBD, mixed responding and choral responding appear to be more effective instructional strategies for increasing student correct responses, participation, and on-task behavior, as well as decreasing disruptive behavior, in comparison to teachers’ baseline rates of individual responding (Haydon et al., 2009a; Haydon et al., 2009b; Sutherland et al., 2003).

During small group instruction, only slight differences may exist between choral and individual responding in terms of number of correct responses and on-task behavior (Sindelar et al., 1986; Wolery et al., 1992).

However, in addition to group size, teachers could also consider the following: if all students in the group need to learn the same skills, then choral responding may be appropriate, whereas if students are at different learning levels and learning distinctive skills, then individual responding may be more appropriate (Wolery et al., 1992).

Final Considerations Researchers could examine the acceptability of choral and mixed responding across various levels of teacher professional learning (i.e., preservice, induction, continuing professional development; Feinman-Nemser, 2001).

“Evidence-based” practices
• Video and practice recording behavior
• http://www.myeducationlab.com/login.html

Questions/Comments can be directed to: Todd Haydon, LCSW, Ph.D. Assistant Professor CECH, University of Cincinnati

[email protected]