The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction

Review of Educational Research Spring 1993, Vol. 63, No. 1, pp. 69-93

Robert L. Bangert-Drowns
University at Albany, State University of New York

Word processing in writing instruction may provide lasting educational benefits to users because it encourages a fluid conceptualization of text and frees the writer from mechanical concerns. This meta-analysis reviews 32 studies that compared two groups of students receiving identical writing instruction but allowed only one group to use word processing for writing assignments. Word processing groups, especially weaker writers, improved the quality of their writing. Word processing students wrote longer documents but did not have more positive attitudes toward writing. More effective uses of word processing as an instructional tool might include adapting instruction to software strengths and adding metacognitive prompts to the writing program.

Does word processing, especially in the context of writing instruction, facilitate the enhancement of writing skill? Lasting effects on human cognition have been proposed for other human tools. Some researchers (e.g., Sherzer, 1987; Whorf, 1956), for example, have suggested that language use determines the ways in which humans interpret their experience. Others (e.g., McLuhan, 1962; Olson, 1976; Ong, 1982) argued that the everyday availability of books changed the way literate cultures conceive speech (as detached from speakers and their contexts), understand knowledge (as a social rather than personal development), manage information (moving from memorization and oral transmission), and think (less "magically" and more rationally). Can cognitive effects be substantiated for individuals who use word processors?

Research on word processing has been difficult to interpret. The difficulty partly lies in a lack of overarching theoretical constructs to lend coherence to research efforts and interpretations of findings.
Another difficulty is the diversity of research designs with which, and settings in which, word processing is examined. This review responds to these dilemmas by clarifying the conceptualization of the word processor as an instructional tool and by integrating relevant research findings across diverse study characteristics.

The Nature of Instructional Tools

One could arrange educational software on a continuum according to whether it supplants or facilitates students' capacities to regulate their own learning. Tutorials and tools mark the continuum's extremes (Bangert-Drowns & Kozma, 1989). Tutorials provide explicit guidance to learners, supplanting metacognitive and intellectual skills to teach specific declarative knowledge. Cognitive technology or tools (Pea, 1985), on the other hand, help amplify mental effort by assuming portions of tasks that are usually time consuming, repetitive, tedious, or relatively trivial.

Computers are uniquely versatile instruments for tutorial presentation. Not only can a computer manage visual and aural symbol systems (e.g., text, music, speech, graphics, pictures, and animation), but it can manipulate them in many different ways (e.g., reception, storage, search, display, and many kinds of transformations). Added to these capacities is the speed at which a computer can operate, a speed that can make human-machine interaction dialogical. These features alone would suggest that the computer will make a greater impact on education than the phonograph, radio, television, or other media that preceded it (Kozma, 1991).

But perhaps the computer's greatest impact on education will have nothing to do with the fact that it can teach. The computer as a tool, as a spread sheet, data base, network link, graphics generator, calculator, and word processor, is transforming the way the "information age" society lives and works. Educational institutions are not exempt from these changes, and they increasingly adopt higher technologies to enhance productivity. Education also must prepare future generations to harness the power of these electronic tools for the management and interpretation of the vast information they make available. But do these tools offer educational benefits to their users?

Cognitive tools do not "educate" in the sense that tutorials do. Tutorials have implicit or explicit educational objectives and provide specific guidance and practice to achieve those objectives. Tools, on the other hand, must do one or more of the following to be instructionally significant (Salomon, 1988):

1. Enable students to mindfully attend to more complex mental tasks by performing simpler, time-consuming mental tasks;
2. Provide implicit or explicit models of information representation, processes, or strategies;
3. Strategically present choices that require the mobilization or acquisition of specific knowledge for selection; or
4. Strategically present questions, comments, or signals to stimulate the use of metacognitive skill.

Just as a child learns new skills through social interaction before they are internalized, so it is possible that users can acquire new skills in interactive partnership with their tools. In short, tools may transform human cognition and become instructional because they can allow learners to practice, and thus enhance, skills that otherwise would not have been practiced as frequently or because they permit the internalization of information representations, processes, or strategies exhibited or stimulated by the tool.

In spite of such promising possibilities, research on the instructional effects of cognitive tools has produced more discouraging results. Tools can be distinguished by the way they interact with the learner's cognition. In the simplest case, the tool is designed to supplant specific "lower order" tasks, to make these processes invisible to the user, so that the user is free to attend to "higher order" strategic tasks (e.g., goal setting, solution strategizing, and organizing). This kind of tool does not prompt reflection on either the processes it supplants or the strategic processes required for effective use.

An example of this kind of tool is a simple arithmetic calculator. Hembree and Dessart (1986) performed a meta-analysis of the extensive research on hand-held calculators in precollege mathematics education. They found that the use of calculators by mathematics students in the context of instruction generally produced very small, yet positive effects on their mathematics performance without calculators. When tested with calculators, students showed, not surprisingly, improved computational accuracy, but they also showed some improvement in selection of problem-solving strategies in word problems. Hembree and Dessart also found that calculators had greater educational effects when used in the context of curricula specially designed to make use of the calculator.

A second class of tools permits alterations of their function. To create effective tools, users must clarify the nature of the tasks they seek to perform and solve problems related to both the nature of the task and the tool. Proponents of alterable tools suggest that learners will gain a greater understanding of the processes the tool will ultimately supplant and perhaps also acquire problem-solving skill. Computer programming software exemplifies alterable tools. Since Papert's (1980) Mindstorms, many researchers have articulated and explored ways in which programming could affect human cognition. Most research, however, has shown few cognitive benefits from programming (Johanson, 1988). Perhaps the most promising research in this area suggests that programming accompanied by specific instruction on cognitive operations can be beneficial (Palumbo, 1990). In Swan (1991) and Lehrer, Sancilio, and Randle (1989), for example, Logo programming was found to facilitate problem solving and knowledge of geometry, respectively, when the programming was accompanied by instruction that capitalized on special strengths of Logo related to the instructional goals.
A third type of instructional tool explicitly prompts metacognition. Unlike tools that try to make certain tasks invisible, these tools try to open up cognitive processes to reflection and thus increase the user's mental effort by stimulating the kind of inner dialogues that typify self-regulated learners. For example, Salomon, Globerson, and Guterman (1989) report considerable success with the Reading Partner, software that presents embedded questions in text to prompt students to monitor and manage their reading comprehension. Students reported expending more mental effort when working with the Reading Partner. Students also showed improvements in reading comprehension and in understanding specific skills that improve reading comprehension. Furthermore, their increased understanding of text from reading tasks seemed to transfer to their ability to write essays. Similarly, Daiute (1986; Daiute & Kruidenier, 1985) showed that by adding a prompting program to a word processor, students could be guided to significant improvements in their revising strategies.

The Word Processor as an Instructional Tool

Can the word processor act as an instructional tool? That is, do students learn new skills from their partnership with this specialized writing device? The typical word processor is not a particularly intelligent tool. It often does not analyze or alter data entry, and it does not explicitly stimulate metacognitive processes. Its greatest and most obvious function is what Perkins (1985) calls "first-order fingertip effects"; that is, it puts considerable power at the user's fingertips, allowing the manipulation of text for the production of high-quality printed documents. But does it produce second-order fingertip effects, increased capacity for higher order thinking?

The typical word processor allows users to make changes to text that would have been more cumbersome on paper. These changes range from simple editing (e.g., corrections in punctuation, spelling, grammar), to the addition and deletion of single words or simple phrases, to more substantial revision (e.g., rearranging sentences in paragraphs, reordering large blocks of text, rewriting text for greater coherence). The effects of these functions may be twofold. First, they offer a particular representation of the nature of text, text as a fluid and easily transformed communication and therefore closely connected to thinking and speaking. Second, they allow the user to attend to higher order decisions (e.g., revision for clarity of communication) by removing the mechanical difficulties involved in changing text. Users of word processors might therefore compose longer documents and engage in more revision of their documents than comparable users of simpler technologies. Given these extra efforts in the context of feedback from teachers and peers in writing instruction, we might expect an improvement in the quality of writing (although this improvement may be limited by the degree to which students had good writing skills to begin with, the degree to which these skills are addressed in feedback from teachers and peers, and the degree to which these skills were inhibited by the mechanical aspects of writing). Having thus practiced such skills while on the word processor, students might continue such practices even when writing by hand.

One might further predict that the ease of revision and close connection to personal communication, combined with the improved appearance of writing products and the excitement of using a high technology, would collectively contribute to the improvement of students' attitudes toward writing.

Although the word processor has been met with some excitement among instructors of writing, it also has its critics. Critics argue that use of the word processor may have no effect, or even a detrimental effect, on writing. The word processor does nothing to explicitly change writing performance; good and bad writers may write well and poorly regardless of the tool they use. Persons who have learned to write by hand will have to spend considerable energy to adjust to keyboarding, scrolling or paging texts on a screen, and learning new commands for simple tasks. Thus, using the word processor could actually distract learners from higher order thinking, especially when first learning the software. Furthermore, complementary functions such as spelling checks and text analyses could actually weaken a user's ability to spell and edit text because the computer has taken some responsibility for these functions.

Research on Word Processing

Research on word processing can be grouped into two large classes. One trend has been to study word processing in the context of its operation. Researchers working in this vein argue that little can be usefully known about the effects of word processors per se and that word processing is affected by the culture in which it is used and, in its turn, affects that culture. Such research assumes that word processing cannot be understood apart from its context (Cochran-Smith, Paris, & Kahn, 1991).

Although it is irrefutable that word processing is influenced by many factors in its context, another line of research has involved asking what contribution the word processing itself might make to any measurable changes in writing processes or products. This literature has been difficult to interpret, however, perhaps fueling the interest in studying word processing in context.

One reason this literature has been difficult to interpret is the diversity of research designs. Individual case studies, classroom case studies, surveys of student attitudes, alternating designs (where writers alternate between using the pen and the computer to compose), and comparative designs (where one group of writers using word processors is compared with another group writing by hand) have been used with varying sophistication. Furthermore, the effects of word processing are measured in very different ways in different studies. For example, revision frequency, type of revision, length of document, number of syntactic or spelling errors, and holistic ratings of writing quality have been used to measure word processing effects.

Another reason why the literature on word processing is difficult to interpret is the differences in study findings. Cirello (1986), for example, concluded that word processing has a significant positive effect on writing; Rosenbaum (1987), on the other hand, concluded exactly the opposite. If the effect of word processing is weak and easily influenced or overwhelmed by contextual factors, then it is too much to ask any individual study to definitively determine the kind of impact this instructional tool has on students. One might, however, detect patterns of effects over collections of studies.

Reviewers also struggle to make sense of the research on word processing. In their review, Cochran-Smith et al. (1991) summarized the literature by saying, "Repeatedly the effects research demonstrates that the answer to its bottom-line question, 'Do students write better with word processing?' is 'It depends'" (p. 61). The effects of word processing depend on any number of factors: the writer's preferred writing and revising strategies, keyboarding skill, prior computer experiences, supplementary instructional interventions, definitions of "better," the teacher's goals and strategies, the social organization of the learning context, and the school and community culture.

In spite of the many contextual factors in which it is nested, Cochran-Smith et al. ventured several general propositions about word processing that the literature seems to justify:

1. In instructional contexts, students make more revisions when writing with word processing than they do when writing with paper and pencil.
2. Word processing students tend to write longer texts than students using paper and pencil.
3. Students produce neater and more error-free texts when writing with word processing.
4. Word processing alone does not improve the quality of students' writing.
5. Students generally have favorable attitudes toward word processing.

Hawisher (1989) also cautions against attributing effects to the computer without accounting for the culture in which it exists. However, she too offers some observations particularly drawn from 26 comparative studies (i.e., studies that use an alternating treatment design or compare treated and untreated groups). Like Cochran-Smith et al., Hawisher notes that word processed products are generally longer and freer of mechanical errors than products written with paper and pencil. Frequency of revision and quality of writing were too varied from study to study to draw a general conclusion. Hawisher found, however, that word processing students report more positive attitudes toward writing. She also noted some evidence that suggests that basic writers benefit most from word processing.

Russell (1991) reported a meta-analysis of 21 studies of instruction involving word processing and keyboarding. Effect sizes in this review varied to extremes (lows of -2 standard deviations to highs of 4.5 standard deviations), so median values would be the best indicators of central tendency. Median effect sizes for different types of writing outcomes hovered near zero: 0.09 for writing quality, 0.02 for revision, and -0.03 for attitude toward writing and computers. Russell noted that in cases where word processing seemed beneficial, the benefits may not be due at all to the word processing itself but to the kinds of social interactions that computer laboratory environments permit.

Method

This review adapted meta-analytic procedures suggested by Glass, McGaw, and Smith (1981) and most closely resembles study effect meta-analysis (Bangert-Drowns, 1986). Two factors, use of the individual study (as opposed to the study finding or individual subject) as the unit of analysis and careful selection of studies for inclusion on methodological and conceptual criteria, chiefly characterize study effect meta-analysis. The review proceeded through four phases: the collection of studies, coding of the studies' characteristics, calculation of effect sizes as common measures of study outcomes, and search for relations between study characteristics and study outcomes.
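The third phase, the calculation of effect sizes, relies on standard transformations from Glass, McGaw, and Smith (1981): a standardized mean difference computed directly from group means and standard deviations, or recovered from a reported t statistic when means are unavailable. The sketch below illustrates these formulas in Python; the function names and example numbers are illustrative, not drawn from the review itself.

```python
import math

def effect_size_from_means(mean_exp, mean_ctl, sd_ctl,
                           sd_exp=None, n_exp=None, n_ctl=None):
    """Standardized mean difference: (M_exp - M_ctl) / SD.

    Divides by the control group's raw-score SD (Glass's delta) unless
    enough information is supplied to pool the two raw-score SDs.
    """
    if sd_exp is not None and n_exp is not None and n_ctl is not None:
        pooled = math.sqrt(((n_exp - 1) * sd_exp ** 2 +
                            (n_ctl - 1) * sd_ctl ** 2) /
                           (n_exp + n_ctl - 2))
        return (mean_exp - mean_ctl) / pooled
    return (mean_exp - mean_ctl) / sd_ctl

def effect_size_from_t(t, n_exp, n_ctl):
    """Recover a standardized mean difference from an
    independent-samples t statistic and the two group sizes."""
    return t * math.sqrt(1 / n_exp + 1 / n_ctl)

# A treatment group averaging 105 against a control averaging 100,
# with a control-group SD of 10, yields an effect size of 0.5:
d = effect_size_from_means(105.0, 100.0, 10.0)
```

Either route expresses a study's outcome in standard deviation units, which is what allows outcomes measured on different scales to be compared and averaged across studies.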
Collection of Studies

Searches for relevant research included two on-line data bases: (a) ERIC, a data base on educational reports, evaluation, and research maintained by the Educational Resources Information Center consisting of Research in Education and Current Index to Journals in Education, and (b) Comprehensive Dissertation Abstracts. Full-text searches checked for variations of the term word processing ("word processing," "word processor," etc.). The bibliographies of collected studies and reviews, particularly Hawisher (1989), provided leads to other relevant research. In all, approximately 200 articles, books, dissertations, and abstracts were examined.

Thirty-two reports met four criteria for inclusion into this meta-analysis. First, each study compared two groups of students who received virtually identical instruction in writing, differing only in that one group was allowed to use the word processor for some phase of the writing process. In all cases, the comparison group wrote by hand. This criterion eliminated individual and classroom case studies (e.g., Burnett, 1984), studies in noninstructional settings (e.g., Haas, 1986), pre-post studies (e.g., Juettner, 1987), and studies in which subjects alternated between writing tools (e.g., Duling, 1985; Hawisher, 1986). Second, the studies were retrievable from university and college libraries through interlibrary loan, from the Educational Resources Information Center (ERIC), or from University Microfilms International. Third, these reports measured treatment outcomes quantitatively. Not all the included studies had sufficient information to permit the calculation of effect sizes, but all provided at least enough information for coding the statistical significance and direction of the results. Fourth, the reports showed no severe methodological flaws (e.g., excessive and disproportional attrition rates, significant and uncorrected pretreatment differences between groups, and inconsistencies in the report of data).

Study Features

Twenty-one characteristics of each study were coded. These characteristics fell into four categories: instructional treatment, research methodology, study setting, and publication features.

Eight variables described aspects of the instruction:

Presence of direct instruction. In all of the studies, instruction emphasized the integrated writing process rather than the mechanics of specific writing skills. However, some studies supplemented the process emphasis with direct instruction or focused exercises on writing skills.

Frequency of computer use. Computer use during instruction was coded as either once per week, two or three times per week, or more than three times per week.

Time of computer use. Students wrote with word processing during class, during the school day, after school, or some combination of these times.

Computer use in prewriting. Did students engage in prewriting exercises on the computer?

Computer use in composing. Did students compose their writing directly on the computer, or did they draft their writing by hand and enter the text at a later time?

Use of text analysis. Some students had access to spell checking and grammar checking software.

Computer use in revision. Did students use the computer in revision of their writing?

Source of writing feedback. Students could get responses to their writing from their teachers, their peers, or both.

Methodological features included the following:

Subject assignment. Subjects were randomly or nonrandomly assigned to treatments.

Study duration. This variable indicated the duration of the study in weeks.

Control for teacher effect. In some cases, the same teachers taught both the students who used and did not use word processing. In other studies, different teachers taught the different treatments.

Control for researcher bias. Did the researcher also teach students in the study?

Control for posttreatment writing conditions. In some studies, treatment and comparison groups both wrote their final compositions by hand. In other studies, the treatment group used word processing to write their final documents.

Methodological design. This variable distinguished among studies that measured outcomes with unadjusted posttest scores, gain scores, or covariance-adjusted scores.

The five setting features were as follows:

Grade. This variable distinguished studies in college settings from studies with younger students.

Writing ability. Authors assessed whether the students being studied demonstrated low, average, or high writing ability.

Type of computer. Word processing students worked at either microcomputers or mainframe terminals.

Location of computer. Word processors were located either in classrooms or in separate computer laboratories.

Assignment of students to computer. Students could work individually or in small groups at the computer.

Two variables recorded the publication features of the studies:

Source of report. Reports came from journals or books, doctoral dissertations, or the ERIC system.

Year of report. The second publication variable indicated the year in which the document became publicly available.

Outcome Measures

These studies of word processing measured posttreatment performance on several criteria: quality of writing, number of words, attitude toward writing, adherence to writing conventions, and frequency of revision. Treatment effects for each criterion were analyzed separately. Statistical significance and direction for each finding was noted. Effect sizes were calculated for studies where more detailed information was available. Effect size is the difference between adjusted or unadjusted experimental and control group means divided by the raw score standard deviation of the control group or the pooled raw score standard deviation. Where means and standard deviations were available, effect sizes could be calculated directly. In some cases, effect size was calculated from significance tests (e.g., t or F values). Formulas for these transformations were taken from Glass et al. (1981).

Results

Thirty-two reports met criteria for inclusion into this meta-analysis. One report (Kantz, 1989) analyzed students with low and middle writing ability separately, and these separate analyses were counted as two separate studies.
Therefore, 32 reports yielded 33 studies of word processing. In all cases, two groups of students received identical instruction on the writing process, except that one group could produce compositions on a word processor while the second group composed by hand.

As might be expected given the relatively recent availability of word processing in schools, all of the studies included were conducted in the last decade, two thirds during the period 1985-1987. Only two studies located the different treatments in different schools; in all other cases, both groups were located in the same school. All studies except Frase, Kiefer, Smith, and Fox (1985) conducted the treatments simultaneously. Students were drawn from all grade levels. The largest group (17 studies) was conducted in college settings, but eight were conducted in elementary and even preschool settings.

Not all studies investigated the same outcomes for instruction on writing process. The largest group of studies (28 studies) compared quality of writing between word processing and nonword processing groups. Nine studies measured students' length of writing by simply taking a word count. Nine studies measured students' attitudes toward writing. Twelve studies measured students' adherence to writing conventions. In seven studies, the frequency of revision in different conditions was compared.

Quality of Writing

Twenty-eight studies obtained holistic measures of the quality of student writing. Almost two thirds of these studies concluded that access to word processing during writing instruction improved the quality of students' writing. Ten studies indicated statistically significant results favoring word processing; only one study found significantly negative results. Given such percentages, one would attribute strong and reliable positive effects to the use of word processing in writing instruction.

Effect sizes provide a more exact measure of the magnitude of effects than the mere direction of results. Unfortunately, eight studies did not provide sufficient information for the calculation of effect sizes. The percentage of positive findings (60%) in the reduced group is only slightly less than in the full sample. The average effect size of writing quality for the 20 studies is 0.27 standard deviations (SE = 0.11). This is a small effect but is significantly different from zero, t(19) = 2.56, p = .02. Figure 1 presents the frequency distribution of these 20 effect sizes. One can observe that most of the effect sizes cluster between -0.25 and 0.50 standard deviations, indicating a small positive effect. The average effect size may be distorted by the unusually large effect sizes from Shinn (1986; 1.56) and Cirello

FIGURE 1. Histogram showing the distribution of 20 effect sizes measuring writing quality in research on word processing (horizontal axis: writing quality effect size)

(1986; 1.14). Taking the median as a better estimate of central tendency, the typical effect size is 0.21 standard deviations.

Study features and quality of writing. Since 20 studies yielded effect sizes for writing quality, there seemed to be a sufficient number of studies to look for relations between study features and effect size. Some variables failed to differentiate among studies and so were dropped from further investigation. The range of years of publication (1984-1990) was too restricted for further analysis. Only 1 study located the word processors in a classroom; all others located them in a computer laboratory. All word processing treatments allowed students to revise their compositions on computer, and none reported directing students to do prewriting exercises on computer. Only 11 studies provided information about the source of writing feedback, and 10 of these reported the teacher as the primary source (although 5 of these mixed teacher and peer feedback). Only 2 studies used a mainframe terminal instead of a microcomputer to provide the word processing, and only 2 assigned students in groups to the computers instead of individually. All word processing treatments allowed students to compose during class time; 2 studies reported that students also could compose outside of class time.

Table 1 lists 28 studies that obtained holistic measures of writing quality, some of their features, and their writing quality outcomes. Table 2 presents means and standard errors for the 20 studies yielding writing quality effect sizes organized by various coding categories.

Only one study feature, writing ability, showed a statistically significant relation with effect size. Nine studies provided remedial writing instruction to students who had demonstrated difficulty with writing. These nine studies yielded an average effect size of 0.49, significantly larger than the average of the other studies, 0.09, F(1, 18) = 3.99, p = .06.
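The summary statistics reported for writing quality (the mean effect size, its standard error, the one-sample t test against zero, and the median as an outlier-resistant alternative) can be recomputed directly from the 20 effect sizes listed in Table 1. A short sketch:

```python
import math
import statistics

# The 20 writing quality effect sizes from Table 1:
effect_sizes = [
    0.27, 1.14, 0.08, 0.28, 0.47, -0.11, 0.13, -0.05, -0.09, 0.45,
    -0.03, 0.15, 0.46, -0.05, 0.66, -0.66, 1.56, 0.32, 0.48, -0.01,
]

n = len(effect_sizes)
mean = statistics.mean(effect_sizes)                # approx. 0.27
se = statistics.stdev(effect_sizes) / math.sqrt(n)  # approx. 0.11
t = mean / se                                       # approx. 2.56, df = n - 1
median = statistics.median(effect_sizes)            # approx. 0.21
```

The gap between the mean (0.27) and the median (0.21) reflects the pull of the two large effect sizes from Shinn (1986) and Cirello (1986), which is why the median is taken as the better estimate of the typical effect.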
Other variables are worthy of notice. In nine studies, posttreatment writing conditions differed for experimental and control groups: the experimental groups composed their final compositions with a word processor while the control groups used handwriting utensils. It certainly seemed possible that any gains brought by use of the word processor would immediately disappear when students returned to writing by hand. Interestingly, this was not the case; in fact, when both groups wrote their compositions by hand, the average effect size was larger (M = 0.34, SE = 0.14) than when the experimental group continued to use the word processor (M = 0.19, SE = 0.07), although not significantly so.

Three variables seem to have clustered small groups of negative effect sizes: study duration, use of text analysis, and subject assignment. The small negative average effect (M = -0.02, SE = 0.24) related to briefer studies (1 to 10 weeks) is chiefly produced, however, by the large negative effect size from Rosenbaum (1987). Similarly, studies using computers for text analysis as well as word processing yielded an average effect size of -0.08 standard deviations. However, this is based on only three effect sizes, two of which come from Kantz (1989), and these studies do not require but simply provide text analysis software. A more suggestive finding was the average effect size of four studies whose word process-

TABLE 1
Features and outcomes for 28 studies of word processing effects on writing quality

Study Cheever (1987) Cirello (1986) Coulter (1986) Crealock, Sitko, Hutchinson, Sitko, & Marlett (1985) Cross & Curey (1984) Dalton & Hannafin (1987) Deming (1987) Etchison (1985) Frase, Kiefer, Smith, & Fox (1985) Greenland & Bartholome (1987) Hawisher & Fortune (1989) Kantz (average) (1989) Kantz (basic) (1989) Kiefer & Smith (1983) King, Birnbaum, & Wageman (1984) Lehrer, Levin, DeHart, & Comeaux (1987) Levin, Boruta, & Vascocellos (1983) Lytle (1987) Miller (1984) Moore (1987) Posey (1986) Robinson-Staveley & Cooper (1990) Rosenbaum (1987) Shinn (1986) Sommers (1985) Teichman & Poris (1985) Vockell & Schwartz (1988) Wetzel (1985) Note. + + indicates a significantly positive result; result.

Grade Elementary High school College Junior high

Duration (in weeks) >20 11-20 11-20 11-20

Writing ability Average Basic Average Basic

Posttreatment writing condition Same Different Different Same

Total sample (A/) 50 30 62 27

Effect size 0.27 1.14 0.08 —

College Average Different 58 + Junior high >20 Basic Same 64 0.28 College 1-10 Basic Different 24 0.47 College 11-20 Average Different ++ College 11-20 Average Same 177 + College Average Same 110 -0.11 College 11-20 Basic Different 40 0.13 College 11-20 Average Different 85 -0.05 College 11-20 Basic Different 99 -0.09 College 11-20 Average Same 84 College 11-20 Basic Same 10 + Elementary 11-20 Average Same 15 0.45 Elementary 11-20 Average Same ++ Junior high 1-10 Average Same 84 -0.03 Elementary 1-10 High Same 28 0.15 Elementary 11-20 Average Same 200 0.46 College 11-20 Basic Same 13 -0.05 College 11-20 Basic Different 79 0.66 High school 1-10 Average Different 59 -0.66 Elementary 11-20 Basic Same 18 1.56 College 11-20 Basic Same 79 0.32 College 11-20 Average Same 160 ++ College 11-20 Average Same 36 0.48 Elementary 11-20 Average Different 183 -.01 + indicates a positive, nonsignificant result; - indicates a negative, nonsignificant
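The per-category means and standard errors reported in Table 2 are simple aggregates of study-level effect sizes of the kind tabulated in Table 1. As a minimal illustration of that aggregation step only, the sketch below groups hypothetical (study, ability, effect size) records by writing ability and computes each category's n, M, and SE; the values are invented for illustration and are not the review's actual data.

```python
import math
import statistics
from collections import defaultdict

# Hypothetical (study, writing-ability category, effect size) records;
# illustrative values only, not the actual Table 1 data set.
studies = [
    ("A", "Basic", 1.14), ("B", "Basic", 0.47), ("C", "Basic", 0.13),
    ("D", "Average", 0.27), ("E", "Average", 0.08), ("F", "Average", -0.05),
]

by_ability = defaultdict(list)
for _, ability, es in studies:
    by_ability[ability].append(es)

# Mean and standard error of the effect sizes within each category,
# as in the M and SE columns of Table 2.
for ability, sizes in by_ability.items():
    m = statistics.mean(sizes)
    se = statistics.stdev(sizes) / math.sqrt(len(sizes))
    print(f"{ability}: n={len(sizes)}, M={m:.2f}, SE={se:.2f}")
```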

TABLE 2
Means and standard errors of writing quality effect sizes for different categories of studies

Coding category                    n      M      SE
Presence of direct instruction
  Absent                           7    0.21   0.21
  Present                          7    0.11   0.10
Frequency of computer use
  Once per week                    3    0.04   0.11
  2 or 3 times per week            6    0.25   0.18
  More than 3 times per week       5    0.36   0.30
Computer use in composing
  Enter hand-drafted copy          3    0.30   0.09
  Compose on computer             14    0.18   0.12
Use of text analysis
  Yes                              3   -0.08   0.02
  No                              14    0.26   0.11
Subject assignment
  Volunteer                        4   -0.03   0.04
  Nonrandom                        8    0.39   0.18
  Random                           7    0.31   0.22
Study duration
  1 to 10 weeks                    4   -0.02   0.24
  11 to 20 weeks                  13    0.39   0.14
  More than 20 weeks               2    0.28   0.01
Teacher effect
  Different                        7    0.28   0.09
  Same                             8    0.16   0.19
Researcher bias
  Not teacher                     12    0.31   0.10
  Teacher                          6    0.01   0.17
Methodological design
  Posttest only                    7    0.31   0.26
  Pretest adjusted                10    0.29   0.11
  Covariance adjusted              3    0.12   0.18
Posttreatment writing condition
  Different                        9    0.19   0.07
  Same                            11    0.34   0.14
Grade
  Precollege                      10    0.36   0.20
  College                         10    0.18   0.09
Writing ability*
  Basic                            9    0.49   0.18
  Average                         11    0.09   0.10
Source
  Dissertation                    14    0.25   0.15
  Published                        6    0.32   0.11

*p = .06.

ing groups were constituted by self-selection. The four effect sizes neatly converge around zero (M = -0.03, SE = 0.04), but again the sample is too small to place much confidence in the relation.

Relations between preinstructional and postinstructional writing. Some have suggested that word processing is "transparent"; that is, students who wrote well or poorly composing by hand would do just as well or poorly composing by computer. The vote count and average effect size indicate that word processing is more than transparent: it has a small positive effect on writing in the context of writing instruction. It is possible, however, to examine writing before and after use of the word processor more directly.

Ten studies reported means and standard deviations for both pre- and postinstructional writing samples. For these studies, comparisons between computer-composing and noncomputer groups could be made at pretreatment and at posttreatment. If word processing had no effect on writing quality, and writing quality was completely predicted by pretreatment writing quality, then there should be no difference between pretreatment and posttreatment effect sizes. Figure 2 illustrates the relation between pretreatment and posttreatment writing quality. Obviously, there is a strong relation between the two (r = .68, p = .03), indicating that pretreatment writing quality predicts postinstructional writing quality fairly well. But it is equally obvious that postinstructional effect sizes consistently surpass the values predicted by preinstructional writing quality alone. Word processing elevates the writing quality scores of treatment groups,

[Figure 2, a scatterplot comparing preinstructional and postinstructional writing quality effect sizes, appeared here; its diagonal indicates the predicted postinstructional writing quality if word processing has no effect, and its axis is labeled "Writing Quality Effect Size (Post)."]

FIGURE 2. Relation between preinstructional and postinstructional writing quality effect sizes in 10 studies of word processing

thus making the postinstructional effect sizes larger than their preinstructional counterparts.

More can be said about the relation between preinstructional writing ability and the quality of postinstructional writing samples. First, it seemed possible that word processing might affect not only the magnitude of writing quality scores but also their variability. To check the possibility that word processing might increase or decrease variation in the performance of different writers, the simple ratio of the experimental group's standard deviation to the control group's standard deviation was calculated. Ratios greater than one would indicate more variation in the writing quality scores of the word processing group; ratios less than one would indicate less variation in the word processing group; and ratios of one would indicate equal variation in both groups.

Fourteen studies supplied sufficient information to calculate the ratio of experimental and control group standard deviations. The average ratio was .95 (SE = .12), suggesting that word processing had no overall effect on variation in writing quality. However, the effect of word processing on variation in writing quality appears to depend on the writing ability of the students. Figure 3 graphically presents the relation between writing ability and the ratio of experimental and control group standard deviations. Except in one case, word processing reduced variation in writing quality for low ability writers but not for middle and high ability writers. The one exception was Shinn (1986), which, as we have already seen, had an

[Figure 3, a scatterplot, appeared here; its horizontal axis separates LOW from MID/HIGH writing ability, and its vertical axis shows the ratio of experimental to control group standard deviations, ranging from roughly 0 to 2.5.]

FIGURE 3. Scatterplot showing relation between writing ability and the ratio of experimental and control group standard deviations in 14 studies of word processing effects on writing quality

unusually high effect size. If we ignore Shinn (1986) as an unusual case, the ratio of standard deviations is significantly smaller for low ability writers than for middle and high ability writers (low ability, N = 6, M = 0.68, SE = 0.09; middle/high, N = 7, M = 1.01, SE = 0.12), F(1, 10) = 4.79, p = .05.

Number of Words

Comparing the average length of documents produced by treatment and comparison groups provides another way to examine the effects of word processing. Document length can be rendered simply as the number of words in the document.

Table 3 lists word count outcomes associated with nine studies of word processing. All but one were positive, suggesting that students using word processing during writing instruction reliably begin to produce longer documents than students who do not have access to word processing. Five studies reported sufficient information for the calculation of effect sizes. The average effect size for document length was 0.52 standard deviations, significantly different from zero (t = 3.01, p = .04). Cirello (1986) yielded a large effect size (1.16 standard deviations) that inflated the mean; the median of the five effect sizes was 0.36 standard deviations.

Learning to compose with word processors encourages longer documents, but these documents may be poor in quality. To test this possibility, eight studies measuring both writing quality and document length were located. Six of the eight were jointly positive, and one was jointly negative, suggesting a strong, positive relation between document length and writing quality. Four studies produced effect sizes for both dependent measures. For these four studies, effect sizes for document length and writing quality were almost perfectly correlated (r = .99, F = 128.45, p = .01).

Attitude Toward Writing

Nine studies measured students' attitudes toward writing. Table 3 lists the outcomes of those measurements.
Four studies found that word processing students reported more positive attitudes toward writing, and five found that students who wrote by hand reported more favorable attitudes toward writing. Seven studies permitted the calculation of effect sizes for attitude toward writing. The average effect size was 0.12 standard deviations (SE = 0.21), which was not significantly different from no effect. In fact, the median effect size was -0.05 standard deviations.

Did attitude toward writing show a positive relation with writing quality? Seven studies measured both outcomes; two were jointly positive, two were jointly negative, and three reported more positive ratings of writing quality but poorer attitudes among word processing students. Five studies reported effect sizes for both outcomes, and these suggest a strong linear relationship between attitude toward writing and writing quality (r = .90, F = 13.218, p = .04). Apparently, word processing students do not need more favorable attitudes toward writing to produce better quality compositions than comparison students; however, as attitude becomes more positive, writing quality improves.
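The averages, standard errors, medians, and between-outcome correlations reported in these outcome sections are straightforward to compute once study-level effect sizes are in hand. The sketch below is an illustration of those computations only; the effect sizes in it are hypothetical, not the review's actual data.

```python
import math
import statistics

# Hypothetical study-level effect sizes (in standard-deviation units)
# for one outcome; illustrative values, not the studies' actual data.
attitude_es = [0.61, -0.49, -0.05, 0.26, 0.44, -0.23, 0.30]

mean_es = statistics.mean(attitude_es)
se_es = statistics.stdev(attitude_es) / math.sqrt(len(attitude_es))
median_es = statistics.median(attitude_es)


def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between paired effect sizes from the same studies."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den


# A second (again hypothetical) outcome measured in the same studies,
# e.g., writing quality paired with attitude toward writing.
quality_es = [0.66, -0.05, 0.13, 0.45, -0.66, 0.15, 0.27]
r = pearson_r(attitude_es, quality_es)

print(round(mean_es, 2), round(se_es, 2), round(median_es, 2), round(r, 2))
```

Note how the mean and median can diverge when one study's effect size is extreme, which is why the review reports both.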

TABLE 3
Effect sizes for number of words, attitude toward writing, conventions, and frequency of revision from 24 studies of word processing

[The original four outcome columns (no. of words, attitude toward writing, writing conventions, frequency of revision) could not be fully realigned; each study's reported effect sizes and vote-count symbols are listed in the order given.]

Cheever (1987): -0.10, 0.36, -0.19
Cirello (1986): 1.16
Coulter (1986): -0.10
Crealock, Sitko, Hutchinson, Sitko, & Marlett (1985): —
Deming (1987): 0.32, -0.42
Etchison (1985): ++
Fitch (1985): 0.13, -0.27, ++
Frase, Kiefer, Smith, & Fox (1985): +
Greenland & Bartholome (1987): 0
Hult (1985): +
Jackson, McHaney, & Handley (1984): -0.04
Kiefer & Smith (1983): 0.55
King, Birnbaum, & Wageman (1984): ++
Kurth (1987): 0.26, 1.26
Levin, Boruta, & Vascocellos (1983): ++
Lytle (1987): 1.21
Miller (1984): 0.22, -0.23
Moore (1987): -0.05
Posey (1986): -0.49
Robinson-Staveley & Cooper (1990): 0.61, 0.60
Rosenbaum (1987): 0.44
Schank (1986): 0.05
Teichman & Poris (1985): +, +
Vockell & Schwartz (1988): +

Note. ++ indicates a significantly positive result; + indicates a positive, nonsignificant result; - indicates a negative, nonsignificant result; -- indicates a significantly negative result.

Writing Conventions

Twelve studies assessed students' ability to write according to the conventions of standard written English (e.g., correct punctuation, capitalization at the beginning of sentences, subject/verb agreement). This was measured in two ways. In three studies (Frase et al., 1985; Greenland & Bartholome, 1987; Kiefer & Smith, 1983), objective tests were used to measure students' ability to identify and correct violations of conventions. In the nine other studies, treatment groups were compared on the number of convention errors that remained in their postinstructional writing.

Of the 12 studies measuring skill with writing conventions, 7 reported positive outcomes and 5 reported negative outcomes (Table 3). Seven studies permitted the calculation of effect sizes. The average effect size for conventions was 0.16 standard deviations, not significantly different from zero, t(7) = 1.13, p = .30. In fact, the median effect size was zero.

Nine studies measured both skill with writing conventions and writing quality. Three were jointly positive, one was jointly negative, three were negative in quality but positive in conventionality, and two were positive in quality but negative in conventionality. Four studies reported effect sizes for both outcomes, but the correlation was virtually zero.

Frequency of Revision

In seven studies, researchers measured students' frequency of revision by counting the number of changes between the first and final drafts of their postinstructional writing samples. This kind of measure presents some interpretative problems. First, simple counts do not distinguish between changes that improve drafts and changes that introduce flaws. Second, simple counts do not distinguish among revisions of varying significance. How does one count the deletion of a poor paragraph, for example?
It is usually counted as one revision; however, if the writer had kept the paragraph and made many surface changes, the same passage would count as many more revisions. Finally, the revisions of students writing their postinstructional compositions on computer may be undercounted, because students may make many revisions while crafting an electronic document that disappear from the final paper copy.

Keeping these problems in mind, one can see the list of revision outcomes in Table 3. Four studies reported positive findings, and three reported negative findings. All three negative findings came from studies where the word processing group used the computer on the criterion writing sample. Four studies yielded effect sizes that are quite divergent, averaging 0.18 standard deviations but with a large standard error (SE = 0.36). The median was virtually zero (-0.03 standard deviations). The only study that reported an effect size and also had both the computer-composing and hand-composing groups write the criterion sample by hand was Lytle (1987), which showed a very large effect size in favor of the word processing group. If any conclusion can be drawn from such a small group of studies, it at least cautions researchers that counting changes on hard-copy drafts of word processing students may not be the same as counting changes on handwritten compositions.

Four studies measured both frequency of revision and writing quality, but these outcomes showed no relation to each other. Only three studies produced

effect sizes for both outcomes, and the relation they suggest contradicts expectations. Lytle (1987), which showed word processing students doing much more revision than their paper-and-pencil peers, showed virtually no difference in the groups' writing quality. Word processing students in Deming (1987), on the other hand, revised less frequently but produced better quality writing. Considering the difficulties in interpreting this variable, differences in postinstructional writing conditions, and the small sample of studies available, it is best to treat these findings as inconclusive.

One final note on revision outcomes: Kiefer and Smith (1983) and Frase et al. (1985) gave their students a 470-word passage and asked them to revise it to "improve the style or mechanics." In both cases, students who had received their writing instruction with word processors made more revisions than the comparison group students, even though all revising was done with pen. Word processing students in both studies had been working with text analysis software (Writer's Workbench), and the authors interpreted the increased revision as a transfer of learning from that experience.

Discussion

Humans invent tools to extend their capacity to understand and regulate their world. These inventions, in turn, can transform the inventor, changing the ways in which humans conceive of and respond to their experiences. The capability of electronic tools to transform information rapidly has introduced a new kind of tool, one that can mimic some aspects of human cognition and thus interactively engage in extending the capacities of the mind. Obviously, these cognitive tools can have what Perkins (1985) calls first-order fingertip effects, immediate practical benefits to the user. But do they have second-order fingertip effects, lasting enhancements of complex cognition?
Under what circumstances can cognitive tools become instructional tools, the mere use of which generates educational benefits?

There are reasons to think that word processing software could capitalize on the computer's capacity for rapid data transformation and thereby serve as an instructional tool. It would be the simplest kind of tool, contributing to the efficiency of processes in which the user is already engaged, but neither allowing users to create new kinds of tools nor directly prompting metacognition. In spite of this, it may present users with a new way of conceptualizing written text, as a fluid, alterable communication similar to thinking and speaking. It also may free users to practice thinking about "higher level" aspects of writing (e.g., organization and clarity) by simplifying mechanical tasks. If embedded in the context of writing instruction that emphasizes the writing process, rather than analytically focusing on decomposed writing subskills, use of the word processor might be expected to have lasting effects on aspects of students' writing.

The findings from research on the effects of word processing have been ambiguous. Results of individual studies are often contradictory, and readers are left suspecting that unspecified contextual features mediate the impact of word processing on writing. Reviews of word processing research (e.g., Cochran-Smith et al., 1991; Hawisher, 1989; Russell, 1991) have sharpened critical questions in word processing research but have themselves disagreed on fundamental conclusions.

Perhaps conclusions in previous reviews are discrepant because they were drawn from research conducted in a variety of contexts using a variety of research methods. Writing contexts included instructional and noninstructional settings; expert and novice writers; brief, one-composition investigations; and long-term studies. The Russell meta-analysis combined keyboarding instruction with word processing studies. Research methods included case studies, pre-post designs, and within-subject alternating treatments. Although each method has special advantages, each also has its drawbacks. It is difficult, for example, to separate treatment effects from historical effects in long-term pre-post studies, or to disentangle the possible interaction among treatments as a user moves back and forth between writing by hand and by computer. Combining them all can complicate the picture still further.

The present review contributes to the investigation of word processing in three ways. First, it limited its focus to studies using a consistent research method employed in similar educational contexts. All the studies compared identical instructional programs on the writing process, differing solely in that only one group was allowed to use word processing software. Second, it assembled a larger group of such studies than had previously been gathered. Third, it used meta-analysis to explore the magnitude, not just the frequency, of word processing effects and to systematically look for contextual variables that might account for variation in those effects.

Writing quality is frequently reported to have improved after instruction with word processing. A full 66% of the 28 studies that measured writing quality reported this effect. But frequency can be deceiving. Although a frequently reported outcome, the median magnitude of this improvement, in the 20 studies permitting effect size calculation, is fairly small (0.21 standard deviations).
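The "standard deviations" referred to throughout are standardized mean differences. One common definition, from the meta-analytic tradition of Glass, McGaw, and Smith (1981), whom the review cites, divides the treatment-control difference in means by the control group's standard deviation; whether this exact variant was applied to every study is not restated in this section, so the sketch below, with invented numbers, is illustrative only.

```python
def glass_delta(mean_exp: float, mean_ctl: float, sd_ctl: float) -> float:
    """Standardized mean difference: (experimental - control) / control SD."""
    return (mean_exp - mean_ctl) / sd_ctl


# Hypothetical holistic writing-quality ratings on a 1-8 scale;
# these numbers are invented, not drawn from any of the reviewed studies.
es = glass_delta(mean_exp=5.1, mean_ctl=4.6, sd_ctl=1.2)
print(round(es, 3))  # → 0.417, a small positive effect of about 0.4 SD
```

An effect of 0.21 standard deviations, the median reported above, would mean the average word processing student scored about 0.21 control-group standard deviations above the average control student.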
This finding of a general low-magnitude skill improvement is very similar to the effects reported for another simple instructional tool, the hand-held calculator (Hembree & Dessart, 1986), and somewhat higher than the median writing quality effect reported by Russell (1991; 0.09 standard deviations).

Only one contextual feature was related to writing quality effect sizes. Studies of remedial instruction for students who had previously demonstrated difficulty with writing indicated greater improvements from word processing experience than studies of typical writing instruction. Also, studies of remedial writing instruction reported reduced variance among the writing quality scores of word processing students. Interestingly, duration of study was not significantly related to variance in effect sizes. One would have expected that, if using this instructional tool did have an impact on cognitive skill, the improvement would be more detectable with longer exposure to the tool.

The studies do not provide enough information to determine whether these relations among writing quality, word processing, and ability can best be characterized in terms of instructional or student features. That is, features of remedial instruction or of basic writers might interact with word processing to affect writing quality. However, these three findings—that basic writers tended to benefit more from word processing than higher ability students, that basic writers tended to have less variance in writing quality after experience with word processing, and that duration of this experience did not appear related to writing quality—suggest that the word processing experience has a motivational impact

on basic writers, encouraging all to engage in writing tasks more wholeheartedly. A motivational impact could result in roughly equal effects for short- or long-term interventions, whereas actual skill improvement would more likely show consistent improvement over time. Increased motivation could have the greatest effect on students who are in some way disaffected from their writing instruction; improvement for this group would reduce the variance of writing quality effects and slightly elevate the average writing quality effects for all students with whom they are grouped.

If the effects of word processing are primarily attributable to motivation, one might expect that students' attitude toward writing would improve after instruction, an effect shown neither in the present meta-analysis nor in the Russell meta-analysis. In studies that measured both kinds of outcomes, attitude toward writing had a strong positive relation with writing quality, but the average of seven studies measuring attitude toward writing showed that students who hand composed and students who computer composed had roughly the same attitudes toward writing.

It is important to note, however, that attitude toward writing itself may not be the chief determinant of engagement in word processing. Students may become more enthusiastic about word processing, not because of more positive attitudes toward writing, but because they enjoy working on the computer. Attitudes toward word processing would not be captured in effect sizes from the comparative studies included in this meta-analysis because control students, by definition, could not comment on their experience with word processors. However, in their narrative review, Cochran-Smith et al. (1991) collected several studies that observed students' positive attitudes toward writing with computers.
The fact that students tend to produce longer documents after writing instruction on the word processor, a finding of this meta-analysis as well as of Cochran-Smith et al. (1991) and Hawisher (1989), can be taken as evidence of this increased willingness to engage in the writing task.

The meta-analysis described here can offer little insight regarding revision processes after writing instruction with word processing. In these comparative studies, the typical measure of revision is a count of the number of changes from an early to a final draft of a composition, a measurement procedure that ignores many on-screen revisions that never appear on paper. In this meta-analysis, the study of revision frequency that had the largest effect size also had all students write their final composition by hand; the study with the smallest revision effect size had the word processing group compose their final writing sample on the computer. The number of studies measuring this outcome was quite small (four), and the results were varied and therefore inconclusive. For this outcome, it is probably better to rely on close observations of revising processes or records of actual keystrokes than on counts of changes in paper drafts (Fitzgerald, 1987).

Results of research on word processing are consistent with findings for other instructional tools, indicating small positive improvement in the performance of higher order processes. Why are these results so much smaller than one might have hoped? In the case of programming, Johanson (1988) has argued that the design of the instruction that provides the context for the tool is critical if the tool is to enhance cognition. The accompanying instruction must explicitly identify and practice the skills that one expects to gain from the tool in order

for those gains to occur. Sure enough, in cases where accompanying instruction highlights cognitive process, cognitive process is affected by work with the tool (e.g., Lehrer et al., 1989; Swan, 1991). Similarly, Hembree and Dessart (1986) found that the use of calculators had greater effects on cognition when that use was embedded in instruction that capitalized on special features of the calculator.

None of the studies included in this meta-analysis reported special instructional adaptations for special features of the word processor. It would seem easy and potentially powerful, for example, to make explicitly and experientially clear to students a new view of text as fluid and as closely allied to thought and speech. It would also seem easy and potentially powerful to demonstrate the ease with which revision can be accomplished, to distinguish between surface and deeper revision and show how both can be accomplished with the word processor, or to show how word processors can be used differently for different phases of the writing process. No such instructional adaptations were reported in these studies.

Other evidence suggests that adaptations of the word processing software itself might also be advantageous. Daiute's addition of a revision prompting program to a word processor (Daiute, 1986; Daiute & Kruidenier, 1985) encouraged students to systematically reflect on different qualities of different parts of their essays. These prompts helped students make more frequent and interactive revisions to their documents. The authors credit these improvements to several factors: practical consolidation of instruction on revising, provision of a model for self-prompting, and direction of attention to specific revision strategies.
Salomon, Globerson, and Guterman (1989) similarly found that embedded metacognitive prompts in the Reading Partner software stimulated greater mental effort during reading and improved reading comprehension and transferable text management skills well after use of the software.

Instruction with even simple tools such as the word processor and the hand-held calculator can have small positive effects on the performance of their users. These benefits are probably derived from motivational gains attributable to increased work efficiency and the quality of tool-assisted products. Evidence from several areas of research suggests that these small benefits can be amplified if the tool is used in instruction adapted to its features or if the tool explicitly prompts or guides higher order thinking. It seems reasonable to expect such gains from adjustments in the use of word processing in writing instruction.

References

Bangert-Drowns, R. L. (1986). A review of developments in meta-analytic method. Psychological Bulletin, 99, 388-399.
Bangert-Drowns, R. L., & Kozma, R. B. (1989). Assessing the design of instructional software. Journal of Research on Computing in Education, 21, 241-262.
Burnett, J. H. (1984). Word processing as a writing tool of an elementary school student: A single-case experiment with nine replications. Dissertation Abstracts International, 47, 1183A.
Cheever, M. S. (1987). The effects of using a word processor on the acquisition of composition skills by the elementary student. Dissertation Abstracts International, 48, 43A. (University Microfilms No. 87-10, 325)
Cirello, V. J. (1986). The effect of word processing on the writing abilities of tenth grade remedial writing students. Dissertation Abstracts International, 47, 2531A. (University Microfilms No. 86-14, 353)

Cochran-Smith, M., Paris, C. L., & Kahn, J. L. (1991). Learning to write differently. Norwood, NJ: Ablex.
Coulter, C. A. (1986). Writing with word processors: Effects on cognitive development, revision, and writing quality. Dissertation Abstracts International, 47, 2552A. (University Microfilms No. 86-255, 529)
Crealock, C. M., Sitko, M. C., Hutchinson, A., Sitko, C., & Marlett, L. (1985, April). Creative writing competency: A comparison of paper and pencil and computer technologies to improve the writing skills of mildly handicapped adolescents. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago. (ERIC Document Reproduction Service No. ED 259 531)
Cross, J. A., & Curey, B. J. (1984, May). The effect of word processing on writing. Paper presented at the mid-year meeting of the American Society for Information Science, Bloomington, IN. (ERIC Document Reproduction Service No. ED 247 921)
Daiute, C. (1986). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20, 141-159.
Daiute, C., & Kruidenier, J. (1985). A self-questioning strategy to increase young writers' revising processes. Applied Psycholinguistics, 6, 307-318.
Dalton, D. W., & Hannafin, M. J. (1987). The effects of word processing on written composition. Journal of Educational Research, 80, 338-342.
Deming, M. P. (1987). The effects of word processing on basic college writers' revision strategies, writing apprehension, and writing quality while composing in the expository mode. Dissertation Abstracts International, 48, 2263A. (University Microfilms No. 87-27, 191)
Duling, R. A. (1985). Word processors and student writing: A study of their impact on revision, fluency, and quality of writing. Dissertation Abstracts International, 46, 3535A.
Etchison, C. (1985).
A comparative study of the quality and syntax of compositions by first year college students using handwriting and word processing. Dissertation Abstracts International, 47, 163A. (University Microfilms No. 86-06, 203)
Fitch, J. E. (1985). The effect of word processing on revision and attitude toward writing. Evanston, IL: National College of Education. (ERIC Document Reproduction Service No. ED 272 898)
Fitzgerald, J. (1987). Research on revision in writing. Review of Educational Research, 57, 481-506.
Frase, L. T., Kiefer, K. E., Smith, C. R., & Fox, M. L. (1985). Theory and practice in computer-aided composition. In S. W. Freedman (Ed.), The acquisition of written language (pp. 195-210). Norwood, NJ: Ablex.
Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.
Greenland, L. T., & Bartholome, L. W. (1987). The effect of the use of microcomputers on writing ability and attitude toward business communication class. The Delta Pi Epsilon Journal, 29, 78-90.
Haas, C. (1986). Computers and the writing process: A comparative protocol study (Tech. Rep. No. 34). Pittsburgh, PA: Communications Design Center.
Hawisher, G. E. (1986). The effects of word processing on the revision strategies of college students. Dissertation Abstracts International, 47, 876A.
Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. E. Hawisher & C. L. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 44-69). New York: Teachers College Press.
Hawisher, G. E., & Fortune, R. (1989). Word processing and the basic writer. Collegiate Microcomputer, 7, 275-284.

Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17, 83-99.
Hult, C. (1985, February). A study of the effects of word processing on the correctness of student writing. Paper presented at the annual meeting of the Conference on College Composition and Communication, Minneapolis, MN. (ERIC Document Reproduction Service No. ED 260 425)
Jackson, L. W., McHaney, J. H., & Handley, H. M. (1984). Relating learning styles to performance on written composition using microcomputer word processing and the traditional hand-written method. Unpublished manuscript. (ERIC Document Reproduction Service No. ED 276 014)
Johanson, R. P. (1988). Computers, cognition and curriculum: Retrospect and prospect. Journal of Educational Computing Research, 4, 1-30.
Juettner, V. W. (1987). The word processing environment and its impact on the writing of a group of high school students. Dissertation Abstracts International, 48, 635A.
Kantz, K. E. (1989). A study of computer-based composition for the learning disabled and underprepared university student. Unpublished master's thesis, California State University, Long Beach.
Kiefer, K. E., & Smith, C. R. (1983). Textual analysis with computers: Tests of Bell Laboratories' computer software. Research in the Teaching of English, 17, 201-214.
King, B., Birnbaum, J., & Wageman, J. (1984). Word processing and the basic college writer. In T. E. Martinez (Ed.), The written word and the word processor (pp. 251-261). Philadelphia, PA: Delaware Valley Writing Council.
Kozma, R. B. (1991). Learning with media. Review of Educational Research, 61, 179-211.
Kurth, R. J. (1987, January). Using word processing to enhance revision strategies during student writing activities. Educational Technology, 27, 13-19.
Lehrer, R., Levin, B., DeHart, P., & Comeaux, M. (1987).
Voice-feedback as a scaffold for writing: A comparative study. Journal of Educational Computing Research, 3, 335-353. Lehrer, R., Samcilio, L., & Randle, L. (1989). Learning pre-proof geometry with Logo. Cognition and Instruction, 6, 159-184. Levin, J. A., Boruta, M. J., & Vascocellos, M. T. (1983). Microcomputer-based environments for writing: A writer's assistant. In A. C. Wilkinson (Ed.), Classroom computers and cognitive science (pp. 219-232). New York: Academic Press. Lytle, M. J. (1987). Word processors and writing: The relation of 7th grade students' learner characteristics and revision behaviors. Dissertation Abstracts International, 48, 2852A. (University Microfilms No. 88-00,537) McLuhan, M. (1962). The Gutenberg galaxy: The making of typographic man. New York: New American Library. Miller, S. K. (1984). Plugging your pencil into the wall: An investigation of word processing and writing skills at the middle school level. Dissertation Abstracts International, 45, 3535A. Moore, M. A. (1987). The effect of word processing technology in a developmental writing program on writing quality, attitude towards composing, and revision strategies of fourth and fifth grade students. Dissertation Abstracts International, 48, 635A. Olson, D. R. (1976). Towards a theory of instructional means. Educational Psychologist, 12, 14-35. 91

Ong, W. J. (1982). Orality and literacy: The technologizing of the word. New York: Methuen.
Palumbo, D. B. (1990). Programming language/problem-solving research: A review of relevant issues. Review of Educational Research, 60, 65-89.
Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D. N. (1985). The fingertip effect: How information-processing technology shapes thinking. Educational Researcher, 14, 11-17.
Posey, E. J. (1986). The writer's tool: A study of microcomputer word processing to improve the writing of basic writers. Dissertation Abstracts International, 48, 39A.
Robinson-Staveley, K., & Cooper, J. (1990). The use of computers for writing: Effects of an English composition class. Journal of Educational Computing Research, 6, 41-48.
Rosenbaum, N. J. (1987). A study to determine the value of computers in the revision process of written communication. Dissertation Abstracts International, 49, 1124A. (University Microfilms No. 88-12,577)
Russell, R. G. (1991, April). A meta-analysis of word processing and attitudes and the impact on the quality of writing. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago.
Salomon, G. (1988, April). AI in reverse: Computer tools that become cognitive. Invited address at the Annual Meeting of the American Educational Research Association, New Orleans. (ERIC Document Reproduction Service No. ED 295 610)
Salomon, G., Globerson, T., & Guterman, E. (1989). The computer as a zone of proximal development: Internalizing reading-related metacognitions from a reading partner. Journal of Educational Psychology, 81, 620-627.
Schank, E. T. (1986). Word processor versus "The Pencil" effects on writing. Unpublished master's thesis, Kean College, Union, NJ. (ERIC Document Reproduction Service No. ED 270 791)
Sherzer, J. (1987). A discourse-centered approach to language and culture. American Anthropologist, 89, 295-309.
Shinn, J. A. (1986). The effectiveness of word processing and problem-solving computer use on the skills of learning disabled students. Dissertation Abstracts International, 47, 4069A. (University Microfilms No. 87-05,046)
Summers, N. (1985). Responding to student writing. College Composition and Communication, 31, 378-387.
Swan, K. (1991). Programming objects to think with: Logo and the teaching and learning of problem-solving. Journal of Educational Computing Research, 7, 89-112.
Teichman, M., & Poris, M. (1985). Word processing in the classroom: Its effects on freshman writers. New York: Marist College. (ERIC Document Reproduction Service No. ED 276 062)
Vockell, E. L., & Schwartz, E. (1988). Microcomputers to teach English composition. Collegiate Microcomputer, 6, 148-154.
Wetzel, K. A. (1985). The effect of using the computer in a process writing program on the writing quality of third, fourth, and fifth grade pupils. Dissertation Abstracts International, 47, 76A. (University Microfilms No. 86-05,868)
Whorf, B. L. (1956). The relation of habitual thought and behavior to language. In J. B. Carroll (Ed.), Language, thought, and reality: Selected writings of Benjamin Lee Whorf (pp. 134-159). Cambridge, MA: MIT Press.

Author

ROBERT L. BANGERT-DROWNS is Assistant Professor, University at Albany, State University of New York, ED 110, 1400 Washington Avenue, Albany, NY 12222. He specializes in research synthesis, instructional design, and instructional psychology.

Received July 1, 1992
Revision received October 26, 1992
Accepted October 30, 1992
