Using a Web-Based Interactive Test as a Learning Tool

JOE BOB HESTER

The use of computer-mediated communication (CMC) as a pedagogic tool for class communication has been encouraged by numerous authors. Smith, Kim, and Bernstein (1993) suggested that classroom communication using CMC provides benefits such as increased communication, fostering efficiency in teamwork situations, facilitating problem-solving, and providing communication means for people who have trouble expressing themselves or being understood in face-to-face situations. Menges (1994) argued that interactive technology would ensure attention, and that “well-designed computer-mediated instruction is more likely to engage individuals than are the words of a professor in front of a room filled with students” (p. 187).

Many of the reports on the use of CMC in journalism and mass communication refer primarily to the use of electronic mail and bulletin boards. Elasmar and Carter (1996) suggested that journalism professors could use e-mail as part of their curricular activities. For example, the authors suggested using electronic mail to distribute homework to students, asking students to submit assignments by e-mail, and creating a dedicated electronic bulletin board for each course. Smith, Kim, and Bernstein (1993) offered anecdotal evidence of the successful integration of e-mail and bulletin boards into journalism classes.

As technology has developed, a number of authors have turned their attention to the use of the Internet, specifically the World Wide Web, in journalism and mass communication courses. Gunaratne and Lee (1996) reported on their development of a Web site tailor-made for teaching three journalism courses. Their “Internet Resources for Journalism” homepage is primarily a collection of links to resources and tools located on the Internet. In course evaluations for the three courses, students indicated that the courses improved their skills in the use of Internet research tools.

Hester is assistant professor of mass communications at Texas Tech University.

Farnall and Geske (1996) investigated the use of the Internet in teaching an advertising strategy class, comparing class sections taught using traditional means with class sections taught using a heavy emphasis on the Internet as a source of information. They found that integration of the Internet into the classroom improved student and instructor performance and evaluation of the class as a whole. Results indicated that 72 percent of the students preferred the Internet over textbooks as a source of information. The Internet browser Netscape Navigator was considered a good learning tool, and 87 percent of the students indicated they wanted to see the Internet used in more courses.

The results of the use of CMC in the classroom are not always favorable. In an experiment conducted by Smith (1994), students in traditional sections of a media law course reported higher satisfaction with the course than those students in sections using CMC to supplement traditional instruction methods. Smith (1994) also reported no significant differences in final exam results between traditional and on-line sections. Hudson and Holland (1992) reported that students who received interactive multimedia instruction in a video production class perceived that they learned more.

However, the students did not score significantly higher than students who received traditional lecture instruction. In addition, Elasmar and Carter (1996) found that computer anxieties such as rejection and technical-based phobia of computers were negatively related to acceptance and potential use of e-mail. While some studies have reported gender differences in the uses of CMC, Elasmar and Carter (1996) found no significant difference in e-mail use between male and female respondents.

The current study

This study investigated the effectiveness of one particular CMC technique, a Web-based interactive sample test, as a part of other CMC techniques used to supplement traditional instruction in a large lecture class. The course, mass communication theories, seemed particularly suited for the use of CMC due to its large section size. With an enrollment of 190 students in a single section and no teaching assistant, traditional means of student contact outside the classroom are difficult. In an effort to implement CMC into the course, each student was required to obtain an e-mail account. Students were encouraged to use e-mail to ask questions outside of the classroom. In addition, a Web site was created for the course. The Web site consisted of an on-line syllabus and semester schedule, plus numerous Web pages containing course materials that supplemented lectures and readings. These materials ranged from copies of lecture overheads to a list of example questions from exams.

Approximately one week before the first exam, the instructor created an interactive sample test that allowed students to experience the types of questions they would later face on the exam. The interactive sample test consisted of five multiple choice questions. For each question, a student could click on one of four hyperlinked answers. If the student chose an incorrect answer, the link connected the student with a statement informing the student not only that the answer was incorrect, but also explaining why. After this explanation, another hyperlink allowed the student to return to the question and try again. Once the correct answer was selected, the student was linked to the next question.

Student response to the interactive sample test was very positive from those students who actually tried it. E-mail messages to the instructor seemed to indicate not only that the sample questions were helpful, but also that the interactive nature of the sample test made it a useful tool for studying the course materials. However, based on an informal poll of the class after the first exam, a large number of students had not even tried to access and use the interactive sample test.
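The branching structure described above amounts to a small web of cross-linked static pages: each answer on a question page links either to an explanation page (which links back to the question) or onward to the next question. As a rough sketch of the idea, and not the author’s actual implementation, a short script could generate such pages; the question data and file names below are hypothetical:

```python
# Sketch: generate the cross-linked HTML pages of an interactive sample test.
# The question, answers, explanations, and file names are all hypothetical.

QUESTIONS = [
    {
        "text": "Which theory holds that media coverage shapes the public agenda?",
        "answers": [
            ("Agenda setting", True, ""),
            ("Cultivation", False, "Cultivation concerns long-term effects of heavy viewing."),
            ("Uses and gratifications", False, "That theory concerns audience motives."),
            ("Spiral of silence", False, "That theory concerns willingness to express opinions."),
        ],
    },
    # ...the remaining questions would follow the same structure...
]

def write_page(path, body):
    with open(path, "w") as f:
        f.write(f"<html><body>{body}</body></html>")

for qi, q in enumerate(QUESTIONS):
    next_page = f"q{qi + 2}.html" if qi + 1 < len(QUESTIONS) else "done.html"
    links = []
    for ai, (answer, correct, explanation) in enumerate(q["answers"]):
        if correct:
            # The correct choice links straight to the next question.
            links.append(f'<p><a href="{next_page}">{answer}</a></p>')
        else:
            # A wrong choice links to a page that explains the error and
            # offers a link back to the question to try again.
            feedback = f"q{qi + 1}a{ai}.html"
            write_page(feedback,
                       f"<p>Incorrect: {explanation}</p>"
                       f'<p><a href="q{qi + 1}.html">Return to the question</a></p>')
            links.append(f'<p><a href="{feedback}">{answer}</a></p>')
    write_page(f"q{qi + 1}.html", f"<h3>{q['text']}</h3>" + "".join(links))

write_page("done.html", "<p>You have completed the sample test.</p>")
```

Because the resulting pages are plain static HTML, a test of this kind needs only ordinary Web server space, consistent with the course Web site described above.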

Research questions

Based on the literature, trial use of an interactive sample test, and informal feedback from students about the interactive sample test, the following research questions were developed:

Question 1: Is an interactive sample test a valuable educational tool for students?
Question 2: Does the level of interactivity in a sample test affect its usefulness?
Question 3: What role does computer anxiety play in the use of an interactive sample test in the classroom?
Question 4: What role does gender play in the use of an interactive sample test in the classroom?

Method

Two versions of an interactive sample test were prepared. Each version contained 10 questions covering the same information. The only difference between the two versions was the level of interactivity. The first version was the least interactive of the two. For each question, regardless of which answer a student selected, the student was linked to a portion of the page that included all four possible answers along with explanations why each selection was either correct or incorrect. The second version was more interactive and worked like the original sample test. Selecting an incorrect answer resulted in an explanation of why the answer was incorrect and a link back to the question to try again.

Each of the 150 students in the class who had provided the instructor with an e-mail address was sent an e-mail message announcing the availability of the new interactive sample test covering the material for the second exam. Smith, Kim, and Bernstein (1993) noted that a critical factor in motivating usage of computer-mediated communication is for the instructor to provide incentives for students to use it. Therefore, the message also offered an extra credit point on the student’s final grade for completing the interactive sample test and the short questionnaire that followed within the next three days. The message included a URL for the interactive sample test. Selected at random, one half of the class received the URL for the first version, and the other half of the class received the URL for the second version.
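The random split of the roster into the two URL groups is simple to reproduce. A minimal sketch, assuming the roster is just a list of e-mail addresses; the addresses and URLs below are placeholders, not the study’s:

```python
import random

# Hypothetical roster; the study's actual 150 addresses are not shown.
students = ["student01@example.edu", "student02@example.edu",
            "student03@example.edu", "student04@example.edu"]

# Placeholder URLs for the two versions of the sample test.
URL_FORM_1 = "http://www.example.edu/sampletest1/q1.html"
URL_FORM_2 = "http://www.example.edu/sampletest2/q1.html"

shuffled = random.sample(students, k=len(students))  # random order, no repeats
half = len(shuffled) // 2

# One half of the class is assigned the first form, the other half the second.
assignments = {addr: URL_FORM_1 for addr in shuffled[:half]}
assignments.update({addr: URL_FORM_2 for addr in shuffled[half:]})

for addr, url in assignments.items():
    print(f"To: {addr}\nThe new interactive sample test is available at: {url}\n")
```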


Of the 150 students who were sent this e-mail message, 73 completed the questionnaire within the three-day time period. A response rate of 49 percent yielded a sample consisting of 31.5 percent males and 68.5 percent females. The average age of the respondents was 20.3 years.

Evaluation of the interactive sample test. The questionnaire located at the end of the interactive sample test included ten statements specifically about the sample test. Respondents were asked to indicate their level of agreement with each statement on a 7-point scale with “1” indicating “strongly disagree” and “7” indicating “strongly agree.” The statements were:

The questions on the sample test were too easy.
Taking this sample test will help me get a better grade on the next exam.
The sample test was easy to use.
There were not enough questions on the sample test.
The sample test is a good learning tool for me.
There should be more sample tests for this course.
The questions on the sample test are not representative of questions on real exams.
The material on the sample test was valuable to me.
The questions on the sample test were challenging.
I enjoyed taking the sample test.

The sample test data was analyzed using a principal components factor analysis with varimax rotation. The factor analysis resulted in a three-factor solution accounting for 58 percent of the variance in the data. The first factor was deemed to measure the perceived value of the sample test to the respondents as a learning tool.


This factor was labeled “perceived value,” and it accounted for 34 percent of the variance (eigenvalue = 3.39, Cronbach’s alpha = 0.63). The second factor was deemed to measure the ease and enjoyment of using the sample test. This factor was labeled “ease of use,” and it accounted for 13.5 percent of the variance (eigenvalue = 1.36, Cronbach’s alpha = 0.53). The third factor was deemed to measure the level of difficulty of the sample questions. This factor was labeled “difficulty level,” and it accounted for 10.7 percent of the variance (eigenvalue = 1.07, Cronbach’s alpha = -0.25).

Computer anxiety. The questionnaire also included items adapted from Raub’s (1981) Computer Anxiety scale. (Also see Elasmar & Carter, 1996.) The scale is a measure of anxiety toward computers and technology. Respondents were asked to indicate their agreement on a 7-point scale with “1” indicating “strongly disagree” and “7” indicating “strongly agree,” by responding to the following statements:

I don’t like using computers for class because it is too time consuming.
It is difficult to enjoy using a computer.
I am confident in my computer skills.
I have avoided computers because they are unfamiliar to me.
I feel apprehensive about using a computer terminal.
I hesitate to use a computer for fear of making mistakes I cannot correct.
I have difficulty in understanding most technical matters.
Computer terminology sounds like confusing jargon to me.

The computer anxiety data was analyzed using a principal components factor analysis with varimax rotation. The factor analysis resulted in a two-factor solution accounting for 62 percent of the variance in the data. The first factor was deemed to measure unease with the technical aspects of computers and was labeled “technical-based phobia” (Elasmar and Carter, 1996). It accounted for 48.7 percent of the variance (eigenvalue = 3.90, Cronbach’s alpha = 0.78). The second factor was deemed to measure lack of enjoyment in using computers and was labeled “computer rejection” (Elasmar and Carter, 1996). This factor accounted for 13.3 percent of the variance (eigenvalue = 1.06, Cronbach’s alpha = 0.71).
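Analyses of this kind (principal components extraction with varimax rotation, plus Cronbach’s alpha for the items loading on each factor) can be reproduced with standard statistical libraries. A sketch using simulated stand-in data, since the study’s raw responses are not available; the item groupings are hypothetical:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(0)
# Simulated stand-in for the questionnaire: 73 respondents x 10 items
# rated on a 7-point scale. The study's raw data are not available.
X = rng.integers(1, 8, size=(73, 10)).astype(float)

# Principal components extraction with varimax rotation, as in the article.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(X)
print("Rotated loadings:\n", np.round(fa.loadings_, 2))
print("Proportion of variance:", np.round(fa.get_factor_variance()[1], 3))

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Alpha for whichever items load most heavily on a factor; the column
# indices here are hypothetical, not the study's actual item groupings.
print("alpha:", round(cronbach_alpha(X[:, [1, 4, 5, 7]]), 2))
```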

Results

Of the 73 students who responded, 82.2 percent had their own computers at school. The majority (72.6%) owned IBM/compatible computers, and 71.2 percent also owned a modem. Almost all respondents (95.9%) reported using e-mail at least once every few weeks, with 45.2 percent reporting using e-mail at least weekly. A similar number of respondents (95.5%) reported using the World Wide Web at least once every few weeks, with 78.1 percent reporting using the World Wide Web at least once per week. There were no significant differences in the use of e-mail between males and females. Males tended to report using the Internet more than females, both before taking the course (χ² = 15.9, df = 4, p = .003) and while taking the course (χ² = 14.88, df = 4, p = .005). Overall, students strongly agreed (mean = 6.49 on a 7-point scale) that more courses should include a Web site as part of the course materials.

“Perceived value” of the interactive sample test. The students in the sample strongly agreed (mean = 6.28, SD = 0.69, n = 73) with the statements that loaded most heavily on the “perceived value” factor. Analysis of variance of “perceived value” factor scores indicated a significant two-way interaction between form and gender (MSE = 0.90, F(1,69) = 8.05, p = 0.006). Table 1 shows the results of the follow-up analysis for each of the four statements that loaded most heavily on this factor. The results show that females differed very little in their evaluation of the “perceived value” of the two forms of the sample interactive test. However, males who had used the least interactive form of the test agreed significantly less with the statements “There should be more sample tests for this course” and “The material on the sample test was valuable to me.” In other words, males perceived the more interactive version of the two forms of the sample test as being more valuable.

TABLE 1
COMPARISON OF MEANS FOR “PERCEIVED VALUE” FACTOR STATEMENTS

                                                           Form 1            Form 2
Question                                                 Male   Female     Male   Female
“There were not enough questions on the sample test.”   4.57a   5.80a     6.00a   5.00a
“The sample test is a good learning tool for me.”       6.14a   6.65a     6.89a   6.63a
“There should be more sample tests for this course.”    6.21a   6.85b     6.89b   6.77b
“The material on the sample test was valuable to me.”   6.00a   6.80b     6.89b   6.67b

Note. Means not sharing superscripts are significantly different at the .05 level using the Tukey-HSD test. Means are computed on a 7-point scale with “1” indicating “strongly disagree” and “7” indicating “strongly agree.”
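The form-by-gender analysis of variance and the Tukey-HSD follow-ups summarized in Table 1 follow a standard recipe. A sketch on simulated stand-in data, assuming one row per respondent with a factor score, the form taken, and gender:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
n = 73  # simulated stand-in for the 73 respondents

df = pd.DataFrame({
    "score": rng.normal(6.3, 0.7, n),            # "perceived value" factor score
    "form": rng.choice(["form1", "form2"], n),   # which version was taken
    "gender": rng.choice(["male", "female"], n),
})

# Two-way ANOVA including the form x gender interaction reported above.
model = smf.ols("score ~ form * gender", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey-HSD comparisons across the four form-by-gender cells,
# as in the follow-up analysis behind Table 1.
cells = df["form"] + "/" + df["gender"]
print(pairwise_tukeyhsd(df["score"], cells, alpha=0.05))
```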


Analysis of variance for the “ease of use” factor scores did not indicate any significant main effects or interactions. The “difficulty level” factor scores were not analyzed due to the low reliability score for the factor.

Impact of computer anxiety. Students who do not enjoy using computers tended to find the interactive sample test more difficult to use than students who enjoy using computers. Scores on the rejection-based computer anxiety factor were significantly and negatively correlated with evaluations of the interactive sample test’s ease of use (r = -0.35, p ≤ .01). No significant correlations were found between technical-based phobia and the evaluations of the sample interactive test. Students who do not enjoy using computers also reported less use of both e-mail and the Internet prior to taking the course. Rejection-based computer anxiety was significantly and negatively correlated with prior use of e-mail (r = -0.46, p ≤ .01) and prior use of the Internet (r = -0.44, p ≤ .01). Students who were uneasy with the technical aspects of using computers reported less prior use of the Internet but not e-mail. Technical-based phobia was significantly and negatively correlated with use of the Internet (r = -0.30, p < .05) but was not significantly related to use of e-mail.

Exam performance. Mean scores on the second exam in the course revealed a significant difference (t = -2.92, df = 166, p = .004) in scores between those students who had used either version of the interactive sample test and the rest of the class. Students who had taken the interactive sample test (mean = 72.99) scored more than four points higher than the rest of the students in the sample (mean = 68.69).
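The correlational results and the exam-score comparison rest on Pearson correlations and an independent-samples t test, both of which are one-liners in common statistical libraries. A sketch with simulated stand-ins for the study’s variables:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated stand-ins for 73 respondents: rejection-based anxiety scores
# and "ease of use" evaluations. The study's raw data are not available.
rejection = rng.normal(2.5, 1.0, 73)
ease_of_use = 7 - 0.4 * rejection + rng.normal(0, 0.8, 73)

r, p = stats.pearsonr(rejection, ease_of_use)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r, as reported above

# Exam scores: sample-test takers versus the rest of the class
# (group sizes chosen so that df = 73 + 95 - 2 = 166, as in the article).
took_test = rng.normal(72.99, 10, 73)
did_not = rng.normal(68.69, 10, 95)
t, p = stats.ttest_ind(took_test, did_not)
print(f"t = {t:.2f}, df = {len(took_test) + len(did_not) - 2}, p = {p:.3f}")
```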

Discussion and conclusion

The results of this experiment seem to indicate that the introduction of the interactive sample test into this course was successful. Students indicated that other courses should make use of the World Wide Web in course materials. They seem to want and expect the use of the World Wide Web in the university curriculum. This finding is not surprising considering the large percentage of students who reported owning a personal computer and modem and using e-mail and the Internet. While males reported more use of the Internet than females, the fact that there was no significant difference in e-mail use by gender may indicate that gender differences in computer usage are disappearing.

The use of an interactive sample test proved to be a valuable learning tool for the students who used it. However, this is one area where gender differences were apparent. Particularly among male students, the more interactive version of the test was more valuable.

The influence of computer anxiety is of particular concern to any instructor contemplating the addition of an interactive test or other CMC to a course. Students who do not enjoy using computers or are uneasy with the technical aspects of using computers are less likely to enjoy or value these tools. This is especially important considering the sample in this study. Even when offered extra credit, over one-half (51%) of the students in the class did not try the interactive sample test during the time period in question. One wonders how this group of students would score if evaluated on computer anxiety.

Finally, the fact that students who used the interactive sample test scored higher on the subsequent exam is encouraging for those instructors who are interested in adding similar learning tools to their course(s). However, it should be noted that it is possible that the students who chose to take the interactive sample test may have done so because they are better students. If they are already more dedicated and interested in the course, they would probably have performed better on the exam anyway.

The study is not without limitations. The student sample used in this study was primarily one of convenience, and the results cannot be generalized beyond this particular course and university from which the sample was taken. In addition, the lack of data from students who decided not to try the interactive sample test is troublesome and makes interpretation of some of the data more difficult. Finally, this study focused primarily on one single type of CMC - an interactive sample test - and one cannot expect similar results for other forms of CMC.

Future studies should continue to look at the role of computer anxiety in other forms of CMC in the classroom. In particular, it is important to look for ways to overcome these types of anxieties. While many students report that CMC techniques such as the interactive sample test reported in this study are valuable learning tools, educators cannot overlook the fact that a large number of students are still not entirely comfortable with this medium. As long as some students feel uneasy and do not enjoy using computers, the promised benefits of computer-mediated communication will not be available to them.

References

Elasmar, M. G., & Carter, M. E. (1996). Use of e-mail by college students and implications for curriculum. Journalism & Mass Communication Educator, 51(2), 46-54.

Farnall, O., & Geske, J. (1996). The Internet as a teaching tool in advertising education. Paper presented to the annual conference of AEJMC, Advertising Division, Anaheim, CA.

Hudson, T. J., & Holland, S. D. (1992). Interactive multimedia instruction in video production classes. Journalism Educator, 47(2), 18-26.

Menges, R. J. (1994). Teaching in the age of electronic information. In W. J. McKeachie (Ed.), Teaching Tips: Strategies, Research, and Theory for College and University Teachers. Lexington, MA: D. C. Heath and Company.

Raub, A. C. (1981), cited in Kernan, M. C., & Howard, G. S. (1990). Computer anxiety and computer attitudes: An investigation of construct and predictive validity issues. Educational and Psychological Measurement, 50(3), 681-690.

Smith, C., Kim, H., & Bernstein, J. (1993). Computer-mediated communication and strategies for teaching. Journalism Educator, 48(1), 80-83.

Smith, W. E. (1994). Computer-mediated communication: An experimental study. Journalism Educator, 48(4), 27-33.
