Faculty Perceptions of Academic Dishonesty: A Multidimensional Scaling Analysis

Holly Seirup Pincus and Liora Pedhazur Schmelkin

One need not search far for evidence that academic dishonesty is ubiquitous in our society. Indeed, newspaper accounts of cheating by students, teachers, and administrators appear on a fairly regular basis, and HBO recently premiered a movie, Cheaters (May 20, 2000), based on a Chicago high school's cheating scandal. In the research literature, academic dishonesty has been the subject of research for decades, addressing a wide variety of issues and questions, including what academic dishonesty is, how prevalent it is, who cheats, why students cheat, what the faculty reaction is, and what the institutional response is. For a review of much of this research, see Cizek (1999), Crown and Spiller (1998), and Whitley (1998). One of the main issues that emerges from the literature relates to inconsistencies in the definition of academically dishonest behaviors and the lack of consensus and general understanding of academic dishonesty among all members of the campus community. According to Roberts and Rabinowitz (1992), "Our ability to alter the environment in which cheating takes place will be determined by our understanding of how people (both faculty and students) perceive cheating and its seriousness" (p. 189).

Various attempts have been made to define academic dishonesty. For example, in an August 1993 report published by the United States Department of Education, Maramark and Maline note: "Cheating takes

Holly Seirup Pincus is vice president for Campus Life, and Liora Pedhazur Schmelkin is Leo A. Guthart Distinguished Professor of Teaching Excellence, Hofstra University, Hempstead, NY. Please direct all correspondence to the first author at [email protected]. The Journal of Higher Education, Vol. 74, No. 2 (March/April 2003)

Copyright © 2003 by The Ohio State University


many forms, from simply copying another student's paper to stealing an exam paper to forging an official university transcript" (p. 3). Gehring and Pavela, in a 1994 report on academic integrity to the National Association of Student Personnel Administrators, offer the following definition: We regard academic dishonesty as an intentional act of fraud, in which a student seeks to claim credit for the work or efforts of another without authorization, or uses unauthorized materials or fabricated information in any academic exercise. We also consider academic dishonesty to include forgery of academic documents, intentionally impeding or damaging the academic work of others, or assisting other students in acts of dishonesty. (p. 5) The definitions and classifications that have been attempted are broad and ambiguous. Most people agree that cheating is unethical (Barnett & Dalton, 1981), and yet there remains some confusion about what particular behaviors constitute academic misconduct. Some forms of academic dishonesty are apparent to everyone: plagiarism, copying from someone else's exam, purchasing term papers, stealing a test, or forging a university document. Other more questionable forms of academic behaviors are still debated among faculty, students, and administrators: collaborating on homework and take-home exams when individual work is specified, handing in the same work for two separate classes, or inappropriately utilizing the services of a tutor or a writing center (Fass, 1986). Even plagiarism, which most people describe as academically dishonest behavior, is sometimes controversial. For example, Miller, in an article in the Chronicle of Higher Education (1993), argues that plagiarism may not be universally understood, and teachers cannot assume that every student comes into the classroom with the same belief system. Russell Baker, in an op-ed article in The New York Times, offers another example of unintended plagiarism:

Charges of plagiarism, which have become so common lately, may well be the fault of the computer instead of the author. That's because volumes of research material from other people's work get stored in the electronic file along with the manuscript in progress. The author searching his index for an appropriate citation finds a file name for stuff he stored up a year or two ago, presses a couple of keys and-shazam!-like magic, another author's paragraph is inserted whole into the manuscript, its attribution lost somewhere down the cracks in the electronic floorboards. (1995, p. A25)

In addition to the general problem of ambiguity of definition, differences exist among the various studies as to the assessment of severity of the various indicators of academic dishonesty (e.g., Cannon, Fox, &

Renjilian, 1998; Franklyn-Stokes & Newstead, 1995; Graham, Monday, O'Brien, & Steffen, 1994; Nuss, 1984; Roig & Ballew, 1994; Sims, 1995). One potential explanation for these differences is the different scales used by the researchers. For example, Cannon et al. (1998) used a 5-point rating scale, with 1 indicating that the depicted scenario was definitely not representative of academic dishonesty and 5 indicating that it definitely was. Graham et al. (1994) used a 4-point scale, with 1 = "not cheating" and 4 = "very severe," whereas Sims (1995) used a 6-point scale, ranging from not at all dishonest to very severe. Nuss (1984), on the other hand, used a forced-choice ranking system whereby respondents ordered the 14 presented behaviors from most serious to least serious. It is not surprising, therefore, that differences emerged in the particular ordering of the behaviors on the various continua.

The purpose of the present study was to focus specifically on faculty in an attempt to uncover some of their underlying perceptions and to gain a better understanding of how they conceptualize academic dishonesty. Specifically, do faculty conceptualize academically dishonest behavior on a single dimension or on multiple dimensions, and what is the nature of these dimensions? Moreover, given the ambiguity that exists in the definitions, the choice of the analytic method, multidimensional scaling (MDS), was particularly important: a methodology was needed that does not automatically constrain the results to a single dimension, that allows multiple dimensions to emerge, and that does not impose the researchers' a priori conceptions of the relevant dimensions.
MDS (for discussions of various MDS methods and their assumptions, see, for example, Davison, 1983; Kruskal & Wish, 1978; Schiffman, Reynolds, & Young, 1981) is a set of methods that attempt to uncover perceptions or cognitions of a stimulus set in a relatively unconstrained fashion. In the present research, the advantage of utilizing MDS was that it yielded information from the faculty on their underlying perceptions of the academic behaviors with little constraint or bias from the researchers. For example, if faculty were asked a priori to rate each illustration of academic dishonesty on how serious they perceive it to be, then it is possible that the scale provided (seriousness, in this example) may predetermine faculty members' perceptions and responses. Moreover, rating a set of stimuli on a single rating scale may produce a lack of discernible differentiation among the stimuli if all or most of the ratings are concentrated on one end of the rating scale. Thus, with a 7-point rating scale, where the anchors are labeled 1 = "not at all serious" and 7 = "very serious," if respondents view most of the stimuli as being in the very serious end of the scale, the resulting ratings will all be negatively skewed and similar in value, even though the underlying perceptions may be differentiated. The main point is that respondents "may draw on what is supposedly a purely formal feature of the rating scale, namely its numeric values" (Schwarz, 1999, p. 95), and their responses may be influenced and constrained by the nature of the rating scale itself. Another advantage of MDS is that it provides a graphical presentation of the results, thereby making the underlying dimensions potentially more discernible. "In useful or interesting applications, the characteristics we discover in this way are seldom totally unsuspected or surprising. On the other hand, they are usually part of a much longer list of characteristics which might just as plausibly have appeared. One useful role of MDS is to indicate which particular characteristics are important in contrast to others which are just as plausible" (Kruskal & Wish, 1978, p. 35).
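The rating-scale compression described above can be illustrated with a small simulation. The numbers below are hypothetical, not data from this study; the point is only that latent perceptions clustered near the top of a 7-point scale lose their differentiation once they are forced onto the scale's ceiling.

```python
import numpy as np

# Hypothetical latent "seriousness" perceptions for 10 behaviors:
# all fairly high, but clearly differentiated from one another.
latent = np.linspace(5.0, 12.0, 10)

# Mapping onto a 1-7 rating scale clips everything above 7 to the ceiling.
ratings = np.clip(np.round(latent), 1, 7)

print(latent)                        # differentiated latent values
print(ratings)                       # most stimuli pile up at 7
print(latent.std(), ratings.std())   # variance collapses after clipping
```

Most of the differentiated latent values collapse onto the top scale point, which is exactly the loss of information the pairwise-similarity MDS design was chosen to avoid.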

Method

Instruments

Two versions of the data collection instrument were developed. One, considered the primary instrument, consisted of pairwise similarity ratings and will henceforth be called the MDS version. The other consisted of bipolar rating scales and will be referred to as the Rating Scale (RS) version. The same list of examples of academically dishonest behaviors (e.g., plagiarizing, taking a test for someone else, using crib sheets) was used in both the MDS and RS versions. The original list of examples, based on the research literature, was reviewed by several faculty, administrators, and students to verify that a thorough and representative list was generated. From the feedback received, modifications to the list were made, and the final list contained 28 behaviors. The complete list appears in Table 2. Both the MDS and RS versions of the instrument contained the same background sections. The background portion included a demographic section that requested information such as school, level of primary teaching (graduate or undergraduate), tenure status, and sex. In addition, a section on incidents that a faculty member may have encountered was presented, followed by a section on policy concerns.

MDS version. In the MDS version, pairwise ratings of the 28 examples of academic dishonesty were requested of the respondents. This required them to use their own frame of reference to "judge the similarity


of the behaviors in each pair" on a scale of 1 (very different) to 9 (very similar). For example:

plagiarizing / failing to report a grading error

Very Different  1  2  3  4  5  6  7  8  9  Very Similar

With 28 stimuli, 378 pairs were constructed. These were developed according to a maximally efficient ordering of the pairs as derived by Ross (1934). This ordering ensures separation of the stimuli and also balances their placement (in the first or second position of a pair). Asking respondents to complete 378 pairwise similarity comparisons would have been too time consuming and onerous a task. Therefore, the 378 ordered pairs were systematically assigned to four forms labeled A, B, C, and D. Forms A and D had 95 pairs, and forms B and C had 94 pairs. Although the specific pairs were not identical, the four forms could be considered equivalent, with each stimulus appearing between five and eight times on each one of the forms.

RS version. In order to externally validate the interpretation of the MDS analysis, 10 bipolar rating scales (seriousness, minor vs. major violation, effect of student intent on faculty response, flexibility, ease of detection, difficulty of proving, clear example, firm versus lenient sanction/resolution required, should a notation be placed on the transcript, formal versus informal resolution) were developed based on characteristics that have previously been identified in the literature. As is typically done in MDS interpretations (e.g., Davison, 1983; Kruskal & Wish, 1978; Schiffman et al., 1981), unidimensional rating scales can be thought of as exploratory "hypotheses" of the important attributes in the dimensional space and are used to "'objectify' the process of naming [the] dimensions, or to 'verify' the names used by the researcher" (Schmelkin, 1985, p. 227). Participants were asked to rate each of the behaviors using a 9-point scale. Using the first scale of seriousness as an example:

plagiarizing

Not Serious  1  2  3  4  5  6  7  8  9  Very Serious

Requiring ratings of each of the 28 behaviors on the 10 scales would have yielded a total of 280 ratings per respondent. Once again, in the hopes of alleviating respondent fatigue, two bipolar forms were created, each containing five scales. Consequently, faculty only had to complete 140 ratings.
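The counts above follow directly from the design: 28 stimuli yield C(28, 2) = 378 unordered pairs, and 28 behaviors rated on 10 scales yield 280 ratings. A brief sketch of the arithmetic; note that the round-robin split shown is illustrative only, not the Ross (1934) ordering the study actually used.

```python
from itertools import combinations
from math import comb

stimuli = list(range(28))
pairs = list(combinations(stimuli, 2))
print(len(pairs), comb(28, 2))   # 378 unordered pairs of 28 stimuli

# Splitting the 378 pairs across four forms (A-D). A simple round-robin
# split is shown; the study used Ross's optimal ordering, which also
# balances stimulus spacing and first/second position within pairs.
forms = {name: pairs[i::4] for i, name in enumerate("ABCD")}
print(sorted(len(f) for f in forms.values()))   # two forms of 94, two of 95

# RS version: 28 behaviors x 10 scales = 280 ratings, halved to 140
# by giving each respondent only 5 of the 10 scales.
print(28 * 10, 28 * 5)
```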


Sample and Procedures

The study was conducted at a private university in the Northeast enrolling 12,000 students, approximately 7,000 of them full-time undergraduates. About 1,000 faculty members are active during any given semester, half full-time and half adjuncts. From the pool of active faculty, 150 full-time faculty and 150 adjunct faculty were randomly selected to participate in the study, for an initial sample of 300. The 300 faculty members were randomly assigned to receive one of four MDS forms or one of two RS forms, with 25 full-time and 25 adjunct faculty receiving each form. Following mail survey procedures outlined in Dillman (1978), three complete mailings were conducted, with the two follow-ups occurring at three weeks and seven weeks after the original mailing; a reminder postcard was sent one week after the original mailing. The complete mailings included: (a) one of the six instruments in booklet form, (b) a personalized cover letter explaining the project, (c) a postage-paid return envelope, and (d) a postage-paid return postcard. Faculty were guaranteed anonymity, as no identification markings were placed on the instruments or the envelopes. In order to be able to send follow-up mailings to those who did not respond, a separate response postcard was utilized. Faculty were instructed (in the cover letter) to return the postcard when they completed the questionnaire so that they would be removed from the follow-up list.

Response Rate and Final Sample

Of the 300 instruments originally mailed, 72% (216) were returned, and 71% (212) were usable. Of the 212 usable instruments returned, the breakdown by form was similar, as were the response rates by faculty status. Of those who responded, 48% (102) were full-time faculty, 45% (96) were adjunct faculty, and 7% (14) did not indicate their status. There were 63% (133) males and 36% (77) females. Of the overall sample, 77% (161) primarily taught undergraduate classes. The mean number of years teaching was 16.33 (SD = 11.90). Comparisons of the final sample with the complete faculty census information at the time of the data collection revealed that the sample was representative based upon the variables studied (e.g., status, sex, rank, teaching emphasis, tenure, division or school affiliation).

Results

The primary data set for the MDS analysis consisted of a super mean matrix. Prior to generating this matrix, differences between various subgroups (e.g., based on sex, status) were explored. One-way analysis of variance results indicated that there were no major differences based on full-time/adjunct status, sex, primary teaching emphasis, tenure status, rank, or division. Consequently, the means from all the pairwise similarity ratings across the four forms were collected into one super mean matrix. This 28 x 28 (stimulus by stimulus) lower-half mean matrix constituted the raw input into the MDS analysis. Using the ALSCAL procedure of SPSS with ordinal symmetric similarity data, solutions were sought for one to six dimensions. Decisions as to appropriate dimensionality in MDS were made on both statistical and substantive, interpretive grounds. The statistical indices used were the Stress, S-stress, and RSQ values. The first two are badness-of-fit measures, with lower values indicating better solutions. RSQ, on the other hand, indicates the proportion of variance in the data accounted for by the solution. An examination of these values (see Table 1) indicated that there appeared to be improvements in fit as dimensionality increased from 1 to 2, and perhaps 2 to 3, with subsequent leveling off beyond 3 dimensions. Therefore, the 2- and 3-dimensional solutions were examined in more detail in order to determine which was more appropriate from a substantive standpoint. When reviewing the 2- and 3-dimensional solutions in detail, it was noted that the 3-dimensional solution did not appear to add very much more substantively. In fact, the third dimension was not interpretable, and the first two were substantially the same as the two dimensions of the 2-dimensional solution. Therefore, the 2-dimensional solution, accounting for 82% of the variance, was retained as most appropriate. Figure 1 presents the solution graphically, whereas Table 2 provides a complete listing of the stimuli, their corresponding abbreviations on the figure, and the actual coordinates.
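The dimensionality-selection logic can be sketched in code. The study used SPSS's ALSCAL, a nonmetric procedure; the sketch below substitutes classical (Torgerson) metric MDS, a simpler relative, and synthetic data, since the study's similarity matrix is not reproduced here. The point is the elbow pattern: stress drops sharply up to the true dimensionality, then levels off.

```python
import numpy as np

def classical_mds(D, k):
    """Classical (Torgerson) MDS: embed distance matrix D in k dimensions.
    A metric simplification of the nonmetric ALSCAL procedure the study used."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # largest k eigenvalues
    L = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * L

def stress(D, X):
    """Kruskal-style raw stress between input distances and configuration."""
    n = D.shape[0]
    Dhat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    iu = np.triu_indices(n, 1)
    return np.sqrt(((D[iu] - Dhat[iu]) ** 2).sum() / (D[iu] ** 2).sum())

# Synthetic example: 28 points that truly live in 2 dimensions.
rng = np.random.default_rng(1)
pts = rng.normal(size=(28, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

for k in (1, 2, 3):
    print(k, round(stress(D, classical_mds(D, k)), 3))
# Stress drops sharply from 1 to 2 dimensions, then levels off:
# the same elbow logic used to retain the 2-dimensional solution.
```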
In interpreting the dimensions, an internal examination is presented first, followed by an external validation.

TABLE 1
Goodness/Badness of Fit Indices for the MDS Solutions

# Dimensions    Stress    S-stress    RSQ
6               0.129     0.084       0.923
5               0.152     0.101       0.905
4               0.175     0.119       0.880
3               0.212     0.158       0.857
2               0.254     0.210       0.819
1               0.299     0.310       0.748



FIG. 1. Two-dimensional MDS solution for the 28 academic dishonesty behaviors (Dim. 1 on the horizontal axis, Dim. 2 on the vertical axis). [Scatter plot not reproduced; the plotted stimulus abbreviations and coordinates appear in Table 2.]

Dimension 1

Behaviors such as "sabotaging someone else's work," "forging a University document," "stealing a test," "using crib sheets," and "obtaining answers from someone else during an exam" anchor one end of the dimension or continuum. At the other extreme are "studying from someone's notes," "failing to report a grading error," "not contributing a fair share to a group project," "delaying taking an exam or turning in a paper due to a false excuse," and "utilizing a tutor or writing center inappropriately." This dimension was labeled Seriousness. At one end of the continuum are behaviors that faculty code as severe and that are clearly considered academically dishonest, with some bordering on illegal. At the other end of the continuum are those behaviors that are perceived as being less severe. Looking at Figure 1, it is clear that the stimulus "studying from someone else's notes" is not considered a serious violation. In fact, it is not considered an academically dishonest behavior at all by many of the respondents. Examining the behaviors that fell in the middle range of the continuum


TABLE 2
Stimulus Listing and MDS Coordinates

Academic Behavior                                                        Abbrev.     Dim. 1     Dim. 2
collaborating with others on an assignment that was assigned
  as individual work                                                     collab     -1.0348     0.3221
copying homework                                                         copyhw     -0.7275     0.3788
copying information without utilizing quotation marks                    quote      -0.4589     1.3288
copying material without proper footnotes or citations                   footnt     -0.5973     1.1805
delaying taking an exam or turning in a paper due to a false excuse      excuse     -1.4964    -0.9800
failing to report a grading error                                        grerror    -2.1533    -0.8934
falsifying or fabricating a bibliography                                 biblio      0.1905     0.9695
forging a University document                                            forge       1.5256    -0.5299
giving answers to someone else during an exam                            givans      0.3100    -0.5699
giving exam questions to students in a later section                     givexam     0.3327    -0.9959
having someone else write a term paper for you                           havewrit    1.0361     0.0292
hiring a ghostwriter                                                     hireghst    0.8976    -0.0376
inputting information or formulas needed for an exam into a calculator   formula    -0.0343     0.6773
not contributing a fair share in a group project                         notcont    -2.0186     0.7301
obtaining a copy of the exam to be given prior to class                  obtprior    1.0465    -0.4143
obtaining a test from a previous semester                                obtprev    -0.3354     1.2590
obtaining answers from someone else during an exam                       obtdur      1.1009     0.4000
plagiarizing                                                             plagiar     0.9288     0.3000
purchasing a term paper to be turned in as one's own                     purpap      0.9507    -0.1309
sabotaging someone else's work (on a disk, in a lab, etc.)               sabotage    1.7820    -0.9611
stealing or copying a test                                               steal       1.1298    -0.6007
studying from someone else's notes                                       studnote   -3.1690     0.9129
submitting the same term paper to another class without permission       samepap    -0.8971    -0.5948
taking a test for someone else                                           taketest    0.9614    -0.9447
using crib sheets                                                        crib        1.1271     0.2284
utilizing a term paper or exam from a fraternity or sorority test file   fratpap     0.0277    -0.1117
utilizing a tutor or writing center inappropriately                      tutor      -1.2454    -0.2490
writing a term paper for someone else                                    writpap     0.8205    -0.7000

is also interesting. These behaviors are seen by some as serious, while others disagree. An example is the stimulus that fell right at the center of the graph, "utilizing a term paper or exam from a fraternity or sorority test file." Using previous exams has been viewed by some as cheating, while other professors may even encourage this type of behavior as an excellent study tool. In order to validate the internal interpretation of the dimension presented above, the bipolar rating scales were used as a method of external validation. Means on each of the 10 bipolar scales were computed. As in the case of the pairwise similarity ratings, tests were performed on these means which indicated that there were no statistically significant differences between subgroups. Therefore, means over all individuals within the groups were collapsed by form. Each of the 10 means was regressed on the coordinates of the 2-dimensional MDS solution. The results of these analyses are summarized in Table 3, which presents the multiple correlations (R) and the optimum weights corresponding to each. The latter "are the direction cosines, that is, regression coefficients normalized so that their sum of squares equals 1.00 for every scale" (Kruskal & Wish, 1978, p. 37). In order to be useful in interpretation, R values greater than 0.90 are desired, although values of 0.70 are often used; p should be at most 0.01. In addition, the coefficients should be large for at least one of the dimensions (Kruskal & Wish, 1978). Of the 10 scales, 7 have high R values of 0.88 or above and are therefore highly interpretable. These scales are seriousness, major vs. minor violation, formal vs. informal resolution, degree of flexibility (when the situation is encountered), clarity, firm vs. lenient sanction required, and whether a notation should be placed on the transcript. For each of the scales, there is a high coefficient corresponding to the first dimension. The 7 scales seem very interrelated and focus on both the nature of the behavior itself (whether it can be viewed as serious, is a major violation, and is a clear example of academically dishonest behavior) as well as the sanctions or consequences that faculty deem appropriate. It is understandable that serious behaviors are identified as being clear, unambiguous examples and are considered major violations, where a formal, firm

TABLE 3
Multiple Regression of Bipolar Scale Ratings on Dimensions of Perceptions of Academic Dishonesty

Rating Scales
Not at all serious/Very serious
Minor violation/Major violation
Mostly informal (resolution needed)/Formal
Intent (of student) does not affect me/Intent affects me
No flexibility (if extenuating circumstances)/A great deal of flexibility
Very difficult to detect/Very easy to detect
Very difficult to prove/Very easy to prove
Unclear (example of academic dishonesty)/Clear
Lenient (resolution or sanction)/Firm
Should never be noted on transcript/Should always be noted on transcript

*p
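The external-validation computation behind Table 3 can be sketched as follows, with hypothetical coordinates and ratings rather than the study's data: each scale's mean ratings are regressed on the two MDS coordinates, the multiple correlation R indexes how interpretable the scale is, and the slopes are rescaled so their squared sum equals 1.00, yielding the direction cosines.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-dimensional MDS coordinates for 28 stimuli.
coords = rng.normal(size=(28, 2))

# Hypothetical mean ratings on one bipolar scale, constructed to load
# mainly on the first dimension (as "seriousness" did in the study).
ratings = 5 + 2.0 * coords[:, 0] + 0.3 * coords[:, 1] \
    + rng.normal(scale=0.1, size=28)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(28), coords])
b, *_ = np.linalg.lstsq(X, ratings, rcond=None)
slopes = b[1:]

# Multiple correlation R between fitted and observed ratings.
R = np.corrcoef(X @ b, ratings)[0, 1]

# Direction cosines: normalize the slopes so their squared sum is 1.00.
cosines = slopes / np.linalg.norm(slopes)

print(round(R, 3))   # high R: the scale is interpretable
print(cosines)       # weight on dimension 1 dominates, as constructed
```

A scale with high R and a large cosine on one dimension supports naming that dimension after the scale, which is the logic used to label Dimension 1 as Seriousness.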
