Blind forces: Ethical infrastructures and moral disengagement in organizations




Organizational Psychology Review, 2014, Vol. 4(4), 295–325. © The Author(s) 2014. DOI: 10.1177/2041386613518576

Sean R. Martin Cornell University, USA

Jennifer J. Kish-Gephart University of Arkansas - Fayetteville, USA

James R. Detert Cornell University, USA

Abstract

This review integrates research regarding organizations' ethical infrastructure and moral disengagement to illustrate the complicated relationship between these constructs. We argue that employee perceptions of strong ethical infrastructures may reduce individuals' tendencies to rationalize and engage in clearly self-interested unethical behaviors, but might motivate moral disengagement about other behaviors by tapping into members' desires to preserve a positive self-image and reduce cognitive burden. This research builds upon scholars' understanding that "good" people can be morally blind and engage in unsavory acts without awareness of the unethical nature of their actions, and suggests that even in organizations with formal and informal systems prioritizing ethics, unethical decisions and behaviors may be rationalized and go unnoticed. Finally, we discuss theoretical and methodological implications—notably that scholars should be concerned about conclusions drawn from employee perceptions about the ethicality of the organizational context, and supplement perceptual measures with direct observation and more objective assessment.

Paper received 1 July 2013; revised version accepted 9 December 2013. Corresponding author: Sean R. Martin, Johnson Graduate School of Management, Cornell University, 401 Sage Hall, Ithaca, NY 14850, USA. Email: [email protected]



Keywords: cognition/perception, culture, employee–organization relationships, ethics, job attitudes/beliefs/values

Business ethics research has proliferated in recent years in concert with, and in response to, increasing recognition of unethical practices—defined as practices that violate widely held societal norms for behavior (Treviño, Weaver, & Reynolds, 2006)—in the business community. To address unethical behavior in organizations, scholars have discussed the importance of creating an ethical organizational context or ethical infrastructure (Tenbrunsel, Smith-Crowe, & Umphress, 2003) that encourages ethical, and sanctions unethical, behavior both formally and informally. Numerous studies have shown that ethical climates and cultures (component parts of the broader term "ethical infrastructure") influence (un)ethical behavior as reported by organizational members (e.g., Marta, Singhapakdi, & Higgs-Kleyn, 2001; Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009; Paolillo & Vitell, 2002; Ross & Robertson, 2003; Schaubroeck, Hannah, Avolio, Kozlowski, & Lord, 2012; Sweeney, Arnold, & Pierce, 2010; Treviño, Butterfield, & McCabe, 1998; Vitell et al., 2003). This approach has been highly valuable, particularly for identifying the more explicit and easily measurable aspects of "bad barrels" and the "bad apples" who lead or function within them (Felps, Mitchell, & Byington, 2006; Treviño & Youngblood, 1990). As we will argue, though, this approach remains limited because it has yet to adequately account for our current understanding of the common cognitive processes underlying unethical behavior. Specifically, in recent decades, research in social psychology, behavioral economics, and behavioral ethics has increasingly uncovered the multitude of ways in which otherwise good people can be morally blind and engage in unsavory acts without being aware of the unethical nature of their actions. Descriptions of limited human cognition related to unethical reasoning and behavior include bounded ethicality (Chugh, Bazerman, & Banaji, 2005), self-deception and ethical fading (Tenbrunsel & Messick, 2004), intuitive morality (Haidt, 2001), plus a host of other cognitive biases (e.g., omission bias [Cushman, Young, & Hauser, 2006], indirect agency biases [Paharia, Kassam, Greene, & Bazerman, 2009], attribution biases [Knobe, 2003; Pizarro, Uhlmann, & Bloom, 2003]). Moral disengagement, a theory that explains the process and mechanisms by which an individual's moral self-regulatory system is decoupled from his or her thoughts and actions, represents a particularly powerful manner by which individuals can rationalize or neutralize reprehensible conduct (Bandura, 1986; Sykes & Matza, 1957).

While organizational infrastructures may be effective in reducing the unethical behavior that organizational members are aware of, the aforementioned research suggests that even in organizations with formal and informal systems prioritizing ethics, many unethical decisions and behaviors may go unrecognized, or be rationalized in ways that make them seem ethical to insiders. As Giessner and van Quaquebeke (2010) note, "Judging ethical behavior is largely a perceptual process that is grounded in beliefs about what is normatively appropriate" (p. 22). In other words, it is possible that individuals perceive their organization as being ethical while rationalizing or otherwise cognitively distorting unethical actions. After all, in numerous cases routinely cited as examples of "bad barrels," many organizational members were not only blind to ethical issues, but also likely would have rated their organizational culture or climate as being rather ethical prior to the identification of scandal. Extreme examples of this phenomenon, referred to in O'Reilly and Chatman's (1996) review of organizational culture, include the thoughts and actions of cult members who see their organization as morally beyond reproach. And in the news, we read about members of these organizations routinely engaging in what society would deem unethical behavior, but rationalizing their behaviors by believing they are pursuing a greater good or higher power, or reasoning that every group of their type engages in some unsavory acts (McKinley, 2009; Turner, 2013). In a business context, Treviño and Brown (2004, p. 75) described Arthur Andersen employees believing in the ethicality of their organization, saying, "we're ethical people; we recruit people who are screened for good judgment and values." Yet at the same time, their auditors and consultants were engaged in numerous unethical activities. These examples suggest it is possible for members to perceive their organization as one in which ethics are prioritized and routinely enacted, and as having formal and informal systems supporting those priorities—that is, as having a strong ethical infrastructure—and yet still be working in an environment where various unethical behaviors go unnoticed or are easily rationalized.

In sum, we argue that if humans who make ethical mistakes are no longer automatically considered "bad apples" (at least from a social science perspective), but rather likely decent people who sometimes behave unethically due to the inevitable fallibility that comes from being human (see Gino, Moore, & Bazerman, 2009; C. Moore & Gino, 2013; Tenbrunsel & Smith-Crowe, 2008, for reviews), then perhaps it is time to similarly recognize that even in those organizations where leaders endeavor to encourage ethical behavior and create contexts in which ethics are prioritized, human fallibility may leave the organization prone to collectively rationalizing and unwittingly institutionalizing unethical practices.
In other words, organizations where unethical practices occur need not be classified as "bad barrels," but rather as fragile collectives of very human motivations and goals that sometimes trigger, and then justify, unethical behavior. If so, our goal should be to expand understanding of where and why unethical behavior occurs in organizations by moving beyond a focus on the usual suspects—such as those organizations led by abusive or unethical leaders, or those with obviously faulty compliance systems—and focusing instead on explaining and illustrating the pathways by which unethical practices can exist and endure even in apparently "good" organizations that prioritize ethical behavior.

In this paper, we pursue that goal. We first review work regarding ethical infrastructures and the component aspects of ethical climates and ethical cultures. We then review research on moral disengagement to illustrate how it can not only exist in all types of organizations, but in some cases even be elicited by member perceptions that their organization has a strong ethical infrastructure. We focus specifically on moral disengagement because it has been theorized to play a key role in normalizing unethical behavior (e.g., C. Moore, 2008), and because it tends to accompany many of the other cognitive biases that have been uncovered by behavioral ethicists in recent years. Importantly, we do not argue that strong ethical infrastructures necessarily foster more unethical behavior in an absolute sense. Indeed, they likely do root out severe and blatantly unethical types of behavior (Jones, 1991). Rather, we argue that there are numerous outcomes associated with perceptions of a strong ethical infrastructure that can trigger members' tendencies to morally disengage about common, less intense behaviors. Further, we argue that moral disengagement likely plays a role in reinforcing members' perceptions that the ethical infrastructure of their organization is strong.

Ethical infrastructures

The notion that organizational environments influence members' behaviors in many ways, including ethically, has a significant lineage. Traditionally, scholars have used the terms organizational culture or climate to describe the shared norms and expected behavior within the organizational context (e.g., Denison, 1996; Sackmann, 1992; Schneider, Ehrhart, & Macey, 2013). Though often used interchangeably, the terms culture and climate represent theoretically distinct aspects of an organization's context, at least as originally delineated and studied (Kuenzi & Schminke, 2009). The notion of culture was imported to organizational studies from anthropology (Sackmann, 1992). As such, it originally had a distinctly anthropological orientation, prioritizing the study of symbols, rituals, rites of passage, and other factors that link individuals to their context, and that constrain or guide their collective behaviors (Denison & Mishra, 1995; O'Reilly & Chatman, 1996; Trice & Beyer, 1993). By contrast, organizational climate research has roots in psychological traditions (Denison, 1996; Schneider, 1975). Accordingly, it refers to a perceptual phenomenon—namely, organizational members' shared perceptions and interpretations of their environment (Schneider & Reichers, 1983), with particular focus on the things that organizational members agree constitute desirable and sanctionable behavior (Zohar & Luria, 2005). The general culture and climate concepts have been adopted by researchers studying various aspects of organizational life to refer to shared perspectives or understandings around more specific phenomena such as learning (Fiol & Lyles, 1985; Popper & Lipshitz, 1998), psychological safety (Edmondson, 1999), or creativity (Isaksen, Lauer, & Ekvall, 1999), to name a few. The concepts have also been drawn upon by business ethics scholars, who use the terms ethical culture (Treviño, 1986) and ethical climate (Victor & Cullen, 1988) to refer to shared understandings of the organizational context regarding expectations for ethical behavior.
Below, we briefly introduce each of these ethics-specific research streams and review work suggesting how the two are related. We follow Tenbrunsel and colleagues' (2003) lead in considering culture and climate as key components of a more expansive term—ethical infrastructure—that incorporates these constructs and others to describe the general ethical context of an organization.

Ethical culture and ethical climate

Treviño (1986) was the first to introduce the concept of ethical culture to the management literature. According to her theorizing, ethical culture is a powerful force on ethical decision making because most employees operate at the conventional level of cognitive moral development and thus look outside themselves for information about appropriate behavior (Kohlberg, 1969). Messages regarding appropriate and inappropriate behavior are communicated to employees by the organization's ethical culture via the interplay of formal and informal systems of behavioral control (Treviño et al., 1998). Formal systems (such as reward systems, company policies, codes of ethics, and selection systems) tend to be under the direct control of organizational decision makers, whereas informal systems describe "the way things are" in the organization as transmitted through behavioral norms, rituals, stories, and language. When organizational members perceive consistent expectations being communicated by the formal and informal systems, the organization's ethical culture is said to be strong, and employees are likely to abide by the clear and consistent messages about behavioral expectations. When these messages are seen as conflicting, the ethical culture is deemed weaker (Treviño, 1990). Whether based on Treviño's theorizing or other models of ethical culture that have been proposed (e.g., Hunt & Vitell's [1986] research on corporate ethical values, and Kaptein's [2008, 2011] corporate ethical virtues model), empirical work generally supports the expected negative relationship between perceptions of the organization's ethical culture and unethical behavior (e.g., Baker, Hunt, & Andrews, 2006; Kamp & Brooks, 1991; Marta et al., 2001; Schaubroeck et al., 2012; Treviño et al., 1998).

Corresponding to the introduction of ethical culture, Victor and Cullen (1987) introduced the ethical climate construct, or "a group of prescriptive climates reflecting the organizational procedures, policies, and practices with moral consequences" (Martin & Cullen, 2006, p. 177). The authors identified two dimensions that, when crossed, theoretically derive nine ethical climate types. The first dimension, ethical criteria, includes three broad categorizations of moral philosophy: egoism, benevolence, and principle. These parallel Kohlberg's theory of cognitive moral development, wherein an individual's level of moral reasoning is classified as self-centered (Level 1), other-centered (Level 2), or focused on broad principles of fairness and justice (Level 3). The second dimension, locus of analysis, draws on sociology literature (e.g., Merton, 1957) to identify the referent group as individual (i.e., within the individual), local (i.e., internal to the organization, such as a work group), or cosmopolitan (i.e., external to the organization, such as a professional organization). Although Victor and Cullen (1988) introduced nine ethical climate types, subsequent studies have generally supported a subset of three to five of the original nine (e.g., Cullen, Victor, & Bronson, 1993; Martin & Cullen, 2006; Wimbush, Shepard, & Markham, 1997a). In the three-factor model, ethical climates closely follow the three broad moral philosophies: egoistic, benevolent, and principled climates. Specifically, in an egoistic climate, employees perceive that ethical decisions are made on the basis of self-interest. In a benevolent ethical climate, ethical decisions are determined by doing what is best for others, including employees, customers, or the community. And in a principled ethical climate, decisions are based on formal guidelines, such as rules, codes, or laws.
In contrast to the three-factor model, the five-factor model breaks down the principled climate into a law and code climate and a rules climate, in which individuals make decisions based on external principles (e.g., laws) or local principles (e.g., company rules and codes), respectively. This model also includes an independence category, referring to climates where "individuals believe they should act on deeply held, personal moral convictions" (Martin & Cullen, 2006, p. 179).

In their early work, Victor and Cullen (1987) stopped short of offering formal, specific predictions regarding the relationship between ethical climate types and (un)ethical behavior. Instead, the authors theorized that certain types of ethical climate might be more prone to particular ethical problems than others. Later theorizing offered a more simplified model of the relationship between ethical climate types and unethical behavior, arguing that employees are more likely to behave ethically in organizations that stress "the consideration of others" (such as benevolent and principled climates) than in organizations that stress self-interest (egoistic climates; Wimbush & Shepard, 1994, p. 640). Empirical results, which rest on employees' perceptions of their work environment (e.g., D. M. Mayer, Kuenzi, & Greenbaum, 2009), generally support a positive relationship between egoistic climate and unethical behavior, and negative relationships between benevolent and principled climates and unethical behavior (e.g., Aquino, 1998; Kish-Gephart, Harrison, & Treviño, 2010; Peterson, 2002; Treviño et al., 1998; Vardi, 2001; Wimbush, Shepard, & Markham, 1997b).

Beyond their effects on behavior, both ethical climate and ethical culture have been argued and shown to relate to certain employee attitudes. For example, Cullen, Parboteeah, and Victor (2003) suggested that benevolent ethical climates are positively related to organizational commitment because such environments "encourage positive affect" and "higher levels of cohesiveness among organizational members" (Cullen et al., 2003, p. 138).
Indeed, benevolent and principled climates have been shown to be positively (and egoistic climates, negatively) related to organizational commitment (Cullen et al., 2003; Ruppel & Harrington, 2000; Schwepker, 2001) and job satisfaction (Goldman & Tabak, 2010; Wang & Hsieh, 2012). Similarly, empirical research supports the hypothesis that employees are more committed to organizations that support ethical behavior through a strong ethical culture (Treviño et al., 1998). Ethical culture is also positively related to organizational identification and supervisory trust (DeConinck, 2011; Mulki, Jaramillo, & Locander, 2006).

Ethical infrastructure

Researchers have recognized that ethical climate and ethical culture are highly related descriptors of an organization's overall ethical context (e.g., Arnaud & Schminke, 2007; Jones, Felps, & Bigley, 2007; Kaptein, 2008, 2011; Kish-Gephart et al., 2010; Tenbrunsel & Smith-Crowe, 2008). Indeed, a recent meta-analysis revealed that Victor and Cullen's three ethical climate types and operationalizations of ethical culture are strongly related, with mean correlations ranging from |.50| to |.72| (Kish-Gephart et al., 2010). In a comprehensive model, Tenbrunsel et al. (2003) subsumed elements of ethical culture and ethical climate under the term "ethical infrastructure," which they defined as the organizational climates, informal systems, and formal systems relevant to ethics in an organization (Tenbrunsel & Smith-Crowe, 2008). The authors modeled ethical infrastructure as three concentric circles—starting with the innermost circle of formal systems, followed by informal systems, and then encompassed by the outermost circle, organizational climates—that simultaneously support and influence each other. The formal systems refer to the documented and standardized procedures upholding (un)ethical standards. The informal systems are those signals that are not documented—they are felt and expressed through interpersonal relationships. Both the formal and informal elements are undergirded by individuals' shared perceptions of those systems.

Thus, in the rest of this paper, we follow others in treating ethical climate and culture as largely intertwined and indicative of a broader ethical context (Denison, 1996; Glick, 1985; Tenbrunsel et al., 2003). We use Tenbrunsel and Smith-Crowe's (2008) term ethical infrastructure to refer to this broader ethical context of an organization, using the component terms ethical culture or climate specifically when necessary to remain consistent with other authors' original work. In addition, consistent with the dominant treatment in the behavioral ethics literature of assessing an organization's ethical context via employee perceptions (Kish-Gephart et al., 2010; D. M. Mayer et al., 2009), we refer to ethical infrastructure as it is perceived by organizational members. However, as we lay out in the ensuing sections, such perceptions may not always reflect the reality that would be observed by outsiders or captured through more objective measures.

Moral disengagement

Over the past decades, approaches to studying the ethical decision making of individuals have proliferated and evolved (for reviews, see Tenbrunsel & Smith-Crowe, 2008; Treviño, Den Nieuwenboer, & Kish-Gephart, in press). Some emphasize ethical decision making from a more deliberative frame—emphasizing, for example, individuals' moral awareness and reasoning (e.g., Hunt & Vitell, 1986; Jones, 1991; Rest, 1986), their level of moral development (e.g., Kohlberg, 1969), their dispositional tendency to attend to and reflect upon moral information (Reynolds, 2008), or their prioritization of a moral identity (their desire to be, and be seen as, a moral person; Aquino & Reed, 2002). From this perspective, individuals are treated as decision makers who perceive moral information, establish moral judgment, form an intention for action, and act accordingly (Jones, 1991; Kohlberg, 1969; Rest, 1986). Research in this vein is ongoing, with scholars exploring numerous factors that influence individuals' moral awareness or judgment (see O'Fallon & Butterfield, 2005; Tenbrunsel & Smith-Crowe, 2008, for reviews). Indeed, moral awareness and level of moral development have been found to be positively (negatively) related to (un)ethical intentions (Singhapakdi, 1999; Singhapakdi, Vitell, & Franke, 1999) and choices (Kish-Gephart et al., 2010), and moral identity (e.g., Aquino, Reed, Thau, & Freeman, 2007; DeCelles, DeRue, Margolis, & Ceranic, 2012; Reed & Aquino, 2003) and moral attentiveness (Reynolds, 2008) have both been negatively related to unethical thought and action.

Recently, however, other research has shown that individuals, and not just those with obvious moral development limitations, often engage in unethical behavior with little pre-act cognition about the moral considerations involved. This work has shown how various factors can lead individuals to make decisions that result in unethical behavior that is either unseen or cognitively reconstrued (see Gino et al., 2009; Messick & Bazerman, 1996; C. Moore & Gino, 2013; Tenbrunsel & Smith-Crowe, 2008, for reviews). One particularly valuable approach to explaining the overlooking or reconstruing of unethical behavior is the study of moral disengagement (Bandura, 1986)—a process by which the connection between individuals' moral self-regulation systems and their thoughts and actions is interrupted. Moral disengagement can operate as an automatic and anticipatory factor preventing individuals from perceiving moral cues, or as a post hoc rationalization to justify unethical decisions (Ashforth & Anand, 2003). In other words, not only can it facilitate unethical action by dampening moral awareness and preventing individuals from perceiving moral information (Bandura, 1990; Detert, Treviño, & Sweitzer, 2008; Tenbrunsel & Smith-Crowe, 2008), but it can also bias judgment when individuals are somewhat morally aware.
For this reason, we focus on moral disengagement as a vehicle by which unethical behavior can become an ingrained and taken-for-granted part of an organization's ethical infrastructure, including those infrastructures that are generally perceived by insiders as ethical.

The notion that individuals have the cognitive capability to rationalize inconsistencies between their espoused moral beliefs and their behavior in practice, and thus make themselves (and others) blind to ethical gaffes, has a long history (e.g., Argyris & Schön, 1978; Sutherland, 1983). For example, drawing on interviews of white-collar criminals accused of embezzling money from their employers, Cressey (1953) noted that "normal" people refused to accept their actions as criminal. Rather, they minimized their indiscretions by using neutral language (e.g., "borrowing" rather than "stealing") or citing injustices at the hand of the victim (i.e., their organizations). Similarly, criminal theorists Sykes and Matza (1957) argued against the prevailing theory that juvenile delinquency was the result of learning a different set of values in low socioeconomic environments. Instead, the authors suggested that juvenile delinquents share society's conventional values but, in certain situations, use cognitive neutralization techniques to weaken the apparent necessity of applying those values. The authors identified several neutralization techniques, such as denying responsibility for one's actions or denying that a victim had been unjustly harmed (or harmed at all). Drawing on this foundational work, organizational researchers have suggested additional types of cognitive distortion techniques (e.g., Ashforth & Anand, 2003; Brief, Buttram, & Dukerich, 2001) that are commonly found in organizations where systemic corruption is uncovered. Bandura's (1986) moral disengagement theory helps to elucidate how and why cognitive distortions are used, and thus provides a unifying theoretical framework for the aforementioned work.
Moral disengagement theory posits that people generally behave in ways that are consistent with their internal standards of morality because they experience anticipatory negative emotions such as guilt, shame, or remorse when they consider deviating from those standards. However, individuals are at times motivated (consciously or nonconsciously) to disengage this moral self-regulatory process in ways that fit their needs, effectively bypassing the negative emotions that would normally come from violating internal standards (Bandura, 1986; Detert et al., 2008).

Bandura (1986) articulated eight cognitive distortion mechanisms by which individuals morally disengage.1 Moral justification occurs when individuals justify their actions as serving the "greater good" (as in the case of substandard jobs being characterized positively as "economic development"). Euphemistic labeling involves using sanitized or convoluted language to make an unacceptable action sound acceptable—such as "borrowing" software purchased by someone else, or engaging in "creative accounting." Advantageous comparison involves making a behavior seem less harmful or of no import by comparing it to even worse behavior. For example, a person who takes a ream of paper home from the office for personal use might say, "It's not like I'm taking a printer home with me." With displacement of responsibility, people deflect responsibility for their own behavior by attributing it to social pressures or the dictates of others, usually a person of higher power or authority (e.g., "I was just following orders"). Diffusion of responsibility allows individuals to avoid personal feelings of culpability for their actions by hiding behind a collective that is engaged in the same behavior, or by using the rationale that "everyone else is doing it, too." Distortion of consequences involves misrepresenting the results of one's actions by minimizing them or focusing only on the positive. Claiming that one's (unethical) actions are "no big deal," or that they "don't hurt anyone," are common ways of trying to convince oneself and/or others that one's behavior is acceptable because little or no harm is done.
Attribution of blame (also known as "blaming the victim") is the process of justifying one's behaviors as a reaction to someone else's provocation or behavior (e.g., "It's their own fault for trusting others with this responsibility").

The notion of "buyer beware" may be considered a broader example of the way business behavior has been construed so as to make harming a victim easily justifiable as being the victim's own fault. Last, dehumanization involves minimizing or distorting the humanity of others so as to lessen identification with or concern for those who might be harmed by one's actions (e.g., "those clowns"). Additional examples of each moral disengagement mechanism are provided in Table 1.

Moral disengagement has been studied in a wide variety of settings and from a number of perspectives. For instance, many scholars have approached moral disengagement as a dispositional tendency (e.g., Bandura, Caprara, Barbaranelli, Pastorelli, & Regalia, 2001; Claybourn, 2011; Detert et al., 2008; Duffy, Aquino, Tepper, Reed, & O'Leary-Kelly, 2005; Hinrichs, Wang, Hinrichs, & Romero, 2012; McFerran, Aquino, & Duffy, 2010; C. Moore, Detert, Treviño, Baker, & Mayer, 2012). Dispositional moral disengagement can be defined as "an individual difference in the way that people cognitively process decisions and behavior with ethical import that allows those inclined to morally disengage to behave unethically without feeling distress" (C. Moore et al., 2012, p. 2). According to this approach, people who have a tendency to morally disengage will be more likely to engage in unethical or deviant behavior across situations. This expectation has been supported in a host of empirical studies (e.g., Christian & Ellis, 2011; Detert et al., 2008; C. Moore et al., 2012; Stevens, Deuling, & Armenakis, 2012). Prior research has also identified numerous individual-level antecedents of morally disengaged reasoning, including trait empathy, trait cynicism (Detert et al., 2008), moral identity (McFerran et al., 2010), social dominance orientation (Rosenblatt, 2012), and task self-efficacy (Shepherd, Patzelt, & Baron, 2013).
More recently, however, researchers have begun to consider how situational or contextual characteristics may also trigger morally disengaged reasoning and subsequent unethical behavior (e.g., Barsky, 2011; Gino & Galinsky, 2012;



Table 1. Moral disengagement mechanisms.

Moral justification: Individuals reconstrue harm to others in ways that appear morally justifiable. ("It's for the greater good"; "We're actually doing them a favor")

Euphemistic language: Use of morally neutral language to make unethical conduct seem benign or less harmful. ("I'm just borrowing it"; "Strategic omission is how we do it here")

Advantageous comparison: Comparison of unethical behavior with even worse behavior to make the original behavior seem acceptable. ("At least we're not doing what those people are doing"; "It could be worse")

Distortion of consequences: Distorting or minimizing the consequences of unethical behavior in order to disconnect unethical actions and self-sanctions. ("We're not harming anyone"; "It's not a big deal")

Diffusion of responsibility: Placing responsibility for unethical behavior onto a group, thereby making one feel less responsible for a collective's unethical actions. ("Everybody is doing it"; "We made this decision together")

Displacement of responsibility: Placing responsibility for unethical behavior onto an authority figure, thereby neutralizing personal responsibility. ("I'm doing what I was told"; "I'm just following orders")

Attribution of blame (blaming the victim): Placing the responsibility for unethical behavior onto the victim in order to exonerate one's self. ("It's their own fault"; "They deserve it")

Dehumanization: Recasting victims of unethical behavior as "less than human," or unworthy of humane treatment. ("They are being treated like the animals they are"; "They're just cogs in a wheel")
Kish-Gephart, Detert, Treviño, Baker, & Martin, in press; Shalvi, Eldar, & Bereby-Meyer, 2012). This work draws heavily on the idea that people have a basic desire to appear good and moral—both to themselves and to others (Bandura, 1986; Z. Kunda, 1990; Tsang, 2002)—thus maintaining a positive self-image. However, as Tsang (2002) pointed out, situational features can trigger similarly strong motives, such as self-interest, that conflict with one's desire to appear moral. For example, an employee might desire a large financial bonus that can only be secured with unethical behavior (e.g., Kish-Gephart et al., in press); or, an employee may desire to please an authority figure and thus feel compelled to follow the authority's unethical demands (e.g., Milgram, 1969). In


such situations, individuals will ‘‘approach the decision with a preference toward any plausible interpretation of events that would allow them to appear moral and still choose the immoral action’’ (Tsang, 2002, p. 35). Moral disengagement mechanisms provide one avenue to do just that. As such, research in this vein suggests that moral disengagement and subsequent unethical behavior is most likely to occur in situations characterized by conflicting motives (e.g., being moral and securing a self-interested outcome) and the accessibility of some ostensibly legitimate justification (to oneself and potentially to others) for the (unethical) behavior. Although currently limited in number, studies support the general argument that certain situational features can influence morally

disengaged reasoning. Bersoff (1999, p. 423), for instance, suggested that "the less moral ambiguity there is surrounding a situation, the less latitude an agent has in negotiating reality" by using moral rationalizations. While Bersoff did not measure moral disengagement directly, his empirical findings suggest that it was harder for participants in his study to keep money they had not earned in conditions that created barriers to rationalizations. Similarly, Paharia, Vohs, and Deshpande (2013) and Kish-Gephart et al. (in press) found evidence that situations involving self-interest—such as desiring a product or being offered a monetary bonus—motivated the use of morally disengaged reasoning. Experiencing unfairness or having one's morality questioned is further theorized to motivate moral justification in support of self-serving behavior (e.g., "It's not fair so I'm righting this wrong" or "this person is uninformed, so I'm teaching him a lesson"; C. Moore & Gino, 2013; Sumanth, Mayer, & Kay, 2011). The presence (or absence) of others in a situation may also be related to morally disengaged reasoning. For example, work by Wiltermuth (2011) and Umphress and colleagues (Umphress & Bingham, 2011; Umphress, Bingham, & Mitchell, 2010) indicates that individuals are more likely to use justifications when they can claim that their self-serving behavior also helps others (e.g., peers; cf. Apel & Paternoster, 2009; C. Moore & Gino, 2013; Sutherland, 1983). Similarly, the presence of a leader may be related to increased (in the case of a politically astute leader; Beu & Buckley, 2004) or decreased (in the case of an ethical leader; Liu, Lam, & Loi, 2012) moral disengagement in followers. According to Beu and Buckley (2004), for instance, politically astute leaders can influence followers toward unethical behavior by reframing actions and situations in ways that draw attention away from ethical issues and by encouraging the use of morally disengaged reasoning.
An important part of using their political skill effectively is the ability to inspire trust, defined as one’s willingness to be vulnerable to

another (R. C. Mayer, Davis, & Schoorman, 1995), which in turn reduces others' felt need to closely monitor their words and deeds (Ammeter, Douglas, Gardner, Hochwarter, & Ferris, 2002). In effect, the leader, whose rationale is trusted with little thought or questioning, helps the follower to reinterpret the situation using a morally disengaged lens. The very nature of moral disengagement is alarming because it demonstrates the power of the human mind to distort perceptions and rationales such that unethical thinking and behavior is not recognized as such. If individuals' perceptions of unethical behavior are readily distorted in this way, it seems plausible that employees could perceive (and report) that an organization's infrastructure is ethical when indeed unethical rationales and practices exist and persist but are simply unnoticed. In the following section, we thus caution against the assumption that organizational infrastructures are ethical because members—even many members—view them as such. Instead, we argue that an ethical infrastructure may not only harbor unethical thinking and behavior, but also, in some ways, may make it more difficult for members to see certain types of problems—particularly those of the day-to-day, less morally intense variety (Jones, 1991). Further, we posit that moral disengagement is, to some degree, an important factor in preserving employees' perceptions that their organization enjoys an ethical infrastructure. Our argument rests on the recognition that several fundamental human tendencies found in prior work to motivate morally disengaged thinking may actually be present more often in situations in which employees perceive themselves to be part of an "ethical" infrastructure.

Interplay of moral disengagement and ethical infrastructure

Undoubtedly, the perception that one works in a strong ethical infrastructure has many benefits. As discussed before, numerous studies have



[Figure 1 depicts a four-step model. Perceptions of a strong ethical infrastructure increase (+) organizational commitment, organizational identification, and trust (Step 1), which heighten (+) the motivation to preserve a positive self-image and to reduce cognitive load (Step 2). These motivations increase (+) moral disengagement and, in turn, rationalized unethical behavior (Step 3), which feeds back to sustain (+) perceptions of a strong ethical infrastructure (Step 4). A bottom path shows perceptions of a strong ethical infrastructure reducing (–) the motivation to engage in purely self-interested endeavors, which would otherwise increase (+) moral disengagement.]

Figure 1. A model of the interplay between ethical infrastructure and moral disengagement.

shown that ethical cultures and climates are important antecedents of organizational commitment, identification, and trust (Cullen et al., 2003; DeConinck, 2011; Martin & Cullen, 2006; Ruppel & Harrington, 2000; Treviño et al., 1998; Tsai & Huang, 2008). These factors are also associated with reduced turnover and higher job satisfaction (Simha & Cullen, 2012; Treviño et al., 1998). Although most researchers (us included) would like to believe that a strong ethical infrastructure could also prevent moral disengagement, this belief may be overly optimistic. As we will outline next, the functions served by an organization's infrastructure in general—to consistently guide behavior in certain directions rather than others, to allow people to feel good about their role in a collective, and to reduce the need for cognitive effort or deliberative decision making at every decision point (Schutz, 1964)—are positive factors for group coordination and performance outcomes, but are also related to the core motives underlying morally disengaged reasoning. Moreover, strong ethical infrastructures, while effective for dampening certain triggers of moral disengagement, may leave other triggers unaddressed and could potentially enhance still others. Our arguments are illustrated in Figure 1, which articulates the means by which employee perceptions that they are in a strong ethical infrastructure can influence the likelihood of morally disengaged reasoning.

Ethical infrastructure, personal gain motivations, and moral disengagement

Defined as "a motive or behavior that seeks to benefit the self" (Cropanzano, Stein, & Goldman, 2007, p. 6), self-interest is a powerful human motive. From an evolutionary perspective, self-interest promotes survival and success (Miller, 1999; Moore & Loewenstein, 2004; Sen, 1977). It also operates automatically, potentially leading to unethical action even when people consciously attempt to adhere to ethical, other-oriented standards (Moore & Loewenstein, 2004). Appeals to members' self-interest are commonplace in organizations. Self-interest often takes the form of pursuing personal, material, or status-related gains. Rather than being forbidden, such pursuits are generally recognized, managed, and even encouraged through the use of goals and incentives that attempt to align individual and collective interests (Eisenhardt, 1989; Ross, 1973). For instance, to accomplish their core mission and strategies, research-oriented universities generally incentivize individual pursuit of publications through the rewards associated with research productivity (raises, tenure, titles); likewise, for-profit organizations incentivize sales with material and nonmaterial rewards. In these instances, and more generally, the core beliefs, values, artifacts, stories, and other elements of an organization's culture function to support rather than

retard the behaviors aimed at mission attainment: The strongest performers are generally given the largest salaries, sit in the largest offices, are said to best embody the organization, are most likely to become the basis for stories told about the organization's heroes, and so on (Kunda, 1992). A potential problem arises in that both broad organizational objectives and specific performance goals can at times be extremely challenging or even impossible to achieve, and thus potentially motivate employees to take shortcuts or engage in unethical behavior (Schweitzer, Ordóñez, & Douma, 2004) to avoid losing out on maximum personal gain. According to Bersoff (1999), personal gain situations are ripe for morally disengaged reasoning because rationalizations allow individuals to reach the desired (self-interested) outcome while escaping the self-censure that would otherwise result from acting unethically. As Ashforth and Anand (2003, p. 28) pointed out, rewards can unintentionally "skew" employees' thinking and behaviors, such as when a broker "conclude[s] that the offerings he is rewarded for pushing are in fact the best investments." This reasoning sounds eerily similar to moral justification: the broker has rationalized that he is simply doing what is in "the best interest of the client." And, because self-interest can be blinding (Moore & Loewenstein, 2004), in many cultures this rationale would be shared and go unnoticed by others in the sales function. Schein (2004) has noted that rationalizations for unethical behavior that are easily identifiable to outsiders—including those in different functions within the same organization—are often unrecognized by embedded members for whom they have become part of the taken-for-granted fabric of their environment.
This is because ‘‘normatively appropriate’’ is largely a perceptual process that can vary among individuals and groups who have chosen, over time, to prioritize different bases for judging social action (Fiske, 1992; Giessner & van Quaquebeke, 2010). For

instance, accountants who readily criticize the unethical thinking and practice of their counterparts in marketing and sales see no problem with the euphemisms they use (e.g., "juggling numbers") and their diffusion of responsibility (e.g., "everyone does it") to justify questionable accounting practices. Likewise, bankers who operate according to the norms of "market pricing" (Fiske, 1992) will be unlikely to see problems with the excessive bonuses that are widely criticized during tough economic times by outsiders who use a different lens for evaluating fairness (Giessner & van Quaquebeke, 2010). The tendency to make decisions on the basis of personal gain or personal instrumentality, and to use morally disengaged reasoning to support such decision making (Kish-Gephart et al., in press), typifies what Victor and Cullen (1987, 1988) referred to as an egoistic ethical climate. And such climates have been empirically linked to higher levels of unethical behavior (see Kish-Gephart et al., 2010, for a meta-analytic review). Yet, the good news is that strong ethical infrastructures are likely to eschew many of the purely self-interested, instrumental motivations common in these environments. The formal mechanisms in place in an ethical infrastructure are likely to emphasize the welfare of the collective (e.g., promoting workplace safety, and sanctioning harassment or the giving or receiving of bribes) over purely individual benefit (Tenbrunsel et al., 2003). As well, they are likely to cultivate a more benevolent climate type in which informal systems base decisions on care for employees, customers, and communities, which increases commitment to and identification with the collective (Cullen et al., 2003; Victor & Cullen, 1988). Indeed, prior research supports the idea that these organizational factors are negatively related to employee perceptions of unethical behavior (Simha & Cullen, 2012; Treviño et al., in press).
These paths—from strong ethical infrastructure and the resultant commitment, identification, and trust they evoke to reduced engagement of

self-interest motives—are depicted in the bottom path of Figure 1. The bad news, however, is that although strong ethical infrastructures are likely to suppress blatantly self-interested motivations and unethical behavior, they are not necessarily equally effective at suppressing morally disengaged reasoning and unethical behavior related to other motivations—such as the desire to maintain a positive self-image or the desire to reduce cognitive load—commonly linked to strong ethical infrastructures. Indeed, in their original theorizing, Victor and Cullen (1987) recognized that even the venerable benevolent (or caring) ethical climate is imperfect:

Corporations with caring or rules climates may be more prone to violations of trade laws than corporations with a professional climate . . . when faced with the dilemma of offering a bribe or losing a contract, an employee from a caring climate may judge that s/he is expected to give the bribe because the contract would help people who work for the firm, even though it is illegal. (Victor & Cullen, 1987, pp. 67–68)

This suggests that even organizations with noble ethical intentions prioritize some values over others, and some groups or people over others (e.g., in-groups such as employees over out-groups such as customers or competitors), which creates a series of opportunities for distorted cognition about what is appropriate (Giessner & van Quaquebeke, 2010). Following this logic, we elucidate next how seemingly "good" organizations—with the many benefits they bestow, including increased identification, commitment, and trust—may paradoxically encourage morally disengaged reasoning by strengthening members' desire to maintain a positive self-image and to rely on the taken-for-granted logic of their trusted organization rather than to think through each ethically charged statement or behavior as independent moral agents.


Ethical infrastructure, positive self-image, and moral disengagement

The protection or maintenance of one's self-image is another powerful human motivation. Positive self-image is in part defined by a society's or group's cultural norms and standards (Steele, 1988). Such standards include "the importance of being intelligent, being rational, being independent and autonomous, and exerting control over important outcomes . . . [as well as] the importance of being a good group member and maintaining close relationships" (Sherman & Cohen, 2006, p. 186). When individuals fail to meet these standards, they experience a threat to their identity (Leary & Baumeister, 2000) as well as attendant cognitive dissonance (Festinger, 1957) and social pain (MacDonald & Leary, 2005). The numerous cultures of which we are part (e.g., work, religious, community) make up our identities to a greater or lesser degree (Adler & Adler, 1998; Ashforth & Mael, 1989), and shape our self-image (Trice & Beyer, 1993). In organizational settings, the desire to maintain a positive self-image may manifest in moral disengagement in two ways. First, the organizational environment sends powerful messages about expected behavior. In addition to the material gain or loss concerns noted before (e.g., bonuses, getting fired), failure to follow an organization's rules or expectations can result in ostracism by peers and superiors alike. As social beings, loss of approval within a collective presents an extreme threat to an individual's positive self-image (Leary, 2010; Williams, 2001). Thus, rather than being the basis for noticing and speaking up about problematic norms or expectations, individuals may morally disengage to be able to behave "just like everyone else around here" without noticing problems or feeling any attendant guilt. Second, organizations of any degree of complexity have multiple goals, multiple priorities, and multiple stakeholders they are trying to serve.
For instance, health care organizations

must balance financial viability against quality and amount of care, universities must balance research productivity against teaching quality, and for-profit organizations of many types must balance demands for innovation against the potential for exploitation. Considered explicitly on a regular basis, these competing motives would likely make it difficult for individuals to feel good about their organization or their role in it. After all, most doctors likely prefer to avoid feeling they are treating patients as "income generators," most teachers likely prefer to avoid thinking about the student needs they are failing to meet, and most marketing and sales people presumably prefer to avoid thinking they are peddling unnecessary or marginal quality products to their customers. Organizational infrastructures help provide "solutions to contradictions which exist 'naturally'" for organizational members (Lucas, 1987, p. 152) by providing rationales for action and shaping interpretation of stimuli (O'Reilly & Chatman, 1996; Schein, 1996; Smircich, 1983), including specifying expected behaviors in commonly faced situations or encouraging normative rationalizations (Schein, 2004). In one example from Margolis and Molinsky's (2008, p. 856) study of "necessary evils," a police officer must evict a delinquent tenant from her home. Although this action will cause emotional and financial pain to the tenant, the officer needs to carry out the act to comply with the law and protect the rights of the landlord. The officer's reasoning—"Well, they put themselves in this situation" (attribution of blame)—is likely an institutionalized rationalization that helps to minimize the discomfort of a challenging situation while maintaining the positive self-image of officers who must undertake such behavior.
In organizations perceived by members as strong ethical infrastructures, the motivation to use morally disengaged reasoning on some occasions may be, ironically, stronger than in other organizations. As reviewed before and depicted in the first step of Figure 1,

perceptions that the component parts of the ethical infrastructure (e.g., the informal norms and values) are strong are positively related to member commitment to (Cullen et al., 2003; Kunda, 1992; Ruppel & Harrington, 2000; Schwepker, 2001; Treviño et al., 1998) and identification with the organization (DeConinck, 2011; Mulki et al., 2006). Organizational commitment and identification, in turn, increase the degree to which individuals are psychologically and emotionally involved with the organization (Allen & Meyer, 1990; Becker, Billings, Eveleth, & Gilbert, 1996; Cheney & Tompkins, 1987; Meyer & Allen, 1984; Mowday, Porter, & Steers, 1982). As Dutton, Dukerich, and Harquail (1994, p. 239) explained, "when [organizational members] identify strongly with the organization, the attributes they use to define the organization also define them." In other words, individuals broaden their sense of self to incorporate the organization such that beliefs and perceptions of the organization become self-referential (Aron & Aron, 1986; Mael & Ashforth, 1992). Consistent with C. Moore and Gino's (2013) theorizing, individuals strongly identifying with their organization may be more likely to cognitively rationalize unethical behavior, especially when it is done to better the collective. For example, in an organization providing services to a needy population or community, highly identified individuals may feel justified in rationalizing seemingly harmless unethical behaviors (such as bribing officials) that keep the organization afloat and allow it to continue helping clients. A further concern deriving from strong employee identification and commitment is that challenges to the ethics of the organization become challenges to the ethics of the individual.
Organizational members thus have additional incentive to morally disengage to protect their positive self-image because that image is now closely tied to the positive image of the organization (see Figure 1, Step 2). As Haidt (2012) and Tetlock (1985) argued, in these situations, we will be particularly

motivated (generally subconsciously) to persuade ourselves and others that we and our team are behaving acceptably; and we will be less motivated to actually discover the truth through our reasoning. Furthermore, recent research suggests that psychological closeness to an individual or a group "blurs the boundaries between the self and others and, as a result can lead individuals to experience and behave more consistently with" the other, including experiencing cognitive dissonance and vicariously justifying the other's and one's own unethical actions (Gino & Galinsky, 2012, p. 17). In their studies, Gino and Galinsky found that psychological closeness to a cheater increased individuals' tendencies to morally disengage about cheating. Thus, the organizational identification and commitment accompanying membership in an organization perceived to have an ethical infrastructure increase employees' tendencies to feel psychologically close to the organization and to incorporate the organizational image into their own. These psychological processes have many well-documented benefits. However, we argue they can also potentially lead to increased self-image maintenance concerns that can increase the likelihood of morally disengaged reasoning and the potential for unnoticed unethical behavior. In this way, the actual behaviors, products, and processes of the organization get "glossed over" using words that connote "inevitability and rightness" (Kunda, 1992, p. 226).

Ethical infrastructure, limited cognition, and moral disengagement

Classic psychological research has shown various risks resulting from humans' desire to reduce cognitive effort (Fiske & Taylor, 1984) and their susceptibility to social influences. For example, followers in a hierarchy will often automatically experience an "agentic shift" in which they become an instrument of a perceived authority figure and do not think carefully for

themselves about the ethical ramifications of their own (leader-instructed) behavior (Milgram, 1969). In the classic Milgram experiments and more recent replications, participants used morally disengaged language to explain why they continued to shock another person when directed to do so by an experimenter: "I was just doing what he told me" (displacement of responsibility; "Basic instincts," 2007). Similarly, work in social learning theory and social information processing indicates that individuals learn about norms and expected behaviors from those around them (Bandura, 1986; Salancik & Pfeffer, 1978), thus sparing themselves the cognitive effort of having to think through or experience everything for themselves. And when it comes to moral reasoning and behavior more generally, the finding that most individuals operate at a "conventional level" of moral development (Kohlberg, 1969; Treviño, 1992)—wherein they take their cue from what they see others around them doing—suggests that individuals do not routinely "think through" the ethical implications of every stimulus they face in their work life. The organizational context serves to guide collective action in ways that are often unseen by members. As G. Kunda (1992, p. 5) noted in his ethnographic account, the goal of a culture is to ensure organizational members "have the religion and not know how they ever got it." By providing implicit expectations and rationales for action—that is, the taken-for-granted assumptions that undergird automatic action—organizational infrastructures generally help to facilitate (and in some ways encourage) cognitive miserliness in members. As Schutz (1964, p.
95) argued, ‘‘the function of the cultural pattern [is] to eliminate troublesome inquiries by offering ready-made directions for use, to replace truth hard to attain by comfortable truisms, and to substitute the self-explanatory for the questionable.’’ Swidler (1986), likewise, argued that cultures provide an accepted set of thoughts, feelings, actions, and reactions to stimuli that reduce the need for conscious

deliberation. Employees need not think about or question the rationales or decision criteria for, or meaning of, their actions because someone else has presumably already done this for them at the time the organization's culture was being established. For example, selectively culling relevant data in a sales pitch to present the most favorable (though perhaps not fully accurate) picture to clients may be supported by the statements that this is "what everyone does," this is "no big deal because it doesn't change the conclusions," and this is "what we have to do because they're not smart enough to understand a more complex picture." This practice, with its attendant forms of moral disengagement, may fail to generate scrutiny as long as both fit cultural norms such as "put your best foot forward in public" and "do whatever it takes to keep your existing clients." This logic is consistent with arguments that corrupt practices come to be rationalized and routinized such that organizational members engage in them without thinking deeply about them: "indeed, the actions may come to seem like the right and only course to take" (Ashforth & Anand, 2003, p. 15; Kelman, 1973). Similarly, C. Moore's (2008) argument that morally disengaged rationales and explanations can become the taken-for-granted assumptions that facilitate and perpetuate corrupt practices suggests there can be a symbiotic relationship between moral disengagement and an organization's culture. We argue that employee perceptions of a strong ethical infrastructure strengthen tendencies toward cognitive miserliness, making moral disengagement more likely to occur and to go unnoticed, and therefore to persist in a culture that members would generally regard as highly ethical.
When individuals believe they work in a strong ethical environment, they may be less likely to ‘‘decide’’ what is ethical in a conscious manner and instead rely on organizational norms, assuming that the bounds of ethicality have already been decided (Tenbrunsel et al., 2003) and that the organization’s procedures and formal systems are sufficient

checks against unethical behavior. The inherent problem is that systems and procedures can only be created and enforced for unethical behavior that is recognized as such. Accordingly, behaviors that have long been rationalized and thus become part of the taken-for-granted fabric of the organization are unlikely to be caught by formal systems. For instance, a clear policy requiring disclosure of conflicts of interest may feed into employees' perception that their organization has a strong, principled culture (Victor & Cullen, 1988). Yet, recent research suggests that policies requiring the disclosure of conflicts of interest can inadvertently backfire because individuals may use this transparency as a basis for pinning responsibility on the client for how the client subsequently behaves (Cain, Loewenstein, & Moore, 2005; Loewenstein, Sah, & Cain, 2012). As shown in Figure 1, the influence of a strong perceived ethical infrastructure on decreased cognition and hence potentially increased moral disengagement is proposed to operate in part through increased trust, commitment, and identification. For instance, ethical infrastructures have been linked empirically and theoretically to trust (DeConinck, 2011; see Figure 1, Step 1). And, people are less suspicious of and less concerned about monitoring the behaviors of those they trust (van Dyne, Vandewalle, Kostova, Latham, & Cummings, 2000), and more open to absorbing new knowledge from them (Levin & Cross, 2004; Mayer et al., 1995) without careful analysis. In short, trust allows individuals to reduce their cognitive effort (see Figure 1, Step 2). Thus, if trust minimizes the extent to which people are likely to closely examine others' rationales for action, it follows that moral disengagement in coworkers may be less likely to be noticed or questioned and more likely to be mimicked in ethical infrastructures because of the trust that exists in such environments (see Figure 1, Step 3).
Similarly, strong ethical infrastructures encourage organizational identification (DeConinck, 2011; Mulki et al., 2006) such that

individuals broaden their sense of self to incorporate the organization (Aron & Aron, 1986; Mael & Ashforth, 1992). As Gilovich and Griffin (2010, p. 564) pointed out, identities and norms are markedly intertwined: "to follow a norm is to align oneself with others" and to signal that one agrees with their "take on the world." Moreover, individuals are often unaware of the norms to which they subscribe (Goffman, 1963). Following this logic, then, identification with the organization is not only related to less cognitive consideration of normative rationalizations, but also likely encourages the implicit adoption of such normative thinking because it reinforces one's identity with the organization.

Moral disengagement and its effects on perceptions of ethical infrastructures

Beyond enabling the rationalization of unethical behavior in any given instance, we argue that moral disengagement is also likely to play an ongoing role in sustaining organizational members' perceptions of their organization as an ethical infrastructure (see Figure 1, Step 4). Believing one works in a strong ethical environment depends in large part on the perceived absence (or at least comparative lack when considering competitors or other organizations) of unethical behavior in that context. After all, perceiving that unethical behavior exists or is regularly tolerated would undermine the alignment between formal and informal ethical systems, and the broader climate perceptions that typify strong ethical infrastructures (Tenbrunsel et al., 2003). But if, as reviewed before, moral disengagement dampens moral awareness and leads individuals to overlook some types of unethical behavior (Bandura, 1990; Detert et al., 2008; C. Moore, 2008), then the moral disengagement itself can help sustain the ethical belief system of the organization. This may be particularly true in organizations where higher levels of commitment, identification, and trust make it more likely that rationalizations take

the "apparently noble" form of viewing the inevitable tradeoffs of organizational life—some of which cause financial, physical, or mental harm—as done in the service of some greater good (i.e., moral justifications based in doing right for the company, its employees, or some other group), or being done in a better or less damaging way than other organizations (i.e., advantageous comparison). In this way, the culture that sets the stage for a type of rationalization also depends upon those rationalizations to perpetuate itself. As noted by Ashforth and Anand (2003, p. 16), as individuals collectively internalize morally disengaged rationales, they become "woven into a self-sealing belief system that routinely neutralizes the potential stigma of corruption." Importantly, we are not stating that organizations with strong ethical infrastructures will necessarily be home to more moral disengagement and unethical behavior in an absolute sense. Instead, we are suggesting that in all organizations, even those that work to align their formal and informal systems in ways that prioritize ethical thought and action, employees face situations in which they prioritize the interests of some over others, engage in some "necessary evils," and suffer instances where unethical decisions are unintentionally made due to unrecognized forces and faulty reasoning. And in these situations, by dint of being more highly committed and identified, and by trusting that the informal rationales and formal systems in place are sufficient, individuals in these infrastructures may be more prone to miss some of the ways in which moral disengagement exists and persists in their environment.

Theoretical and methodological implications

In this article, we integrated work concerning the ethical infrastructure of organizations with the literature on moral disengagement to argue that in all contexts, even those perceived to be strongly ethical, certain types of unethical
rationales and practices are likely to sometimes go unrecognized or be unwittingly justified as "good" or "right" by organizational members. Based on prior theoretical and empirical work, we introduced a model demonstrating how morally disengaged reasoning can exist and persist in nearly all organizations, including those that appear to have formal and informal forces promoting and supporting ethical behavior. Indeed, the organizations whose members feel most convinced of their strong ethical infrastructure may be at significant risk for subtler forms of morally disengaged reasoning and behavior precisely because these perceptions of collective ethicality enhance motivations to preserve a positive self-image (which is closely tied to the organization's image) and to reduce conscious processing of some ethics-related stimuli. In this way, employees' perceptions that they work in a strong ethical infrastructure may not only promote moral disengagement of certain kinds, but may also depend on moral disengagement to sustain them. This interplay between the cultural underpinnings of an organization and morally disengaged reasoning by its members is problematic for practitioners and scholars alike. Regarding the former, our review suggests that leaders would be remiss to assume that unethical behavior does not exist in their organization simply because they or their employees do not perceive it, or because large, blatantly unethical acts are not being revealed with regularity. Although having formal systems in place to mitigate unethical behavior is undoubtedly worthwhile, it is likely difficult to create systems that prevent unseen or previously rationalized unethical behaviors.
Regarding the latter, scholars may be making unwarranted assumptions about organizations’ ethicality based on measurement approaches that do not account for the collective ‘‘blind spots’’ (Ashkanasy, Broadfoot, & Falkus, 2000; Bazerman & Tenbrunsel, 2011) likely to exist in all organizational contexts. Thus, our work suggests several opportunities for future
research and specific implications for the methodologies used in such work.

Future research directions

Numerous questions remain regarding how and why people in good organizations do bad things. For instance, one of the individual biases or cognitive distortions mentioned briefly in our review that seems related to our proposed model is moral licensing: the tendency for "prior good acts [to] earn points in a mental account that subsequent immoral acts can spend" (Zhong, Liljenquist, & Cain, 2009, p. 83). Interestingly, moral licensing can be triggered by vicarious influences: simply "observing similar other's chosen moral actions" (Kouchaki, 2011, p. 702) or being affiliated with a group that is seen as highly ethical may result in deposits being made into individuals' or collectives' "accounts." As such, future research might find that organizations with high volumes of "moral credits" are particularly likely to have ethical infrastructures that foster rationalizations of potentially troubling behavior among members who are well aware of the accolades their company receives for its ethics (e.g., "we have done plenty of good works; this [unethical behavior] is no big deal"). Recent scandals involving some of the most well-respected corporations in the world, including Johnson & Johnson, Merck, and Toyota, provide some anecdotal evidence for this possibility. In 2008, for instance, Johnson & Johnson initiated a "phantom recall," instructing employees to surreptitiously buy back problematic Motrin IB caplets from convenience stores (Besser & Adhikari, 2010). Given Johnson & Johnson's reputation for recalling Tylenol in the early 1980s and its corporate reputation for a climate of care, organizational decision makers may have unconsciously engaged in moral licensing when initiating and overseeing this discreet recall. Furthermore, because the legal implications of the action were unclear and the ultimate
outcome was intended to be positive (i.e., preventing sickness from tainted medication), there were certainly multiple bases for rationalizing that the action was "morally justified" and in line with the company's strong ethical culture. Considering different elements of formal systems of behavioral control—those systems that managers are best able to influence and that may serve as "checks" to help regularly question the existing reasoning within organizations—may be another useful direction for future research. For example, researchers might consider how training can be used to help employees identify morally disengaged reasoning in their own and others' thinking. Organizational members might benefit from learning the "stop and think" moments most pertinent to their own context, such as key phrases often indicative of morally disengaged reasoning (e.g., "my boss told me to do it," "it's no big deal," "it's common practice in this industry"), that signal the need for further, careful deliberation of the situation (cf. Treviño et al., in press). In addition, when such red flags are raised, members might be trained with "cognitive tools" to help loosen the grasp of any taken-for-granted assumptions prevalent in the organization. For example, extant research suggests that reaffirming one's core values helps to counter the negative effects of ego depletion (i.e., weakened self-control; Baumeister & Heatherton, 1996) because it refocuses one's perspective on the bigger picture (Schmeichel & Vohs, 2009). Similarly, reminders of one's core values may help decision makers set aside their self-image motivations or natural desire for limited cognitive effort, and reevaluate a situation from a perspective different from the one that has become taken for granted as the right way to think and act inside their organization.
In addition to training, researchers might consider how formal decision processes—and who is involved in them—can be tailored to identify and minimize the proliferation of moral disengagement. From prior research, we know
that individuals who engage in misconduct use moral disengagement when they believe they can "legitimately" justify their actions to others (Z. Kunda, 1990; Tsang, 2002). Creating situations that question the legitimacy of these explanations, then, may be particularly effective. For example, researchers might consider the role of cross-functional teams or appointed "devil's advocates" during the formal decision-making process. As Schein (2004, p. 204) argued, members of different functional areas, while having internalized the rationalizations endemic to their own function, are often keenly aware of moral hypocrisy in other functions. Internal or external ethics officers might also be considered for this role, especially given their separation from specific functions and occupational areas. Adding this additional perspective to formal decision-making processes might prove valuable not only because these "outsiders" are more likely to identify insiders' rationalizations, but also because insiders might feel the need to more carefully evaluate the ethicality of their own statements and behaviors, knowing that they will be scrutinized by others (Gino, Gu, & Zhong, 2009). Importantly, we note that merely having "devil's advocates" or members of an out-group present is likely insufficient in itself. Those individuals would also need to be endowed with sufficient power to avoid being blindly overruled or shouted down by the majority. This line of reasoning raises an intriguing question about who is most likely to identify moral disengagement within the work environment. As we argued earlier, strong organizational commitment and identification by employees in ethical infrastructures can motivate employees to perceive their organization as ethical and to miss subtler signs of problems (such as morally disengaged reasoning) because doing so sustains their positive self-image.
Indeed, prior work has argued that high group self-esteem can distort people’s evaluation of their group and cause members to react to threats to the group image in protective ways (Crocker
& Luhtanen, 1990). Following this logic, members who are least identified with the organization may have the best perspective on whether the formal and informal systems are ethical, just as those with more pessimistic or negative views of themselves and the groups to which they belong may be more accurate in their assessments of organizational ethicality (Alloy & Abramson, 1979; Dobson & Franche, 1989). For researchers, the implication is that we may need to pay closer attention to those with less positive organizational views, as they may well have the least biased perspective. For practitioners, steps need to be taken so that individuals who "rock the boat" and provide negative input are welcomed rather than shunned (as is often the case with whistleblowers; e.g., Miceli & Near, 1992). Of course, identifying morally disengaged reasoning is only the first step in preventing the spread, or rooting out, of collective rationalizations in a culture. An equally vexing challenge is getting employees to speak up when they hear morally disengaged reasoning that seems to be accepted by those around them. Monin, Sawyer, and Marquez (2008) showed that those who take moral stands, while admired by outsiders, are rejected by insiders because their actions threaten the identity of those against whom they stand. Likewise, many employees who see wrongdoing do not report it (Miceli & Near, 1992), and many who do report it experience retaliation (Mesmer-Magnus & Viswesvaran, 2005). In organizations with a strong ethical infrastructure, this effect may be amplified due to the high levels of identification members experience with their organization.
Thus, reliance on hiring and keeping the "right type" of employees—those who speak up because they are assertive (LePine & Van Dyne, 1998), have a proactive personality (Detert & Burris, 2007), believe in the efficacy of their voice (Kish-Gephart, Detert, Treviño, & Edmondson, 2009), or have moral courage (Kidder, 2009)—is likely to be insufficient for getting a majority of one's employees to
question morally disengaged reasoning. Similarly, it is unlikely that simply training leaders to be open and responsive (Detert & Burris, 2007) or to routinely discuss the importance of ethics (Brown, Treviño, & Harrison, 2005) will be sufficient if the goal is to get employees to speak up about morally disengaged thinking that appears to be culturally accepted. Having a psychologically safe environment (Edmondson, 1999), where the "whole village" supports ethical behavior (Mayer, Nurmohamed, Treviño, Shapiro, & Schminke, 2013), is also necessary but not wholly sufficient for generating this type of voice. We suggest that one promising avenue for future work is the study of "ethical questioning" (e.g., "Is there really no harm done?," "Does it matter if they won't notice?") as a potentially less threatening, and more effective, method of raising issues for further consideration. Finally, future research might consider whether and how individual dispositions affect the relationships we proposed. For instance, individuals with high levels of moral attentiveness (Reynolds, 2008) may be more likely to notice and reflect on their own and others' actions, and to recognize unethical practices regardless of their beliefs about or commitment to the organization. Similarly, individuals with a strong moral identity might be less likely to morally disengage (Aquino et al., 2007) even when cultural forces make it easy to do so. In contrast, other individual differences may enhance, rather than override, organizational members' affiliative needs, and hence their susceptibility to the influences of a strong ethical infrastructure.
As one example, individuals high in the symbolization dimension of moral identity (which captures the degree to which individuals’ moral traits are reflected in their behaviors [Aquino & Reed, 2002]) might be motivated to be part of an organization that has a reputation for ethics because it helps them to demonstrate a valued aspect of their identity. Ironically, this need may also make such individuals more likely to protect the organization’s (and their own)
identity by accepting, rather than questioning, the explanations and norms of the collective.

Methodological implications

Our theorizing also has implications for the methodologies used to study ethical infrastructure and its component aspects. A common assumption in theories of ethical culture and climate is that an ethical context should reduce the incidence of unethical workplace behavior, and empirical results in the published literature generally support this relationship (e.g., Baker et al., 2006; Marta et al., 2001; Paolillo & Vitell, 2002; Ross & Robertson, 2003; Schaubroeck et al., 2012; Vitell et al., 2003). However, as our model suggests, and given the plethora of other ways in which unethical behavior can occur and persist below awareness, traditional measures of ethical culture and climate may not adequately capture reality in organizations. Specifically, much of the empirical work to date on the component parts of ethical infrastructures is based on employee perceptions of the organization's ethical infrastructure (Mayer et al., 2009) rather than direct, independent assessments of it. For example, a popular measure of ethical culture is a 14-item "ethical environment" scale (Treviño et al., 1998) that includes "six items for the sanctions of ethical and unethical conduct, three items for role modeling of top management, three items for implementation of an ethics code, and one item for whether ethical behavior is the norm in the organization" (Kaptein, 2008, p. 923). Organizations that score higher on this scale are characterized as having a stronger ethical environment. Similarly, ethical climate is most commonly assessed using the Ethical Climate Questionnaire (ECQ; Cullen et al., 1993; Victor & Cullen, 1988), in which individuals are "asked to act as observers reporting on organizational expectations" (Simha & Cullen, 2012, p. 29). Sample items for benevolent and egoistic climates, respectively, include, "People in this organization are actively
concerned about the customer's and public's interest," and "In this organization, people protect their own interests above other considerations." Relying, as these and related measures do, on members' individual-level perceptions limits social scientists' ability to say with confidence that an organizational infrastructure is ethical. Indeed, it may only appear so to members because, as we have argued throughout, survey respondents in apparently highly ethical organizations may no longer perceive some types of thinking and behavior as problematic (even though they are). Socialization, bolstered by commitment, identification, and trust, helps to sustain the taken-for-granted collective rationalizations that paint the organization and its members as virtuous. This observation suggests the need for research designs and methodologies different from those currently common in the study of business ethics. Returning to the anthropological roots of culture scholarship, we suggest that future research should complement survey-based research with sustained, direct observation by outsiders (Schein, 1996). Researchers who sit between the extremes of complete detachment and having "gone native" will likely be better positioned to unearth the buried rationalizations that support institutionalized practice, and thus be better able to report on actual values-in-use rather than those espoused by organizational members (Argyris & Schön, 1978). In support of this assertion, Rousseau (1990, p. 162) argued that "the rationale for the use of qualitative methods in culture research is the presumed inaccessibility, depth, or unconscious nature of culture" to insiders.
This form of "culture auditing" (Treviño & Nelson, 2010) by an outside observer who is less biased by the organizational pulls of personal gain and self-image maintenance, and presumably more motivated to expend the cognitive energy to question everything, would certainly be more time consuming than currently popular methods. But we believe that effort is likely to
be handsomely rewarded. After all, many of the biggest insights about cultural phenomena—such as the power of concertive control (Barker, 1993) and the nature and functioning of employees in specific occupations (e.g., Kunda's [1992] study of engineers, Barley's [1983] study of caretakers, and Lortie's [1975] study of school teachers)—have come from authors who invested significant time in field observation. Similar to these concerns about the measurement of organizations' ethical environments, the use of certain dependent variable measures in business ethics research leaves open the possibility that we are not capturing the correct amount or type of unethical behavior in organizations. For example, scholars often assess the link between an organization's ethical infrastructure and (un)ethical behavior using members' perceptual ratings of both the ethical environment and employee misconduct or deviance (Mayer et al., 2009). Even when multiple respondents are garnered from the same organization, the responses are likely to reflect the same embeddedness in, and thus socialization to, a set of taken-for-granted beliefs about what constitutes acceptable behavior. Furthermore, much of the ethical infrastructure research employs dependent variable measures representing deviance, lying for personal gain, or actions meant to restore equity or "get even" with the organization (e.g., Bennett & Robinson, 2000; Greenberg, 1990). However, these measures generally do not capture the types of unethical actions that may be perpetrated on behalf of an organization that otherwise appears ethical. As Ashforth and Anand (2003, p. 41) noted:

Although we treated the normalization of corruption on behalf of the organization as equivalent to that of corruption against the organization, there are real differences that should be investigated.
For example, identification, commitment, and other attributes that are usually highly valued are likely to predict corruption on an organization’s behalf
whereas precisely the opposite—disidentification, etc.—is likely to predict corruption against it.

Indeed, Umphress et al. (2010) found that individuals who identify highly with their organization may engage in unethical pro-organizational behaviors. Future research thus might consider casting a wider net, and using nonmember sources, when measuring unethical behavior. Doing so would likely help capture problematic behaviors both noticed and unnoticed by insiders, and allow researchers to ascertain whether some unethical acts undertaken to "help" the organization or its members actually serve to strengthen, rather than weaken, members' perceptions of their organization's ethicality. In sum, we argue that field-based assessments of aspects of ethical infrastructures have not sufficiently changed to benefit from much of the understanding of the nature of unethical thinking and action that social psychologists have provided in recent years. The lure of expediency via deductive abstraction and survey research rather than grounded observation is particularly dangerous when considering ethics or culture, because much of what makes each construct so powerful and interesting lies below the surface of insiders' perception. To remedy this, we urge organizational scholars interested in ethics to enter the field for extended periods, be it for ethnographic observation or to conduct field experiments.

Conclusion

In this article, we reviewed and integrated the literatures concerning the ethical infrastructure of organizations and the tendency of individuals to engage in morally disengaged reasoning. We argued that there is an inherent difficulty in creating formal and informal systems to prevent unethical thinking and ways of acting that are collectively rationalized and thus part of the taken-for-granted fabric of an organizational
infrastructure. Indeed, while ethical infrastructures are useful for rooting out certain types of unethical behaviors and their rationalizations, they may still harbor, and in some cases foster, moral disengagement about other types of behaviors. Finally, we argued that future research regarding the ethical context of organizations should employ more direct observational techniques in addition to survey methods, because our current understanding of the psychological processes undergirding unethical behavior suggests that many insiders may perceive their organizational context and the actions of fellow organizational members as ethical when they very well may not be. Such research poses numerous difficulties, but given the magnitude of harm done, and harm potentially averted, in organizational life, we urge scholars to take up the challenge.

Note

1. In this paper, we use the terms moral rationalizations, justifications, neutralizations, and moral disengagement techniques/mechanisms interchangeably.

References

Adler, P., & Adler, P. (1998). Peer power: Preadolescent culture and identity. New Brunswick, NJ: Rutgers University Press.
Allen, N. J., & Meyer, J. P. (1990). The measurement and antecedents of affective, continuance and normative commitment to the organization. Journal of Occupational Psychology, 63, 1–18.
Alloy, L. B., & Abramson, L. Y. (1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108, 441–485.
Ammeter, A. P., Douglas, C., Gardner, W. L., Hochwarter, W. A., & Ferris, G. R. (2002). Toward a political theory of leadership. The Leadership Quarterly, 13, 751–796.
Apel, R., & Paternoster, R. (2009). Understanding "criminogenic" corporate culture: What white-collar crime researchers can learn from studies of the adolescent employment–crime relationship. In S. S. Simpson & D. Weisburd (Eds.), The
criminology of white collar crime (pp. 15–33). Albany, NY: Springer Science.
Aquino, K. (1998). The effects of ethical climate and the availability of alternatives on the use of deception during negotiation. International Journal of Conflict Management, 9, 195–217.
Aquino, K., & Reed, A. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83, 1423–1440.
Aquino, K., Reed, A., Thau, S., & Freeman, D. (2007). A grotesque and dark beauty: How moral identity and mechanisms of moral disengagement influence cognitive and emotional reactions to war. Journal of Experimental Social Psychology, 43, 385–392.
Argyris, C., & Schön, D. A. (1978). Theory in practice: Increasing professional effectiveness. San Francisco, CA: Jossey-Bass.
Arnaud, A., & Schminke, M. (2007). Ethical work climate: A weather report and forecast. In S. W. Gilliland, D. D. Steiner, & D. P. Skarlicki (Eds.), Managing social and ethical issues in organizations (pp. 181–227). Greenwich, CT: Information Age Publishing.
Aron, A., & Aron, E. N. (1986). Love and the expansion of self: Understanding attraction and satisfaction. New York, NY: Hemisphere.
Ashforth, B. E., & Anand, V. (2003). The normalization of corruption in organizations. Research in Organizational Behavior, 25, 1–52.
Ashforth, B. E., & Mael, F. (1989). Social identity theory and the organization. Academy of Management Review, 14, 20–39.
Ashkanasy, N. M., Broadfoot, L. E., & Falkus, S. A. (2000). Questionnaire measures of organizational culture. In N. M. Ashkanasy, C. P. Wilderom, & M. F. Peterson (Eds.), Handbook of organizational culture and climate (pp. 131–145). Thousand Oaks, CA: Sage.
Baker, T. L., Hunt, T. G., & Andrews, M. C. (2006). Promoting ethical behavior and organizational citizenship behaviors: The influence of corporate ethical values. Journal of Business Research, 59, 849–857.
Bandura, A. (1986). Social foundations of thought and action. Upper Saddle River, NJ: Prentice Hall.
Bandura, A. (1990). Selective activation and disengagement of moral control. Journal of Social Issues, 46, 27–46.
Bandura, A., Caprara, G. V., Barbaranelli, C., Pastorelli, C., & Regalia, C. (2001). Sociocognitive self-regulatory mechanisms governing transgressive behavior. Journal of Personality and Social Psychology, 80, 125–135.
Barker, J. R. (1993). Tightening the iron cage: Concertive control in self-managing teams. Administrative Science Quarterly, 38, 408–437.
Barley, S. R. (1983). Semiotics and the study of occupation and organizational cultures. Administrative Science Quarterly, 28, 393–413.
Barsky, A. (2011). Investigating the effects of moral disengagement and participation on unethical work behavior. Journal of Business Ethics, 104, 59–75.
Basic instincts: The science of evil. (2007). ABC News. Retrieved from Primetime/story?id=2765416&page=1&singlePage=true
Baumeister, R. F., & Heatherton, T. F. (1996). Self-regulation failure: An overview. Psychological Inquiry, 7, 1–15.
Bazerman, M. H., & Tenbrunsel, A. E. (2011). Blind spots: Why we fail to do what's right and what to do about it. Princeton, NJ: Princeton University Press.
Becker, T. E., Billings, R. S., Eveleth, D. M., & Gilbert, N. L. (1996). Foci and bases of employee commitment: Implications for job performance. Academy of Management Journal, 39, 464–482.
Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85, 349–360.
Bersoff, D. M. (1999). Explaining unethical behavior among people motivated to act prosocially. Journal of Moral Education, 28, 413–428.
Besser, R., & Adhikari, B. (2010). Contractor questions order to remove Motrin from shelves. ABC News. Retrieved from WN/contractor-paid-remove-motrin-off-shelvesjohnsonjohnson/story?id=11682953
Beu, D. S., & Buckley, M. R. (2004). This is war: How the politically astute achieve crimes of
obedience through the use of moral disengagement. Leadership Quarterly, 15, 551–568.
Brief, A. P., Buttram, R. T., & Dukerich, J. M. (2001). Collective corruption in the corporate world: Toward a process model. In M. E. Turner (Ed.), Groups at work: Theory and research (pp. 471–499). Mahwah, NJ: Lawrence Erlbaum.
Brown, M. E., Treviño, L. K., & Harrison, D. A. (2005). Ethical leadership: A social learning perspective for construct development and testing. Organizational Behavior and Human Decision Processes, 97, 117–134.
Cain, D. M., Loewenstein, G., & Moore, D. A. (2005). The dirt on coming clean: Perverse effects of disclosing conflicts of interest. Journal of Legal Studies, 34, 1–25.
Cheney, G., & Tompkins, P. K. (1987). Coming to terms with organizational identification and commitment. Communication Studies, 38, 1–15.
Christian, M. S., & Ellis, A. P. J. (2011). Examining the effects of sleep deprivation on workplace deviance: A self-regulatory perspective. Academy of Management Journal, 54, 913–934.
Chugh, D., Bazerman, M. H., & Banaji, M. R. (2005). Bounded ethicality as a psychological barrier to recognizing conflicts of interest. In D. A. Moore, D. M. Cain, G. Loewenstein, & M. H. Bazerman (Eds.), Conflicts of interest: Challenges and solutions in business, law, medicine, and public policy (pp. 74–95). New York, NY: Cambridge University Press.
Claybourn, M. (2011). Relationships between moral disengagement, work characteristics, and workplace harassment. Journal of Business Ethics, 100, 283–301.
Cressey, D. R. (1953). Other people's money: A study in the social psychology of embezzlement. Glencoe, IL: Free Press.
Crocker, J., & Luhtanen, R. (1990). Collective self-esteem and ingroup bias. Journal of Personality and Social Psychology, 58, 60–67.
Cropanzano, R., Stein, J., & Goldman, B. M. (2007). Self-interest and its discontents. In J. Bailey (Ed.), Handbook of organizational and managerial wisdom (pp. 181–221).
Thousand Oaks, CA: Sage.

Cullen, J. B., Parboteeah, K. P., & Victor, B. (2003). The effects of ethical climates on organizational commitment: A two-study analysis. Journal of Business Ethics, 46, 127–141.
Cullen, J. B., Victor, B., & Bronson, J. (1993). The ethical climate questionnaire: An assessment of its development and validity. Psychological Reports, 73, 667–674.
Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17, 1082–1089.
DeCelles, K. A., DeRue, D. S., Margolis, J. D., & Ceranic, T. L. (2012). Does power corrupt or enable? When and why power facilitates self-interested behavior. Journal of Applied Psychology, 97, 681–689.
DeConinck, J. B. (2011). The effects of ethical climate on organizational identification, supervisory trust, and turnover among salespeople. Journal of Business Research, 64, 617–624.
Denison, D. R. (1996). What is the difference between organizational culture and organizational climate? A native's point of view on a decade of paradigm wars. Academy of Management Review, 21, 619–654.
Denison, D. R., & Mishra, A. K. (1995). Toward a theory of organizational culture and effectiveness. Organization Science, 6, 204–223.
Detert, J. R., & Burris, E. R. (2007). Leadership behavior and employee voice: Is the door really open? Academy of Management Journal, 50, 869–884.
Detert, J. R., Treviño, L. K., & Sweitzer, V. L. (2008). Moral disengagement in ethical decision making: A study of antecedents and outcomes. Journal of Applied Psychology, 93, 374–391.
Dobson, K., & Franche, R. L. (1989). A conceptual and empirical review of the depressive realism hypothesis. Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement, 21, 419–433.
Duffy, M. K., Aquino, K., Tepper, B. J., Reed, A., & O'Leary-Kelly, A. M. (2005, August). Moral disengagement and social identification: When does being similar result in harm doing? Paper
presented at the Academy of Management Annual Conference in Honolulu, Hawaii.
Dutton, J. E., Dukerich, J. M., & Harquail, C. V. (1994). Organizational images and member identification. Administrative Science Quarterly, 39, 239–263.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44, 350–383.
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14, 57–74.
Felps, W., Mitchell, T. R., & Byington, E. (2006). How, when, and why bad apples spoil the barrel: Negative group members and dysfunctional groups. Research in Organizational Behavior, 27, 175–222.
Festinger, L. (1957). A theory of cognitive dissonance. Oxford, UK: Row, Peterson.
Fiol, C. M., & Lyles, M. A. (1985). Organizational learning. Academy of Management Review, 10, 803–813.
Fiske, A. P. (1992). Four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99, 689–723.
Fiske, S. T., & Taylor, S. (1984). Social cognition (1st ed.). New York, NY: McGraw-Hill.
Giessner, S., & van Quaquebeke, N. (2010). Using a relational models perspective to understand normatively appropriate conduct in ethical leadership. Journal of Business Ethics, 95, 43–55.
Gilovich, T. D., & Griffin, D. W. (2010). Judgment and decision making. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (pp. 542–588). Hoboken, NJ: Wiley & Sons.
Gino, F., & Galinsky, A. D. (2012). Vicarious dishonesty: When psychological closeness creates distance from one's moral compass. Organizational Behavior and Human Decision Processes, 119, 15–26.
Gino, F., Gu, J., & Zhong, C. B. (2009). Contagion or restitution? When bad apples can motivate ethical behavior. Journal of Experimental Social Psychology, 45, 1299–1302.
Gino, F., Moore, D. A., & Bazerman, M. H. (2009). See no evil: When we overlook other people's

320 unethical behavior In R. M. Kramer, A. E. Tenbrunsel, & M. H. Bazerman (Eds.), Social decision making: Social dilemmas, social values, and ethical judgments (pp. 241–263). New York, NY: Routledge. Glick, W. H. (1985). Conceptualizing and measuring organizational and psychological climate: Pitfalls in multilevel research. Academy of Management Review, 10, 601–616. Goffman, E. (1963). Behavior in public places. New York, NY: Free Press. Goldman, A., & Tabak, N. (2010). Perception of ethical climate and its relationship to nurses’ demographic characteristics and job satisfaction. Nursing Ethics, 17, 233–246. Greenberg, J. (1990). Employee theft as a reaction to underpayment inequity: The hidden cost of pay cuts. Journal of Applied Psychology, 75, 561–568. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834. Haidt, J. (2012). The righteous mind: Why good people are divided by science and religion. New York, NY: Pantheon Books. Hinrichs, K. T., Wang, L., Hinrichs, A. T., & Romero, E. J. (2012). Moral disengagement through displacement of responsibility: The role of leadership beliefs. Journal of Applied Social Psychology, 42(1), 62–80. Hunt, S. D., & Vitell, S. J. (1986). A general theory of marketing ethics. Journal of Macromarketing, 48, 30–42. Isaksen, S. G., Lauer, K. J., & Ekvall, G. (1999). Situational outlook questionnaire: A measure of the climate for creativity and change. Psychological Reports, 85(2), 665–674. Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16, 366–395. Jones, T. M., Felps, W., & Bigley, G. A. (2007). Ethical theory and stakeholder-related decisions: The role of stakeholder culture. Academy of Management Review, 32(1), 137–155. Kamp, J., & Brooks, P. (1991). Perceived organizational climate and employee counterproductivity. 
Journal of Business and Psychology, 5, 447–458.

Organizational Psychology Review 4(4) Kaptein, M. (2008). Developing and testing a measure for the ethical culture of organizations: The corporate ethical virtues model. Journal of Organizational Behavior, 29, 923–947. Kaptein, M. (2011). Understanding unethical behavior by unraveling ethical culture. Human Relations, 64(6), 843–869. Kelman, H. G. (1973). Violence without moral restraint: Reflections on the dehumanization of victims and victimizers. Journal of Social Issues, 29(4), 25–61. Kidder, R. M. (2009). Moral courage. New York, NY: Harper Collins. Kish-Gephart, J. J., Detert, J., Trevin˜o, L. K., Baker, V., & Martin, S. (in press). Situational moral disengagement: Can the effects of self-interest be mitigated? Journal of Business Ethics. Kish-Gephart, J. J., Detert, J. R., Trevin˜o, L. K., & Edmondson, A. C. (2009). Silenced by fear: The nature, sources, and consequences of fear at work. Research in Organizational Behavior, 29, 163–193. Kish-Gephart, J. J., Harrison, D. A., & Trevin˜o, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95, 1–31. Knobe, J. (2003). Intentional action in folk psychology: An experimental investigation. Philosophical Psychology, 16, 309–324. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–380). Chicago, IL: Rand McNally. Kouchaki, M. (2011). Vicarious moral licensing: The influence of others’ past moral actions on moral behavior. Journal of Personality and Social Psychology, 101(4), 702–715. Kuenzi, M., & Schminke, M. (2009). Assembling fragments into a lens: A review, critique, and proposed research agenda for the organizational work climate literature. Journal of Management, 35, 634–717. Kunda, G. (1992). Engineering culture: Control and commitment in a high-tech corporation. 
Philadelphia, PA: Temple University Press.

Martin et al. Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. Leary, M. R. (2010). Affiliation, acceptance, and belonging: The pursuit of interpersonal connection. In S. T. Fiske, D. T. Gilbert, & G. Linzey (Eds.), Handbook of social psychology (Vol. 2, pp. 864–897). Hoboken, NJ: Wiley & Sons. Leary, M. R., & Baumeister, R. F. (2000). The nature and function of self-esteem: Sociometer theory. Advances in Experimental Social Psychology, 32, 1–62. LePine, J. A., & van Dyne, L. (1998). Predicting voice behavior in work groups. Journal of Applied Psychology, 83, 853–868. Levin, D. Z., & Cross, R. (2004). The strength of weak ties you can trust: The mediating role of trust in effective knowledge transfer. Management Science, 50, 1477–1490. Liu, Y., Lam, L. W. R., & Loi, R. (2012). Ethical leadership and workplace deviance: the role of moral disengagement. Advances in Global Leadership, 7, 37–56. Loewenstein, G., Sah, S., & Cain, D. M. (2012). The unintended consequences of conflict of interest disclosure. The Journal of the American Medical Association, 307, 669–670. Lortie, D. C. (1975). Schoolteacher: A sociological study. Chicago, IL: University of Chicago Press. Lucas, R. (1987). Political-cultural analysis of organizations. Academy of Management Review, 12, 144–156. MacDonald, G., & Leary, M. R. (2005). Why does social exclusion hurt? The relationship between social and physical pain. Psychological Bulletin, 131, 202–223. Mael, F., & Ashforth, B. E. (1992). Alumni and their alma mater: A partial test of the reformulated model of organizational identification. Journal of Organizational Behavior, 13, 103–123. Margolis, J. D., & Molinsky, A. (2008). Navigating the bind of necessary evils: Psychological engagement and the production of interpersonally sensitive behavior. Academy of Management Journal, 51, 847–872. Marta, J. K. M., Singhapakdi, A., & Higgs-Kleyn, N. (2001). Corporate ethical values in South Africa.

321 Thunderbird International Business Review, 42, 755–772. Martin, K. D., & Cullen, J. B. (2006). Continuities and extensions of ethical climate theory: A meta-analytic review. Journal of Business Ethics, 69, 175–194. Mayer, D. M., Kuenzi, M., & Greenbaum, R. L. (2009). Making ethical climate a mainstream management topic: A review, critique and prescription for the empirical research on ethical climate. In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 181–213). Greenwich, CT: Information Age Publishing. Mayer, D., Kuenzi, M., Greenbaum, R., Bardes, M., & Salvador, R. (2009). How low does ethical leadership flow? Test of a trickle down model. Organizational Behavior and Human Decision Processes, 108, 1–13. Mayer, D. M., Nurmohamed, S., Trevin˜o, L. K., Shapiro, D. L., & Schminke, M. (2013). Encouraging employees to report unethical conduct internally: It takes a village. Organizational Behavior and Human Decision Processes, 121, 89–103. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734. McFerran, B., Aquino, K., & Duffy, M. (2010). How personality and moral identity relate to individuals’ ethical ideology. Business Ethics Quarterly, 20, 35–56. McKinley, J. C., Jr. (2009, November 6). Polygamist sect leader convicted of sexual assault. The New York Times. Retrieved from http://www.nytimes. com/2009/11/06/us/06polygamy.html Merton, R. K. (1957). Social theory and social structure. Glencoe, IL: Free Press. Mesmer-Magnus, J. R., & Viswesvaran, C. (2005). Whistleblowing in organizations: An examination of correlates of whistleblowing intentions, actions, and retaliation. Journal of Business Ethics, 62, 277–297. Messick, D. M., & Bazerman, M. H. (1996). Ethical leadership and the psychology of decision making. Sloan Management Review, 37, 9–22.

322 Meyer, J. P., & Allen, N. J. (1984). Testing the ‘‘sidebet theory’’ of organizational commitment: Some methodological considerations. Journal of Applied Psychology, 69, 372–378. Miceli, M. P., & Near, J. P. (1992). Blowing the whistle: The organizational and legal implications for companies and employees. New York, NY: Lexington Books. Milgram, S. (1969). Obedience to authority. New York, NY: Harper & Row. Miller, D. T. (1999). The norm of self-interest. American Psychologist, 54, 1–8. Monin, B., Sawyer, P. J., & Marquez, M. J. (2008). The rejection of moral rebels: Resenting those who do the right thing. Journal of Personality and Social Psychology, 95, 76–93. Moore, C. (2008). Moral disengagement in processes of organizational corruption. Journal of Business Ethics, 80, 129–139. Moore, C., Detert, J. R., Trevin˜o, L. K., Baker, V. L., & Mayer, D. M. (2012). Why employees do bad things: Moral disengagement and unethical organizational behavior. Personnel Psychology, 65, 1–48. Moore, C., & Gino, F. (2013). Ethically adrift: How others pull our moral compass from true north. In B. Staw & A. P. Brief (Eds.), Research in organizational behavior (Vol. 33, pp. 53–77). New York, NY: Elsevier. Moore, D. A., & Loewenstein, G. (2004). Selfinterest, automaticity, and the psychology of conflict of interest. Social Justice Research, 17, 189–202. Mowday, R. T., Porter, L. W., & Steers, R. M. (1982). Employee–organization linkages: The psychology of commitment, absenteeism, and turnover (Vol. 153). New York, NY: Academic Press. Mulki, J. P., Jaramillo, F., & Locander, W. B. (2006). Emotional exhaustion and organizational deviance: Can the right job and a leader’s style make a difference? Journal of Business Research, 59, 1222–1230. O’Fallon, M. J., & Butterfield, K. D. (2005). A review of the empirical ethical decision making literature: 1996–2003. Journal of Business Ethics, 59, 375–413.

Organizational Psychology Review 4(4) O’Reilly, C. A., & Chatman, J. A. (1996). Culture as social control: Corporations, cults, and commitment. Research in Organizational Behavior, 18, 157–200. Paharia, N., Kassam, K. S., Greene, J. D., & Bazerman, M. H. (2009). Dirty work, clean hands: The moral psychology of indirect agency. Organizational Behavior and Human Decision Processes, 109, 134–141. Paharia, N., Vohs, K. D., & Deshpande, R. (2013). Sweatshop labor is wrong unless the shoes are cute: Cognition can both help and hurt moral motivated reasoning. Organizational Behavior and Human Decision Processes, 121, 81–88. Paolillo, J. G. P., & Vitell, S. J. (2002). An empirical investigation of the influence of selected personal, organizational, and moral intensity factors on ethical decision making. Journal of Business Ethics, 35, 65–74. Peterson, D. (2002). Computer ethics: The influence of guidelines and universal moral beliefs. Information Technology & People, 15, 346–361. Pizarro, D. A., Uhlmann, E., & Bloom, P. (2003). Causal deviance and the attribution of moral responsibility. Journal of Experimental Social Psychology, 39, 653–660. Popper, M., & Lipshitz, R. (1998). Organizational learning mechanisms: A structural and cultural approach to organizational learning. The Journal of Applied Behavioral Science, 34, 161–179. Reed, A. I., & Aquino, K. F. (2003). Moral identity and the expanding circle of moral regard toward out-groups. Journal of Personality and Social Psychology, 84, 1270–1286. Rest, J. R. (1986). Moral development: Advances in research and theory. New York, NY: Praeger. Reynolds, S. (2008). Moral attentiveness: Who pays attention to the moral aspects of life? Journal of Applied Psychology, 93, 1027–1041. Rosenblatt, V. (2012). Hierarchies, power inequalities, and organizational corruption. Journal of Business Ethics, 111, 237–251. Ross, S. A. (1973). The economic theory of agency: The principal’s problem. The American Economic Review, 63, 134–139. 
Ross, W. T., & Robertson, D. C. (2003). A typology of situational factors: Impact on salesperson

Martin et al. decision-making about ethical issues. Journal of Business Ethics, 46, 213–234. Rousseau, D. M. (1990). Assessing organizational culture: The case for multiple methods. In B. Schneider (Ed.), Organizational climate and culture (pp. 153–192). San Francisco, CA: Jossey-Bass. Ruppel, C. P., & Harrington, S. J. (2000). The relationship of communication, ethical work climate, and trust to commitment and innovation. Journal of Business Ethics, 25, 313–328. Sackmann, S. A. (1992). Culture and subcultures: An analysis of organizational knowledge. Administrative Science Quarterly, 37, 140–161. Salancik, G. R., & Pfeffer, J. (1978). A social information processing approach to job attitudes and task design. Administrative Science Quarterly, 23, 224–253. Schaubroeck, J. M., Hannah, S. T., Avolio, B. J., Kozlowski, S. W. J., & Lord, R. G. (2012). Embedding ethical leadership within and across organization levels. Academy of Management Journal, 55, 1053–1078. Schein, E. H. (1996). Culture: The missing concept in organization studies. Administrative Science Quarterly, 41, 229–240. Schein, E. H. (2004). Learning how and when to lie: A neglected aspect of organizational and occupational socialization. Human Relations, 57, 260–273. Schmeichel, B. J., & Vohs, K. (2009). Selfaffirmation and self-control: Affirming core values counteracts ego depletion. Journal of Personality and Social Psychology, 96, 770. Schneider, B. (1975). Organizational climates: An essay. Personnel Psychology, 28, 447–479. Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64, 361–388. Schneider, B., & Reichers, A. E. (1983). On the etiology of climates. Personnel Psychology, 36, 19–39. Schutz, A. (1964). The stranger: An essay in social psychology. In A. Brodersen (Ed.), Collected papers II: Studies in social theory (pp. 93–105). The Hague, the Netherlands: Nijhoff. Schweitzer, M. E., Ordon˜ez, L., & Douma, B. (2004). 
Goal setting as a motivator of unethical

323 behavior. The Academy of Management Journal, 47, 422–432. Schwepker, C. H., Jr. (2001). Ethical climate’s relationship to job satisfaction, organizational commitment, and turnover intention in the salesforce. Journal of Business Research, 54, 39–52. Sen, A. K. (1977). Rational fools: A critique of the behavioral foundations of economic theory. Philosophy & Public Affairs, 6, 317–344. Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honest requires time (and lack of justifications). Psychological Science, 23, 1264–1270. Shepherd, D. A., Patzelt, H., & Baron, R. A. (2013). ‘‘I care about nature, but . . . ’’: Disengaging values in assessing opportunities that cause harm. Academy of Management Journal, 56, 1251–1273. Sherman, D. K., & Cohen, G. L. (2006). The psychology of self-defense: Self-affirmation theory. Advances in Experimental Social Psychology, 38, 183–242. Simha, A., & Cullen, J. B. (2012). Ethical climates and their effects on organizational outcomes: Implications from the past and prophecies for the future. Academy of Management Perspectives, 26, 20–34. Singhapakdi, A. (1999). Perceived importance of ethics and ethical decisions in marketing. Journal of Business Research, 45, 89–99. Singhapakdi, A., Vitell, S. J., & Franke, G. R. (1999). Antecedents, consequences, and mediating effects of perceived moral intensity and personal moral philosophies. Journal of the Academy of Marketing Science, 27, 19–36. Smircich, L. (1983). Concepts of culture and organizational analysis. Administrative Science Quarterly, 28, 339–358. Steele, C. M. (1988). The psychology of selfaffirmation: Sustaining the integrity of the self. Advances in Experimental Social Psychology, 21, 261–302. Stevens, G. W., Deuling, J. K., & Armenakis, A. A. (2012). Successful psychopaths: Are they unethical decision-makers and why? Journal of Business Ethics, 105, 139–149. Sumanth, J. J., Mayer, D. M., & Kay, V. S. (2011). Why good guys finish last: The role of justification

324 motives, cognition, and emotion in predicting retaliation against whistleblowers. Organizational Psychology Review, 1, 165–184. Sutherland, E. H. (1983). White collar crime: The uncut version. New Haven, CT: Yale University Press. Sweeney, B., Arnold, D., & Pierce, B. (2010). The impact of perceived ethical culture of the firm and demographic variables on auditors’ ethical evaluation and intention to act decisions. Journal of Business Ethics, 93, 531–551. Swidler, A. (1986). Culture in action: Symbols and strategies. American Sociological Review, 51, 273–286. Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22, 664–670. Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in unethical behavior. Social Justice Research, 17, 223–236. Tenbrunsel, A. E., & Smith-Crowe, K. (2008). Ethical decision making: Where we’ve been and where we’re going. Academy of Management Annals, 2, 545–607. Tenbrunsel, A. E., Smith-Crowe, K., & Umphress, E. E. (2003). Building houses on rocks: The role of the ethical infrastructure in organizations. Social Justice Research, 16, 285–307. Tetlock, P. E. (1985). Accountability: The neglected social context of judgment and choice. Research in Organizational Behavior, 7, 297–332. Trevin˜o, L. K. (1986). Ethical decision making in organizations: A person–situation interactionist model. Academy of Management Review, 11, 601–617. Trevin˜o, L. K. (1990). A cultural perspective on changing and developing organizational ethics. Research in Organizational Change and Development, 4, 195–230. Trevin˜o, L. K. (1992). Moral reasoning and business ethics: Implications for research, education, and management. Journal of Business Ethics, 11, 445–459. Trevin˜o, L. K., & Brown, M. E. (2004). Managing to be ethical: Debunking five business ethics myths. Academy of Management Executive, 18, 69–81. Trevin˜o, L. K., Butterfield, K. D., & McCabe, D. L. (1998). 
The ethical context in organizations:

Organizational Psychology Review 4(4) Influences on employee attitudes and behaviors. Business Ethics Quarterly, 8, 447–476. Trevin˜o, L. K., Den Nieuwenboer, N. A., & KishGephart, J. J. (in press). (Un)ethical behavior in organizations. Annual Review of Psychology. Trevin˜o, L. K., & Nelson, K. A. (2010). Managing business ethics. New York, NY: Wiley. Trevin˜o, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32, 951–990. Trevin˜o, L. K., & Youngblood, S. A. (1990). Bad apples in bad barrels: A causal analysis of ethical decision-making behavior. Journal of Applied Psychology, 75, 378–385. Trice, H. M., & Beyer, J. M. (1993). The cultures of work organizations. Englewood Cliffs, NJ: Prentice-Hall. Tsai, M. T., & Huang, C. C. (2008). The relationship among ethical climate types, facets of job satisfaction, and the three components of organizational commitment: A study of nurses in Taiwan. Journal of Business Ethics, 80, 565–581. Tsang, J. (2002). Moral rationalization and the integration of situational factors and psychological processes in immoral behavior. Review of General Psychology, 6, 25–50. Turner, A. (2013, March 12). The Branch Davidian siege, 20 years later. San Antonio Express-News. Retrieved from news/article/The-Branch-Davidian-siege-20years-later-4346599.php#src¼fb Umphress, E. E., & Bingham, J. B. (2011). When employees do bad things for good reasons: Examining unethical pro-organizational behaviors. Organization Science, 22, 621–640. Umphress, E. E., Bingham, J. B., & Mitchell, M. S. (2010). Unethical behavior in the name of the company: The moderating effect of organizational identification and positive reciprocity beliefs on unethical pro-organizational behavior. Journal of Applied Psychology, 95, 769–780. Van Dyne, L., Vandewalle, D., Kostova, T., Latham, M. E., & Cummings, L. L. (2000). 
Collectivism, propensity to trust and self-esteem as predictors of organizational citizenship in a non-work

Martin et al. setting. Journal of Organizational Behavior, 21, 3–23. Vardi, Y. (2001). The effects of organizational and ethical climates on misconduct at work. Journal of Business Ethics, 29, 325–337. Victor, B., & Cullen, J. B. (1987). A theory and measure of ethical climate in organizations. Research in Corporate Social Performance and Policy, 9, 51–71. Victor, B., & Cullen, J. B. (1988). The organizational bases of ethical work climates. Administrative Science Quarterly, 33, 101–125. Vitell, S. J., Bakir, A., Paolillo, J. G. P., Hidalgo, E. R., Al-Khatib, J., & Rawwas, M. Y. A. (2003). Ethical judgments and intentions: A multinational study of marketing professionals. Business Ethics: A European Review, 12, 151–171. Wang, Y. D., & Hsieh, H. H. (2012). Toward a better understanding of the link between ethical climate and job satisfaction: A multilevel analysis. Journal of Business Ethics, 105, 535–545. Williams, K. D. (2001). Ostracism: The power of silence. New York, NY: Guilford. Wiltermuth, S. S. (2011). Cheating more when the spoils are split. Organizational Behavior and Human Decision Processes, 115, 157–168. Wimbush, J. C., & Shepard, J. M. (1994). Toward an understanding of ethical climate: Its relationship to ethical behavior and supervisory influence. Journal of Business Ethics, 13, 637–647. Wimbush, J. C., Shepard, J. M., & Markham, S. E. (1997a). An empirical examination of the multidimensionality of ethical climate in organizations. Journal of Business Ethics, 16, 67–77. Wimbush, J. C., Shepard, J. M., & Markham, S. E. (1997b). An empirical examination of the relationship between ethical climate and ethical behavior from multiple levels of analysis. Journal of Business Ethics, 16, 1705–1716. Zhong, C., Liljenquist, K. A., & Cain, D. M. (2009). Moral self-regulation: Licensing and compensation.

325 In D. C. Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 75–89). Greenwich, CT: Information Age Publishing. Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90, 616–628.

Author biographies

Sean R. Martin is the Johnson Leadership Programs Fellow at the S. C. Johnson Graduate School of Management at Cornell University. He received his PhD in Management from Cornell University. His research focuses on the interplay between leadership, values, and organizational culture.

Jennifer J. Kish-Gephart is an assistant professor of management at the Walton College of Business at the University of Arkansas. She received her PhD in Organizational Behavior from The Pennsylvania State University. She is broadly interested in social issues in management, with a particular focus on behavioral ethics, diversity, and inequality.

James R. Detert is an associate professor of management and organizations at the S. C. Johnson Graduate School of Management at Cornell University. His research focuses on voice, leadership processes and behaviors, and ethical decision making. He holds an MBA from the University of Minnesota, a Master's degree in Sociology from Harvard University, and a PhD in Organizational Behavior from Harvard University.
