Delusion and Self-Deception: Mapping the Terrain


Tim Bayne University of Oxford and St. Catherine's College Manor Road Oxford OX1 3UJ United Kingdom [email protected] This paper is forthcoming in T. Bayne and J. Fernandez (eds) Delusion and Self-Deception: Motivational and Affective Influences on Belief-Formation (Psychology Press). Please consult the published version for purposes of quotation.

The papers in this volume are drawn from a workshop on delusion and self-deception, held at Macquarie University in November of 2004. Our aim was to bring together theorists working on delusions and self-deception with an eye towards identifying and fostering connections—at both empirical and conceptual levels—between these domains. As the contributions to this volume testify, there are multiple points of contact between delusion and self-deception. This introduction charts the conceptual space in which these points of contact can be located and introduces the reader to some of the general issues that frame the discussion of subsequent chapters.

1. Identifying the phenomena

What are accounts of delusion and self-deception accounts of? It would be premature to insist on strict definitions of these phenomena prior to theories of them: here, as elsewhere in science, definitions are subsequent to theory-development rather than prior to it (Murphy 2006). Indeed, as notions that have their home in folk psychology, it is not clear that we should expect to be able to give strict definitions of either delusion or self-deception. Rather than begin with definitions we begin with exemplars—ideal cases that serve as paradigms of the entities in question. Consider the following vignettes:


Harriet says that she is in contact with aliens. She complains that they control both her actions and her thoughts, and that her mind is no longer her own.

James claims that the government is out to get him. He refuses to leave his house for fear of being followed by secret agents. When asked to justify his belief that he is being persecuted, James refers to the fact that he now receives fewer letters than he once did as proof that the government is stealing his mail.

Amir says that his wife, with whom he lives, has been replaced by an impersonating robot. This robot “looks and acts just like my wife but it isn’t my wife.” When asked how he knows that the person he is looking at is not his wife Amir says that “she just looks different in some way”, but he cannot say anything more than this.

Clinicians would describe each of these individuals as delusional. Harriet has delusions of alien control and thought insertion; James has delusions of persecution; and Amir has the Capgras delusion—the delusion that someone close to you, typically a family member, has been replaced by an impostor. What makes all these cases delusions? Why group them together as instances of a single phenomenon? We return to this question shortly.

Here are some exemplars of self-deception:

Martha has good evidence that her son has been killed in the war. His apparently lifeless body was sighted by a fellow-soldier 3 years ago, and Martha has not heard from her son despite the fact that the war ended a year ago. Yet Martha continues to insist that her son is still alive.

Last year Justin left his wife of 40 years for his 25-year-old secretary. He says that his marriage had been on the rocks for decades, and that his wife will actually be happier without him. Those who know Justin and his wife well say that their marriage had turned sour only recently, and that his wife is devastated by the fact that she has been left for another, and younger, woman.

Sonia has cancer, and has been told by doctors that she has 1 month to live. She avoids talking about the diagnosis, and continues to live as though her illness is merely temporary. She is saving money for a trip to see her son in one year, and refuses to put her affairs in order despite the requests of her friends and family to do so.

In light of these exemplars, what should we say about how delusion and self-deception are related? Most fundamentally, delusion and self-deception appear to be examples of pathological belief—of belief that has gone wrong in some way. In the case of self-deception it is fairly clear—at least in general terms—what has gone wrong: the subject’s motivational and affective states have led them to flout certain norms of belief formation. In the case of delusion, it is rather less clear why the subject has ended up with pathological beliefs. Although many of the classical analyses of delusion in the psycho-analytical literature were heavily motivational in nature (Enoch & Trethowan 1991), the focus of much recent theorizing about delusions has been on ‘cold’ rather than ‘hot’ factors. The guiding thought behind this volume is that this focus might have led us to miss important insights into delusion, and that there is much to learn about delusional belief by examining the role of affective and motivational processes in belief-formation.

In the next section we examine in more detail just what it might mean to say that delusion and self-deception are pathologies of belief. In section three we turn to the question of whether delusion and self-deception are really forms of belief, or whether—as some have claimed—these states are only belief-like. The aim of section four is to clarify the notion of hot cognition, and in sections five and six respectively we provide overviews of the ways in which hot cognition might enter into the explanation of self-deception and delusion. We conclude, in section seven, with an overview of the chapters that constitute this volume.

2. Delusion and self-deception as pathologies of belief-formation

The standard account of what it might be for delusion and self-deception to be pathologies of belief-formation appeals to the notion of epistemic rationality. According to this view, delusional and self-deceptive belief is pathological in that the subject in question flouts the epistemic norm of believing only what one’s evidence licenses. What little evidence the subject has is outweighed by evidence against the proposition in question—evidence that is in the subject’s possession. This epistemic approach to matters is built into the DSM characterization of delusion:

Delusions: A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary (DSM-IV-TR, 2000: 821).

The details of the DSM characterization are widely contested—must a delusion be about external reality?—but there would be broad agreement with the claim that what makes a belief delusional is the fact that it is held in the face of obvious proof or evidence to the contrary.

The epistemic approach points to a deep connection between delusion and self-deception, for self-deceptive belief also involves a failure to believe in accordance with one’s evidence. Of course, we might want to reserve the term ‘delusion’ for gross failures of epistemic rationality. We might want to say that self-deception becomes delusional only when the belief in question is held in the face “of incontrovertible and obvious proof or evidence to the contrary.” We might want to employ a notion of self-deception—‘everyday’ or ‘garden-variety’ self-deception—where the subject’s failure of epistemic rationality falls short of being delusional. On this conception of things, we would have a partial overlap between the categories of delusion and self-deception: certain instances of self-deception would qualify as delusional, but there would be instances of delusion that are not also instances of self-deception and instances of self-deception that are not also instances of delusion. (Just where to locate Martha, Justin and Sonia in this framework might be a matter of some debate.)

Although there is much to recommend this epistemic analysis of the sense in which delusion and self-deception are pathologies of belief, there are certain problems—some more serious than others—with it. A first point to note is that we should allow that the deluded and the self-deceived might have some evidence for their belief. This is certainly true of those with mundane delusions, such as James, who has persecutory delusions. Governments have been known to persecute their citizens, and the hypothesis that the government is stealing his mail provides James with an explanation, even if it is not the best explanation, of why he receives fewer letters than he once did. Indeed, even those with bizarre delusions might have some evidence for their delusional beliefs. Let us take a moment to explain this claim.

Inspired by Maher’s work (1974, 1988), a number of contemporary theorists have suggested that (many) delusions might be grounded in unusual experiences. Following Campbell (2001), we will call this the empiricist approach to delusion (see also Davies et al. 2001). Arguably, the ‘poster child’ for the empiricist approach is the Ellis and Young account of the Capgras delusion (Ellis and Young 1990; Ellis et al. 1997; see also Stone and Young 1997). Ellis and Young’s model builds on Bauer’s two-route model of face processing, according to which visual recognition of faces involves a covert route that responds to the affective significance of familiar faces and an overt route that involves semantic information (Bauer 1984, 1986). Ellis and Young proposed that the Capgras delusion arises when the covert route is damaged but the overt route remains intact: the patient recognizes the target individual in some sense, but they take them to be an impostor because they lack the expected positive affective response. The perceived person ‘looks like’ but does not ‘feel like’ the family member in question, hence the adoption of the Capgras belief that they are an impostor of some kind.
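The Ellis and Young proposal can be put schematically. The toy sketch below is purely illustrative: the boolean routes, flags and output strings are assumptions of the sketch, not a reconstruction of any published implementation. It simply shows how an intact overt route paired with a damaged covert route yields the ‘looks like but does not feel like’ profile just described:

```python
# Toy rendering of Bauer's two-route model of face processing, as used by
# Ellis and Young to explain the Capgras delusion. Illustrative only: the
# boolean routes and output strings are assumptions of this sketch.

from dataclasses import dataclass

@dataclass
class FaceProcessor:
    overt_intact: bool = True    # semantic route: identifies whose face this is
    covert_intact: bool = True   # affective route: autonomic familiarity response

    def process(self, familiar_face: bool) -> str:
        recognized = self.overt_intact and familiar_face
        felt_familiar = self.covert_intact and familiar_face
        if recognized and felt_familiar:
            return "familiar person"
        if recognized and not felt_familiar:
            # Identified but affectively flat: the experiential basis that,
            # on the Ellis and Young account, prompts the impostor belief.
            return "looks like the person, but does not feel like them"
        return "unfamiliar person"

normal = FaceProcessor()
capgras = FaceProcessor(covert_intact=False)  # covert (affective) route damaged
print(normal.process(familiar_face=True))    # familiar person
print(capgras.process(familiar_face=True))   # looks like the person, but does not feel like them
```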


Another example of the empiricist approach to delusion can be found in the model of alien control and thought insertion developed by Chris Frith and colleagues (Frith 1987; Frith 1992; Frith et al. 2000a, 2000b). The details of Frith’s model have changed over the years, but the basic idea is that patients with delusions of alien control or thought insertion suffer from an impairment to their action-monitoring systems, leading to disturbances in the sense of agency. According to this account, the patient develops the delusion that their thoughts or actions are under the control of alien forces in order to make sense of their experiences of loss of control.

In short, even the delusional belief that one’s wife has been replaced by an impostor or that one’s movements are under the control of alien forces might be grounded in evidence—experiential evidence—of a certain kind. And if that is right, then it is no longer obvious that these delusions are held “despite what constitutes incontrovertible and obvious proof or evidence to the contrary.” Of course, one might argue that even if the empiricist approach brings delusions within the realm of the comprehensible, there is nonetheless a robust sense in which the model paints the delusional patient as epistemically negligent. It is one thing to have an experience of unfamiliarity when looking at one’s wife or to lack the normal experience of agentive control; it is quite another to believe that one’s wife has been replaced by an impostor or that one’s actions are controlled by aliens from another planet.

Much more can be said for and against the epistemic conception of delusion (and self-deception), but instead of going down that path we want to introduce another way in which to conceptualise pathologies of belief-formation. Rather than think of doxastic pathology in terms of departures from the norms of epistemic rationality, one can think of it in terms of departures from the operating norms of belief-formation, where an operating norm is a norm that specifies how a psychological system ought to function. Just as agnosias involve departures from the operating norms of visual perception, so too delusion and self-deception might involve departures from the operating norms of belief-formation.

This approach is often overlooked because it is implicitly assumed that the operating norms of belief-formation and the epistemic norms of belief-formation must converge: that is, that an account of epistemic norms will double as an account of the operating norms of belief-formation. On this view, a human being who fails to believe only so far as their evidence allows will also manifest abnormalities of belief-formation. This picture should be resisted. Leaving aside the effects of motivation on belief-formation, there is ample evidence for the view that the operating norms of belief-formation involve significant departures from those of epistemic rationality (Gilovich 1991; Stanovich 1999). It is perfectly normal for human beings to commit the conjunction fallacy, despite the fact that it violates the norms of epistemic rationality: no conjunction can be more probable than either of its conjuncts, yet people routinely judge a representative conjunction (‘Linda is a bank teller and is active in the feminist movement’) to be more probable than one of its conjuncts (‘Linda is a bank teller’). Consider also a certain kind of Humean skeptic (not, perhaps, Hume himself), who denies that we have adequate evidence for belief in a world of ordinary objects, a real causal relation, or the reasonableness of induction. Although this skeptic denies that these beliefs are epistemically justified, she can allow that such beliefs come very naturally to us, and that someone with a normally functioning belief-formation system will believe that objects continue to exist unperceived, that there is a real causal relation, and that induction is reasonable. It is only those with damaged belief-formation systems (or those corrupted by philosophy!) who refrain from forming such beliefs. The point generalizes. Belief in a world of objective moral fact, supernatural entities, and personal immortality seems to be a near-universal feature of the human doxastic condition, but the epistemic status of these beliefs is very much up for debate.

Do delusion and self-deception involve departures from the operating norms of belief-formation? Self-deception—at least, everyday self-deception—need involve no departure from the operating norms of belief-formation. There is overwhelming evidence that normal human beings have a systematically distorted self-conception (Alicke et al. 2001; Taylor 1989; Taylor & Brown 1988). Drivers tend to believe that their driving abilities are above average (McKenna et al. 1991); teachers tend to believe that their teaching abilities are above average (Cross 1977); and most of us believe that we are less prone to self-serving biases than others are (Pronin et al. 2004). Having an overly positive self-image seems to be part of the functional profile of the human doxastic system; indeed, one might even argue that having an accurate self-conception is an indication of doxastic malfunction (Sackeim 1978, 1983, 1988; Taylor 1989; Taylor & Brown 1988). This is not to say that motivationally-driven belief-formation cannot count as pathological—this could happen if, for example, motivational states had an influence on beliefs not within their normal reach, or if they had an abnormally strong degree of influence over belief-fixation—but there is no reason to regard motivationally-driven belief-formation as such as pathological.

What about delusions? On the face of things, it seems obvious that delusions involve departures—typically quite radical departures—from the operating norms of human belief-formation. Delusions stand out as exotic specimens in the garden of belief, as examples of what happens precisely when the mechanisms of belief-formation break down. In support of this point, it is of some note that the DSM characterization of delusion includes a special exemption for religious beliefs. This exemption appears to be ad hoc from the perspective of the epistemic account of delusions, but it is perfectly appropriate in the context of the operating norms account, for—certain commentators to the contrary—there is no reason to suppose that religious belief as such is indicative of doxastic malfunction.[1] Delusions, by contrast, seem to be symptomatic of doxastic malfunction.

[1] This isn’t to say that the clause exempting culturally appropriate beliefs cannot be justified by appeal to a purely epistemic conception of delusion. One might argue that those who depart from community-sanctioned belief will typically have ignored evidence that is available to them.

We noted above that the conjunction of an epistemic conception of delusion and an empiricist-based account of delusions threatens to ‘take the delusion out of delusion’, for the upshot of the empiricist accounts seems to be that the patient’s belief is not held despite what constitutes incontrovertible and obvious proof or evidence to the contrary. Does the operating norms conception of delusion also threaten to ‘take the delusion out of delusion’ when combined with empiricist theories of delusion? There is reason to think that pure empiricist (so-called ‘one-factor’) accounts of delusion do indeed have this consequence. Although experience-based accounts conceive of delusions as grounded in psychological malfunction, they see that malfunction as restricted to experiential mechanisms, broadly construed; on their view, delusion involves no damage to the mechanisms of belief-formation as such.

However, the prospects for pure empiricist approaches to delusion look bleak. For one thing, there are many delusions for which any kind of empiricist account is hard to provide. We have in mind here not only the florid and polythematic delusions often seen in schizophrenia, but also such relatively monothematic delusions as delusional jealousy and delusions of persecution. Furthermore, there is good reason to think that even where a delusion is plausibly grounded in an unusual experience, this experience will not provide a full account of that delusion. We can see this by noting that there are individuals who have the unusual experience in question—for example, they lack the normal affective response to the faces of family members—but fail to develop the corresponding delusion (see e.g. Coltheart 2007; Davies et al. 2001; but see Hohwy and Rosenberg 2005). In light of this, many theorists have argued that we need to invoke a non-experiential factor—a so-called “second factor”—to explain why it is that the unusual experience prompts the patient to develop (and retain) their delusional belief. Proposed candidates for this second factor include (but are not limited to): the tendency to privilege observational data over background belief (Stone & Young 1997); the possession of a particular attributional style (Kaney & Bentall 1989); a disposition to jump to conclusions (Garety 1991); and a preference for personal rather than sub-personal explanations (Langdon & Coltheart 2000). In the present context, the critical question is not so much what this second factor might look like, but whether it involves a departure from normal belief-formation. Two-factor theorists who regard the second factor as a deficit of some kind will answer this question in the affirmative, but those who regard the second factor as a pre-morbid bias of some kind need not. However, even those theorists who regard the second factor as a bias rather than a deficit in belief-formation might regard the patient as having an abnormal belief-formation mechanism if the bias in question is significant enough. The upshot of the preceding is that whether or not two-factor accounts of delusions ‘take the delusion out of delusions’ depends on exactly how they are formulated. Some versions of the two-factor approach paint the delusional patient as having a specific abnormality in belief-formation, but other versions of the two-factor approach do not.
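One way to make the bias/deficit question vivid is to give the two-factor idea a toy probabilistic gloss. The sketch below is purely illustrative; the Bayesian framing and all of the numbers are assumptions of the sketch, not commitments of the theorists discussed above. The first factor supplies anomalous evidence that favours the delusional hypothesis; the second factor is modelled, for concreteness, as an inflated prior, one crude way of capturing a tendency to privilege observational data over background belief:

```python
# Toy Bayesian gloss on the two-factor idea. Illustrative assumptions only.

def posterior_impostor(prior_impostor: float, likelihood_ratio: float) -> float:
    """Posterior P(impostor | anomalous experience), via Bayes' rule in odds form."""
    prior_odds = prior_impostor / (1.0 - prior_impostor)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# First factor alone: the anomalous experience strongly favours "impostor"
# (say, 100:1), but the hypothesis is wildly implausible a priori, so a
# normal believer resists it.
print(posterior_impostor(prior_impostor=1e-6, likelihood_ratio=100))  # ~0.0001

# First factor plus a second factor that inflates the effective prior:
# the same experience now drives the posterior towards acceptance.
print(posterior_impostor(prior_impostor=0.05, likelihood_ratio=100))  # ~0.84
```

On these toy numbers, the anomalous experience alone leaves the impostor hypothesis negligible; only when the weighting of background belief is also distorted does the hypothesis become credible. Whether such a distortion is best described as a deficit or as an extreme bias is, of course, precisely the question at issue.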

3. Delusion, self-deception, and the nature of belief

We turn now from delusion and self-deception as pathologies of belief to the more fundamental question of whether these states even qualify as beliefs in the first place. As has often been noted, the mental states seen in self-deception and delusion depart in striking ways from paradigmatic beliefs (for delusion see Stone & Young 1997; for self-deception see Gendler 2007). These departures are manifest in both the practical and theoretical realms: the Capgras patient may fail to inquire about the fate of the missing loved one, and the person who self-deceptively insists that she is healthy (despite evidence to the contrary) may find excuses for not engaging in physically demanding work; those who are self-deceived and those who are delusional might be unaware of (or ignore) inconsistencies between the target thought and other claims to which they are committed. Of course, we are all prone to lapses of memory and intentional integration, but the self-deceived and the delusional often exhibit such tendencies in the extreme. At the limit, the self-deceived individual might appear to both believe (say) that she has cancer and also believe that there is nothing wrong with her, thus generating what Mele has called the “static paradox of self-deception.”

Why a paradox? Many accounts of belief take it to be constitutive of belief that one employs the content of what one believes in practical and theoretical reasoning. On such models, failing to employ the content of the apparent belief in thought and action is not just a violation of rationality, but suggests that the agent does not really believe the content in question. A model of S according to which S believes p and also believes not-p at one and the same time will fail to provide explanatory or predictive traction on S’s behaviour: the explanatory and predictive force of ascribing the belief that p to S is undercut by the ascription of the belief that not-p to them, and vice-versa.

There are three main responses to the ‘problem of belief’ in the literature. A radical response—most prominent in the literature on self-deception—is to divide ‘the’ agent into two, and account for the intentional incoherence in the agent’s behaviour by holding that different behaviours are really under the control of different agents or selves (Lockie 2003). In the extreme case, there is one agent who believes p and another who believes not-p. This solution is implicit in so-called division analyses of self-deception, according to which there are different ‘parts’ of the mind, and the intentional states of one part might be at odds with those of another. Although often developed in the context of psychoanalytic ideas, the division approach can also be developed in straightforward functionalist terms (Davidson 1985).

A second response to the problem takes issue with the assumption that it is not possible for an agent to believe p and believe not-p at one and the same time. According to some approaches to belief, it is possible for an agent to have inconsistent beliefs at one and the same time, as long as the beliefs in question have different triggering conditions (Lewis 1986; Schwitzgebel 2002). The dispositions distinctive of believing p will be activated by one triggering condition, whilst those distinctive of believing not-p will be activated by other triggering conditions. (A crude schematic rendering of this idea appears at the end of this section.)

A third response to ‘the problem of belief’ is to hold that the agent takes some attitude other than belief to the content of their delusion. Within the delusions literature, Sass (1994) has suggested that delusional patients sometimes engage in a kind of double bookkeeping, in which they confine their delusional fantasies to a world of make-believe. In a similar vein, Currie (2000) has argued that delusional patients mistake their imaginings for believings: the Capgras patient might believe that he believes that his wife has been replaced by an impostor, but he doesn’t believe this, he merely imagines it. Patten (2003) defends a similar account of self-deception: the self-deceived subject believes that she believes that she does not have cancer, but she is mistaken about her beliefs and does not have this belief; in fact, she believes that she has cancer. An important variant on this theme is the thought that we must invoke sui generis attitudes in order to account for the way in which delusional subjects hold their delusions (Stephens & Graham 2005). Egan (this volume) explores this proposal in some detail.

Whatever the optimal characterization of the mental states seen in delusion and self-deception, there is good reason to think that we must tackle both phenomena together rather than adopt a divide-and-conquer approach, as has so often been the case.
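The second response above can be given the crude functional rendering promised. In the sketch below the contexts and manifested behaviours are invented for illustration; nothing here is drawn from Lewis or Schwitzgebel themselves:

```python
# Crude functional sketch of belief-like dispositions keyed to distinct
# triggering conditions. The contexts and behaviours are invented examples.

DISPOSITIONS = {
    # triggering condition          -> manifested attitude or behaviour
    "asked directly about health":  "sincerely asserts: 'I do not have cancer'",
    "hospital appointment is due":  "finds a reason to miss it",       # as if: p
    "making plans for next year":   "books a trip to visit her son",   # as if: not-p
}

def manifest(context: str) -> str:
    """Return the disposition activated by a given context, if any."""
    return DISPOSITIONS.get(context, "no relevant disposition triggered")

for context in DISPOSITIONS:
    print(f"{context}: {manifest(context)}")
# No single context ever activates both the p-dispositions and the
# not-p-dispositions, so no outright contradiction is manifested.
```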


4. Affect, motivation and belief-formation

As we indicated above, a central theme in the interface between theories of self-deception and theories of delusion concerns the roles that affect and motivation play in these two phenomena. Sections 5 and 6 focus on the roles that affect and motivation might play in self-deceptive and delusional belief-formation respectively; first, however, we try to clarify these notions themselves.

Affect and motivation are typically contrasted with cognition and perception; the former constitute ‘hot cognition’, the latter constitute ‘cold cognition’. Although intuitively compelling, the contrast between hot and cold cognition is difficult to spell out with any precision. One might attempt to unpack the contrast by appealing to phenomenological considerations. Hot states, one might say, are phenomenologically vivid and intense, whereas cold states are not: they either lack any phenomenal character at all (as many take thoughts to do), or they have a ‘subdued’ phenomenology. But this proposal is problematic: experiences of colour and flavour are phenomenologically vivid but are not, intuitively, ‘hot’, and unconscious affective and motivational states are ‘hot’ without having any phenomenology at all. Phenomenological considerations are likely to bear on the contrast between hot and cold cognition, but we cannot understand the contrast by appealing to phenomenology alone.

Another way to approach the distinction between hot and cold cognition is in functional terms. Some states, such as perceptions and beliefs, are in the business of representing what the world is like. We might call these states ‘thetic’, on account of the fact that they are truth-directed. Other states, such as desires and intentions, are in the business of generating and structuring action—of making it the case that the world is a certain way. We might call these states ‘telic’, on account of the fact that they are goal-directed. Perhaps we can equate cold cognition with thetic representation and hot cognition with telic representation.

How might this proposal fit in with an intuitive conception of these categories? Dealing with motivational states is straightforward, for motivation belongs with desire on the telic side of the equation. One can think of motivational states as desires, or at least as providing the ‘impetus’ which enables the agent to act on desire. Within the cognitive and perceptual realms we can think of motivations as biasing functions—they specify the level of evidence that the agent requires before being willing to accept a certain proposition, make a certain cognitive move, or (in the case of perception) detect a certain property.

Affective states, however, are more problematic. On the one hand they appear to belong with motivation on the telic side of the telic/thetic equation, for they prompt the agent to action. At the same time, however, they are also thetic in that they represent what the world is like. As the slogan has it, we can think of ‘affect as information’ (Clore et al. 2001). Affective states represent the evaluative status of stimuli: this person can be trusted but that person cannot; this place is safe but that place is not; this food can be eaten but that food cannot. As Zajonc has noted (1980), affective states also represent the relation that the agent bears to the object of judgment. In the state of fear, one represents oneself as being threatened by the feared object. So there is something to be said for the idea that affective states are what Millikan (1996) terms ‘pushmi-pullyu representations’: they both (purport to) inform the subject about features of their environment and drive the subject to engage in certain responses to those environmental features.

How might hot cognition bear on belief-formation? Hot cognition has traditionally been regarded as an enemy of rationality, and it is not hard to see why. Crimes of passion provide only one example of many in which affective states derail the project of rational belief-formation. (Exactly how affective states derail rationality is unclear—perhaps the only account that can be given of this process is in neurobiological rather than information-processing terms.) But to say that affect can derail belief-formation is not to say that it always derails belief-formation, and it is now widely granted that affect contributes to belief-formation, and to cognition more generally, in a number of positive ways (Zajonc 1980; see also Adolphs and Spezio, this volume).

It is less clear how motivational states can make a positive contribution to belief-formation. A motivational state is, roughly, a desire that the world be a certain way. Such states can impact on belief-formation in various ways. Most directly, the desire that p might simply cause the subject to believe that p. More subtly, the desire that p might lead the subject to gather and evaluate evidence in a biased way. Either way, one might be hard-pressed to see how desire could make a positive contribution to doxastic affairs. But perhaps we should not be too hasty here. Judged against epistemic norms there is not much to recommend motivated reasoning, for epistemic justification is constitutively tied to avoiding falsehood and detecting truth. But there are other norms against which to judge belief-formation. For example, one might evaluate the mechanisms of belief-fixation in terms of how well they enhance the agent’s well-being, reproductive fitness, or some such property. Arguably, although one is generally better off believing that the world is how it is rather than how one wants it to be, there are some domains—perhaps quite a number of domains—in which false but motivated belief brings with it significant benefit at little cost. A life governed by indiscriminately motivated belief might be nasty, brutish and short, but suitably filtered motivational effects on belief might be expected to increase the agent’s well-being in any number of ways.
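The idea that motivation operates as a biasing function, setting the level of evidence the agent requires before accepting a proposition, can also be put schematically. In the toy sketch below the threshold values are arbitrary assumptions; the point is only that the same body of evidence can license acceptance of a desired proposition while failing to license acceptance of an unwanted one:

```python
# Toy rendering of motivation as a biasing function on belief-fixation.
# The thresholds are arbitrary illustrative assumptions.

def accepts(evidence_for_p: float, desires_p: bool) -> bool:
    """Accept p once the evidence for p crosses a motivation-set threshold."""
    # Desired propositions get a lowered bar; unwanted ones a raised bar.
    threshold = 0.4 if desires_p else 0.8
    return evidence_for_p >= threshold

same_evidence = 0.6
print(accepts(same_evidence, desires_p=True))   # True: accepted when wanted
print(accepts(same_evidence, desires_p=False))  # False: resisted when unwanted
```

Suitably constrained, such a biasing function need involve no outright fabrication of evidence; it merely shifts how much evidence is demanded, which is one way of cashing out the claim that motivated belief-formation can be normal rather than pathological.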

5. Affect and motivation in self-deception

What role do affect and motivation play in the formation of self-deceptive states? The classical model of self-deception holds that a person forms the self-deceptive belief that not-p because she finds the belief that p distressing and wants to avoid that distressing experience. Thus, she intentionally makes herself believe that not-p is the case, where this intention is grounded in the affect associated with believing that p. This combination of affect and motivation raises the question of how exactly the content of the relevant desire or intention should be construed. It appears as though the intention to remove the troublesome belief could not succeed unless the agent remained unaware of it, but it is not clear how the agent could remain unaware of an intention which was grounded in an affective response. This leads us to what Mele (this volume and elsewhere) has called the dynamic paradox of self-deception: subjects who are self-deceived in believing that not-p seem to have managed to hide the intention to believe that not-p from themselves.

As we noted above, the division model of self-deception replaces the idea of a single agent attempting to deceive itself with a model on which different parts or components of the agent have different agendas. Motivation still plays an important role here, for the division model retains the idea that the subject comes to believe that not-p because they want to form this belief. But this idea is not unproblematic. The subject’s desire to believe that not-p cannot occur in the same ‘mental compartment’ as the belief that p—for if it did we would be forced to accept that the desire in question makes a difference to what happens in other parts of the subject’s mind—but it is not immediately clear where else the subject’s desire that not-p be the case could be located.

A further question concerns the role of affect in the division model. A theorist might invoke affect to explain why the subject’s belief that she has cancer and her belief that she does not have cancer occur in different mental components; perhaps mental division is something that the subject instigates in order to deal with the distressing belief. But pursuing this path threatens to return us to the idea of self-deception as an intentional phenomenon, and to its associated paradoxes. Speaking of the Freudian version of the division model, Sartre writes: “it is not sufficient that [the censor] discerns the condemned drives; it must also apprehend them as to be repressed, which implies in it at the very least an awareness of its activity. In a word, how could the censor discern the impulses needing to be repressed without being conscious of discerning them?” (1969: 52-53).

Mele avoids these difficulties by rejecting the idea that the subject believes not-p because she intends to believe it. On his view, self-deceptive belief is merely belief that has been formed in a motivationally biased way. The subject who is self-deceived when she believes that not-p has formed her belief due to the influence of a desire. Typically, this is the desire that p not be the case. (In some ‘twisted’ cases, the operative desire is actually the desire that p be the case.) What is distinctive about the motivational model is that the motivational state that drives self-deception is directed at the world rather than at one’s own beliefs. Although this model allows that affect might play a role in self-deception, it does not itself give affect a role in explaining why the subject has the desire that p not be the case. It is essential to Mele’s model that the subject desires that a certain state of affairs not be the case, but accounting for the origination of that desire is a further question that Mele does not address.

Mele’s account of self-deception has many attractive features, but it also has certain costs. One cost concerns the thought that in many cases of self-deception the subject is aware of the truth ‘deep down’, as it were. The self-deceived subject who says that she does not have cancer is likely to avoid talking to doctors about her condition, refuse to discuss her symptoms, miss appointments at the hospital, and so on. This behaviour is not hard to explain on the classical model of self-deception—the subject avoids talking to doctors or considering the relevant symptoms because she believes that she has cancer—but it is harder to account for on Mele’s account of self-deception as merely motivationally biased belief-formation. One would expect a subject with the desire not to have cancer to take appropriate measures to avoid getting cancer (or to cure herself of it), rather than engage in the kinds of behaviour that the self-deceived tend to exhibit.

Two approaches to self-deception appear to have the resources to account for the fact that the subject seems to know the truth at some level: according to one, self-deception involves a failure of self-knowledge (Patten 2003); according to the other, self-deception involves desires to acquire certain beliefs (Nelkin 2002; Funkhouser 2005). The former model attributes to the subject the false, higher-order belief that she believes that not-p, whereas the latter approach attributes to the subject the desire to believe that not-p. Interestingly, both models face problems exactly where Mele’s model is most promising. The ‘failure of self-knowledge’ model can explain why the self-deceived subject seems to know the truth, for on this model the subject does indeed believe the truth. However, this model of self-deception struggles to account for the further thought that self-deception involves epistemic negligence: the self-deceived subject might have made a mistake about her own beliefs, but doxastic error does not entail epistemic irresponsibility. (Of course, the proponent of the failure of self-knowledge model could argue that motivational or affective factors play a role in the formation of the false meta-belief, and it might be possible to develop this idea in such a way that it turns out that the self-deceived subject is not merely wrong but is also epistemically unjustified in her meta-belief.)

According to the ‘desire-to-believe’ model, the subject who desires to believe that she does not have cancer misses medical appointments and avoids opportunities to discuss her symptoms because she desires to believe that she doesn’t have cancer. How this proposal is developed depends on whether or not one takes the operative desire to be satisfied. If the desire is satisfied, then it follows that the patient does believe that she does not have cancer. But does she also believe that she does have cancer? The advocate of this version of the desire-to-believe model needs to deny that she does, otherwise they will come up against the static paradox of self-deception. But if she does not believe that she has cancer, then it is not clear how we are to explain why she desires to believe that she doesn’t have cancer. Suppose, on the other hand, that the desire that motivates self-deception is not satisfied. Thus understood, the model can avoid the static paradox of self-deception, but it now needs to address the question of whether the subject is aware of her desire to believe that she does not have cancer. It seems as though the subject ought to be aware of this desire, yet it also seems that she would not be capable of deceiving herself if she were aware of it. The desire-to-believe model needs to find a way to avoid the two horns of this dilemma.

There are three constraints on a theory of self-deception: (i) that it avoid the two classical paradoxes of self-deception; (ii) that it account for the fact that the subject appears to know the truth at some level; and (iii) that it imply that the self-deceived subject is epistemically negligent. All hands are agreed in thinking that motivational and affective factors have an important role to play in meeting these three constraints, but—as we have seen—there is little agreement about exactly how these hot factors conspire with cold factors to generate self-deception.

6. Affect and motivation in delusion

We turn now to delusion. The notion that motivation might play a role in explaining delusions is not a new one. As McKay and co-authors point out in their chapter, motivational accounts of delusion date back at least as far as Capgras’ suggestion that the delusion which bears his name is generated by the patient’s need to accommodate feelings of ambivalence about his or her spouse (Capgras & Carette 1924).

Although purely ‘hot’ accounts of delusion are no longer widely endorsed, there are ‘hot’ elements within many of the leading accounts of delusions. By way of illustrating this claim, consider the two monothematic delusions with which we introduced the empiricist approach to delusions: the Capgras delusion and delusions of alien control. Theorists sometimes describe the content of the abnormal experience underlying the Capgras delusion as simply an experience of unfamiliarity, but there is much to be said in support of the view that the state underlying this delusion involves a much richer representation of alienation and unfamiliarity, an experience that is charged with negative affect (see Pacherie, this volume). Similarly, the abnormal experience underlying delusions of alien control is sometimes described as simply the experience of a lack of control, but again it is likely that this experience is also laden with negative affect.

What about motivational factors? Although there is little to recommend motivational accounts of the Capgras delusion, such accounts are clearly tempting for a number of other delusions. Consider the delusion of erotomania (de Clerambault’s delusion), in which the patient forms the belief that someone of higher social status is secretly in love with them (de Clerambault 1921/1942; Berrios & Kennedy 2003). This delusion quite obviously cries out for a motivational explanation, but there are also more subtle ways in which motivational factors have been invoked to account for delusions.

Consider the account of persecutory delusions developed by Bentall and colleagues (Bentall et al. 1994; Bentall et al. 1991; Kinderman & Bentall 1996). Bentall and colleagues argue that persecutory delusions involve an externalising attributional bias: the patient attributes negative events to other agents (rather than himself) in order to protect and maintain his self-image. Although such accounts are sometimes described as cognitive accounts (see e.g. Garety & Freeman 1999), one can also conceptualise them in motivational terms: having an externalising attributional bias for negative events is a matter of being motivated to maintain a positive self-image. This motivational factor is more coarse-grained than the kind of motivational factor that seems to account for de Clerambault’s delusion, but it is motivational nonetheless. Whether or not similar biases might account for other delusions is very much an open question (see McKay et al., this volume). It is doubtful that more than a few delusions will succumb to a purely motivational analysis, but motivational factors might provide an important piece of the delusional puzzle where they do apply.

7. Overview of the volume

We turn now to an overview of the chapters in the volume. In chapter two Peter Ditto defends an account of motivated reasoning that invokes only those processes employed in non-motivated reasoning. According to Ditto, negative affect changes the degree to which information is processed but not how it is processed. Information inconsistent with one’s preferred (and perhaps expected) conclusion produces negative affect, which in turn produces more intensive cognitive processing. More intensive cognitive processing leads the agent to consider a wider variety of explanations for the phenomenon in question, which in turn leads the agent to be more sceptical about any particular explanation.


Although Ditto’s account is supported by evidence from the motivated reasoning literature, it appears to be at odds with much delusional thought. Some delusions—such as grandiose delusions—may be preference-consistent for those who suffer from them, but most delusions would seem to be strongly preference-inconsistent. The patient with paranoid delusions who believes that he is being pursued by the government presumably wishes to be left in peace. Yet despite this negative affect, the patient seems unwilling (or is perhaps unable) to subject the delusional thought to appropriate rational scrutiny. Ditto acknowledges this problem, and suggests that where preference-inconsistent information is extremely threatening it may entirely overwhelm effortful thinking.

In chapter three Alfred Mele defends a deflationary account of self-deception, according to which, roughly, motivationally biased belief qualifies as self-deceptive belief. The bias in question can take two forms: the agent might be more inclined to consider data that seem to confirm his hypothesis rather than those which seem to disconfirm it, and the data that seem to confirm one’s hypothesis might appear to be more vivid than those that seem to disconfirm it. Mele points out that emotional factors can also bias belief-formation—for example, anger can lead data to appear more salient to a certain problem than they would otherwise appear to be. Might we account for delusions in terms of such motivational factors? Mele grants that affective factors might be causally implicated in the production of delusional beliefs, but argues that this involvement differs from that seen in self-deception, for in delusions the affective factor does not bias the subject’s treatment of the available evidence.

Following Mele’s chapter, Martin Davies provides a careful examination of the relationship between Mele’s account of self-deception and the two-factor model of delusions. Davies sketches the various points at which motivation might enter the aetiology of a delusion, and asks whether the role played by motivation would be enough to produce an example of self-deception. He concludes that the cases of delusion that are most clearly examples of self-deception according to Mele’s account are those in which motivational bias makes a substantial contribution to the second factor in the aetiology of the delusion. A number of the issues raised by Davies’s discussion are also addressed in the chapter by McKay and colleagues.

The issue of how affective factors might influence belief-formation is centre stage in the chapter by Michael Spezio and Ralph Adolphs. Drawing on both Damasio’s somatic marker theory and appraisal theory, Spezio and Adolphs argue that little if any cognition is purely cold. On their account, there is reciprocal interaction between the subject’s cognitive assessment of a stimulus and their emotional reaction to it: the emotional reaction modulates the cognitive evaluation, which in turn modulates the emotional reaction. Spezio and Adolphs focus on beliefs related to the moral and social realms, and say relatively little about how their model of belief-formation might apply to other domains. On the face of things, it is plausible to think that affect plays a more central role in the formation of beliefs related to some domains—such as our place in the social universe—than it does in others. Of course, our beliefs about normal everyday affairs are not affect-neutral—and the mere fact that we have held a belief for some time might lead us to code challenges to it as threatening—but it is very much an open question whether the account developed by Spezio and Adolphs might apply to belief-formation in general.

In their respective chapters, Gerrans and Pacherie focus on the question of where and how affective content might impact on the formation of the Capgras delusion. Pacherie explores the prospects of an endorsement approach to the Capgras delusion, according to which much of the content of the delusion is encoded in the patient’s unusual experience (Bayne & Pacherie 2004). Central to her account is the claim that the face recognition system draws on two kinds of information: static information about the stable features of a person’s face, and dynamic information about, for instance, emotional expression. This dynamic information is supposed to give us access to the person’s state of mind. Pacherie explores the hypothesis that the Capgras patient has suffered a deficit in the ability to employ dynamic information, and that as a result there is a sense in which the Capgras patient experiences the person he is looking at as an impostor. An important part of Pacherie’s paper is the idea that the face recognition system represents affective information.

In his chapter Gerrans argues against endorsement accounts of the Capgras delusion. He takes as his point of departure the distinction between qualitative and numerical identity, and argues that there is nothing in the content of a pair of experiences that might determine whether they are of a single object or of qualitatively identical but numerically distinct objects. This bears on the analysis of the Capgras delusion, for the endorsement account takes the delusion to be grounded in an experience of the target individual as qualitatively identical to, but numerically distinct from, the familiar person. Gerrans argues that this idea is mistaken, and that affective response is downstream from numerical identification rather than prior to it. On Gerrans’ account, affect plays one role in the formation of the Capgras delusion and another in its maintenance.

In his chapter, Brian McLaughlin focuses on the role that existential feelings—“feelings that function to locate one in the world”—might play in the formation of delusions. McLaughlin cites feelings of familiarity and unfamiliarity, of significance and insignificance, of comprehension and incomprehension, and of reality and unreality as representative existential feelings. Recent discussions of experience-based accounts of delusions have tended to assume that existential feelings have the same impact on belief-formation as other sorts of experiential states, such as visual experiences. A key aim of McLaughlin’s chapter is to put pressure on this assumption. McLaughlin argues that although we do have the capacity to override existential feelings, “the ability to do so may be only hard won and difficult to exercise”. The contrast, of course, is with perceptual experience, which is easily overridden by the mechanisms of reflective belief-formation. Although there is much to recommend McLaughlin’s proposal, it is not obvious how it might account for belief-formation in the context of depersonalisation, for although depersonalisation involves a profound alteration to existential feelings of various kinds it does not generally lead to delusional belief.

In their chapter Ryan McKay, Robyn Langdon and Max Coltheart look to ways in which the two-factor approach to delusion might incorporate motivational elements. As a first step in this project, McKay et al. draw attention to delusions in which there seems to be no obvious perceptual or cognitive deficit. However, such cases do not involve a synthesis of motivational and cognitive factors but simply a displacement of cognitive factors by motivational ones. For a synthesis of the two approaches, McKay et al. suggest that motivational states might function as either the first factor or the second factor in a two-factor account. McKay et al. develop this suggestion by drawing on Ramachandran’s speculations concerning hemispheric specialization. On Ramachandran’s account, healthy belief-formation involves a subtle balance between a motivationally-driven left hemisphere and an anomaly-detecting right hemisphere, whose job is to make sure that the left hemisphere’s self-deceptive tendencies do not get out of hand. On this model, delusional belief might be expected when the right hemisphere is damaged, but not when the left hemisphere is damaged. As McKay et al. point out, this prediction draws some support from data concerning the right-hemisphere dominance for monothematic delusions (see also Coltheart 2007).

Our next three chapters engage with anosognosia. Although anosognosia is not typically regarded as a delusion, it seems to fit the standard definitions, for people with anosognosia insist that nothing is wrong with them in the face of what would seem to constitute obvious and incontrovertible evidence to the contrary (Davies et al. 2005). But perhaps individuals with anosognosia do not have access to evidence that would make their impairment immediately obvious to them. Some accounts of anosognosia suggest that people with anosognosia lack the kind of direct, on-line perceptual evidence of their incapacity that one would expect them to have, and/or that they suffer from deficits in being able to recall evidence of their impairments.

In their chapter Anne Aimola Davies, Martin Davies, Jenni Ogden, Michael Smithson and Rebekah White argue that such ‘single-factor’ accounts are unable to provide a fully satisfactory account of anosognosia. Might motivational factors help? Aimola Davies et al. provide a thorough review of motivational accounts of anosognosia, concluding that although many of the most influential arguments against motivational accounts of anosognosia are poor ones, the case in favour of motivational accounts is weak. Instead, they suggest, we should think of anosognosia in cold, two-factor terms: patients not only lack ‘perceptual’ awareness of their impairment, they also suffer from an impairment in belief evaluation. Aimola Davies and co-authors present a study of anosognosia for motor impairments and their consequences in patients with persisting unilateral neglect, which suggests that this ‘second factor’ might involve specific impairments in working memory and executive functioning.

By contrast, in his chapter Neil Levy argues that there is much to recommend a motivational approach to anosognosia. In fact, Levy suggests that anosognosia might qualify as an instance of classical self-deception: the patient believes that there is nothing wrong with her, and she also believes that she is (for example) hemiplegic. If the classical account of self-deception applies to anosognosia, then it cannot be incoherent (as many theorists have argued it is). Further, there is reason to think that the human mind is susceptible to classical self-deception, and hence reason to think that garden-variety examples of self-deception might succumb to a classical analysis. Levy goes on to argue that his analysis undermines the appeal of Mele’s model of self-deception. Mele claims that considerations of simplicity weigh in favour of his model: unless we can find a clear case of classical self-deception, where the subject both believes that p and also believes that not-p, we should think of ordinary self-deception in terms of motivational biases. Further, Mele argues, there are no clear cases of classical self-deception. In response, Levy claims that since anosognosia does provide us with such a clear case of classical self-deception, Mele cannot appeal to considerations of simplicity (or background plausibility) to support his account of self-deception over the classical account.

In her chapter Frédérique de Vignemont examines the relationship between conversion disorder (hysteria) and anosognosia for hemiplegia. As de Vignemont points out, at first sight these two disorders seem to be mirror images of each other: the patient with anosognosia for hemiplegia (falsely) believes that she can perform certain kinds of actions, whereas the patient with conversion disorder (falsely) believes that she is unable to perform certain kinds of actions. But although this is the natural way to characterize conversion disorder, de Vignemont points out that there are certain problems with this characterization. Consider a patient with conversion disorder who believes that she is unable to talk. Although there is a certain sense in which this patient’s belief is false, for no “physical” obstacle prevents her from speaking, there is also a sense in which the patient’s belief that she is unable to talk might itself prevent her from being able to speak. With respect to the aetiology of these two disorders, de Vignemont suggests that although both anosognosia and conversion disorder have a motivational basis, this basis takes a rather different form in the two conditions: whereas anosognosia is driven by the desire to be well, conversion disorder involves the activation of a low-level, anxiety-driven defensive system.

Andy Egan brings the volume to a conclusion with a stimulating discussion of the ways in which delusion and self-deception might fit into our taxonomy of mental states. As we pointed out in section 3, although states of delusion and self-deception are typically classified as beliefs, neither kind of state wears this label with comfort. In his paper Egan joins the ranks of those who reject doxastic accounts of delusion and self-deception. He argues that states of delusion fall somewhere between belief and imagination, and that states of self-deception fall somewhere between belief and desire. Egan develops his account by exploiting the resources of functional-role conceptions of mental states: the role that delusional states play in the agent’s cognitive economy falls between that of belief and imagination, whereas the functional role of self-deceptive states falls between that of belief and desire.

One of the issues raised by Egan’s account is whether (and how) it might be possible to reconcile functional-role and normative conceptions of mental states. As we have seen, theorizing about delusion (and, to a lesser extent, self-deception) typically begins with the thought that these states are pathological beliefs—that they violate certain norms of belief-formation. It is unclear how Egan’s account might accommodate this thought, for nothing can be a pathological belief unless it is also a belief. Perhaps we can accommodate the thought by regarding these in-between states as themselves normatively inappropriate, but obviously work would need to be done to flesh this proposal out.

As the various contributions to this volume demonstrate, students of delusion and students of self-deception have much to learn from each other, and students of normal belief-formation have much to learn from both delusion and self-deception.


References
Alicke, M.D., Vredenburg, D.S., Hiatt, M. & Govorun, O. 2001. The “better than myself” effect. Motivation and Emotion, 25: 7-22.
American Psychiatric Association. 2000. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. Washington, DC: American Psychiatric Association.
Bauer, R.M. 1984. Autonomic recognition of names and faces in prosopagnosia: A neuropsychological application of the guilty knowledge test. Neuropsychologia, 22/4: 457-69.
Bauer, R.M. 1986. The cognitive psychophysiology of prosopagnosia. In H. Ellis, M. Jeeves, F. Newcombe & A. Young (eds.) Aspects of Face Processing. Dordrecht: Martinus Nijhoff (253-67).
Bayne, T. & Pacherie, E. 2004. Bottom-up or top-down? Campbell’s rationalist account of monothematic delusions. Philosophy, Psychiatry, & Psychology, 11/1: 1-11.
Bentall, R.P., Kaney, S. & Dewey, M.E. 1991. Persecutory delusions: An attribution theory analysis. British Journal of Clinical Psychology, 30: 13-23.
Bentall, R.P., Kinderman, P. & Kaney, S. 1994. The self, attributional processes and abnormal beliefs: Towards a model of persecutory delusions. Behaviour Research and Therapy, 32: 331-341.
Berrios, G.E. & Kennedy, N. 2003. Erotomania: A conceptual history. History of Psychiatry, 13: 381-400.
Campbell, J. 2001. Rationality, meaning and the analysis of delusion. Philosophy, Psychiatry, & Psychology, 8/2-3: 89-100.
Capgras, J. & Carette, P. 1924. Illusion de sosies et complexe d’Oedipe. Annales Médico-Psychologiques, 82: 48-68.
Clore, G.L., Gasper, K. & Garvin, E. 2001. Affect as information. In J. Forgas (ed.) Handbook of Affect and Social Cognition. Mahwah, NJ: Lawrence Erlbaum Associates.
Coltheart, M. 2007. The 33rd Sir Frederick Bartlett Lecture: Cognitive neuropsychiatry and delusional belief. The Quarterly Journal of Experimental Psychology, 60/8: 1041-62.
Cotard, J. 1882. Du délire des négations. Archives de Neurologie, 4: 152-70.
Cross, P. 1977. Not can but will college teachers be improved? New Directions for Higher Education, 17: 1-15.
Currie, G. 2000. Imagination, delusion, and hallucinations. Mind and Language, 15/1: 168-83.

Davidson, D. 1985. Deception and division. In E. LePore & B. McLaughlin (eds.) Actions and Events: Perspectives on the Philosophy of Donald Davidson. New York: Basil Blackwell.
Davies, M., Coltheart, M., Langdon, R. & Breen, N. 2001. Monothematic delusions: Towards a two-factor account. Philosophy, Psychiatry, & Psychology, 8/2-3: 133-158.
Davies, M., Aimola Davies, A.M. & Coltheart, M. 2005. Anosognosia and the two-factor theory of delusions. Mind and Language, 20/2: 209-236.
De Clérambault, C.G. 1921/1942. Les psychoses passionnelles. In Oeuvres Psychiatriques (pp. 315-322). Paris: Presses Universitaires de France.
de Pauw, K.W. 1994. Psychodynamic approaches to the Capgras delusion: A critical historical review. Psychopathology, 27: 154-160.
Ellis, H.D. & Young, A.W. 1990. Accounting for delusional misidentifications. British Journal of Psychiatry, 157: 239-248.
Ellis, H.D., Young, A.W., Quayle, A.H. & de Pauw, K.W. 1997. Reduced autonomic responses to faces in Capgras delusion. Proceedings of the Royal Society of London, Series B, 264: 1085-1092.
Enoch, M.D. & Trethowan, W. 1991. Uncommon Psychiatric Syndromes (3rd ed.). Oxford: Butterworth-Heinemann.
Fine, C., Craigie, J. & Gold, I. 2005. Damned if you do, damned if you don’t: The impasse in cognitive accounts of the Capgras delusion. Philosophy, Psychiatry, & Psychology, 12/2: 143-51.
Frith, C.D. 1987. The positive and negative symptoms of schizophrenia reflect impairments in the perception and initiation of action. Psychological Medicine, 17: 631-48.
Frith, C.D. 1992. The Cognitive Neuropsychology of Schizophrenia. Hove, East Sussex: Lawrence Erlbaum Associates.
Frith, C.D., Blakemore, S.-J. & Wolpert, D.M. 2000a. Abnormalities in the awareness and control of action. Philosophical Transactions of the Royal Society of London B, 355: 1771-1788.
Frith, C.D., Blakemore, S.-J. & Wolpert, D.M. 2000b. Explaining the symptoms of schizophrenia: Abnormalities in the awareness of action. Brain Research Reviews, 31: 357-363.


Funkhouser, E. 2005. Do the self-deceived get what they want? Pacific Philosophical Quarterly, 86: 295-312.
Garety, P.A., Hemsley, D.R. & Wessely, S. 1991. Reasoning in deluded schizophrenic and paranoid patients: Biases in performance on a probabilistic inference task. Journal of Nervous & Mental Disease, 179/4: 194-201.
Garety, P.A., Kuipers, E., Fowler, D., Freeman, D. & Bebbington, P.E. 2001. A cognitive model of the positive symptoms of psychosis. Psychological Medicine, 31: 189-195.
Gendler, T.S. 2007. Self-deception as pretense. In J. Hawthorne (ed.) Philosophical Perspectives 21: Philosophy of Mind: 231-58.
Gilovich, T. 1991. How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press.
Gold, I. & Hohwy, J. 2000. Rationality and schizophrenic delusion. Mind and Language, 15/1: 146-67.
Hohwy, J. & Rosenberg, R. 2005. Unusual experiences, reality testing and delusions of alien control. Mind and Language, 20/2: 141-62.
Kaney, S. & Bentall, R.P. 1989. Persecutory delusions and attributional style. British Journal of Medical Psychology, 62: 191-98.
Kinderman, P. & Bentall, R.P. 1996. Self-discrepancies and persecutory delusions: Evidence for a model of paranoid ideation. Journal of Abnormal Psychology, 105: 106-113.
Langdon, R. & Coltheart, M. 2000. The cognitive neuropsychology of delusions. Mind and Language, 15/1: 184-218.
Lewis, D. 1986. On the Plurality of Worlds. Oxford: Blackwell.
Lockie, R. 2003. Depth psychology and self-deception. Philosophical Psychology, 16: 127-148.
Maher, B. 1974. Delusional thinking and perceptual disorder. Journal of Individual Psychology, 30: 98-113.
Maher, B. 1988. Anomalous experience and delusional thinking: The logic of explanations. In T.F. Oltmanns & B.A. Maher (eds) Delusional Beliefs. New York: Wiley, 15-33.
Mele, A. 1987. Recent work on self-deception. American Philosophical Quarterly, 24: 1-17.
Millikan, R.G. 1996. Pushmi-pullyu representations. In J. Tomberlin (ed.) Philosophical Perspectives IX: 185-200. Reprinted in L. May & M. Friedman (eds) Mind and Morals. Cambridge, MA: MIT Press, 145-61.

McKenna, F.P., Stanier, R.A. & Lewis, C. 1991. Factors underlying illusory self-assessment of driving skills in males and females. Accident Analysis and Prevention, 23/1: 45-52.
Murphy, D. 2006. Psychiatry in the Scientific Image. Cambridge, MA: MIT Press.
Myslobodsky, M. (ed.) 1997. The Mythomanias: The Nature of Deception and Self-Deception. Mahwah, NJ: Lawrence Erlbaum.
Nelkin, D.K. 2002. Self-deception, motivation, and the desire to believe. Pacific Philosophical Quarterly, 83: 384-406.
Patten, D. 2003. How do we deceive ourselves? Philosophical Psychology, 16: 229-246.
Pronin, E., Gilovich, T. & Ross, L. 2004. Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111/3: 781-799.
Sackeim, H. 1983. Self-deception, self-esteem, and depression: The adaptive value of lying to oneself. In J.M. Masling (ed.) Empirical Studies of Psychoanalytical Theories. Hillsdale, NJ: L. Erlbaum (101-57).
Sackeim, H. 1988. Self-deception: A synthesis. In J. Lockard & D. Paulhus (eds) Self-Deception: An Adaptive Mechanism? Prentice-Hall.
Sackeim, H. & Gur, R. 1978. Self-deception, self-confrontation, and consciousness. In G. Schwartz & D. Shapiro (eds) Consciousness and Self-Regulation, vol. 2. Plenum Press.
Sackeim, H. & Gur, R. 1985. Voice recognition and the ontological status of self-deception. Journal of Personality and Social Psychology, 48: 1365-68.
Sartre, J.-P. 1969. Being and Nothingness. London: Methuen.
Sass, L. 1994. The Paradoxes of Delusion: Wittgenstein, Schreber, and the Schizophrenic Mind. Ithaca, NY: Cornell University Press.
Schwitzgebel, E. 2002. A phenomenal, dispositional account of belief. Noûs, 36/2: 249-75.
Stanovich, K. 1999. Who Is Rational? Studies of Individual Differences in Reasoning. Lawrence Erlbaum.
Stephens, G.L. & Graham, G. 2005. The delusional stance. In M. Chung, K.W.M. Fulford & G. Graham (eds.) Reconceiving Schizophrenia. Oxford: OUP.
Stone, T. & Young, A. 1997. Delusions and brain injury: The philosophy and psychology of belief. Mind and Language, 12: 327-64.
Taylor, S.E. 1989. Positive Illusions: Creative Self-Deception and the Healthy Mind. Basic Books.


Taylor, S.E. & Brown, J.D. 1988. Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103: 193-210.
Zajonc, R. 1980. Feeling and thinking: Preferences need no inferences. American Psychologist, 35: 151-175.
Zajonc, R. 2000. Feeling and thinking: Closing the debate on the primacy of affect. In J.P. Forgas (ed.) Feeling and Thinking: The Role of Affect in Social Cognition. New York: Cambridge University Press (31-58).
