Brain Sciences and Ethics

Seminar 1: Reading Levy

Mid-term Education Program "Brain Sciences and Ethics," Seminar (1): Report on the 12th Session

2008.04.21

This is the final report of the 2007 academic year for the mid-term education program "Brain Sciences and Ethics."
Seminar (1), "Reading Levy's Neuroethics": Chapter 9, "The neuroscience of ethics."

[Presentation] 野口尚彦 (Graduate School of Arts and Sciences, The University of Tokyo)
Levy's "Neuroethics," Chapter 9, "The neuroscience of ethics," first half: PDF (231KB)
[Presentation] 中澤栄輔 (junior research fellow)
Levy's "Neuroethics," Chapter 9, "The neuroscience of ethics," second half: PDF (600KB)

[Report] 吉田敬 (junior research fellow)

In these long sessions, we read Chapter 9, "The neuroscience of ethics" (pp. 281-316). This is the last chapter of the book, and thus this report is the last one for this term. Most of the chapters we have read so far dealt with the ethics of neuroscience; until this chapter, Levy has not scrutinized the neuroscience of ethics. Hence it seems appropriate that this chapter comes last, since neuroethics is not only the ethics of neuroscience but also the neuroscience of ethics. The main topic of this chapter is the neuroscientific challenge to moral intuitions. According to Levy, proponents of this challenge argue that "our moral intuitions are systematically unreliable, either in general or in some particular circumstances. But if our moral intuitions are systematically unreliable, then morality is in serious trouble, since moral thought is, at bottom, always based upon moral intuition" (p. 281). If this is the case, how is moral philosophy possible? This is the question examined in this chapter.

First, Levy gives a definition of intuitions. Referring to Tversky and Kahneman's famous experiment on the conjunction fallacy, Levy defines intuitions as "spontaneous intellectual seemings." According to Levy, "[i]ntuitions are spontaneous in the sense that they arise unbidden as soon as we consider the cases that provoke them. They are also, typically, stubborn: once we have them they are relatively hard to shift. Intuitions may be given up as false after reflection and debate, but even then we do not usually lose them, not, at least, all at once" (p. 283). What role do intuitions play in our inquiry into morality? Levy writes that most moral philosophers make use of John Rawls's idea of reflective equilibrium. First, we have intuitive responses to a variety of uncontroversial moral cases, such as torturing babies for fun. Then we try to formulate a general principle to cover these cases. In this regard, intuitions guide our theorizing. True, there are cases where the general principle is at odds with our intuitions. But even if the principle conflicts with our intuitions, it does not follow that we must abandon it, so long as the principle covers a wide range of cases. Some might want to draw an analogy between scientific theories and moral theories here, although I shall not examine it.
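For readers unfamiliar with the conjunction fallacy, a brief aside (my own illustration, not part of Levy's text) may help. In Tversky and Kahneman's "Linda" experiment, most respondents judge "Linda is a bank teller and is active in the feminist movement" to be more probable than "Linda is a bank teller," even though for any two events $A$ and $B$ the probability calculus requires

\[
P(A \wedge B) \le P(A).
\]

The intuitive ranking typically persists even after one accepts this inequality, which illustrates what Levy means in calling intuitions both spontaneous and stubborn.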

In any case, some utilitarians, such as Peter Singer, try to avoid relying on intuitions. In their view, intuitions are merely irrational or culturally contingent. According to Levy, Singer suggests that his theory is based on self-evident moral axioms, not intuitions. Levy criticizes Singer on the grounds that an appeal to self-evidence is itself another kind of appeal to intuition. In other words, Singer also relies on intuitions. Levy argues that "[w]hatever the role intuitions play in justifying their principles or their case-by-case judgments, all moral theories seem to be based ultimately upon moral intuition" (p. 287).

Levy then goes on to examine two challenges to morality. One comes from neuroscience; the other from social psychology. As to the neuroscientific challenge, Levy refers to Joshua Greene and his colleagues' famous experiment on the trolley problem. As we have already seen, the trolley problem involves two kinds of dilemmas: impersonal and personal ones. In impersonal dilemmas, subjects must decide whether they would pull a lever to divert the trolley, saving five people at the cost of one. In personal dilemmas, subjects are asked whether they would push a large man in front of the trolley to stop it and thereby save five people. According to Greene and his collaborators, when subjects face impersonal dilemmas, the brain areas associated with working memory are more active than those associated with emotion. By contrast, when subjects consider personal dilemmas, the areas related to emotion are more active than those associated with working memory. Greene and others suggest that "the thought of directly killing someone is much more personally engaging than is the thought of failing to help someone, or using indirect means to harm them" (p. 290). Although Greene and his colleagues contend that their study is descriptive rather than prescriptive, Peter Singer derives a normative conclusion from it. According to Levy, Singer suggests that the pattern Greene and his colleagues found stems from our evolutionary history, and that their study therefore casts doubt on moral intuitions. To make his case, Singer also refers to Jonathan Haidt's social intuitionist model (SIM) of moral judgment, which he takes to provide evidence of the irrationality of intuitions. This is the second challenge to morality, the one from social psychology. Although I cannot explain Haidt's SIM in detail, his point is that our moral judgments are the product of our emotional responses and are influenced by cultural and social factors; on this view, our moral judgments are irrational. According to Levy, if Haidt is right, his SIM might seem to support Singer's view. But Levy criticizes this move. Although Singer claims that his theory is not based on intuitions, no moral theory can do without them. Hence, if intuitions are in doubt, all moral theories are in serious trouble.

Levy now asks the following question: does neuroscience really provide evidence against moral intuitions? To answer it, Levy scrutinizes Greene and Singer's claim that moral intuitions are not rational and thus are not useful guides to action. According to Levy, their underlying assumption is that some of our moral responses are emotional, and that emotional responses are for that reason unreliable. Levy criticizes this assumption by referring to Damasio and colleagues' somatic marker hypothesis, which we already encountered in Chapter 5. In Levy's view, their work on the Iowa Gambling Task suggests that emotions can sometimes be very reliable guides to action. Thus Levy argues that "[t]he evidence from neuroscience does not support Greene and Singer's claim: the mere fact that areas associated with emotions are differentially active in judging personal and impersonal does nothing to show that either set of intuitions is suspect. We have no general reason to discount affectively colored judgments; such judgments can be reliable" (pp. 296-7). Levy then examines another of Singer and Greene's claims, namely that "our moral intuitions are the product of an evolutionary history which was shaped by non-moral selection pressures" (p. 301). Levy accepts this claim; however, he argues that our evolutionary history does not show that emotion-based intuitions are unjustified. According to Levy, Singer believes that universal benevolence is indispensable for ethics, but that evolution cannot have given us emotions that motivate universal benevolence; hence universal benevolence must be based in reason, not emotion. Against Singer's view, Levy points out that our concern for conspecifics is itself something we acquired through evolution, and that there is plenty of evidence for this.

As for the second challenge to morality, the one from social psychology, Levy focuses on Haidt's work mentioned above. According to Haidt, moral intuitions are irrational and thus are not reliable guides to morality. This claim is based on his work on "moral dumbfounding." Levy explains moral dumbfounding as follows:

Haidt’s work on “moral dumbfounding”—the phenomenon when someone reports an intuition that an action is wrong, but is unable to justify that response—actually demonstrates that dumbfounding is in inverse proportion to the socio-economic status (SES) of subjects (p. 307).

From this, Levy argues that although Singer appeals to Haidt's view to make his own case, the evidence actually cuts against Singer. According to Levy, Singer takes our moral intuitions to be immune to revision; the passage above, however, suggests that we can change them through education. Otherwise, there would be no difference in dumbfounding between subjects of higher and lower socio-economic status. Hence Singer is mistaken.

In closing, Levy returns to the extended mind thesis and examines the relation between moral theories and society. Comparing moral theories with scientific theories, Levy argues for a division of labor in thinking about moral problems. He writes:

We humans are able to accumulate knowledge at a rapid pace because we engage in systematic divisions of labor. We extend—or, if you like, embed—our minds in the social scaffolding of tools and symbols, and allow our justifications for our beliefs to be distributed as well. For many of our beliefs, the justifications are spread across many groups of specialists, with no one person being in possession of all the relevant information. We are absolutely reliant upon one another, not just materially, but also in what we can know (p. 315).

This view might strike many of us as common sense; nevertheless, it is an important point worth repeating and remembering. Even if none of us individually knows the way to morality, we may be able to find it through cooperation based on the division of labor. This is what we can do.

We have now worked through Levy's book. Overall, the book is very informative and presents us with many problems in neuroethics. For those who are interested in neuroethics, it will serve as a good introduction to the field, even with the problems I have mentioned along the way. If my reports have drawn your attention to neuroethics, I could not be happier. Although this is the last seminar report for this term, we shall post reviews of articles and/or books in the following weeks. Please stay tuned. Thank you very much for reading!

Download the PDF version ⇒
[Report] Levy's "Neuroethics," Chapter 9, "The neuroscience of ethics": PDF (20.6KB)


