Raw Thought

by Aaron Swartz

Should our cognitive biases have moral weight?

In a classic piece of psychology, Kahneman and Tversky ask people what to do about a fatal disease that 600 people have caught. One group is asked whether they would administer a treatment that would definitely save 200 people’s lives or one with a 33% chance of saving 600 people. The other group is asked whether they would administer a treatment under which 400 people would definitely die or one where there’s a 33% chance that no one will die.

The two questions are the same: saving all 600 people means no one will die, and saving just 200 means the other 400 will die. But people’s responses were radically different. The vast majority of the first group chose to save 200 people for sure, while an equally large majority of the second group chose to take the chance that no one will die. In other words, just changing how you describe the option — saying that it saves lives rather than saying it leaves people to die — changes which option most people will pick.
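To see why the two framings describe the same pair of options, it helps to make the arithmetic explicit. Below is a minimal sketch in Python, assuming the post’s “33% chance” stands in for the original study’s one-in-three probability; it only illustrates the equivalence and is not part of the experiment.

```python
# A quick check of the arithmetic behind the framing example.
# Assumption: the post's "33% chance" is the study's 1-in-3 probability.
from fractions import Fraction

TOTAL = 600
p = Fraction(1, 3)  # probability that the risky treatment works

# "Lives saved" frame: a sure 200 saved vs. a 1/3 chance of saving all 600.
sure_saved = 200
expected_saved = p * TOTAL + (1 - p) * 0   # expected lives saved by the gamble

# "Deaths" frame: a sure 400 dead vs. a 1/3 chance that no one dies.
sure_dead = TOTAL - sure_saved
expected_dead = p * 0 + (1 - p) * TOTAL    # expected deaths under the gamble

# The frames are mirror images of the same options: saved + dead = 600
# in every case, and the gamble's expected outcome matches the sure option.
assert sure_saved + sure_dead == TOTAL
assert expected_saved + expected_dead == TOTAL
assert expected_saved == sure_saved and expected_dead == sure_dead
print(f"sure option: {sure_saved} saved / {sure_dead} dead")
print(f"risky option (expected): {expected_saved} saved / {expected_dead} dead")
```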

In the same way that Festinger et al. showed that our intuitions are biased by our social situation, Kahneman and Tversky demonstrated that humans suffer from consistent cognitive biases as well. In a whole host of examples, they showed people behaving in ways we wouldn’t hesitate to call irrational — like changing their position on whether to administer a treatment based on how it was described. (I think a similar problem affects our intuitions about killing versus letting die.)

This is a major problem for people like Frances Kamm, who think our moral philosophy must rely on our intuitions. If people consistently and repeatedly treat things differently based on what they’re called, are we forced to give that moral weight? Is it OK to administer a treatment when it’s described as saving people, but not when it’s described as not saving enough? Surely moral rules should meet some minimal standard of rationality.

This problem affects a question close to Kamm’s work: what she calls the Problem of Distance in Morality (PDM). Kamm says that her intuition consistently finds that moral obligations attach to things that are close to us, but not to things that are far away. According to her, if we see a child drowning in a pond and there’s a machine nearby which, for a dollar, will scoop him out, we’re morally obligated to give the machine a dollar. But if the machine is here but the scoop and child are on the other side of the globe, we don’t have to put a dollar in the machine.

But, just as with how things are called, our intuitions about distance suffer from cognitive biases. Numerous studies have shown that the way we think about things nearby is radically different from the way we think about things far away. In one study, Indiana University students did better on a creativity test when they were told the test was devised by IU students studying in Greece than when they were told it was devised by IU students studying in Indiana.

It’s a silly example, but it makes the point. If our creativity depends on whether someone mentions Greece or Purdue, it’s no surprise our answers to moral dilemmas depend on whether they take place in the US or China. But surely these differences have no more moral validity than the ones that result from Tversky’s experiment — they’re just an unfortunate quirk of how we’re wired. Rational reflection — not faulty intuitions — should be the test of a moral theory.


January 8, 2010

Comments

I agree, and have held that belief for a while. However, while holding these beliefs I’ve run into cases where intuition captures certain nuances that rational reflection can miss. Or, perhaps more interestingly, it does a much better job of capturing defining human drives for community, survival, and cooperation.

The intuition to give relatively more moral weight to things close to you is one force that keeps communities together, and it could be argued that these local communities are essential to the human condition.

I don’t have the best words to describe this, but if you’ll allow me to wave my hands a bit: many mostly internally consistent and rational world views won’t properly explain the odd drives and wants of the human mind. They start to break down as you get close to it. If the rational world view you hold conflicts fiercely with a lot of human intuition, maybe it’s not taking the right axioms as its base.

posted by Alex G on January 8, 2010 #

You should be interested in exploring LeDoux’s work on emotions (particularly fear) and the amygdala. Here’s a relevant passage (quoted from a site that is currently down):

“The most significant of LeDoux’ experimentation with regard to fear is that the sensory input to the brain is split at the thalamus into two streams – one to the amygdala and one to the neo-cortex. The input stream to the amygdala is quicker – 12 milliseconds as opposed to 25 milliseconds to the neo-cortex. Less information goes to the amygdala quicker – it operates as a quick scan to check for danger.”

Feelings come prior to thoughts. No wonder intuitions and feelings have a greater hold on us today.

Curiously, this blog post makes no mention of the word “feeling” or “emotion” (the word “cognition” typically excludes “affection”, which, for various reasons, does not attract as much scientific attention).

posted by srid on January 8, 2010 #

No one in the study you described actually changed their position based on the question. Rather, different questions were posed to different groups. A subtle difference, but it points to a larger issue: intuition doesn’t necessarily lead to irrational conclusions. Quite often, intuition’s conclusions are exactly the same as those rational reflection would suggest.

Your title question, “should our cognitive biases have moral weight?” seems to have become conflated with a very different question: “should our cognitive biases have more moral weight than our rational reflection?” Answering no to the latter doesn’t really give us an answer to the former.

posted by Scott Reynen on January 8, 2010 #

Hmm… a somewhat far-fetched and hurried conclusion, for a couple of reasons:

  1. None of these studies has set up a situation in which the subjects can actually do the so-called rational reflection. So far it has simply been assumed, both what rational reflection is and what it takes for a human to reflect rationally before making a decision.

  2. Even if we all agree on what rational reflection is, and presume that it is not the natural or default decision-making method for humans (a pretty standard assumption AFAIK), we need to consider the reaction time available in the situation versus the time a decision based on rational reflection would take. Can’t cite a paper off-hand. @Srid: I am sure that “feelings/intuitions” is considered an oversimplification.

P.S.: Forgive me if I appear to be adopting a condescending tone; I don’t intend to.

posted by Anand Jeyahar on January 8, 2010 #

Aaron -

I think the Kahneman and Tversky example is flawed, because wrong choices here do not stem from errors in moral intuition, but rather from errors in our ability to handle probabilities.

It is not that people here are not trying to be rational, it is just that they are not terribly good at it in situations that involve assessing probabilistic outcomes.

Tom

posted by Tom on January 8, 2010 #

It’s definitely hard to know what to do with intuitions. I think there’s an interesting paradox here; we have unreliable intuitions, so we rightly shy away from moral intuitionism, but we also have unreliable calculation of moral terms, especially when rushed, which should lead us a little way back towards acting on a set of intuitions and principles. (But not the deontological kind of principle.)

I think even the least-intuitionist person — say, a utilitarian, cognitivist, naturalist — is likely to end up answering a question like “Should I steal from this person?” with “Well, I’m opposed to that generally, so if I think I should then maybe something weird’s going on and I should think about it more.”

So, I think such a strong rejection of intuitions might be a little hasty. I’d love to read more about how we could properly use principles and intuition inside consequentialism, if anyone knows of further reading.

posted by Chris Ball on January 8, 2010 #

You worded it poorly. If “400 people will definitely die”, that only implies the other 200 might not die, not that 200 people will definitely live.

posted by matt on January 8, 2010 #

I think Matt is right: there is something wrong with your wording here. Your first statement implies that 200 or more will survive, whereas your supposedly equivalent third statement implies that up to 200 will survive.

posted by engels on January 8, 2010 #

Searching for cognitive and oral is different from searching for cognitive and moral. (Broken key on my laptop while searching GReader for the post after passing over it.)

Interesting how many hits there are for cognitive and moral besides your post.

posted by jcwinnie on January 10, 2010 #

The ‘Problem of Distance’ is not just a phenomenon influencing morality; it influences all decisions. In the Kamm example, the machine being close by eliminates the problem of physical effort, but the psychological effort of thinking about events occurring outside my immediate circle of interest (not necessarily influence) increases cognitive load, and we conserve mental energy just as we conserve physical energy.

Failure to conserve mental energy results in mental failure modes such as (pathological) anxiety and clinical depression, which are increasingly common in society. It’s a cliché to depict the clinically depressed individual as being so overly affected by events outside their immediate concern that they become literally stuck in their own lives, and part of their therapy is learning to become more selfish and to think about their own happiness more (and about other people’s problems less) without guilt.

What I’m getting at is that rational reflection, moral or otherwise, is costly, and insofar as we want to make people more rational, moral, or intelligent, it would probably be best to attack the problem by treating thought as a scarce mental currency and developing efficient means of allocating its units for the greatest good. A utilitarian augmented-intelligence system, if you will.

posted by haig on January 12, 2010 #
