Raw Thought

by Aaron Swartz

What are the optimal biases to overcome?

This is a bonus post for my series Raw Nerve. It originally appeared in somewhat different form on Less Wrong.

I’ve noticed that some people have complimented my series Raw Nerve by saying it’s a great explanation of cognitive biases. Which always amuses me, since the series grew out of frustrations I had with the usual way that term gets used. There’s a group of people (call them the cognitive bias community) who say the way to be more rational — to get better at making decisions that get you what you want — is to work at overcoming your biases. But if you’re overcoming biases, surely there are some lessons that will help you more than others.

You might start with the most famous ones, which tend to be the ones popularized by Kahneman and Tversky. But K&T were academics. They weren’t trying to help people be more rational, they were trying to prove to other academics that people were irrational. The result is that they focused not on the most important biases, but the ones that were easiest to prove.

Take their famous anchoring experiment, in which they showed the spin of a roulette wheel affected people’s estimates of how many African countries are in the UN. The idea wasn’t that roulette wheels causing biased estimates was a huge social problem; it was that no academic could possibly argue that this behavior was somehow rational. They thereby scored a decisive blow for psychology against economists claiming we’re just rational maximizers.

Most academic work on irrationality has followed in K&T’s footsteps. And, in turn, much of the stuff done by the wider cognitive bias community has followed in the footsteps of this academic work. So it’s not hard to believe that cognitive bias types are good at avoiding these biases and thus do well on the psychology tests for them. (Indeed, many of the questions on these tests for rationality come straight from K&T experiments!)

But if you look at the average person and ask why they aren’t getting what they want, very rarely do you conclude their biggest problem is that they’re suffering from anchoring, framing effects, the planning fallacy, commitment bias, or any of the other stuff in these tests. Usually their biggest problems are far more quotidian and commonsensical, like procrastination and fear.

One of the things that struck me was watching Eliezer Yudkowsky, one of the most impressive writers on the topic of cognitive biases, try to start a new nonprofit. For years, the organization he founded struggled until recently, when Luke Muehlhauser was named executive director. Eliezer readily agrees that Luke has done more to achieve Eliezer’s own goals for the organization than Eliezer ever did.

But why? Why is Luke so much better at getting what Eliezer wants than Eliezer is? It’s surely not because Luke is so much better at avoiding the standard cognitive biases! Luke often talks about how he’s constantly learning new rationality techniques from Eliezer.

No, it’s because Luke did what seems like common sense: he bought a copy of Nonprofits for Dummies and did what it recommends. As Luke himself says, it wasn’t lack of intelligence or resources or willpower that kept Eliezer from doing these things, “it was a gap in general rationality.”

So if you’re interested in closing the gap, it seems like the skills to prioritize aren’t things like commitment effect and the sunk cost fallacy, but stuff like “figure out what your goals really are”, “look at your situation objectively and list the biggest problems”, “when you’re trying something new and risky, read the For Dummies book about it first”, etc. That’s the stuff I’m interested in writing about.


August 29, 2012

Comments

Maybe some things that clearly look like biases, when studied in isolated/reductionist experiments, are actually rational.

Say there’s a simple folk ‘rule of thumb’ that’s right 90% of the time, but results in a wrong/’biased’ response 10% of the time. However, the cost of being wrong is small, and the marginal benefit of the sensors/rules/neurons/etc. for distinguishing that 10% is less than the benefit of devoting that same decision-making machinery to some other purpose.

So there are errors/biases everywhere… but they’re roughly the right errors/biases given the complexity-budget and circumstances (childhood/evolution/founding-constraints) under which they arose. Or at least, close enough to the ‘right’ errors that they’re only well teased out in modern, rapid-cultural-change, laboratory-study conditions. The ‘bias’ is locally irrational, but globally as efficient as any other strategy implementable within the same constraints.

Outside laboratory-like conditions, can we reliably tell the difference between ‘bias’ and ‘tacit knowledge’? And conversely, won’t some imperfect/overconfident laboratory conditions miscategorize ‘tacit knowledge’ or ‘globally efficient reasoning given a firm cognitive budget’ as ‘bias’?

posted by Gordon Mohr on August 29, 2012 #

Your third article in the Raw Nerve series highlights the benefits of fighting confirmation bias (we tend to look for evidence to confirm we don’t need to change) and the fundamental attribution error (when we fail it is situational, when others fail it is a flaw in their character), although you do not name these specifically.

posted by on September 2, 2012 #
