Raw Thought

by Aaron Swartz

Consciousness Clarified

You ever notice how when you learn a new word you begin seeing it used everywhere? Lately I’ve been feeling that way about consciousness. I knew the word before, obviously, but lately I’ve clarified my thoughts about what it is and sloppy usage of the term sticks out like a sore thumb.

“Consciousness”, the dictionary kindly explains, is “the state or condition of being conscious.” And we all basically know what it means to be conscious. You poke someone awake and ask “Are you conscious?” You get hit on the head by a large rock and you get knocked unconscious. Being conscious, in short, means being awake, being aware of your surroundings, seeing colors and feeling pinches and hearing songs.

Now there’s something weird about being conscious — something so weird, in fact, that I’ve found many people are bizarrely tempted to deny it. Consciousness is what the philosopher John Searle calls “ontologically subjective”. That is, when you see the color red, while it’s true that all sorts of complicated things happen in your eyes and brain, a particular experience — the one we call “seeing red” — happens only to you. If aliens with the most powerful viewing technology possible beamed down to earth and peeked inside your brain, they’d still have no idea what the color red looked like. They’d see that a certain wavelength of light triggered certain electrical impulses in certain centers, but they’d never see red. It’s just not there.

Now we don’t know for sure what causes consciousness (it’s an ongoing research project) but whatever the answer is, it must be caused by something. Yet this obvious fact is continually missed by laypeople who make bizarre comments like “as soon as computers become self-aware, they might become conscious”.1 This is as absurd as saying that as soon as computers are told about food, they might start digesting things.

Consciousness isn’t some vague property of things that look smart to us. It has a real, physical meaning: feeling things. I suppose it’s logically possible that a talking robot might start feeling things, but the chances seem awfully remote.


  1. Example: This week’s New York Times Magazine suggests “a robot might exhibit the first glimmers of consciousness, ‘namely, the reflexive ability of a mind to examine itself over its own shoulder.’” 

August 1, 2007

Comments

Now we don’t know for sure what causes consciousness (it’s an ongoing research project) but whatever the answer is, it must be caused by something. Yet this obvious fact is continually missed by laypeople

And not just laypeople: this is precisely the debate.

One side defines consciousness as the part of our minds which is not merely machine, thus by definition is something a computer could never have. The other side claims there is no such thing, that our entire experience of consciousness lies on the physical side of the mythical mind-body divide.

You seem to come down on the first side of this, by the usual method of stating that it is entirely obvious that consciousness is ‘something else’. You owe it to yourself to read Dennett, who I think makes it at least plausible that this need not be so.

posted by improbable on August 1, 2007 #

p.s. I might add that I find the use of the word consciousness for any robot we could build soon about as idiotic as you do.

There must be some grey area, encompassing newborn children and sheepdogs and dolphins. But certainly not slugs and jellyfish and lobsters.

posted by improbable on August 1, 2007 #

Have you been reading Nicholas Humphrey’s Seeing Red? He makes a big deal of the distinction between the sensation of seeing red (“redding”, he sometimes calls it) and the perception of red. He has shown that some people with the usual bizarre kind of brain damage that these researchers love can perceive objects without the experience of seeing them (blindsight), so the two are different. He sees the evolutionary role of sensation as being to make us care about what we perceive - patients who can perceive without sensation tend not to make the effort to do so after a while.

Anyway, I’m with you on the clarification thing, but with “improbable” on the something else thing - some of the time at least.

posted by tom s. on August 1, 2007 #

Aaron, please do yourself a favor and immediately start reading Douglas R. Hofstadter’s new book I Am a Strange Loop. It is the culmination of his decades of work exploring the nature of consciousness. The highly oversimplified version of his view is that consciousness arises from a representational system of patterns, including a pattern of “self” and an ability to recursively cross levels of abstraction in these patterns.

Read the book already, it’s great.

posted by Ian on August 1, 2007 #

“Consciousness isn’t some vague property of things that look smart to us. It has a real, physical meaning: feeling things.”

I’m not really sure what you’re trying to say here. It is surely “real” and “physical” but since everything in our universe is real and physical, that’s a tautology. When you get punched in the head, the physical changes in the fist, skin, skull and nerves cause physical changes in your brain which the illusion that is “you” interprets as pain and shock and so on.

Saying “I feel pain” is useful shorthand for describing the physical state of the universe, in particular: the cells in your brain are arranged and firing in some particular way. Nobody’s going to argue with that, just as nobody argues that “fist” is a useful shorthand for describing the bundle of bones, tissues, and blood at the end of someone’s arm balled up in a certain way.

Since this shorthand concept “consciousness” seems to be an emergent property of a sufficiently sophisticated agent whose model of the world includes both representations of others’ models and of its own, I would frankly be pretty surprised if a robot could ever be built that was honest-to-goodness intelligent without its also being conscious. Then again I have a pretty high bar for “intelligent” (passing one Turing test doesn’t cut it).

posted by Jamie McCarthy on August 1, 2007 #

“If aliens with the most powerful viewing technology possible beamed down to earth and peeked inside your brain, they’d still have no idea what the color red looked like.”

Or, maybe we just don’t know enough about it yet. The conclusions that you’re jumping to seem preposterous to me; why should we think that consciousness is ontologically subjective except for our own ignorance?

I for one expect consciousness to join planetary orbits and the origin of species in the list of great, formerly mysterious phenomena that we now understand. In the light of recent history, it seems like folly to say that anything is impossible to understand.

By the way, another concern I have with the views you’ve expressed previously is that they seem to imply that some special property of the universe was sitting around for fifteen billion years, waiting for us. Maybe that will turn out to be true, but it would be a reversal of the trend that we’ve seen in most major scientific discoveries.

posted by David McCabe on August 2, 2007 #

Add http://www.esgs.org/uk/art/sands.htm to your “must read” list.

It could make the dark things clear.

Love.

posted by William Loughborough on August 2, 2007 #

Jamie,

Not everything that one might talk about is real and physical. Real numbers, for instance, are neither real nor physical. Yet we talk about them and describe their properties. So Aaron’s statement is not tautological, though I think it is wrong for other reasons: A “feeling” is about as unreal and unphysical as something can get.

Also, with regard to your oblique assertion (à la Dennett) that consciousness is an illusion: this has always struck me as preposterous. An illusion is what happens when one’s senses are misled into perceiving that which is not. In other words, an illusion implies an observer. So, you see, calling consciousness an illusion just begs the question.

posted by Mark on August 2, 2007 #

Searle is right: consciousness is “ontologically subjective”. That is its most important characteristic. Interesting that you picked up on it :) This doesn’t explain consciousness but it gives us a place to start. The brain processes observed from the outside are not observed (felt) the same as when you are that brain. Lots of processes are like that … even motion through space, which looks quite different if you are the thing moving. If you think about it that way, it’s not quite so surprising.

posted by Seth Russell on August 2, 2007 #

improbable: I think I provided a pretty clear idea of what consciousness was without excluding the possibility of computers having it. Dennett’s position seems, like most of his positions, completely unreasonable.

Mark: We believe things are conscious both because of what they do and because of the mechanism they use to do it. We know that dogs are conscious because we know they have brains similar to ours. Similarly, if I call your telephone and make your telephone say “I’m conscious! I see red!” you still won’t think your telephone is conscious.

Jamie: In the section you quote, I wasn’t trying to say that consciousness was a physical fact as opposed to taking place in some other dimension. I meant it was a thing like “digestion” as opposed to a vague notion like “intelligence”. People often treat “conscious” like “intelligent”: “perhaps someday this computer will be intelligent” is a reasonable statement; “perhaps someday this computer will be conscious” is not so much.

Jamie claims that ‘Saying “I feel pain” is useful shorthand for describing … cells in your brain … firing in some particular way’. This is close to the Dennett/behaviorist position and I find it completely bizarre. It’s true that every time you feel pain, a certain set of cells in your brain fires in a pattern, but the pain is caused by that firing; it’s not simply that firing. If you pinch yourself, you won’t just cause some neurons to fire — you’ll feel a pain in your arm. I don’t have the patience to spell this out in more detail, but if you still don’t believe me, I recommend reading Searle’s discussion of behaviorism in The Rediscovery of the Mind and, more briefly, in Mind: A Brief Introduction and, briefer still, in The Mystery of Consciousness.

Meanwhile, David claims that if we knew more, we wouldn’t think of sight as ontologically subjective. I’m not sure what David expects us to learn. I can’t think of anything we could possibly find that would communicate to aliens what the experience of red is like. And there’s certainly no evidence that any such magical thing exists — it seems pretty clear that our brains are neuron firings and the like.

posted by Aaron Swartz on August 2, 2007 #

We know that dogs are conscious

We do?

“Perhaps someday this computer will be intelligent” is a reasonable statement; “perhaps someday this computer will be conscious” is not so much.

I don’t see how you can say that until we thoroughly understand what consciousness is and what causes it! Consciousness is possible in this universe. But we don’t know what causes it. So how can you be so sure that computers can’t possess it?

I have no idea what we might learn. Neither do you. Keep an open mind. And after clarifying for us that experience is not the same as the neurological systems that cause the experience, I find your last statement pretty bizarre. I’m not waiting for some whole new aspect of the universe to be revealed. I’m just saying that the future hasn’t happened until it happens.

Instead, wasn’t it you who was saying that “something physical” about “the brain itself” causes consciousness? Your assertion that neurons can experience consciousness, but transistors or marbles* cannot, seems to imply an élan de conscience much more than anything I have said here.

By the way, when you email somebody your reply to them, you really should put it into the second person.

  * http://www.youtube.com/watch?v=GcDshWmhF4A

posted by David McCabe on August 3, 2007 #

David is attacking a straw man. I never said computers or marbles can’t possess consciousness.

posted by Aaron Swartz on August 3, 2007 #

You’re partially right; I did incorrectly remember what you said before. Slightly.

http://www.aaronsw.com/weblog/searle

By the way, I’m getting packet loss and slow responses from your server. Might want to look into it.

posted by David McCabe on August 3, 2007 #

Why do you need to bring aliens into the picture? For the subjective experience you refer to as “seeing red”, other humans can’t experience it either. Imagine aliens that have the technology to take human eyes and incorporate them into their own anatomies. Does that give them the ability to have the same subjective experience as humans?

posted by ThomasW on August 3, 2007 #

None of this seems that hard to me. A mental model is a (necessarily very) abridged map of some collection of probabilities in time-space. Living creatures evolved to the point where they needed good mental models not only of their environment (to avoid the quicksand), and not only of other creatures’ mental models (to guess what the lion is going to do next), but of every human around them, including themselves (to guess who’s going to help or screw over whom). Just as the creatures are benefited by sensing pain, they are benefited by sensing that they are a continually existing, singular conscious entity moving forward in time. It seems clear that this belief in our own consciousness is an evolutionary advantage in a socially rich environment of long-lived agents: my ancestors without it probably couldn’t have been as concerned with setting long-term goals aimed at survival and reproduction.

It’s hard to have a fair discussion about this. Every response to this thread has talked about “I,” “we,” “you,” which presuppose there is something real and physical about the referents of those pronouns. Just as “pi” refers to the concept of a ratio of an ideal circle’s circumference to its diameter and does not actually exist in our universe, “I” refers to a very complex collection of symbol-manipulating mental models being run like a tinkertoy computer by the physical matter inside my skull. It’s shorthand that we can’t really live without because we are these things. Pi is an emergent property of elementary geometry, and I am an emergent property of physics. Once we all agree on that point we can start having a meaningful conversation about consciousness.

I’ll put the Searle books on my reading list, but I doubt I’ll be all that impressed; the Chinese room thought-experiment is founded on a severe, I would say laughable, misunderstanding of the relationship between brains and minds. My reaction to reading about it for the first time was “surely this isn’t what passes for philosophy about the mind nowadays, it’s so stupid” and so I’ve never been interested in reading more Searle. (I think I first read of it in “The Mind’s I”; I would have been 11 or 12.)

posted by Jamie McCarthy on August 3, 2007 #

Aaron,

if seeing red is something more than neurons firing, then why is it that my patient last week with a stroke in the occipital lobe can’t see in an as yet not entirely defined region of his visual field? He was alert and oriented to time, person, and place, and has no evidence of pathology from the film of tears overlying his eye all the way back through the optic nerve, the optic chiasm, the radiations, and into the occipital lobe.

The information isn’t being processed. What is not being informed? His soul? His consciousness? Wernicke’s area is still intact: he has the word red. Broca’s area is intact: he can construct sentences and use the word red appropriately. The biological deficit is in a part of the brain further from the eye, logically, than the banks of the calcarine sulcus, specifically a huge area of white matter in the occipital lobe.

The biological “it’s neurons” model has proven better than any other model, since Cajal, in explaining psychiatric and neurologic illnesses (like strokes that involve virtually every element of ‘consciousness’) and the effectiveness of the treatments that are used. It explains the stages of development. It is more thoroughly tested than any other model and has held up to innumerable experiments.

If you maintain that ‘red’ is not in the neurons, you will need to explain why hormones and psychoactive chemicals affect thought processes, and explain the repetitive pattern of thought changes that follows spatially similar strokes. You will need to explain color blindness. And you will need to advance an alternative explanation.

I think, if anything, the word is misleading. Like ‘Open Source’ software, ‘consciousness’ is an umbrella term of convenience; its scope makes it inherently resistant to testable definition.

posted by Niels Olson on August 5, 2007 #

I just ran across this while reading your scifoo post. Ask your fellow scifoo camper Jeff Hawkins about red. Here’s his TED talk.

posted by Niels Olson on August 5, 2007 #

And another TED talk: Dan Dennett

posted by Niels Olson on August 5, 2007 #

Jamie accurately points out that living creatures “need[] good mental models” of their environment, themselves, and others. But then he says this amounts simply to a “belief in our own consciousness”. I tried to show that consciousness isn’t simply a belief; it’s a real physical process that has to be addressed.

And, for what it’s worth, those Searle books are vastly superior to the Chinese Room, which Searle never saw fit to publish in a real book (my sense is that he’s a tad embarrassed by it).

posted by Aaron Swartz on August 6, 2007 #

Imagine aliens that have the technology to take human eyes and incorporate them into their own anatomies. Does that give them the ability to have the same subjective experience as humans?

Of course not. Human eyes just convert light into electrical impulses, like web cams. Plugging a web cam into a computer doesn’t give it any particular subjective experience.

posted by Aaron Swartz on August 6, 2007 #
