Way back in 1641, René Descartes dropped Meditations on First Philosophy, in which he famously claimed he was certain that he existed. But towards the end of his little book, he also made some pretty influential claims about the nature of the mind, saying it was totally separate from material substance. I think his arguments for this are bullshit, and I want to try to explain why.
Part One: Reconstruction of Braitenberg
I've been reading Valentino Braitenberg's little book called "Vehicles". With the slogan "Let the Problem of the Mind Dissolve In Your Mind," Braitenberg (a neuroscientist by trade) takes the reader down a charming little garden path towards the much-feared physicalist view, namely that consciousness is the product of physical interactions between atoms and not distinct in some metaphysical way from matter.
The way he does this is by first introducing the simplest form of sensory vehicle: a sensor on the front, a motor in the back, and a connection between them such that when the sensor is activated, the motor runs faster. In this way, the vehicle's behavior is a direct response to its environment.
Next, Braitenberg describes a class of Vehicles with two sensors and two motors, one on each side. This is a marked upgrade on the previous class, for now our Vehicles can turn towards or away from a stimulus. If the stimulus is stronger on one side than the other, the motors will run at different speeds and the vehicle will turn - towards or away from the source, depending on the wiring: connect each sensor to the motor on its own side and the vehicle turns away; cross the connections and it turns towards. Braitenberg, playing with us a little, ventures to call these behaviors fear and aggression, since the vehicles in this class either run directly towards the source of their stimulus, building up speed until they come to a head-on collision with their target (presumably aggressive behavior), or run directly away from it, fast at first and then slower until they can no longer sense it.
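The wiring idea is simple enough to sketch in a few lines of code. This is my own toy model, not anything from Braitenberg's book - the function name and the excitatory-only wiring are my assumptions - but it shows how the same two sensors produce "fear" or "aggression" purely as a matter of which motor each sensor drives:

```python
def motor_speeds(left_sensor, right_sensor, crossed):
    """Return (left_motor, right_motor) speeds for a simple excitatory wiring."""
    if crossed:
        # Crossed wiring: each sensor drives the *opposite* motor, so the
        # side nearer the stimulus falls behind and the vehicle turns toward it.
        return (right_sensor, left_sensor)
    else:
        # Same-side wiring: each sensor drives its *own* motor, so the side
        # nearer the stimulus speeds up and the vehicle turns away.
        return (left_sensor, right_sensor)

# Stimulus stronger on the left:
left, right = 0.9, 0.2

fear = motor_speeds(left, right, crossed=False)
# (0.9, 0.2): left motor runs faster, vehicle veers right, away from the source.
aggression = motor_speeds(left, right, crossed=True)
# (0.2, 0.9): right motor runs faster, vehicle veers left, toward the source.
```

Nothing in there is "afraid" of anything, of course - which is exactly Braitenberg's point.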
There are many more classes of Vehicles which Braitenberg describes and I will not bother to explain here, but they all function in understandable ways, and produce behaviors which we are tempted to understand in psychological terms: love, exploration, and a variety of arbitrary values and special tastes dictated by the arrangement and nature of mechanical sensors.
More complicated vehicles (think simplistic computers) are capable of what Braitenberg calls logic, and when one introduces a form of evolution we can understand how Vehicles might come to be complex and well adapted; furthermore, his entirely mechanistic Vehicles can come to form what looks awfully similar to associations, abstractions, and internal representations of the outside world.
Now, it is entirely reasonable that these vehicles would have evolved to have many of these qualities, since they are all advantageous in some way in the process of survival and reproduction, which inevitably ends up in a sort of arms race with other Vehicles. As evolution proceeds, we encounter Vehicles with the capability to make predictions based on past experience (this builds on associations), and finally vehicles which make decisions based on pleasure. At each stage, Braitenberg gives a physical explanation for the new trait/behavior.
I might have missed something here, but you get the idea. Braitenberg claims that anything a human can do, a vehicle can do. Therefore, there's no evidence to suppose we are anything fundamentally different.
Part Two: Reconstruction of Descartes
Descartes wrote in his sixth meditation that he believes that mind and body are fundamentally and metaphysically distinct. He makes two main arguments for this.
First, he says that we can be certain of our own existence, but we can never be certain of the existence of material things in the same way. Braitenberg is looking at mindedness from a third-person, behaviorist perspective, but the first-person experience is fundamentally a separate issue, since it is known to be real in an immediate way. After all, couldn't the entire material world be an illusion? If it were so, then we would still know for certain that we exist, says Descartes. Therefore, mind could exist independently of body, and therefore the two are metaphysically distinct, different, separate.
Alright. That's wonderful, Descartes thinks. In fact, he claims he's proven that he's right. He's content to say that given that argument, there can be no doubt as to the nature of mind as separate from body. But he also gives another argument, just to drive the point home. He says that body, material substance, is fundamentally different from mind because it is extended. By this he means that it has shape and size, whereas thoughts are completely different. To depart a little bit from Descartes' simplistic understanding of physics (by the standards of our time), he's saying that any physical thing has physical properties, but thoughts do not, and therefore they must be of a totally different nature.
Part Three: My Destruction of the Extension Argument
Here's the part where instead of summarizing other people, I say something original. Kinda.
Alright, so some people are pretty convinced by Descartes' arguments. Sure, he hasn't solved the mind-body problem - we still don't know, for example, how these two apparently distinct substances are related and manage to interact with each other (Descartes totally believed in neuroscience and was even willing to dissect live animals like dogs with absolutely no anesthesia, because he believed that unlike humans, they were Braitenbergian machines and had no real mind). But don't his arguments kinda disprove physicalism? He certainly thought so, and Descartes was a smart guy, no doubt about it.
Well, he was good at math. As for his Meditations on First Philosophy, I'm not so sure - some parts are pretty atrocious. While it wasn't immediately obvious to me that Descartes was being dumb here, I feel like it wouldn't have been that hard for him to come up with these objections and at least say something to address them. But he didn't. Who knows why.
Descartes' extension argument is kinda decimated by a basic understanding of the way the world works. Sure, material things have weight, extension (volume), and so on. Sure, thoughts don't really have that. But atoms and molecules can form PATTERNS which don't have extension! Does a square have extension? Well, a physical one does, but the abstraction of a square doesn't.
Okay, so Descartes would say that these abstractions of a square, things without extension, can only exist in the mind. But machines can count! They can also recognize squares. So, let's say we have a Braitenbergian machine which can see lines and measure angles. It has a logical circuit which goes "DING DING DING" whenever it counts four sides to a shape and notices that the angles are all 90 degrees. Then this machine knows what a square is! Or even if you say "well, the machine doesn't know anything," you still have to admit that the abstraction of the square exists independent of any mind. And that abstract square, like a thought, has no extension.
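The "DING DING DING" circuit can be made concrete in a few lines. This is a toy version of my own invention (the function name, tolerance, and corner-list representation are all assumptions); it checks exactly the criterion above - four sides, all meeting at right angles - which, pedantically, detects any rectangle, so a true square detector would also compare side lengths:

```python
def is_square_like(corners, tol=1e-6):
    """DING DING DING: four corners, and every interior angle is 90 degrees."""
    if len(corners) != 4:
        return False
    for i in range(4):
        ax, ay = corners[i - 1]          # previous corner
        bx, by = corners[i]              # this corner
        cx, cy = corners[(i + 1) % 4]    # next corner
        # Vectors along the two sides meeting at corner b.
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        # A 90-degree angle means the dot product of the two sides is zero.
        if abs(v1[0] * v2[0] + v1[1] * v2[1]) > tol:
            return False
    return True

print(is_square_like([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(is_square_like([(0, 0), (2, 0), (3, 1), (0, 1)]))  # False
```

The point stands either way: the machine is responding to the abstract pattern, not to any particular extended object.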
Now consider a physical neural network with nodes and connectors. Something I often find helpful to imagine is an absolutely giant neural network composed of reservoirs of water connected by channels of varying widths, with water flowing between them when gates are opened or closed by the water pressure within the system. This probably isn't realistic, and if it doesn't help you to think of it this way, don't - but the advantage here is that we are unlikely to ascribe some kind of innate thinking capacity to a system of reservoirs.
Anyway, consider the system. Well, it stands to reason that while this system has extension and all sorts of physical properties, the connections and patterns which it forms, the large scale rules which govern its processes, they don't have extension. The big water system could easily be translated and represented as some computer model with none of the same physical relationships and it would still function and "think" in precisely the same way.
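To make the translation claim concrete, here is a tiny illustration (all numbers and names invented by me): a threshold network where a node "opens its gate" when the weighted inflow from its neighbors exceeds its threshold. Whether the nodes are reservoirs or variables in a program, the same connection pattern computes the same thing:

```python
def step(levels, weights, thresholds):
    """One update of a threshold network: node j 'opens its gate' (outputs 1)
    when the weighted inflow from the other nodes reaches its threshold."""
    new = []
    for j, thresh in enumerate(thresholds):
        inflow = sum(levels[i] * weights[i][j] for i in range(len(levels)))
        new.append(1 if inflow >= thresh else 0)
    return new

# Three "reservoirs": node 2 opens only when both node 0 and node 1 are full.
# The pattern computes AND, regardless of what the nodes are made of.
weights = [[0, 0, 0.5],
           [0, 0, 0.5],
           [0, 0, 0]]
thresholds = [1, 1, 1]

print(step([1, 1, 0], weights, thresholds))  # [0, 0, 1]: node 2 opens
print(step([1, 0, 0], weights, thresholds))  # [0, 0, 0]: node 2 stays shut
```

The physical properties of the water - its weight, its volume, its extension - appear nowhere in the rule; only the pattern does.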
So, thoughts don't have extension, but who cares. The hypothesis that they are emergent from the patterns of arrangements of things with extension is a perfectly good hypothesis as far as I am concerned.
Part Four: My Destruction of the Certainty Argument
Alright, so Descartes made his little certainty argument a while back. Let me copy/paste that here:
First, he says that we can be CERTAIN of our own existence, but we can never be certain of the existence of material things in the same way. Braitenberg is looking at mindedness from a third-person, behaviorist perspective, but the first-person experience is fundamentally a separate issue, since it is known to be real in an immediate way. After all, couldn't the entire material world be an illusion? If it were so, then we would still know for certain that we exist, says Descartes. Therefore, mind could exist independently of body, and therefore the two are metaphysically distinct, different, separate.
Alright, so maybe the extension thing is silly, but what about this? If mind can exist without body, then mind must be separate from body! What's there to say?
Well, well, wellllll, my friend Descartes, you're claiming that the first-person conceivability of the mind existing without the body implies the possibility of the mind existing without the body. I think that the difference in certainty can be explained quite directly. Consider the uncertainty Descartes feels about the outside world. "Well," he says, "I only experience the outside world through my senses, which I experience as ideas in my mind. Therefore I have no direct evidence that the world truly exists: my senses might be lying to me." We're on board with that! Hurrah! But your mistrust comes from an inability to experience things, as a consciousness, in any way other than cognition. Your language of existence is the language of the mind, of your own mind, connected by neurons, says Braitenberg. Therefore you cannot have complete trust in the translation mechanisms (senses), because you cannot understand how they work - as bridges from the outside world to the language of the mind, they are necessarily not entirely understandable by you, the being who only understands mind-language. Therefore, for all we know, the translators might actually be generators, and we are hallucinating everything we experience. We can never be sure.
The reservoir system might have a light sensor which opens certain gates when it detects the sun, allowing water to pass through. It cannot be said to be directly aware of the sun, and it certainly cannot understand the light sensor, which operates on different principles. Therefore it can never know with certainty whether the sensor malfunctioned or was tricked.
In contrast, your own mind speaks your own language. You interact with it directly through the same pathways by which you exist. This first person experience has a degree of certainty which translated reality will never have simply due to its direct nature. Therefore, the certainty gap is one of perspective, not necessarily a difference in metaphysical nature between mind and body; and to claim that an uncertainty difference caused by perspective is grounds for the concrete ability of one thing to exist without the other is like saying that since we can see the light of the sun even when we aren't looking directly at the sun, the sun's light must be able to exist independently of the sun's existence. It's nonsense.
Part Five: But Can My Subjective Experience Really Emerge From Neurons Interacting? That's So Hard to Imagine!!
I'm going to be Ronald, an older middle-aged mechanic who is absurdly good at fixing cars but also is a secret student of philosophy.
Well my God, you really are trying hard to prove that everything is just cells and electricity, that at the end of the day we're all just machines, like the cars I fix. What makes you so committed to this argument? Can't you just use your common sense?
Look at things this way. You said yourself that the mind could just as well be made out of some big reservoirs with channels of water running around and gates opening and closing. Now remember your experience of the color RED. Close your eyes and feel what it feels like to see red. Or even better, look at a red thing.
Are you really trying to tell me that this big moat contraption would be able to have a qualitative experience like "seeing red" in the same way that we have this experience? Sure, it could have a sensor and maybe let a lake drain when the sensor detects red - fair enough. But if that was what went on in my mind when I see red, I would experience it as such. My introspection would yield awareness not of qualitative thoughts and feelings but of lots of neurons in different states. And it wouldn't really be "awareness" in the way that I have awareness - it would be just an internal representation of an internal state, existing as a complex physical object, like how cars have sensors which report to a central computer.
Okay Ronald, thanks for your constructive criticism. I appreciate how your skepticism pushes me to make my own argument more clear and rigorous. In the end, you are just a useful idiot, unwittingly serving my physicalist ends. *insert maniacal laugh*
Imagine what it would be like if you did, upon introspection, notice a whole bunch of neurons firing and tangling up with each other to form crazy patterns. That would be super cool and probably beautiful, but it honestly wouldn't be that useful for purposes of survival. It would be like a high-level programming language trying to interpret itself as raw binary. Wouldn't that throw away much of the point of having the higher level at all - to make the binary mean things?
Therefore, when we introspect, we do not see our individual neurons firing - that would be way too much useless information, so we didn't evolve that way! Instead, we have evolved to notice what actually matters: the meaning of it all. So I don't need to know what neurons are firing when I see red; all I need to know is that I see red.
Please write to me with any objections/complaints/accusations.