ARE THERE ANY GODS?
TRADITIONAL ARGUMENTS EXAMINED,
WITH A CONSIDERATION OF
THE PROBLEM OF EVIL
I. The first-cause argument
According to the first-cause argument, everything in this world has to have a cause, and that cause in turn must have had its cause. If we follow this chain of causation back far enough, we must come ultimately to a First Cause, which must be God.
Critique: To say that "God is the cause of the world" immediately raises the question "What is the cause of that god?" If everything must indeed have a cause, then a god also must have a cause. If, however, anything can exist without a cause, it might just as well be the universe as God. If we are going to speculate that something can exist without a cause, it’s better for it to be something we actually know exists (the universe) rather than something we can’t even detect (a god).
Even if we allow the argument that "God" is the cause of the universe, that doesn't accomplish anything for the world's religions, since "God" is then simply a synonym for "cause," and there is no way to know anything more about it other than that it created the universe. In fact, we can't even know if "it" is an "it." It might have gender, as did the gods and goddesses of antiquity, or it might be some sort of animal (as were other deities of the past). It might be singular or plural. There is no more evidence that a single god created the world than that the world was created by a divine committee of some sort.
To conclude from the first-cause argument that a god exists is misleading, since the most that could be concluded if the argument were valid is that a cause exists. Furthermore, it should be noted that modern physics has seriously undermined the concept of causation. In quantum mechanics and related areas, statistical concepts have replaced the concept of cause and effect. If the concept of cause is not needed in certain areas of physics, it also may not be needed at the level of the universe. Physicists tell us that "virtual particles" are popping into and out of existence all the time. Why not universes also?
Finally, it is more plausible to suppose the universe has always existed (in some form or other) than that the only thing that has existed forever is something we can't even detect (a god). How long the "cosmic egg" existed before it exploded in the "Big Bang" that formed our universe is unknown and perhaps unknowable. To say that the Big Bang had to have had a cause and that cause is God gets us back to our first question: What was the cause of the cause of the Big Bang? If anything can exist without cause, it might just as well be the Big Bang.
II. The argument from natural law
According to the argument from natural law, the existence of the laws of nature implies the existence of a law-giver—a god. Without this god creating and enforcing these laws, the world would be chaos. Without the law of gravity, for example, planets would be moving all over the place. A god is needed to keep the universe orderly.
Critique: First of all, this argument confuses human laws with natural laws. While it is true that there would be no human laws if there had been no law-givers such as Hammurabi or the United States Congress, so-called natural laws do not require a law-giver. "Natural laws" are simply statements of how in fact the universe is seen to act.
Secondly, the term "natural law" is used less and less now in science, being replaced by the term "theory." Although we still speak of Newton's laws, those laws have been largely replaced by Einstein's theory. Since natural laws are simply descriptions of how in fact things operate, there is no need to suppose those things are being coerced into acting that way. Furthermore, we have already seen that in certain areas of science these so-called laws are actually statistical averages of the sort that derive from the "laws" of chance. A law-giver is no more needed to explain them than to explain the fact that when we throw dice, double sixes can be expected about one out of every thirty-six throws.
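The dice figure can be checked directly. This short simulation (my illustration, not part of the original essay) shows a stable statistical regularity emerging from pure chance, with no law-giver enforcing it:

```python
import random

def double_six_rate(trials=100_000, seed=42):
    """Estimate how often two fair dice both come up six."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
        if d1 == 6 and d2 == 6:
            hits += 1
    return hits / trials

# The exact probability is 1/36 (about 0.028); a large enough run
# settles near that value as reliably as any "law" of nature.
print(double_six_rate())
```

With enough throws the observed frequency converges on 1/36 whether or not anyone decrees it.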
Finally, even if we supposed that things in nature behave the way they do because they are obeying divine commands, we must then deal with the question "Why did God issue just those natural laws and not different ones?" The Nobel Prize-winning philosopher and logician Bertrand Russell explains:
"If you say that he did it simply from his own good pleasure, and without any reason, you then find that there is something which is not subject to law, and so your train of natural law is interrupted. If you say, as more orthodox theologians do, that in all the laws which God issues he had a reason for giving those laws rather than others—the reason, of course, being to create the best universe, although you would never think it to look at it—if there were a reason for the laws which God gave, then God himself was subject to law, and therefore you do not get any advantage by introducing God as an intermediary. You have really a law outside and anterior to the divine edicts, and God does not serve your purpose, because he is not the ultimate law-giver."
III. The argument from the “anthropic principle”
An argument derived from the natural-law argument, this argument claims that the laws of nature are exactly those needed to create and sustain human beings. If the force of gravity, say, or the charge of the electron were slightly different, human life would be impossible. If water did not expand when it freezes, life itself would be impossible. If the laws of nature were slightly different, we would not be here. In fact, however, the laws of nature are exactly those needed for human existence. They must have been selected by a divine creator especially for the purpose of creating human beings.
Critique: This argument supposes, without evidence, that the so-called laws of nature could be different. As far as we can tell, the laws of nature are the way they are of necessity—they can't be different. (Of course this cannot be proved, but we have no evidence to make us think otherwise.) Furthermore, it raises the question, "Why did God want to create humans?" As with the natural-law argument, if God wanted to create humans for no reason other than his own good pleasure, we have something not subject to natural law. If, however, he had a reason for the human-compatible laws he made, then once again we have a law outside and beyond God. Creating a god as the author of the anthropic principle does not really answer the question "Why are the laws of nature compatible with human life?"
Furthermore, it is not possible to show that sentient life of some sort could not exist in at least some other imaginable universes with different laws, and so the uniqueness of the human situation is lessened somewhat. However this may be, it must be observed that even though the laws of nature are compatible with human life, they seem to have produced humanity more by accident than by purpose. After all, somewhere between twelve and twenty billion years have elapsed since the time of the Big Bang, but humans of our kind have only existed during the last 250,000 years or so. If the purpose of the universe was to create human beings, its creator seems to have been extremely lacking in motivation!
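To put those timescales side by side, here is a back-of-the-envelope calculation using the figures quoted above (my arithmetic, not the essay's):

```python
# Figures from the essay: 12-20 billion years since the Big Bang,
# humans of our kind for only the last ~250,000 years.
universe_age_years = (12e9, 20e9)   # low and high estimates
human_span_years = 250_000

for age in universe_age_years:
    fraction = human_span_years / age
    print(f"With a {age / 1e9:.0f}-billion-year universe, "
          f"humans span {fraction:.5%} of its history.")
```

On either estimate, humanity occupies about two thousandths of one percent of cosmic history—a curiously slow start for a universe whose whole purpose we are supposed to be.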
Ultimately, there is a striking similarity between the claim that all the laws of nature are the way they are just so we can be where we are and the story of two men lost in the midst of a giant shopping mall. After wandering about for a while, they come upon an information kiosk with a map of the mall. They search the map and find an arrow labeled "YOU ARE HERE." One guy says to the other, "You know? This is amazing. Every time I get lost in a mall and find one of these signs, the arrow is always absolutely correct. I always am exactly where it says at the moment!"
IV. The argument from design
This is very similar to the anthropic-principle argument. According to it, not only were the laws of nature created just for humans; nature itself and the structures of the organisms in it carry the marks of a divine designer and were created for us. Just as a watch betrays the existence of a designing watchmaker, so too the human eye and the world of nature in general bespeak the existence of a divinely talented designer.
Critique: Ever since the time of Charles Darwin, we have understood that environments were not created to be compatible with the life-forms living in them. Rather, living things have evolved adaptations that allow them to live in the environments in which they are found. Thus, land vertebrates had aquatic ancestors that evolved adaptations that allowed them to live on land. Whales, on the other hand, had terrestrial ancestors that evolved adaptations to live in the water again!
The principles of mutation and natural selection are able to accomplish everything theologians once thought only a divine watchmaker could do—including the creation of the human eye. Since evolutionary processes are in effect a "blind watchmaker," we are not surprised to learn that the human eye is not "designed" as well as could be imagined if real intelligence were behind its creation. Not only are our retinas on backwards (the photoreceptor cells face toward the center of our heads instead of frontwards), the eyes of many of us are poorly shaped and we have to wear glasses. Although we may suppose our noses are divinely shaped so they can support the spectacles needed for clear eyesight, it is suspicious that the supposed intelligence that created noses suitable for spectacles did not at the same time provide the spectacles. Once again, to quote Bertrand Russell:
"When you come to look into this argument from design, it is a most astonishing thing that people can believe that this world, with all the things that are in it, with all its defects, should be the best that omnipotence and omniscience have been able to produce in millions of years. I really cannot believe it. Do you think that, if you were granted omnipotence and omniscience and millions of years in which to perfect your world, you could produce nothing better than the Ku Klux Klan or the Fascists?"
V. The argument from moral necessity
One form of this argument claims that if there were no god, there would be no difference between right and wrong. All would be permitted. A righteous god is needed to preserve morals.
Critique: Strictly speaking, this cannot prove the existence of anything, let alone a god. It can only frighten people into believing in a god for fear of losing their moral systems. The presence or absence of morality is logically unrelated to the question of whether one or seventy gods exist.
But there are further problems with the argument. If indeed there is a difference between right and wrong, is that difference due to a god's arbitrary fiat (command), or is there some inherent principle by which right and wrong can be distinguished? If it is simply a god's fiat or whim that makes right right and wrong wrong, then for the god him/her/its-self there is no difference between right and wrong, and it is meaningless to say that the god in question is good. If, however, a god's commandments are good independent of the fact that he/she/it made them, then it was not through that god that right and wrong came into being: they are logically independent and separate from the god. There is then no reason to look to a god to understand good and bad. Good and bad would then be principles that we can figure out without the need of a god.
VI. The argument from prophecy
This argument claims that prophets in the Old and New Testaments of the Christian Bible made scores of prophecies that were fulfilled centuries later, long after the prophets had died. Only if a god had given them supernatural foresight could they have done that. Thus, the great accuracy of his prophets proves the existence of the Christian god.
Critique: This argument is based upon the astonishing assumption that when the prophets were speaking to the people of their times, they weren't really speaking to them! Thus, when Jesus said "I tell you this: the present generation will live to see it all. Heaven and earth will pass away ... " [Matt. 24:34-35], he wasn't speaking to his own generation. (Imagine their disappointment!) When he said "I tell you this: there are some of those standing here who will not taste death before they have seen the kingdom of God already come in power" [Mark 9:1], he wasn't really talking to the people standing before him! Jesus apparently was only fooling those people into thinking he was talking to them, if we are to believe the argument from prophecy.
But there are other problems with the argument from prophecy, foremost of which is the simple fact that prophecy frequently fails. Not only was Jesus wrong in the two prophecies just quoted, we have prophecy failing in the Old Testament as well. Ezekiel [chapter 29] prophesied that Nebuchadnezzar, King of Babylon, would conquer Egypt and destroy and disperse its people; that Egypt would remain uninhabited for forty years; and that the Nile would dry up. Of course, none of this came true.
Worse than the failed prophecies are the falsified prophecies—prophecies written after the events in question had already become history. Scholars have shown, for example, that the Book of Daniel is a forgery written centuries after the time of the Babylonian captivity (597–538 B.C.E.), probably around the year 165 B.C.E. The prophecies in Daniel that actually came true were already history to its author.
The divine inspiration of the Judaeo-Christian scriptures—and thus the existence of a divinity inspiring them—is further called into question by the great numbers of contradictions in those scriptures. Thus, Daniel 1:1-2 says that it was King Jehoiakim who was carried off into captivity by Nebuchadnezzar; 2 Kings 24:6-12 says Jehoiakim was already dead and that it was his son, Jehoiachin, who was taken captive.
VII. The problem of evil
This is actually an argument against the existence of a god who is good, omnipotent, and omniscient at the same time. It was first stated clearly by the Greek philosopher Epicurus, who lived 341-270 B.C.E.:
Either God wants to abolish evil, and cannot;
Or he can, but does not want to;
Or he cannot, and does not want to.
If he wants to, but cannot, he is impotent.
If he can, but does not want to, he is wicked.
If he neither can, nor wants to,
He is both powerless and wicked.
But if (as they say) God can abolish evil,
And God really wants to do it,
Why is there evil in the world?
Put simply, this argument rules out the possibility that a god, if there is one, is simultaneously good, omnipotent, and omniscient. This effectively rules out the god of Christianity, but perhaps does not rule out the gods of certain other religions.
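Epicurus's dilemma amounts to a small case analysis, and it can be checked mechanically. The sketch below (my formalization, not Epicurus's wording) enumerates every combination of "wants to abolish evil" and "can abolish evil" and confirms that no combination leaves a god who is both good and omnipotent in a world that still contains evil:

```python
from itertools import product

rows = []
for wants, can in product([True, False], repeat=2):
    good = wants              # indifference to evil forfeits goodness
    omnipotent = can          # inability to abolish evil forfeits omnipotence
    evil_persists = not (wants and can)
    rows.append((wants, can, good and omnipotent and evil_persists))

# In none of the four cases is the god good and omnipotent while evil persists.
assert not any(compatible for _, _, compatible in rows)
print("No combination reconciles goodness, omnipotence, and evil.")
```

The only case in which the god is both good and omnipotent (wants and can) is precisely the case in which evil should not exist; since evil plainly does exist, that case is excluded.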
VIII. The Argument Against Proving a Negative
BELIEVER: How do you know there is no god? What proof do you have?
ATHEIST: How do you know there is no Easter bunny? How do you know there is no Santa Claus? Have you disproved the existence of Thor and Osiris?
BELIEVER: Be serious! Those are just myths made up by men. I'm talking about God!
ATHEIST: Well, the burden of proof is on you to prove that a deity exists. I don't have to prove a negative. The burden of proof is always on the person who alleges the existence of something.
BELIEVER: I don't buy that. You have to prove that my God does not exist.
ATHEIST: Your god? Singular? How do you know there aren't lots of gods? Have you disproved the existence of goddesses?
BELIEVER: Don't be silly! I'm talking about the existence of God—the creator of the universe.
ATHEIST: Ah! Now we're getting somewhere! You're talking about me!
BELIEVER: Since when are you God?
ATHEIST: Since just a bit more than an infinite length of time. Of course, I created you just three minutes ago.
BELIEVER: That's crazy! I'm fifty-seven years old!
ATHEIST: Of course you think you are: I created those memories in you, and I altered everyone else's memories also, to make it appear that you were around before three minutes ago.
BELIEVER: I suppose you created my birth certificate too! What evidence do you have to support such an absurd idea?
ATHEIST: Ah! So you're beginning to understand that the burden of proof is on the person who makes the claim of a god's existence. Don't you think you should try to disprove the claim that I am god?
BELIEVER: Well, maybe. If you're god, why don't you perform a miracle?
ATHEIST: Good question. Unfortunately, I don't do miracles anymore. I could if I wanted to, but I've decided that from now on, people have to believe in me through faith. Being God, I've just now read your mind and I see you're thinking that you might be able to torture me into confessing that I'm not God. Well, scrap that idea! I might very well decide to pretend to be in pain and confess all sorts of silly things. But believe me, I would punish you for eternity after you die!
BELIEVER: Hey, that's not a legitimate argument. There's nothing I could ever do to disprove your claim of divinity. You could always wriggle out of it by claiming you'll show me after I'm dead!
ATHEIST: Very true! You're learning how impossible it is to prove a negative. But you're learning one even more important lesson.
BELIEVER: What's that?
ATHEIST: You're learning that it is stupid to argue about propositions that can't be tested even in the imagination. For every test you could imagine to try, I could come up with a way to evade your net—in just the same way as the preachers tell me your god doesn't want to get involved in my tests. My claim to divinity can't be tested. Your claims of the divinity of Jehovah or Jesus can't be tested either. If I call upon your god to strike me with lightning if I'm wrong, I guarantee nothing will happen. Your god won't get involved any more than I will. Claims that can't be tested even in the imagination are meaningless. They can't even be false. We don't need to waste our time trying to disprove them. You aren't going to waste your time trying to disprove my claim to divinity, and no sane person will waste time trying to disprove the existence of your untestable god. Of course, when you accidentally make a claim about your divinity nominee that is testable, sane people might take the time to show you how the test results turn out to be negative. But in general, no one is going to waste time trying to prove that Jehovah and I are not gods.
ATHEIST: How do you know there is no Easter bunny? How do you know there is no Santa Claus? Have you disproved the existence of Thor and Osiris?
BELIEVER: Be serious! Those are just myths made up by men. I'm talking about God!
ATHEIST: Well, the burden of proof is on you to prove that a deity exists. I don't have to prove a negative. The burden of proof is always on the person who alleges the existence of something.
BELIEVER: I don't buy that. You have to prove that my God does not exist.
ATHEIST: Your god? Singular? How do you know there aren't lots of gods? Have you disproved the existence of goddesses?
BELIEVER: Don't be silly! I'm talking about the existence of God—the creator of the universe.
ATHEIST: Ah! Now we're getting somewhere! You're talking about me!
BELIEVER: Since when are you God?
ATHEIST: Since just a bit more than an infinite length of time. Of course, I created youjust three minutes ago.
BELIEVER: That's crazy! I'm fifty-seven years old!
ATHEIST: Of course you think you are: I created those memories in you, and I altered everyone else's memories also, to make it appear that you were around before three minutes ago.
BELIEVER: I suppose you created my birth certificate too! What evidence do you have to support such an absurd idea?
ATHEIST: Ah! So you're beginning to understand that the burden of proof is on the person who makes the claim of a god's existence. Don't you think you should try to disprove the claim that I am god?
BELIEVER: Well, maybe. If you're god, why don't you perform a miracle?
ATHEIST: Good question. Unfortunately, I don't do miracles anymore. I could if I wanted to, but I've decided that from now on, people have to believe in me through faith. Being God, I've just now read your mind and I see you're thinking that you might be able to torture me into confessing that I'm not God. Well, scrap that idea! I might very well decide to pretend to be in pain and confess all sorts of silly things. But believe me, I would punish you for eternity after you die!
BELIEVER: Hey, that's not a legitimate argument. There's nothing I could ever do to disprove your claim of divinity. You could always wriggle out of it by claiming you'll show me after I'm dead!
ATHEIST: Very true! You're learning how impossible it is to prove a negative. But you're learning one even more important lesson.
BELIEVER: What's that?
ATHEIST: You're learning that it is stupid to argue about propositions that can't be tested even in the imagination. For every test you could imagine to try, I could come up with a way to evade your net—in just the same way as the preachers tell me your god doesn't want to get involved in my tests. My claim to divinity can't be tested. Your claims of the divinity of Jehovah or Jesus can't be tested either. If I call upon your god to strike me with lightning if I'm wrong, I guarantee nothing will happen. Your god won't get involved any more than I will. Claims that can't be tested even in the imagination are meaningless. They can't even be false. We don't need to waste our time trying to disprove them. You aren't going to waste your time trying to disprove my claim to divinity, and no sane person will waste time trying to disprove the existence of your untestable god. Of course, when you accidentally make a claim about your divinity nominee that is testable, sane people might take the time to show you how the test results turn out to be negative. But in general, no one is going to waste time trying to prove that Jehovah and I are not gods.
SPIRIT, SOUL, AND MIND
Whenever I peruse a dictionary, I am struck by the amazing number of words which refer to nothing at all in the real world. Many of the words are obviously fabulous: leprechaun, unicorn, gremlin, Philosopher's Stone, Zeus, elf, Fountain of Youth, ghost, etc. Others, though referring equally to non-existent things, are less obviously fabulous: The Mean Sun, The Average Citizen, vital force, spirit, soul, and —in at least some of its accepted meanings —mind.
Why the human species has invented so many words which refer to nothing in reality is a most interesting question for scientific investigation, and it probably would require a complete book to elucidate properly. In this article I shall only attempt to deal with a few such words, specifically, the words spirit, soul, and mind.
It is a striking fact that nearly all languages of the world, extinct as well as extant, have —or have had —words which could be rendered as 'spirit' or 'soul' in English. At first glance, it would seem that this is a good argument in favor of the real existence of souls and spirits. For, would it not be improbable that so many different peoples and languages could be mistaken? If many different unrelated languages have independently invented words for soul, is that not a good reason to believe they did so because there really is such a thing?
I think not. The first clue to the solution of this puzzle comes from etymology, the study of word origins.
While the origin of the English word soul is obscure, the word almost certainly had its origin in a word which meant 'breath' or 'wind' or 'air', or something like that. The word spirit—generally a synonym for soul—comes from the Latin spiritus, and clearly meant 'breath' originally. Spiritual and respiratory both derive from the same root!
Moreover, if we check in the Greek and Hebrew bibles to see which words are translated as 'soul', etc., in the King James Version, we will find many whose literal meaning is 'breath' or 'wind'. For example, the Hebrew word neshamah (literally meaning 'breath') is twice rendered as 'spirit', once as 'soul'. The Hebrew-Aramaic word ruach (lit., 'wind') is rendered 240 times as 'spirit', six times as 'mind.' The word nephesh (lit., 'breath') is rendered 'soul' 428 times, 'mind' 15 times, 'ghost' twice, and 'life' 119 times.
Turning to the Greek Bible, we find pneuma (lit., 'breath') rendered as 'ghost' 91 times (including the rendering 'Holy Ghost'), 292 times as 'spirit'. The reader will recognize the same root in the word pneumonia, a word referring to a disease of the organs of breath. And finally, in this somewhat pedantic parade of words, we may note the important word psyche. As expected, its literal meaning is 'breath.' As we might have guessed, it is rendered as 'soul' 58 times, 'mind' three times, and 'life' 40 times.
The fact that nearly all words now meaning 'soul', 'spirit', 'life', etc., trace their origins to words meaning 'breath' or 'wind' leads me to conclude that the derived meanings were an outgrowth of the inability of primitive people to solve a basic biological puzzle, namely, what constitutes the difference between a live body and a dead one?
To the ancient authors of the Bible—men who still thought they were living on a flat earth beneath a solid sky (firmament)—the solution seemed deceptively simple: living things breathe, dead things do not. At first, only animals (from Latin anima, meaning 'breath' or 'breeze' originally) were considered fully alive. The case of plants was viewed with confusion for a long time. Some authorities considered them alive, others did not. The ancients did not realize that 'souls' were really only a gaseous mixture of nitrogen and oxygen, contaminated with varying amounts of water vapor, carbon dioxide, noble gases, and—depending upon what one ate and whether or not one brushed after every meal—varying amounts of aromatic substances!
In the Genesis Second Creation Myth, the animating power of breath is clearly depicted. Yahweh, after having molded Adam from the dust, has to breathe into him the breath of life in order for him to become a living soul. Breath is life.
The manner in which breath became equated with life is not difficult to discern. A person newly dead, say, of a heart attack, anatomically is not much different from what he was like before he died. He still has five fingers per hand, a tongue in his mouth, a brain in his head, and a heart in his breast. The ancients, unconscious of the microcosmic fever of chemical marriages and divorces that we call metabolism, could see only one obvious difference: the lack of breath in the dead.
When a man expired (lit., 'breathed out'), his spirit (lit., 'breath') left his body, and he died. When a man sneezed, his spirit was forcefully ejected from his body, and one had to say "God bless you" or make a magical gesture, such as the sign of the cross, very quickly, before evil spirits could come to take over the momentarily spiritually vacant carcass. Demonic "possession" was the result, quite simply, of inhaling one or more of the evil breaths thought to hover in the air around us. For early Christians, the Devil's breath was everywhere.
Of course, not all possession was necessarily evil. People could be "inspired"—that is, the breath of a god could take over their bodies to deliver words of wisdom or apocalyptic admonitions. Indeed, the Christian church itself was thought to have originated in an act of mass possession by a Holy Ghost ("Holy Breath" in the Greek text!). In Acts 4:31 we read that when the Apostles and others "had ended their prayer, the building where they were assembled rocked, and all were filled with the Holy Spirit [breath] and spoke the word of God with boldness." (Given the close association of words with breath—thought to be life itself—is it any wonder that religions of all kinds have always focused on the magical significance of words?)
Lest anyone still think the link between breath and the foundations of Christianity be doubtful, attention is drawn to the tale told in John 20:22. Jesus has come back to visit the Disciples to tell them that he is sending them out to forgive or not forgive the sins of the world. "Then he [Jesus] breathed on them, saying, 'Receive the Holy Spirit!'" Right from the beginning, Christianity was based upon warm breath—which in time became hot air.
Modern biologists, unlike the ancient makers of myths, know that all the phenomena of living systems can be reduced to physical and chemical terms. They have no evidence of any 'vital force' or mystical spirit—and no need to seek for such. They recognize the fully alive body and the newly dead body to be but two arbitrary points along a continuum of decreasing organization.
So much for spirit, soul, and ghost. Originally denoting breath or wind, they are words which have acquired a host of mystical connotations as prescientific people attempted to account for the difference between life and death. But what of the word mind? Does it refer to anything real? Or is it, too, a fabulous entity?
Unlike the analysis of spirit and soul, the analysis of mind is not at all simple. This is so largely because of the grammatical accident that in all the European languages, ancient as well as modern, the word mind is a noun.
We tend to think of nouns as substantive: table, chair, and plumb-bob are all nouns, and all are substantial. There are many words, however, which though grammatically nouns, are not at all substantial. Words like beauty, truth, and velocity would be examples. Unfortunately, our thinking tends to be hedged around by the grammar and hidden assumptions of the language with which we think. And so it happens again and again that abstract nouns come to be thought of as representing things just as substantial as those represented by common nouns. And thus we have the basic confusion necessary to found philosophical systems such as Plato's—whose perfect triangularity exists in triangle-heaven, and so on.
Because mind was a noun, it was conceived to be a thing. Because it was thought to be a thing, it was thought to have existence apart from the brain. Because it was thought to have independent existence, it was thought capable of survival after the death of the body. And millions thought that to be good reason to invest millions in that greatest of all businesses, religion.
Neurobiological studies show all these ideas to be quite worthless. Mind is a process, a dynamic relation, and not a thing. If we change the processes of the brain, we change the mind. The psychedelic drugs have taught us that fact, if nothing else. The history of western philosophy and religion, as well as science, would have been quite different if the word mind had developed as a verb instead of as a noun.
To wonder where the mind goes after the brain decays is as silly as asking where the 70 miles per hour has gone after a speeding auto has crashed into a tree. Just as the relative motion of an auto can be altered only within certain limits and still represent the process called "speeding," so too we can alter the functioning of the brain only so much before the process called "mind" or "thinking" becomes altered out of existence.
Now that scientists recognize mind as a process rather than a thing, they are making rapid advances in understanding the specific brain dynamics that correspond to the various subjective states collectively known as mind. Certain drugs are known, for example, that affect certain neural paths and centers in the brain to produce the psychic state known as euphoria. Others affect other circuits and produce depression or sleep. We can implant electrodes in the brain and cause the subject to "hear" bells and symphonies that aren't "there" at all. We can be made to "see" figures and lights without using our eyes at all, by stimulating the visual cortex at the back of the brain.
We can cause to appear the emotions of rage, sexuality, sorrow, religious awe, etc., by altering the dynamic functions of the brain in appropriate ways. We are beginning to understand how neural circuits compete with each other to give us the illusion of "free will." Indeed, we are on the verge of being able to write equations relating the physicochemical states of the nervous system with the subjective, mental states described by psychologists and other mystics. In short, we are learning to study subjective states objectively.
Whether or not we shall be any more responsible in the application of this new knowledge than we were in the application of fire, dynamite, and atomic energy remains to be seen. Even the most exceptional person plays the part of Prometheus poorly. Unless we, collectively the new Prometheus, judge wisely what to do with our new psychobiological powers, like Prometheus we may find ourselves chained to rocks, our vitals torn by eagles. Or worse.
ETHICS WITHOUT GODS
Introduction
One of the first questions Atheists are asked by true believers and doubters alike is, "If you don't believe in God, there's nothing to prevent you from committing crimes, is there? Without the fear of hell-fire and eternal damnation, you can do anything you like, can't you?"
It is hard to believe that even intelligent and educated people could hold such an opinion, but they do. It seems never to have occurred to them that the Greeks and Romans, whose gods and goddesses were something less than paragons of virtue, nevertheless led lives not obviously worse than those of the Baptists of Alabama! Moreover, pagans such as Aristotle and Marcus Aurelius—although their systems may not be suitable for us today—managed to produce ethical treatises of great sophistication, a sophistication rarely if ever equaled by Christian moralists.
The answer to the questions posed above is, of course, "Absolutely not!" The behavior of Atheists is subject to the same rules of sociology, psychology, and neurophysiology that govern the behavior of all members of our species—religionists included. Moreover, despite protestations to the contrary, we may assert as a general rule that when religionists practice ethical behavior, it isn't really due to their fear of hell-fire and damnation, nor is it due to their hopes of heaven. Ethical behavior—regardless of who the practitioner may be—results always from the same causes and is regulated by the same forces, and has nothing to do with the presence or absence of religious belief. The nature of these causes and forces is the subject of this essay.
Psychobiological Foundations
As human beings, we are social animals. Our sociality is the result of evolution, not choice. Natural selection has equipped us with nervous systems which are peculiarly sensitive to the emotional status of our fellows. Among our kind, emotions are contagious, and it is only the rare psychopathic mutants among us who can be happy in the midst of a sad society. It is in our nature to be happy in the midst of happiness, sad in the midst of sadness. It is in our nature, fortunately, to seek happiness for our fellows at the same time as we seek it for ourselves. Our happiness is greater when it is shared.
Nature also has provided us with nervous systems which are, to a considerable degree, imprintable. To be sure, this phenomenon is not as pronounced or as ineluctable as it is, say, in geese—where a newly hatched gosling can be "imprinted" to a toy train and will follow it to exhaustion, as if it were its mother. Nevertheless, some degree of imprinting is exhibited by humans. The human nervous system appears to retain its capacity for imprinting well into old age, and it is highly likely that the phenomenon known as "love-at-first-sight" is a form of imprinting. Imprinting is a form of attachment behavior, and it helps us to form strong interpersonal bonds. It is a major force which helps us to break through the ego barrier to create "significant others" whom we can love as much as ourselves. These two characteristics of our nervous system—emotional suggestibility and attachment imprintability—although they are the foundation of all altruistic behavior and art, are thoroughly compatible with the selfishness characteristic of all behaviors created by the process of natural selection. That is to say, to a large extent behaviors which satisfy ourselves will be found, simultaneously, to satisfy our fellows, and vice-versa.
This should not surprise us when we consider that among the societies of our nearest primate cousins, the great apes, social behavior is not chaotic, even if gorillas do lack the Ten Commandments! The young chimpanzee does not need an oracle to tell it to honor its mother and to refrain from killing its brothers and sisters. Of course, family squabbles and even murder have been observed in ape societies, but such behaviors are exceptions, not the norm. So too it is in human societies, everywhere and at all times.
The African apes—whose genes are ninety-eight to ninety-nine percent identical to ours—go about their lives as social animals, cooperating in the living of life, entirely without the benefit of clergy and without the commandments of Exodus, Leviticus, or Deuteronomy. It is further cheering to learn that sociobiologists have even observed altruistic behavior among troops of baboons. More than once—in troops attacked by a leopard—aged, post-reproductive males have been observed to linger at the rear of the escaping troop and to engage the leopard in what often amounts to a suicidal fight. As the old male delays the leopard's pursuit by sacrificing his very life, the females and young escape and live to fulfill their several destinies. The heroism which we see acted out, from time to time, by our fellow men and women, is far older than their religions. Long before the gods were created by the fear-filled minds of our less courageous ancestors, heroism and acts of self-sacrificing love existed. They did not require a supernatural excuse then, nor do they require one now.
Given the general fact, then, that evolution has equipped us with nervous systems biased in favor of social, rather than antisocial, behaviors, is it not true, nevertheless, that antisocial behavior does exist, and it exists in amounts greater than a reasonable ethicist would find tolerable? Alas, this is true. But it is true largely because we live in worlds far more complex than the Paleolithic world in which our nervous systems originated. To understand the ethical significance of this fact, we must digress a bit and review the evolutionary history of human behavior.
Nature also has provided us with nervous systems which are, to a considerable degree, imprintable. To be sure, this phenomenon is not as pronounced or as ineluctable as it is, say, in geese—where a newly hatched gosling can be "imprinted" to a toy train and will follow it to exhaustion, as if it were its mother. Nevertheless, some degree of imprinting is exhibited by humans. The human nervous system appears to retain its capacity for imprinting well into old age, and it is highly likely that the phenomenon known as "love-at-first-sight" is a form of imprinting. Imprinting is a form of attachment behavior, and it helps us to form strong interpersonal bonds. It is a major force which helps us to break through the ego barrier to create "significant others" whom we can love as much as ourselves. These two characteristics of our nervous system—emotional suggestibility and attachment imprintability—although they are the foundation of all altruistic behavior and art, are thoroughly compatible with the selfishness characteristic of all behaviors created by the process of natural selection. That is to say, to a large extent behaviors which satisfy ourselves will be found, simultaneously, to satisfy our fellows, and vice-versa.
This should not surprise us when we consider that among the societies of our nearest primate cousins, the great apes, social behavior is not chaotic, even if gorillas do lack the Ten Commandments! The young chimpanzee does not need an oracle to tell it to honor its mother and to refrain from killing its brothers and sisters. Of course, family squabbles and even murder have been observed in ape societies, but such behaviors are exceptions, not the norm. So too it is in human societies, everywhere and at all times.
The African apes—whose genes are ninety-eight to ninety-nine percent identical to ours—go about their lives as social animals, cooperating in the living of life, entirely without the benefit of clergy and without the commandments of Exodus, Leviticus, or Deuteronomy. It is further cheering to learn that sociobiologists have even observed altruistic behavior among troops of baboons. More than once—in troops attacked by leopards—aged, post-reproduction-age males have been observed to linger at the rear of the escaping troop and to engage the leopard in what often amounts to a suicidal fight. As the old male delays the leopard's pursuit by sacrificing his very life, the females and young escape and live to fulfill their several destinies. The heroism which we see acted out, from time to time, by our fellow men and women, is far older than their religions. Long before the gods were created by the fear-filled minds of our less courageous ancestors, heroism and acts of self-sacrificing love existed. They did not require a supernatural excuse then, nor do they require one now.
Given the general fact, then, that evolution has equipped us with nervous systems biased in favor of social, rather than antisocial, behaviors, is it not true, nevertheless, that antisocial behavior does exist, and it exists in amounts greater than a reasonable ethicist would find tolerable? Alas, this is true. But it is true largely because we live in worlds far more complex than the Paleolithic world in which our nervous systems originated. To understand the ethical significance of this fact, we must digress a bit and review the evolutionary history of human behavior.
A Digression
Today, heredity can control our behavior in only the most general of ways; it cannot dictate precise behaviors appropriate for infinitely varied circumstances. In our world, heredity needs help.
In the world of a fruit fly, by contrast, the problems to be solved are few in number and highly predictable in nature. Consequently, a fruit fly's brain is largely "hard-wired" by heredity. That is to say, most behaviors result from environmental activation of nerve circuits which are formed automatically by the time of emergence of the adult fly. This is an extreme example of what is called instinctual behavior. Each behavior is coded for by a gene or genes which predispose the nervous system to develop certain types of circuits and not others, and where it is all but impossible to act contrary to the genetically predetermined script.
The world of a mammal—say a fox—is much more complex and unpredictable than that of the fruit fly. Consequently, the fox is born with only a portion of its neuronal circuitry hard-wired. Many of its neurons remain "plastic" throughout life. That is, they may or may not hook up with each other in functional circuits, depending upon environmental circumstances. Learned behavior is behavior which results from activation of these environmentally conditioned circuits. Learning allows the individual mammal to learn—by trial and error—greater numbers of adaptive behaviors than could be transmitted by heredity. A fox would be wall-to-wall genes if all its behaviors were specified genetically.
With the evolution of humans, however, environmental complexity increased out of all proportion to the genetic and neuronal changes distinguishing us from our simian ancestors. This partly was due to the fact that our species evolved in a geologic period of great climatic flux —the Ice Ages—and partly was due to the fact that our behaviors themselves began to change our environment. The changed environment in turn created new problems to be solved. Their solutions further changed the environment, and so on. Thus, the discovery of fire led to the burning of trees and forests, which led to destruction of local water supplies and watersheds, which led to the development of architecture with which to build aqueducts, which led to laws concerning water-rights, which led to international strife, and on and on.
Given such complexity, even the ability to learn new behaviors is, by itself, inadequate. If trial and error were the only means, most people would die of old age before they would succeed in rediscovering fire or reinventing the wheel. As a substitute for instinct and to increase the efficiency of learning, mankind developed culture. The ability to teach—as well as to learn—evolved, and trial-and-error learning became a method of last resort.
By transmission of culture—passing on the sum total of the learned behaviors common to a population—we can do what Darwinian genetic selection would not allow: we can inherit acquired characteristics. The wheel once having been invented, its manufacture and use can be passed down through the generations. Culture can adapt to change much faster than genes can, and this provides for finely tuned responses to environmental disturbances and upheavals. By means of cultural transmission, those behaviors which have proven useful in the past can be taught quickly to the young, so that adaptation to life—say on the Greenland ice cap—can be assured.
Even so, cultural transmission tends to be rigid: it took over one hundred thousand years to advance to chipping both sides of the hand-ax! Cultural mutations, like genetic mutations, tend more often than not to be harmful, and both are resisted—the former by cultural conservatism, the latter by natural selection. But changes do creep in faster than the rate of genetic change, and cultures slowly evolve. Even that cultural dinosaur known as the Catholic Church—despite its claim to be the unchanging repository of truth and "correct" behavior—has changed greatly since its beginning.
Incidentally, it is at this hand-ax stage of cultural evolution at which most of the religions of today are still stuck. Our inflexible, absolutist moral codes also are fixated at this stage. The Ten Commandments are the moral counterpart of the "here's-how-you-rub-the-sticks-together" phase of technological evolution. If the only type of fire you want is one to heat your cave and cook your clams, the stick-rubbing method suffices. But if you want a fire to propel your jet-plane, some changes have to be made.
So, too, with the transmission of moral behavior. If we are to live lives which are as complex socially as jet-planes are complex technologically, we need something more than the Ten Commandments. We cannot base our moral code upon arbitrary and capricious fiats reported to us by persons claiming to be privy to the intentions of the denizens of Sinai or Olympus. Our ethics can be based neither upon fictions concerning the nature of humankind nor upon fake reports concerning the desires of the deities. Our ethics must be firmly planted in the soil of scientific self-knowledge. They must be improvable and adaptable.
Where then, and with what, shall we begin?
Back to Ethics
Plato showed long ago, in his dialogue Euthyphro, that we cannot depend upon the moral fiats of a deity. Plato asked if the commandments of a god were "good" simply because a god had commanded them, or because the god recognized what was good and commanded the action accordingly. If something is good simply because a god has commanded it, anything could be considered good. There would be no way of predicting what in particular the god might desire next, and it would be entirely meaningless to assert that "God is good." Bashing babies with rocks would be just as likely to be "good" as would the principle "Love your enemies." (It would appear that the "goodness" of the god of the Christian “Old Testament” is entirely of this sort.)
On the other hand, if a god's commandments are based on a knowledge of the inherent goodness of an act, we are faced with the realization that there is a standard of goodness independent of the god and we must admit that he cannot be the source of morality. In our quest for the good, we can bypass the god and go to his source!
Given, then, that gods a priori cannot be the source of ethical principles, we must seek such principles in the world in which we have evolved. We must find the sublime in the mundane. What precept might we adopt?
The principle of "enlightened self-interest" is an excellent first approximation to an ethical principle which is both consistent with what we know of human nature and is relevant to the problems of life in a complex society. Let us examine this principle.
First we must distinguish between "enlightened" and "unenlightened" self-interest. Let's take an extreme example for illustration. Suppose you lived a totally selfish life of immediate gratification of every desire. Suppose that whenever someone else had something you wanted, you took it for yourself.
It wouldn't be long at all before everyone would be up in arms against you, and you would have to spend all your waking hours fending off reprisals. Depending upon how outrageous your activity had been, you might very well lose your life in an orgy of neighborly revenge. The life of total but unenlightened self-interest might be exciting and pleasant as long as it lasts—but it is not likely to last long.
The person who practices "enlightened" self-interest, by contrast, is the person whose behavioral strategy simultaneously maximizes both the intensity and duration of personal gratification. An enlightened strategy will be one which, when practiced over a long span of time, will generate ever greater amounts and varieties of pleasures and satisfactions.
How is this to be done?
It is obvious that more is to be gained by cooperating with others than by acts of isolated egoism. One man with a rock cannot kill a buffalo for dinner. But a group of men or women, with lots of rocks, can drive the beast off a cliff and—even after dividing the meat up among them—will still have more to eat than they would have had without cooperation.
But cooperation is a two-way street. If you cooperate with several others to kill buffaloes, and each time they drive you away from the kill and eat it themselves, you will quickly take your services elsewhere, and you will leave the ingrates to stumble along without the Paleolithic equivalent of a fourth-for-bridge. Cooperation implies reciprocity.
Justice has its roots in the problem of determining fairness and reciprocity in cooperation. If I cooperate with you in tilling your field of corn, how much of the corn is due me at harvest time? When there is justice, cooperation operates at maximal efficiency, and the fruits of cooperation become ever more desirable. Thus, enlightened self-interest entails a desire for justice. With justice and with cooperation, we can have symphonies. Without them, we haven't even a song.
Let us bring this essay back to the point of our departure. Because we have the nervous systems of social animals, we are generally happier in the company of our fellow creatures than alone. Because we are emotionally suggestible, as we practice enlightened self-interest we usually will be wise to choose behaviors which will make others happy and willing to cooperate and accept us—for their happiness will reflect back upon us and intensify our own happiness. On the other hand, actions which harm others and make them unhappy—even if they do not trigger overt retaliation which decreases our happiness—will create an emotional milieu which, because of our suggestibility, will make us less happy.
Because our nervous systems are imprintable, we are able not only to fall in love at first sight but also to love objects and ideals as well as people, and we are able to love with variable intensities. Like the gosling attracted to the toy train, we are pulled forward by the desire for love. Unlike the gosling's "love," however, our love is to a considerable extent shapable by experience and is capable of being educated. A major aim of enlightened self-interest, surely, is to give and receive love, both sexual and nonsexual. As a general—though not absolute—rule, we must choose those behaviors which will be likely to bring us love and acceptance, and we must eschew those behaviors which will not.
Another aim of enlightened self-interest is to seek beauty in all its forms, to preserve and prolong its resonance between the world outside and that within. Beauty and love are but different facets of the same jewel: love is beautiful, and we love beauty.
The experience of love and beauty, however, is a passive function of the mind. How much greater is the joy which comes from creating beauty. How delicious it is to exercise actively our creative powers to engender that which can be loved. Paints and pianos are not necessarily prerequisites for the exercise of creativity: whenever we transform the raw materials of existence in such a way that we leave them better than they were when we found them, we have been creative.
The task of moral education, then, is not to inculcate by rote great lists of do's and don'ts, but rather to help people to predict the consequences of actions being considered. What are the long-term as well as immediate rewards and drawbacks of the acts? Will an act increase or decrease one's chances of experiencing the hedonic triad of love, beauty, and creativity?
Thus it happens, when the Atheist approaches the problem of finding natural grounds for human morals and establishing a nonsuperstitious basis for behavior, that it appears as though nature has already solved the problem to a great extent. Indeed, it appears as though the problem of establishing a natural, humanistic basis for ethical behavior is not much of a problem at all. It is in our natures to desire love, to seek beauty, and to thrill at the act of creation. The labyrinthine complexity we see when we examine traditional moral codes does not arise of necessity: it is largely the result of vain attempts to accommodate human needs and nature to the whimsical totems and taboos of the demons and deities who emerged with us from our cave-dwellings at the end of the Paleolithic Era—and have haunted our houses ever since.
Religion, Hypnosis, and Music: An Evolutionary Perspective
Darwin's theory of evolution by means of natural selection is extraordinarily powerful in its ability to explain details of the world around us: Why do giraffes have long necks? Why is the kiwi flightless? Why do humans have an appendix, five fingers, erector muscles at the base of each hair, and rudimentary muscles with which to wiggle the ears? Why do male creationists have nipples? Why are certain butterflies brightly colored, and why do birds sing?
Answers to these and to thousands of other equally puzzling questions have, from 1859 onward, formed a part of the enduring legacy left by the great British naturalist who, by plowing under the 'Garden of Eden,' completed the work begun by Copernicus when he pulled down the 'heavenly firmament.'
Although the scientific answers to these and similar questions had been familiar to me since high school days, there were other questions which appeared to me to be unanswerable in Darwinian terms, questions which required many years and much thought before I could reconcile them with Darwin's theory.
Take religion, for instance. If religion is all a pack of lies—a muddle of myths—why would natural selection allow religion to survive? How could natural selection allow behavior that has nothing at all to do with the real world to develop in the first place? Could Survival of the Falsest be a corollary derivable from Survival of the Fittest?
And then there is the puzzle of hypnosis. Why are many people and some animals hypnotizable? Where is the fitness in being susceptible to hypnotic suggestion and manipulation? After having experimented with hypnosis for many years, and after having performed a great variety of experiments with both humans and animals, I was shocked to discover that hypnotizability is not simply a 'weakness' in the sense that a person might be lacking in physical or mental strength. Many of the most brilliant and physically fit persons I have known have proven to be highly hypnotizable, whereas certain psychotics and mentally retarded individuals have been, for all practical purposes, unhypnotizable.
Without regard to race, sex, or IQ, three out of every five people one meets on the street are hypnotizable. Why would such seeming vulnerability slip through the screen of natural selection and take up residence in the nervous system of the most powerful animal the planet has known?
My third evolutionary puzzle was music. Why should humans have invented music? While music and musical ability are not in any obvious way harmful (and, therefore, not characters likely to be eliminated by natural selection), neither are these traits obviously useful in the sense that they increase human fitness for survival. Consequently, there would appear to be no good reason for them to have evolved.
Human music is not the equivalent of bird song. It does not function as a means of marking territory, and it is of little more than marginal value in attracting mates. No matter to what height of esthetic triumph Beethoven may transport us with his Ninth Symphony, it is not easy to see any obvious way in which fugues and four-part choruses can have helped us climb the great phylogenetic tree to reach our present perch.
After pondering these three questions for many years, I gradually came to the realization that they were closely interrelated. All three shared a common explanation. All could be explained in terms of what biologists call group fitness.
Unlike individual fitness—that bundle of qualities which affects the survivability of individual plants or animals and their offspring—group fitness affects the survivability of small or medium-sized groups of closely related individuals. Such groups often are little more than greatly extended families, and they tend to be genetically quite homogeneous.
Whether we like it or not, there was a long time ago when religion was actually a 'good' thing. That is to say, religion increased group fitness. Let me try to explain.
In the course of human evolution, the accumulation of genetic mutations proved to be too slow a process for the shaping of the adaptive behaviors needed to cope with environmental changes. That is to say, instinct—behavior largely determined by heredity—was not good enough to give primitive hominids and hominins the behavioral repertoires needed in their increasingly complex and confusing world. By means too complicated to discuss here, our ancestors all but abandoned the instinct-driven behavior of their brutish brethren and created, as its substitute, culture.
By means of culture, very complex patterns of behavior can be created. They can be created to deal with infinitely varied environmental challenges, and they can be created quickly. Although we may often bemoan the seeming snail-pace at which our own culture abandons what we now consider maladaptive behaviors, there is no doubt that cultural change is many orders of magnitude faster than genetic change.
Back to religion: How does religion fit into all this talk about tribes and culture? Quite simply. Religion in small groups may be very effective in increasing group cohesion. It may help to mark the boundaries between in-group and out-group, the line between us and them. As Jerry Falwell and the Ayatollah Khomeini have shown, religion deftly applied can convert individually weak little insects into a mighty horde of army ants. It can fuse individual organisms into a sort of Nietzschean super-organism.
At the tribal stage of human social evolution, religion helped to create group behaviors which enhanced the survival potential of the in-group at the expense of out-groups. Consider the dietary taboos of the so-called Old Testament.
We read in Deut. 14:21, "Ye shall not eat of anything that dieth of itself: Thou shalt give it unto the stranger that is in thy gates, that he may eat it; or thou mayest sell it unto an alien." Since an animal dying by itself is likely to be diseased, we shouldn't eat it. Give it—better yet, sell it—to one of THEM. With luck, there may soon be one less of THEM, and our group will have gained a numerical advantage of one more unit!
This truly 'old-time religion' developed at the end of the last Ice Age, when the tribe was the largest human grouping maintaining any degree of coherence. The religion of the Old Testament is a cultural fossil held over from the Pleistocene Epoch, and it reflects an atmosphere of intense intergroup competition. Petrified like the bones in a paleontologist's cabinet, the greatest ideas of the Ice Age still can be found on display between Genesis and Malachi!
Humans are gregarious, social creatures. They and their ancestors for a very long time have been herd animals. Like all herd animals, they must be sensitive to the moves and signals of their fellow flock-members. Just as a buffalo defensive stampede would be useless if only one animal stampeded, so too our hominin ancestors had to be able to act in concert against threats from predators and other enemies. To do this, they had to be able to perceive and internalize the desires and motivations of their fellows in the pack. Not yet in possession of language to effect such communication, our ancestors had to be suggestible. In our ancestors, as is generally the case with herd animals today, the emotions and intentions of the leaders of the herd were communicated to the rest of the flock by 'body language,' and by the power of nonverbal suggestion.
Suggestion, whether verbal or not, is, of course, the foundation of hypnosis.
Hypnotism had been the tool of shamans and medicine men from the very beginning. The ability to be hypnotized, i.e. suggestibility, was part of our heritage as gregarious, social animals. All the priests had to do was harness it and, therewith, harness the entire tribe at once. Once hypnotized, the entire tribe could be sent our to do battle as though it were a superorganism, as if the individuals were but individual cells in a great body—sharing a common gene pool and being governed by a single head.
And battle they did—and still do. "And the Lord said unto him, 'Surely I will be with thee, and thou shalt smite the Midianites as one man." [Judges 6:16] "Kill a commie for Christ!" "Impeach Earl Warren!" "Stop that wicked woman who has expelled our God from the classrooms!"
If my readers think the term 'hypnosis' can be applied to religion only in a metaphorical sense, they should hasten to the nearest tabernacular, faith-healing, full-gospel-assembly, fire-baptized, holy-rolling, Pentecost-remembering revival meeting. They will see hypnosis in action, replete with people falling on the ground, jerking and twitching and babbling. They will be able to observe how the contagion spreads from the leaders to the followers. They will observe the anesthetic power of hypnosis, as real cripples—not just the shills—throw down their crutches and prance around to the tune of crunching bone-joints.
Make no mistake about it. The hypnosis used by preachers is real hypnosis. The priests were the first to control it, and to this day they and their politician brethren are the most skilled practitioners of the art.
How do they do it? There are many different ways of inducing a hypnotic state of consciousness, and generally the fakirs use many methods simultaneously. For neurochemical reasons which are still not entirely clear, fastingis a useful means of preconditioning the nervous system to make it more malleable and suggestible. Although lowering of blood sugar probably has much to do with it, it is likely that hormone-like substances lknown as endogenous opioids are also involved. As the name implies, these chemicals are internally produced opiate-like substances which resemble motphine in their action.
Although Karl Marx was speaking metaphorically when he wrote that "Religion is the opiate of the masses," his words may prove to be literally true as well. There is considerable evidence that hypnosis and 'transcendental' meditation can increase the production of certain of these opioids by the brain. The hallucinations so often accompanying religious experiences may very well be a result of opioid intoxication and verbal suggestions implanted by the guru guiding the religious 'trip.'
Another method of inducing hypnosis is long repeated prayer. When people 'pray for a sign,' they repeat over and over what it is they want to see or hear. Sooner or later, if their nervous systems are even slightly normal, they should be able to generate vivid experiences fulfilling their wishes. Only wealthy men who say god speaks to them are frauds. Poor people who say this are simply self-deluded.
Although we are accustomed to think of prayer as a type of cosmic begging, it is likely that this type of prayer was a late evolutionary development. The original purpose of prayer, I believe, was to induce trance and, thereby, to effect hallucinatory communication with the 'spirit world.'
Many faith-healing practitioners of hypnosis induce trancelike receptiveness in their prey by physically stunning them. They 'lay on hands.' Starting with their hands on the crown of the victim's head, they utter their hypnotic suggestions (i.e. 'prayers') while gradually moving their hands down the side of the person's head. Finally, when their hands are on the person's neck and ears, they will suddenly put pressure on the nerve-rich cavity behind the ear and on the carotid sinus farther down the neck. This stuns and disorients the victim, and he or she becomes very imprintable. The verbal suggestions of the healer become implanted within as little as two or three seconds.
Of course, this does not always work. If the person being 'healed' has a weak cardiovascular system, or if the 'healer' presses on the carotid sinus too long, cardiac arrest may result and his god cheats the evangelist out of the poor bloke's money. At least one notorious faith healer of ourday has given up the practice because of this embarrassing and expensive side-effect. The reader must realize, this method of inducing hypnosis is extremely dangerous, and no competent practitioner will employ it. Only religionists still flirt with it.
But there is a much safer waythan nerve-pinching to reduce the faithful to submission: music. Carefully selected hymns can be incredibly powerful tools with which to induce trance. Perhaps the most infamous of these hymns is the one called Just As I Am. By the time Billy Graham and his ilk have brought the crowd to the point of singing this war-horse, the resistance of the audience has already been worn down considerably. And by the time that everyone locks arms and starts singing "I come .. , I come," only a few can resist the call to rush forth and shoot up on Jeezus.
The evolutionary roots of music can be seen very dearly in such phenomena as American Indian war dances and religious chants. Music did not begin with harmony and stringed instruments. It began with rhythm, with monotonously repeated, rhythmic words and sounds. Drumming surely represents the beginning of instrumental music, and to this day the most primitive forms of music emphasize drums. So too, singing grew out of chanting—the rhythmic repetition of magic words and phrases.
How does music relate to evolutionary fitness?
Consider the Indian war dance. The drumming, chanting, and dancing produce a sensory environment suitable for the induction of hypnotic trance. Once all the warriors are hypnotized, they can act in concert (no pun intended) to rush forth and wipe out the genetic competition. They will not know fear; they will not hesitate; and they will give without hesitation their last full measure to the enterprise. Perhaps the most important part of all this is that all will follow orders reflexively, and there will be a minimum of disorder. The competitive advantages of such behavior are obvious.
Thus, music evolved as a means of inducing hypnotic trance. Hypnotic susceptibility, although older than the human species itself, was elaborated by natural selection as a means of increasing intragroup cohesion and as a means of producing highly ordered, efficient competitive behavior at the intergroup level. As cultural transmission of learned behavior replaced genetic transmission of instinctive behavior, religion emerged as the system deciding the ends for which hypnosis would be applied. The actual mythical content of the individual religions probably did not make much difference: Zeus and Yahweh and Baal are all imaginary, and there is no obvious reason to recommend one over another. However the structure of the cultural organizations behind the various deities was of great importance. It is obvious that the wizards who pulled the strings in the temple of Yahweh had a much more effective way of running the land of Oz than did those who hid behind the curtains in the temples of Zeus and Baal!
Approaching the end of our story, we see that religion, hypnosis, and music are intimately and unexpectedly interrelated in their evolutionary origins. The three originated together, and all three were critically important factors in making us the creatures we are today. All three are 'natural' phenomena, and can be reconciled with the theory of evolution as we understand it today.
We must remember, however, that things are not automatically to be adjudged good or desirable simply because they are natural. To do so is to fall into the natural-law fallacy so dear to the Catholic Church. To say that something is natural implies nothing more than "that's the way things are at the moment." It does not say we have to keep things that way. In many cases we are free to decide to travel 'unnatural,' newly created paths.
Religion is like the human appendix: although it was functional in our distant ancestors, it is of no use today. Just as the appendix today is a focus of physical disease, so too religion today is a focus of social disease. Although religion was a force accelerating human evolution during the Ice Ages, it is now an atavism of negative value.
Religion still promotes tribal divisions, even though we must recognize that all 'tribes' must henceforth work together for a common cause or all shall surely perish together. No single tribe will survive unless all tribes survive. The divisions created by religions must be eliminated.
The disappearance of religion will be as great a tragedy as the disappearance of smallpox. We will all survive its passing without difficulty and without tears.
But what of music and hypnosis? Are they also atavisms? Are they now tainted because of their former religious associations? I think not.
Music clearly has emerged from its religious cradle and has transported us all to a realm of human emotion and esthetic fulfillment more heavenly than any heaven imagined by the creators of that celestial hunk of real estate!
Music has been set free of its fetters. It may now soar with the human intellect into any esthetic empyrean that intellect may choose to create. The finale of Beethoven's Ninth Symphony can help us to feel more intensely the universal brotherhood of mankind as we hurtle along on the cosmic journey of this spaceship we call earth.
And what of hypnosis? Is it only a tool of unethical control? Must it be forsworn because Hitler and Jim Jones used it?
Unlike the case concerning music, the answer to this question is not quite as easy to formulate. We cannot deny that even today hypnotic suggestion can be used for evil purposes. But to be forewarned is to be fore-armed We must always keep in mind that as suggestible creatures we are potentially vulnerable to manipulation by unscrupulous persons. But we should not forget that many of the features that most deeply define our humanity derive from the same neuronal circuitry that makes us suggestible.
For what are sympathy and empathy, if not elaborations of our suggestibility? Because we are suggestible, because our emotions are contagious, we can walk into the funeral of a total stranger and quickly feel the same sense of grief and loss as the mourners. We can also see a strange child take its first steps in a public park and feel the same excitement and exhilaration as do its parents.
Because we are suggestible, we can feel sympathy. Because we can feel the same pains as our fellow beings, we will not be uncaring of their plight. We will avoid causing pain in others because our suggestible natures make possible the reflection of that pain back upon us. We are happiest when making others happy, and we do not need mythic systems to make us do good and eschew evil: our nervous systems are hard-wired by evolution to help us do that.
Because our individual happiness is so sensitive to the emotional milieu in which we find ourselves, enlightened self-interest is all we need.With that we shall create an ethical system more true to our natures. We shall strive to cast off the irrelevant totems and taboos of our religious past, that we may emerge into a satisfying new world of ethical fulfillment.
Let us not pray.
This essay originally appeared in American Atheist in October of 1984.
Answers to these and to thousands of other equally puzzling questions have, from 1859 onward, formed a part of the enduring legacy left by the great British naturalist who, by plowing under the 'Garden of Eden,' completed the work begun by Copernicus when he pulled down the 'heavenly firmament.'
Although the scientific answers to these and similar questions had been familiar to me since high school days, there were other questions which appeared to me to be unanswerable in Darwinian terms, questions which required many years and much thought before I could reconcile them with Darwin's theory.
Take religion, for instance. If religion is all a pack of lies—a muddle of myths—why would natural selection allow religion to survive? How could natural selection allow behavior that has nothing at all to do with the real world to develop in the first place? Could Survival of the Falsest be a corollary derivable from Survival of the Fittest?
And then there is the puzzle of hypnosis. Why are many people and some animals hypnotizable? Where is the fitness in being susceptible to hypnotic suggestion and manipulation? After having experimented with hypnosis for many years, and after having performed a great variety of experiments with both humans and animals, I was shocked to discover that hypnotizability is not simply a 'weakness' in the sense that a person might be lacking in physical or mental strength. Many of the most brilliant and physically fit persons I have known have proven to be highly hypnotizable, whereas certain psychotics and mentally retarded individuals have been, for all practical purposes, unhypnotizable.
Without regard to race, sex, or IQ, three out of every five people one meets on the street are hypnotizable. Why would such seeming vulnerability slip through the screen of natural selection and take up residence in the nervous system of the most powerful animal the planet has known?
My third evolutionary puzzle was music. Why should humans have invented music? While music and musical ability are not in any obvious way harmful (and, therefore, not characters likely to be eliminated by natural selection), neither are these traits obviously useful in the sense that they increase human fitness for survival. Consequently, there would appear to be no good reason for them to have evolved.
Human music is not the equivalent of bird song. It does not function as a means of marking territory, and it is of little more than marginal value in attracting mates. No matter to what height of esthetic triumph Beethoven may transport us with his Ninth Symphony, it is not easy to see any obvious way in which fugues and four-part choruses can have helped us climb the great phylogenetic tree to reach our present perch.
After pondering these three questions for many years, I gradually came to the realization that they were closely interrelated. All three shared a common explanation. All could be explained in terms of what biologists call group fitness.
Unlike individual fitness—that bundle of qualities which affects the survivability of individual plants or animals and their offspring—group fitness affects the survivability of small or medium-sized groups of closely related individuals. Such groups often are little more than greatly extended families, and they tend to be genetically quite homogeneous.
Whether we like it or not, there was a long time ago when religion was actually a 'good' thing. That is to say, religion increased group fitness. Let me try to explain.
In the course of human evolution, the accumulation of genetic mutations proved to be too slow a process for the shaping of the adaptive behaviors needed to cope with environmental changes. That is to say, instinct—behavior largely determined by heredity—was not good enough to give primitive hominids and hominins the behavioral repertoires needed in their increasingly complex and confusing world. By means too complicated to discuss here, our ancestors all but abandoned the instinct-driven behavior of their brutish brethren and created, as its substitute, culture.
By means of culture, very complex patterns of behavior can be created. They can be created to deal with infinitely varied environmental challenges, and they can be created quickly. Although we may often bemoan the seeming snail-pace at which our own culture abandons what we now consider maladaptive behaviors, there is no doubt that cultural change is many orders of magnitude faster than genetic change.
Back to religion: How does religion fit into all this talk about tribes and culture? Quite simply. Religion in small groups may be very effective in increasing group cohesion. It may help to mark the boundaries between in-group and out-group, the line between us and them. As Jerry Falwell and the Ayatollah Khomeini have shown, religion deftly applied can convert individually weak little insects into a mighty horde of army ants. It can fuse individual organisms into a sort of Nietzschean super-organism.
At the tribal stage of human social evolution, religion helped to create group behaviors which enhanced the survival potential of the in-group at the expense of out-groups. Consider the dietary taboos of the so-called Old Testament.
We read in Deut. 14:21, "Ye shall not eat of anything that dieth of itself: Thou shalt give it unto the stranger that is in thy gates, that he may eat it; or thou mayest sell it unto an alien." Since an animal dying by itself is likely to be diseased, we shouldn't eat it. Give it—better yet, sell it—to one of THEM. With luck, there may soon be one less of THEM, and our group will have gained a numerical advantage of one more unit!
This truly 'old-time religion' developed at the end of the last Ice Age, when the tribe was the largest human grouping maintaining any degree of coherence. The religion of the Old Testament is a cultural fossil held over from the Pleistocene Epoch, and it reflects an atmosphere of intense intergroup competition. Petrified like the bones in a paleontologist's cabinet, the greatest ideas of the Ice Age still can be found on display between Genesis and Malachi!
Humans are gregarious, social creatures. They and their ancestors for a very long time have been herd animals. Like all herd animals, they must be sensitive to the moves and signals of their fellow flock-members. Just as a defensive buffalo stampede would be useless if only one animal stampeded, so too our hominin ancestors had to be able to act in concert against threats from predators and other enemies. To do this, they had to be able to perceive and internalize the desires and motivations of their fellows in the pack. Not yet in possession of language to effect such communication, our ancestors had to be suggestible. In our ancestors, as is generally the case with herd animals today, the emotions and intentions of the leaders of the herd were communicated to the rest of the flock by 'body language,' and by the power of nonverbal suggestion.
Suggestion, whether verbal or not, is, of course, the foundation of hypnosis.
Hypnotism has been the tool of shamans and medicine men from the very beginning. The ability to be hypnotized, i.e. suggestibility, was part of our heritage as gregarious, social animals. All the priests had to do was harness it and, therewith, harness the entire tribe at once. Once hypnotized, the entire tribe could be sent out to do battle as though it were a superorganism, as if the individuals were but individual cells in a great body—sharing a common gene pool and being governed by a single head.
And battle they did—and still do. "And the Lord said unto him, 'Surely I will be with thee, and thou shalt smite the Midianites as one man.'" [Judges 6:16] "Kill a commie for Christ!" "Impeach Earl Warren!" "Stop that wicked woman who has expelled our God from the classrooms!"
If my readers think the term 'hypnosis' can be applied to religion only in a metaphorical sense, they should hasten to the nearest tabernacular, faith-healing, full-gospel-assembly, fire-baptized, holy-rolling, Pentecost-remembering revival meeting. They will see hypnosis in action, replete with people falling on the ground, jerking and twitching and babbling. They will be able to observe how the contagion spreads from the leaders to the followers. They will observe the anesthetic power of hypnosis, as real cripples—not just the shills—throw down their crutches and prance around to the tune of crunching bone-joints.
Make no mistake about it. The hypnosis used by preachers is real hypnosis. The priests were the first to control it, and to this day they and their politician brethren are the most skilled practitioners of the art.
How do they do it? There are many different ways of inducing a hypnotic state of consciousness, and generally the fakirs use many methods simultaneously. For neurochemical reasons which are still not entirely clear, fasting is a useful means of preconditioning the nervous system to make it more malleable and suggestible. Although lowering of blood sugar probably has much to do with it, it is likely that hormone-like substances known as endogenous opioids are also involved. As the name implies, these chemicals are internally produced opiate-like substances which resemble morphine in their action.
Although Karl Marx was speaking metaphorically when he wrote that "Religion is the opiate of the masses," his words may prove to be literally true as well. There is considerable evidence that hypnosis and 'transcendental' meditation can increase the production of certain of these opioids by the brain. The hallucinations so often accompanying religious experiences may very well be a result of opioid intoxication and verbal suggestions implanted by the guru guiding the religious 'trip.'
Another method of inducing hypnosis is long, repeated prayer. When people 'pray for a sign,' they repeat over and over what it is they want to see or hear. Sooner or later, if their nervous systems are even slightly normal, they should be able to generate vivid experiences fulfilling their wishes. Only wealthy men who say god speaks to them are frauds. Poor people who say this are simply self-deluded.
Although we are accustomed to think of prayer as a type of cosmic begging, it is likely that this type of prayer was a late evolutionary development. The original purpose of prayer, I believe, was to induce trance and, thereby, to effect hallucinatory communication with the 'spirit world.'
Many faith-healing practitioners of hypnosis induce trancelike receptiveness in their prey by physically stunning them. They 'lay on hands.' Starting with their hands on the crown of the victim's head, they utter their hypnotic suggestions (i.e. 'prayers') while gradually moving their hands down the side of the person's head. Finally, when their hands are on the person's neck and ears, they will suddenly put pressure on the nerve-rich cavity behind the ear and on the carotid sinus farther down the neck. This stuns and disorients the victim, and he or she becomes very imprintable. The verbal suggestions of the healer become implanted within as little as two or three seconds.
Of course, this does not always work. If the person being 'healed' has a weak cardiovascular system, or if the 'healer' presses on the carotid sinus too long, cardiac arrest may result and his god cheats the evangelist out of the poor bloke's money. At least one notorious faith healer of our day has given up the practice because of this embarrassing and expensive side-effect. The reader must realize that this method of inducing hypnosis is extremely dangerous, and no competent practitioner will employ it. Only religionists still flirt with it.
But there is a much safer way than nerve-pinching to reduce the faithful to submission: music. Carefully selected hymns can be incredibly powerful tools with which to induce trance. Perhaps the most infamous of these hymns is the one called Just As I Am. By the time Billy Graham and his ilk have brought the crowd to the point of singing this war-horse, the resistance of the audience has already been worn down considerably. And by the time that everyone locks arms and starts singing "I come... I come," only a few can resist the call to rush forth and shoot up on Jeezus.
The evolutionary roots of music can be seen very clearly in such phenomena as American Indian war dances and religious chants. Music did not begin with harmony and stringed instruments. It began with rhythm, with monotonously repeated, rhythmic words and sounds. Drumming surely represents the beginning of instrumental music, and to this day the most primitive forms of music emphasize drums. So too, singing grew out of chanting—the rhythmic repetition of magic words and phrases.
How does music relate to evolutionary fitness?
Consider the Indian war dance. The drumming, chanting, and dancing produce a sensory environment suitable for the induction of hypnotic trance. Once all the warriors are hypnotized, they can act in concert (no pun intended) to rush forth and wipe out the genetic competition. They will not know fear; they will not hesitate; and they will give without hesitation their last full measure to the enterprise. Perhaps the most important part of all this is that all will follow orders reflexively, and there will be a minimum of disorder. The competitive advantages of such behavior are obvious.
Thus, music evolved as a means of inducing hypnotic trance. Hypnotic susceptibility, although older than the human species itself, was elaborated by natural selection as a means of increasing intragroup cohesion and as a means of producing highly ordered, efficient competitive behavior at the intergroup level. As cultural transmission of learned behavior replaced genetic transmission of instinctive behavior, religion emerged as the system deciding the ends for which hypnosis would be applied. The actual mythical content of the individual religions probably did not make much difference: Zeus and Yahweh and Baal are all imaginary, and there is no obvious reason to recommend one over another. However, the structure of the cultural organizations behind the various deities was of great importance. It is obvious that the wizards who pulled the strings in the temple of Yahweh had a much more effective way of running the land of Oz than did those who hid behind the curtains in the temples of Zeus and Baal!
Approaching the end of our story, we see that religion, hypnosis, and music are intimately and unexpectedly interrelated in their evolutionary origins. The three originated together, and all three were critically important factors in making us the creatures we are today. All three are 'natural' phenomena, and can be reconciled with the theory of evolution as we understand it today.
We must remember, however, that things are not automatically to be adjudged good or desirable simply because they are natural. To do so is to fall into the natural-law fallacy so dear to the Catholic Church. To say that something is natural implies nothing more than "that's the way things are at the moment." It does not say we have to keep things that way. In many cases we are free to decide to travel 'unnatural,' newly created paths.
Religion is like the human appendix: although it was functional in our distant ancestors, it is of no use today. Just as the appendix today is a focus of physical disease, so too religion today is a focus of social disease. Although religion was a force accelerating human evolution during the Ice Ages, it is now an atavism of negative value.
Religion still promotes tribal divisions, even though we must recognize that all 'tribes' must henceforth work together for a common cause or all shall surely perish together. No single tribe will survive unless all tribes survive. The divisions created by religions must be eliminated.
The disappearance of religion will be as great a tragedy as the disappearance of smallpox. We will all survive its passing without difficulty and without tears.
But what of music and hypnosis? Are they also atavisms? Are they now tainted because of their former religious associations? I think not.
Music clearly has emerged from its religious cradle and has transported us all to a realm of human emotion and esthetic fulfillment more heavenly than any heaven imagined by the creators of that celestial hunk of real estate!
Music has been set free of its fetters. It may now soar with the human intellect into any esthetic empyrean that intellect may choose to create. The finale of Beethoven's Ninth Symphony can help us to feel more intensely the universal brotherhood of mankind as we hurtle along on the cosmic journey of this spaceship we call earth.
And what of hypnosis? Is it only a tool of unethical control? Must it be forsworn because Hitler and Jim Jones used it?
Unlike the case concerning music, the answer to this question is not quite as easy to formulate. We cannot deny that even today hypnotic suggestion can be used for evil purposes. But to be forewarned is to be forearmed. We must always keep in mind that as suggestible creatures we are potentially vulnerable to manipulation by unscrupulous persons. But we should not forget that many of the features that most deeply define our humanity derive from the same neuronal circuitry that makes us suggestible.
For what are sympathy and empathy, if not elaborations of our suggestibility? Because we are suggestible, because our emotions are contagious, we can walk into the funeral of a total stranger and quickly feel the same sense of grief and loss as the mourners. We can also see a strange child take its first steps in a public park and feel the same excitement and exhilaration as do its parents.
Because we are suggestible, we can feel sympathy. Because we can feel the same pains as our fellow beings, we will not be uncaring of their plight. We will avoid causing pain in others because our suggestible natures make possible the reflection of that pain back upon us. We are happiest when making others happy, and we do not need mythic systems to make us do good and eschew evil: our nervous systems are hard-wired by evolution to help us do that.
Because our individual happiness is so sensitive to the emotional milieu in which we find ourselves, enlightened self-interest is all we need. With that we shall create an ethical system more true to our natures. We shall strive to cast off the irrelevant totems and taboos of our religious past, that we may emerge into a satisfying new world of ethical fulfillment.
Let us not pray.
This essay originally appeared in American Atheist in October of 1984.
Atheism: Its Logical and Philosophical Foundations
A lecture given at the 26th National Convention of American Atheists in San Francisco, Saturday, 22 April 2000.
Not even within the narrow confines of American Atheists am I thought of as a philosopher, nor do I think of myself as a philosopher. Nevertheless, as a professional Atheist I have had to deal with philosophical issues repeatedly during my lifetime so far—and I am certain I will continue to have to do so during whatever time remains for me. (As you may remember from my lecture last year on the subject of the prospects for physical immortality, I plan to be around for quite a while yet.)
Confessedly, I am an amateur in the field of philosophy. Even so, I wish to convey to you all what I hope will be at least a practical understanding of the logical and philosophical foundations of Atheism which you can use in your own discussions with theists.
Both my formal and informal study of philosophy took place after I became an Atheist at the age of eighteen. Whether it was due to lack of experience or dullness of wit, I found myself being convinced and taken in by each philosopher I studied. Again and again, I found myself thinking, "This is right! This really makes sense. This is the philosopher for me!"—until I read the refutations by the next philosopher in turn. Through all of this, a practical feeling of "common sense" began to develop in me, and I began to identify with Omar Khayyám after each new philosophical encounter:
Myself when young did eagerly frequent
Doctor and Saint, and heard great argument
About it and about: but ever more
Came out by the same Door where in I went.
And so, after several years of intense study of symbolic logic and reading most of the philosophers you probably know about—as well as some you probably haven't heard of—I gave up on philosophy as fruitless: nothing is ever settled in philosophy as compared to science.
After all, the road to scientific progress is well-marked out by milestones with which we are all familiar. Despite the cavils of young-earth creationists, it is a fact that the earth is billions of years old. This is a hard-won discovery of science. It is not going to be reversed by the next geologist who picks up a spade.
It is a fact that life is not a special creation but the product of material forces acting over eons to cause life forms to change—to evolve. The fact of evolution is established not only at the organismal level but at the chemical level as well. DNA is a fact. The genetic code is a fact. Natural selection is a fact. Evolutionary theory is a milestone on the road of scientific progress. It is not going to be disproved by a "creation scientist," even if, anomalously, he should have a Harvard degree.
It is a fact also that there are living things too small to see with the unaided eye. The microbial world is a reality, even if it was not known to the ancient Greek philosophers or the scientifically illiterate blokes who wrote the Hebrew Bible. It is a fact that some of these microbes help to produce the oxygen we breathe, the nitrate fertilizers needed for growth of plants, and the leavened bread of which we are all so fond. It is a fact that many microbes can cause diseases. No Christian Scientist disciple of Mary Faker Eddy is ever going to win an argument against an anthrax bacillus! Microbiology is another milestone—and philosophy has nothing like it.
In dismay, I gave up my pursuit of any solid-as-stone philosophy, even as the ancient alchemists eventually gave up their pursuit of the philosopher's stone. I settled on the Logical Positivists and their descendants the Logical Empiricists—philosophers whose work most easily fit in with my pursuit of science. Bertrand Russell provided the general frame of reference for my thinking, but it was A. J. Ayer's Language, Truth and Logic, with its theory of verifiability, that became my vade mecum—both in my pursuit of science and in my disputes with theists. When Karl Popper refined the principle of verifiability into the principle of falsifiability, I incorporated the nuance without inconvenience.
These philosophers had practical utility for my career in science, and so I pretty much stuck with them, but I despaired of finding justification for my Atheism in philosophy. I was an Atheist because of the evidence of the world: science, history, psychology, and biblical criticism provided all the justification I needed.
I developed my ideas almost completely independently of various philosophers who, unknown to me, were publishing widely read works that said—usually more clearly, as it turns out—many of the same things that I was laboriously bringing to birth in my own mind. Unknown to me at the time, there was the work of Antony Flew, and Kai Nielsen's distinction between meaningful and meaningless religious talk. There was George H. Smith's Atheism: The Case Against God (1979) and Michael Martin's Atheism: A Philosophical Justification (1990). These works contrasted two types of Atheism, variously termed weak vs. strong, negative vs. positive, or implicit vs. explicit Atheism. It was belated reading of these philosophers that caused me to return to the contemplation of the philosophical and logical foundations of Atheism.
Here is what I have gleaned from these and other authors—patched together with some of my own threads of thought.
Weak Atheism
The weak Atheist is an a-theist—a person without theism, someone in whom god-belief is absent. The prefix a- is the so-called alpha-privative of Greek grammar. It signifies 'not' or 'without'. In this sense, Agnostics are weak Atheists—for the simple reason that they are without god-beliefs.
As George H. Smith puts it (p. 7): "Atheism, in its basic form, is not a belief: it is the absence of belief. An atheist is not primarily a person who believes that a god does not exist; rather, he does not believe in the existence of a god."
This should lay to rest arguments that Atheism is itself a religion. Atheism is not a belief system; it is a system without beliefs. As Madalyn Murray O'Hair always used to say, "Calling Atheism a type of religion is like calling health a type of disease."
For weak Atheists, the burden of proof is on the theists. Atheists need only poke holes in the theists' attempted proofs. The challenge to theists can often be made stronger by asking them to define their god. That which cannot be defined cannot be believed in.
The request for definition can sometimes have a devastating effect on the theistic apologist. Some years ago, I had occasion to do a radio debate with John Koster, the author of a book-length libel entitled The Atheist Syndrome. I asked him to give an operational definition of his god. If a geologist can give an operational definition of 'harder than' by saying that if Rock A can scratch Rock B, A is harder than B, surely a theologian should be able to define divinity. "What does your god do?" I asked him. "What procedure must one follow to detect your god?"
The question blew him out of the water. He had never been asked to define his god, and he never recovered. The debate was mine.
It is sometimes lamented that Atheism is such a "negative term"—that it defines us in terms of what we are not or what we are against. I like to ask such complainers if they think 'independence' is a negative concept. After all it has the negative prefix in-. Independence is free of dependence. Is independence negative?
Or how about the medical term asepsis? It means the absence of sepsis—the absence of infection. Isn't that a terribly negative concept?
Strong Atheism
In addition to being free of god-belief, the strong Atheist positively denies the existence of one or more gods. The explicit Atheist might deny, for example, that the deluge deity of the Old Testament exists, citing the positive evidence of geology to show that the earth was not recently scoured by quintillions of gallons of water. Hence, the god whose definitional biography alleges he destroyed the earth's inhabitants by drowning all but a few of them cannot exist: the evidence of geology leaves no room for such a god.
The positive Atheist might deny the existence of a Jesus of Nazareth on the grounds that the city now known as Nazareth did not exist in the first centuries BCE and CE. Just as there could never have been a Wizard of Oz if Oz is a fiction, there could not have been a Jesus of Nazareth if there was no Nazareth at the time he is alleged to have been driving devils into Porky Pig and his extended family. A Jesus of Cucamonga or Hoboken, maybe—but that's a different god and a separate problem.
A strong Atheist might also deny the existence of a deity that is defined as being both omnipotent and omnibenevolent, citing Epicurus' trilemma as proof:
Either God wants to abolish evil, and cannot;
Or he can, but does not want to;
Or he cannot, and does not want to.
If he wants to, but cannot, he is impotent.
If he can, but does not want to, he is wicked.
If he neither can, nor wants to,
he is both powerless and wicked.
But if (as they say) God can abolish evil,
And God really wants to do it,
Why is there evil in the world?
The existence of evil thus decisively rules out the existence of such a deity.
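The trilemma's case analysis is exhaustive, so it can even be checked mechanically. The sketch below (the two-variable model and the function name are my own illustrative choices, not anything from the lecture) enumerates every combination of "wants to abolish evil" and "can abolish evil" and confirms that no combination preserves both omnipotence and omnibenevolence once the existence of evil is granted:

```python
from itertools import product

def consistent(wants, can, evil_exists=True):
    """Return True if this (wants, can) case is consistent with a god
    that is both omnipotent and omnibenevolent, given that evil exists."""
    omnipotent = can          # lacking the power to abolish evil denies omnipotence
    omnibenevolent = wants    # unwillingness to abolish evil denies omnibenevolence
    # If the god both wants to and can abolish evil, evil should not exist.
    no_evil_expected = wants and can
    return omnipotent and omnibenevolent and not (no_evil_expected and evil_exists)

# Enumerate all four cases of the trilemma.
surviving = [(w, c) for w, c in product([True, False], repeat=2) if consistent(w, c)]
print(surviving)  # -> [] : no case survives alongside the existence of evil
```

The empty result is just Epicurus' argument restated: each of the four cases fails for the reason the verses give (impotence, wickedness, both, or contradiction with observed evil).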
The strong Atheist might argue against deities that are logically incoherent: If a god is all-powerful, can it build a wall so strong it cannot tear it down? If a god is infinite, is s/h/it everywhere? Inside the devil? Is the deity in my Dial-an-Atheist® messages that argue against s/h/its existence? Clearly, for a god to be infinite, it must be everywhere—there is no place where it can fail to be. But if it must be everywhere, it lacks the power to absent itself from any place. If it lacks power in any way, it is not omnipotent! So it cannot be both omnipotent and infinite. An explicit Atheist might argue that talk about such deities is meaningless.
The Principle of Testability
This brings us to the principles of verifiability, falsifiability, or testability as criteria of meaning. To give you a common-sense idea of these abstruse philosophical concepts, I'd like to have you consider two rather silly propositions:
(1) The moon is made of green cheese.
(2) Undetectable gremlins inhabit the rings of Saturn.
An adherent of the testability theory of meaning would assert that only one of the above propositions is false. He or she would argue that if a proposition can't be tested even in the imagination it can't even be false: it is meaningless.
Applying this principle to the two sentences above, we may see that the moon-of-cheese proposition is easily testable. Even before we had rockets and went to the moon and discovered that moon dust made lousy salad dressing—I guess I've confused green cheese and blue cheese here—it was possible to imagine how one could go about testing the statement without violating either the laws of logic or the laws of science. In fact, studying the reflectance and the spectrum of light coming from the moon had shown over a century earlier that the moon was based on silica, not carbon.
The moon sentence was capable of being tested. It was tested and shown to be false. It was capable of being falsified. It was, in fact, falsified.
But what of the Saturn-and-gremlins sentence? Can you even imagine a way to test it? By definition, the gremlins alleged to live there are undetectable. Even if we flew to Saturn with the best gremlinometers the "creation scientists" infesting NASA were capable of building, it would be to no avail. Undetectable gremlins cannot be detected—anywhere, not just on the rings of Saturn. The gremlin sentence thus is untestable. Therefore it can be neither true nor false: it is meaningless, not false.
God sentences, when you examine them closely, often prove to be undetectable-gremlin sentences. They aren't even false; they are meaningless. Except for sentences dealing with really "old-time religion"—such as a religion whose gods camp out on Mt. Olympus—most god sentences cannot be tested.
You can appreciate why this is necessarily so. When ancient theologians claimed that Zeus lived on Mt. Olympus, there were pesky materialists who climbed the mountain to pay him a visit. Without exception, everyone who reached the summit found no evidence of godly house-keeping—not even a note saying "Out to lunch—Zeus." (Of course, they found no holy theo-toilet up there either, so apologists probably argued that Zeus didn't need to eat—whether noon-time noshes or two-martini lunches.)
This had a negative impact, we may suppose, on the income of the priests who claimed personal knowledge of the wants and demands of Zeus. It was necessary to recast Zeus less anthropomorphically—and less physically. In order to keep the faith-drachmas rolling in, Zeus had to become an undetectable gremlin—as have all the gods of modern religions.
This transmogrification of gods into gremlins has been extremely useful—it has kept religions from going extinct—but it can have some amusing aspects. As Ann and I were flying to California for this convention, we happened to sit in front of a young mother with her precocious four-year-old son. As the plane ascended to surmount a large bank of storm clouds, the child asked his mother, "Is that where God lives, up there?"
The mother answered, "No, God lives much higher up."
Even more curious now, the boy asked, "Do you have to have a rocket ship to get up to where God lives?"
Rather nonplussed by the boy's questions, the mother could only answer something to the effect that there is no means of transport available to get to "God's house." Of course, the boy's next question was "Why?"
Metaphysics, Ethics, and Aesthetics
One of Ayer's conclusions was that metaphysics—areas of philosophy beyond the physical, such as ontology and cosmology—could be eliminated. Sentences such as "Existence precedes essence" are meaningless. They can't be tested, nor can one imagine a way to test such blather. Meaningless also are some of the "great" questions of philosophy, such as "Is the universe eternal or time-limited?"
Since the term universe means 'everything that exists', an observer trying to test the proposition that the universe began a finite amount of time in the past would have to have been in existence longer than the universe to observe its beginning. But a part of the universe—the observer—cannot be older than the universe itself. So the proposition alleging a finite universe is meaningless. (Big-Bang theory doesn't really get around this objection. It simply traces the universe back to a "singularity" where time and space blend together. It is meaningless to make statements about what happened—or ask if anything happened—"before" the Big Bang.)
So too, the proposition that the universe is eternal is meaningless. The observer would have to exist longer than eternity to observe the duration of the universe. But the notion of existence longer than eternity is a meaningless misuse of words. So the number-one "Great Question of Philosophy" has to be given up as fruitless and unknowable—meaningless, in other words.
After Ayer it has become clear that ethical and aesthetic statements too are meaningless in the sense that they can have no empirical truth value. There is no test imaginable that could test—let alone prove—the claim that my composition "Theophagy Tango" is inferior to Brahms' "Symphony Number One," despite its being in the same key of C minor. De gustibus non est disputandum.
Similarly, there is no test imaginable that could prove that incest and spousal abuse are bad, or that loving your enemies and feeding the hungry are good. That is, such claims in the abstract cannot be tested and are empirically meaningless.
Both aesthetic and ethical statements are, however, often emotionally meaningful, even if they cannot be veridically meaningful. One certainly could test the proposition "Most people feel that incest is bad," or the proposition "Beethoven is more popular than Nine Inch Nails." Those propositions are empirically meaningful. But after the polls have been taken, we would be no closer to knowing if incest is absolutelybad or if Beethoven is inherently betterthan NIN.
Critiques of Ayer and Logical Empiricism
Not surprisingly, a philosophical system so devastating to theology and other metaphysical systems has come under strong attack by theologians and other philosophers. Norman Geisler, for example, in his Christian Apologetics [Baker Book House 1976] wittily makes the claim that "The verifiability principle itself is not empirically verifiable." Even many Atheist philosophers have agreed with this criticism. But why?
The verifiability principle is empirically verifiable. It is simply an observation of what kinds of questions scientists and ordinary mortals can deal with. To grant meaning to untestable propositions would lead to impossible consequences: decisional gridlock. For there is an infinitude of untestable hypotheses, all equally untestable, which would overload our computational channels and reduce us to a condition of decisional anergy.
Of course, it might be argued that there exists also an infinitude of testable hypotheses. But this poses no problem of decisional gridlock. Most such hypotheses can be ruled in or out immediately, on the basis of what is already known. If someone claims there is peanut butter in my toaster, then based upon my knowledge of who has had access to my toaster and the nature of their habits, I can rule in or out the likelihood of the proposition being true. I need only be fully conscious of the claim to lay it to rest one way or the other. The testable infinitude does not lead to the sort of La Brea Tar Pit into which one is ineluctably dragged by the untestable infinitude.
The Spectre of Gesargenplotzianism
It should be noted that it also is not in the best interest of theists to insist on the unreliability of the verifiability principle. If they seriously want to entertain untestable propositions—even those for which one cannot imagine a way of testing—they are in for trouble.
I am indebted to John B. Hodges who, on AACHAT, drew attention to a certain Harlan Miller, a philosophy professor at Virginia Tech, in Blacksburg, Virginia. Miller, it seems, is the inventor of Gesargenplotzianism, which is greatly superior to ordinary Atheism in arguing with theists. As Hodges puts it:
"The major advantage that Gesargenplotzianism has over Atheism is that the Gesargenplotzian does not have to dispute much that a religious believer claims: all the arguments and testimony that support any religion also support Gesargenplotzianism. So, e.g., if someone is a creationist, or believes that the angel Moroni brought golden tablets for Joseph Smith to transcribe, the Gesargenplotzian can say, 'Sure, all that happened. But you have not heard the latest news.'
"If you doubt the truth of Gesargenplotzianism, then, on the same grounds, you must doubt the truth of all other religions."
Note here that the underlying problem is that the revelations upon which religionists rely—and Gesargenplotzians pretend to rely—to support their belief systems are not subject to external verification or falsification—they can't be tested. And, of course, they contradict each other, just as the multitude of jarring sects and cults of the world contradict each other. Did god A say "X" and god B say "Not-X"? Who can resolve such a question?
Of course, most of the religions of the world have components that can be tested and falsified (e.g., religions that claim a young age for the earth and a recent world-destroying flood). But after all the falsifiable components have been refuted and wiped away, a core of beliefs inevitably remains that is not testable in any useful way: for example, the belief that the god of the particular religion is actually "good" but allows evil in the world to help us mature morally. What will they do when the Gesargenplotzian says, "Yes, but that was only up until yesterday"?
God Speaks!
My own method of getting people to observe the futility of allowing untestable propositions (i.e., to verify the verifiability principle itself) is to claim that yes, they are right: a god does in fact exist—and I am He!
Although I have existed from eternity, they and the entire world around them have only existed for three minutes. You see, I created them—and you too—with false memories in the midst of a world that agrees with those false memories in every detail.
Audience Resistance
What's that? You—over there in the second row—are Catholic and think I don't look like those old photographs of Jesus Christ? You want to see some ID? My celestial driver's license, maybe? You want me to do some tricks? Okay [taking out a deck of cards], pick a card, any card ...
That's not the type of trick you had in mind? You meant you wanted a miracle? Well, sorry! I don't do miracles any more. I could if I wanted to, but I don't want to this millennium.
You there, in the mauve shirt, are Jewish and think I'm a blasphemer for saying I'm none other than Yahweh in the flesh? You want to stone me to death to "prove" I'm not your god? How could you prove anything by doing that? After all, I would only pretend to die.
Real gods don't eat quiche or die, you know! No, I would just pretend to die. I might even pretend to rot away. But woe unto thee who seekest after a sign! I'll get even with thee after thou diest. I have a fractal dimension just waiting for thee! Lots of sharp edges in it. And thou shalt suffer for the rest of eternity for doubting me.
Okay, behold: I just read your thoughts over there beside the star and crescent flag! You think you're going to kidnap me and torture me into confessing that I'm not really Allah. Crucifixion, maybe? Well, get ready for fractal frights! Because of your doubt, and because of your blasphemous presumption that you could torture me, I will just go along with the joke.
I'll pretend to confess that I'm just a lowly primate with 46 chromosomes instead of a god with however many chromosomes they found in the sweat stains on the Shroud of Turin. [See? I just feigned non-omniscience to let you see how treacherous your perceptions can be—Oh ye of little faith!]
I might even make my blood look like ordinary blood instead of the ethanolic heavenly hemoglobin that appears in chalices when Catholic priests correctly mumble the magic words I taught them centuries ago.
But watch out when you die! While you're alive, you've never felt pain like what you'll feel after you die. I'll show you! After death, you'll know everything. But then it will be too late.
You see, I could do some magic tricks to dazzle you into realizing I really am your god. But I don't want to do that. I want you to have faith. I want you to believe without a sign. Oh! What a wicked generation it is that seeketh a sign! Those of you who have faith I shall reward after you die. However, those of you who are female will have to do a bit more than just have faith. See me after the show for detailed instructional revelations ...
Conclusion
Well, by now you've gotten the picture. Even theists would have realized by this point that you can't take untestable propositions seriously. They would have verified by observation that the testability principle of meaning is correct—at least in the sense that it is necessary. They would have seen that the testability principle is not metaphysics; it is physics and physiology. It is the principle that, practically speaking, allows us to separate meaningful propositions from meaningless ones.
With this practical defense of the principle of verifiability/falsifiability/testability we can get back to the problem of sorting out what kind of Atheists we are.
For all the gods that can be defined operationally and tested, we are Strong Atheists. We deny their existence because we can disprove it with the appropriate tests.
For all the gods that aren't defined or are meaningless, we are Weak Atheists. We can't deny them because all statements about their qualities are meaningless. They can't even be false. So we have to rest content simply in being free of belief in them. Each of us is both a Strong and a Weak Atheist simultaneously, depending upon the type of spook in question.
Atheism and Humanism:
Veridical and Ethical Dimensions of Self-description
An address given at the 1999 winter solstice banquet of the Humanist Community of Central Ohio
Holding only part of her tongue in her cheek, my wife Ann for many years has explained the difference between Atheists, Agnostics, Humanists, and Unitarians as follows:
• Atheists are people who proclaim proudly and courageously to the world, "There are no gods and the whole subject is meaningless."
• Agnostics are Atheists who are afraid to admit to an unknown god that they are Atheists.
• Humanists are Atheists who are afraid to tell the world they are Atheists.
• Unitarians are Atheists who are afraid to tell themselves they are Atheists.
Not wishing to incur the wrath of my wife for disagreeing with her straightforward classification, my only available course is to amend a part of it and try to elucidate the part comparing Atheists with Humanists.
As I see it, most popular comparisons of Atheism and Humanism involve confusion of categories: instead of comparing apples and oranges, they should be comparing apples and menus, say, or apples and hunger.
When people describe themselves as being Atheists, they are making a statement concerning what they consider to be true or false regarding the world of physical reality. They are making a veridical claim.
When they describe themselves as being Humanists, they are making a statement about the world of values that understands that all value systems are creations of human beings, that humankind (to paraphrase Protagoras, an ancient Greek philosopher) is the measure of all things. They are making an ethical claim.
Atheism and Humanism interact with each other by virtue of the fact that what one understands to be true or false about the world has an impact on what values one may have to abandon, continue to hold, or create anew. Conversely, the values one holds may motivate the search for truth more actively in one sphere of knowledge than in another.
Atheism is considerably easier to discuss than Humanism, so let me begin with Atheism.
An Atheist is a person without god beliefs. Atheists can be free of god beliefs for several reasons.
They may have observed that, one by one, the particular gods our kind has worshipped have been ruled out by the march of science. Jupiter, Zeus, Thor, and at least a part of Jehovah were rendered redundant by Ben Franklin's kite experiment. No one supposes any longer that Zeus and Thor might yet exist after all. They appear clearly to be the inventions of prescientific folks who needed an explanation of scary phenomena they could not comprehend. When one considers the enormous number of gods, goddesses, and celestial critters who have been wiped out in the course of history, Atheists—who tend to be empiricists—simply extend the trend. Jesus, Jehovah, Allah, and the rest of the heavenly gremlins have never existed except in the minds of frightened prescientific people.
People may be Atheists because they have observed also that any god, goddess, or gremlin that can be defined adequately can be shown conclusively not to exist. Thus, Yahweh is partly definable as a deity who once drowned the entire sphere of the earth under a shell of water that covered the highest mountain peaks of the world. Geology shows conclusively that no such flood ever occurred. Thus, Yahweh as agent of the flood does not exist. Time to redefine Yahweh. Reduce the size of his curriculum vitae (curriculum mortis?).
Yahweh is the author of plagues. Well, Pasteur, Koch, and other microbiologists seem to have put that Yahweh out of business. Time to redefine Yahweh again. And again, and again—until all that remains of Yahweh resembles the grin (or grimace) of a celestial Cheshire cat. "Define your god," such Atheists challenge, "and I'll show it doesn't exist."
Another reason people may be Atheists is they have concluded that when one moves from claims about specific gods to god concepts in general one moves into meaningless, metaphysical territory. Claims about "God" with a capital-G generally cannot be tested. Worse, one often cannot even imagine a way to test them. This renders them scientifically meaningless. When you propose a test—such as "If there is a god, let it strike me dead within the next five minutes"—godmongers generally will inform you that their divinity isn't going to condescend to answer such challenges.
And so, claims of an infinite deity who loves both people and the pox are no more testable than are claims of undetectable gremlins inhabiting the rings of Saturn. It is a waste of breath even to deny that undetectable gremlins haunt the sixth planet. Even the sentence "Undetectable gremlins do not exist" is meaningless. Gremlin sentences cannot even be false: they are without meaning. So too, god statements for the most part can't even be false. They are as meaningless as the gremlin statements.
It is sometimes objected that the assertion that propositions must be testable to be meaningful is itself a metaphysical principle—incapable of being tested and therefore as meaningless as the principles it seeks to eradicate. I guess the critics who make this claim just haven't tried very hard to test the testability principle. When they hit me with this allegation I usually reply simply that they are wrong.
I know they are wrong because, after all, I am god. I created them mere minutes before they made their allegation, fully programmed to speak as they did. They only think they were here, days-to-years ago, because I planted such virtual memories in their brains. When they challenge me to prove I am god—ask me to do some tricks or something—I reply that I could if I wanted to, but I don't want to. I want to test their faith. If they belong to that evil generation that seeketh a sign, I'll get even with them after they die. I'll punish them for an eternity. There's nothing they can possibly do to test my claim.
The purpose of this autoapotheosis, of course, is to get such folks to observe for themselves the need for such a principle as the testability principle of meaning. They are observing that the principle derives from the experience of dealing with the world. It is not the spinnings of some metaphysical spider.
People also may be Atheists because they find the concept of god incoherent—a concept that inevitably leads to self-contradictions. If a god is infinitely powerful, can it build a wall so sturdy it cannot tear it down? You get the idea.
If the only true god loves people, why did it allow smallpox to plague us for so long? If that god is both good and infinitely powerful, why does it allow evil? Why did it create the devil? If our alleged god is omniscient and prescient, why did it give us free will if the certain consequence was to be an original sin to be punished by an eternity of torture?
Atheists want to know what is true about the world. What are the facts of existence? How did we get here? Why did we create the gods? When people describe themselves as Atheists they are describing themselves as being concerned about the veridical aspects of existence—they are saying they want to know what is true.
The problem of defining Humanism
With this brief overview of Atheism, let us now turn our attention to the more difficult subject of Humanism.
As I indicated at the beginning, I believe that when you describe yourself as a Humanist you are making some sort of statement concerning where your values lie—the key to which your heart is tuned, to use the language of romanticism. More particularly, you are relating your value system to humans, human being the linguistic core of the term Humanism. But beyond this vague understanding of what you might mean when you call yourself a Humanist, is there anything more specific we can deduce from the term?
It is fairly fashionable to trace the roots of modern Humanism back to the ancient Greek Sophist philosopher Protagoras, who was born somewhere around the year 500 BCE in the city of Abdera, in Thrace. In his book On the Gods, he wrote: "With regard to the gods, I cannot feel sure either that they are or are not, nor what they are like in figure; for there are many things that hinder sure knowledge, the obscurity of the subject and the shortness of human life." [Bertrand Russell, History of Western Philosophy, p. 77] That sounds sort of like an Agnostic, doesn't it?
We can conclude from this at least that Protagoras would not have been likely to promulgate a system of ethics which derived values and morals from the desires of one or more gods. He could not have felt that something was right or wrong because some god or other said so. He had to look somewhere other than to the gods to find the source of values.
Protagoras is most famous for his apothegm "Man is the measure of all things—of things that are that they are, and of things that are not that they are not." (This was translated before it became proper etiquette to use gender-neutral language. Actually, Protagoras used the Greek word anthropos which, like its Latin counterpart homo, really meant 'human being' as opposed to some other sort of animal. Had he intended to restrict his meaning to male humans, he would have used the word aner.) According to Bertrand Russell, this is usually interpreted to mean that each person is the measure of all things and that when people differ there is no objective standard by virtue of which one person is right and the other one is wrong.
Pragmatists nevertheless infer from this apothegm that while one opinion may not be truer than another, one opinion nevertheless may be better than another. Russell cites the case of a person suffering from jaundice. There is no point in arguing with such a person that the world isn't really yellow. To that person it is instantly obvious that the world is yellow. However, since health is better than sickness, we can argue that the opinion of the healthy person concerning the color of the world is better than that of the person with jaundice.
Even so, I have difficulty understanding exactly what Protagoras meant. The above interpretation seems to imply Protagoras was saying that humans are the measurers, and thus individual differences may bias judgment of what is true about the world. But perhaps he meant exactly what he said: that humans are the measure, i.e., the standard, by which the value of all things is to be assessed.
Interpreted in this fashion, we would be dealing with a system in which things are valuable in proportion to the degree to which they preserve individuals or humanity in general, increase their happiness, help them advance to their goals, etc. In adopting such a system for myself, I would find actions and things—including people—of positive value if they help me to experience beauty, to know love, and to exercise my creative powers. (This is what I often refer to as my "hedonic triad.")
As you probably have perceived, this would be quite compatible with Utilitarianism à la Jeremy Bentham and John Stuart Mill, although I'm not certain it would logically entail the principle of "the greatest good for the greatest number." It also would seem to be compatible with a more selfish, individualistic focus for "the greatest good." About the only thing of which we can be certain at this point in our analysis is that spooks have nothing to do with values—at least not as the source of values.
People are the source of values, the creators of values, and the evaluators. Of course spooks can be very valuable—or at least the idea of spooks can be valuable. The spook trade can be very lucrative, as Jerry Falwell, Pat Robertson, Robert Schuller, and our dear friend Karol Wojtyła demonstrate so well. In fact, the specter-speculation market is the richest one on the planet—but the spooks are the commodities being bought and sold. They have no inherent value and cannot serve as a standard of evaluation. It is human beings who both create and evaluate the spooks and specters, paying higher or lower prices for them in proportion as they think they can be enriched, made happier, or longer preserved by owning one.
As a Humanist, unlike the theists, I find the spooks and specters to be of negative value—things which in the broad perspective are leading humanity to disaster, destruction, and quite possibly extinction. Often, the trade in spooks and spookish spin-off items is hedged about by restrictions which have baleful consequences for humanity. For example, when Karol Wojtyła, a.k.a. John Paul II (I guess we can't really call him JP Jr., since JP I didn't have any children at all; at least the Catholic Church has never claimed or admitted to any), sells his Fantasyland franchises, he requires that the Playland section be permanently barred shut: no birth control, no sex for pleasure, no abortion, no homosexuality. Only sex that will lead to an overpopulated planet is encouraged.
But of course, all the spooks that are still being actively traded in local or world markets—not just those manufactured in Rome—have their specially licensed spook-estate agents who try to sell not only their particular spirits but spook-associated celestial apartments and streets-of-gold easements. Without fail, these spook-estate agents tell buyers that the properties they will own after they die will be more valuable than the ones they own at the time they are forking over money to pay the mortgage in advance for promised heavenly un-real estate. Buying swampland real-estate in Florida never looked so good as when compared to the spook-estate scams that are a daily occurrence in this country! When you pay rent in advance for heaven, you have less money to pay your earthly rent, or perhaps you won't be able to pay it at all. It may make you happy in the short run, but in the long run you and your kind will suffer and be harmed.
A Humanist will eschew such dealings not so much because they are silly, but because in the Utilitarian hedonic calculus they are of negative value. While you all will probably agree with everything I have said so far—unless there's someone out there who did a Ph.D. thesis on Protagoras—there is one troubling loose end in the fabric of my reasoning.
Throughout, I have pretended that you and I know what is meant by the term "human." It is the heart of the word "Humanism." But do we know what it really means? Before answering this question, I must digress for a moment.
• Atheists are people who proclaim proudly and courageously to the world, "There are no gods and the whole subject is meaningless."
• Agnostics are Atheists who are afraid to admit to an unknown god that they are Atheists.
• Humanists are Atheists who are afraid to tell the world they are Atheists.
• Unitarians are Atheists who are afraid to tell themselves they are Atheists.
Not wishing to incur the wrath of my wife for disagreeing with her straight-forward classification, my only available course is to amend a part of it and try to elucidate the part comparing Atheists with Humanists.
As I see it, most popular comparisons of Atheism and Humanism involve confusion of categories: instead of comparing apples and oranges, they should be comparing apples and menus, say, or apples and hunger.
When people describe themselves as being Atheists, they are making a statement concerning what they consider to be true or false regarding the world of physical reality. They are making a veridical claim.
When they describe themselves as being Humanists, they are making a statement about the world of values that understands that all value systems are creations of human beings, that humankind (to paraphrase Protagoras, an ancient Greek philosopher) is the measure of all things. They are making an ethical claim.
Atheism and Humanism interact with each other by virtue of the fact that what one understands to be true or false about the world has an impact on what values one may have to abandon, continue to hold, or create anew. Conversely, the values one holds may motivate the search for truth more actively in one sphere of knowledge than in another.
Atheism is considerably easier to discuss than Humanism, so let me begin with Atheism.
An Atheist is a person without god beliefs. Atheists can be free of god beliefs for several reasons.
They may have observed that one-by-one, the particular gods our kind has worshipped have been ruled out by the march of science. Jupiter, Zeus, Thor, and at least a part of Jehovah were rendered redundant by Ben Franklin's kite experiment. No one supposes any longer that Zeus and Thor might yet exist after all. They appear clearly to be the inventions of pre-scientific folks who needed an explanation of scary phenomena they could not comprehend. When one considers the enormous number of gods, goddesses, and celestial critters who have been wiped out in the course of history, Atheists—who tend to be empiricists—simply extend the trend. Jesus, Jehovah, Allah, and the rest of the heavenly gremlins have never existed except in the minds of frightened prescientific people.
People may be Atheists because they have observed also that any god, goddess, or gremlin that can be defined adequately can be shown conclusively not to exist. Thus, Yahweh is partly definable as a deity who once drowned the entire sphere of the earth under a shell of water that covered the highest mountain peaks of the world. Geology shows conclusively that no such flood ever occurred. Thus, Yahweh as agent of the flood does not exist. Time to redefine Yahweh. Reduce the size of his curriculum vitae (curriculum mortis?).
Yahweh is the author of plagues. Well, Pasteur, Koch, and other microbiologists seem to have put that Yahweh out of business. Time to redefine Yahweh again. And again, and again—until all that remains of Yahweh resembles the grin (or grimace) of a celestial Cheshire cat. "Define your god," such Atheists challenge, "and I'll show it doesn't exist."
Another reason people may be Atheists is they have concluded that when one moves from claims about specific gods to god concepts in general one moves into meaningless, metaphysical territory. Claims about "God" with a capital-G generally cannot be tested. Worse, one often cannot even imagine a way to test them. This renders them scientifically meaningless. When you propose a test—such as "If there is a god, let it strike me dead within the next five minutes"—godmongers generally will inform you that their divinity isn't going to condescend to answer such challenges.
And so, claims of an infinite deity who loves both people and the pox are no more testable than are claims of undetectable gremlins inhabiting the rings of Saturn. It is a waste of breath even to deny that undetectable gremlins haunt the seventh planet. Even the sentence "Undetectable gremlins do not exist" is meaningless. Gremlin sentences cannot even be false: they are without meaning. So too, god statements for the most part can't even be false. They are as meaningless as the gremlin statements.
It is sometimes objected that the assertion that propositions must be testable to be meaningful is itself a metaphysical principle—incapable of being tested and therefore as meaningless as the principles it seeks to eradicate. I guess the critics who make this claim just haven't tried very hard to test the testability principle. When they hit me with this allegation I usually reply simply that they are wrong.
I know they are wrong because, after all, I am god. I created them mere minutes before they made their allegation, fully programmed to speak as they did. They only think they were here, days-to-years ago, because I planted such virtual memories in their brains. When they challenge me to prove I am god—ask me to do some tricks or something—I reply that I could if I wanted to, but I don't want to. I want to test their faith. If they belong to that evil generation that seeketh a sign, I'll get even with them after they die. I'll punish them for an eternity. There's nothing they can possibly do to test my claim.
The purpose of this autoapotheosis, of course, is to get such folks to observe for themselves the need for such a principle as the testability principle of meaning. They are observing that the principle derives from the experience of dealing with the world. It is not the spinnings of some metaphysical spider.
People also may be Atheists because they find the concept of god incoherent—a concept that inevitably leads to self-contradictions. If a god is infinitely powerful, can it build a wall so sturdy it cannot tear it down? You get the idea.
If the only true god loves people, why did it allow smallpox to plague us for so long? If that god is both good and infinitely powerful, why does it allow evil? Why did it create the devil? If our alleged god is omniscient and prescient, why did it give us free will if the certain consequence was to be an original sin to be punished by an eternity of torture?
Atheists want to know what is true about the world. What are the facts of existence? How did we get here? Why did we create the gods? When people describe themselves as Atheists they are describing themselves as being concerned about the veridical aspects of existence—they are saying they want to know what is true.
The problem of defining Humanism
With this brief overview of Atheism, let us now turn our attention to the more difficult subject of Humanism.
As I indicated at the beginning, I believe that when you describe yourself as a Humanist you are making some sort of statement concerning where your values lie—the key to which your heart is tuned, to use the language of romanticism. More particularly, you are relating your value system to humans—human being the linguistic core of the term Humanism. But beyond this vague understanding of what you might mean when you call yourself a Humanist, is there anything more specific we can deduce from the term?
It is fairly fashionable to trace the roots of modern Humanism back to the ancient Greek Sophist philosopher Protagoras, who was born somewhere around the year 500 BCE in the city of Abdera, in Thrace. In his book On the Gods, he wrote: "With regard to the gods, I cannot feel sure either that they are or are not, nor what they are like in figure; for there are many things that hinder sure knowledge, the obscurity of the subject and the shortness of human life." [Bertrand Russell, History of Western Philosophy, p. 77] That sounds sort of like an Agnostic, doesn't it?
We can conclude from this at least that Protagoras would not have been likely to promulgate a system of ethics which derived values and morals from the desires of one or more gods. He could not have felt that something was right or wrong because some god or other said so. He had to look elsewhere than to the gods to find the source of values.
Protagoras is most famous for his apothegm "Man is the measure of all things—of things that are that they are, and of things that are not that they are not." (This was translated before it became proper etiquette to use gender-neutral language. Actually, Protagoras used the Greek word anthropos which, like its Latin counterpart homo, really meant 'human being' as opposed to some other sort of animal. Had he intended to restrict his meaning to male humans, he would have used the word aner.) According to Bertrand Russell, this is usually interpreted to mean that each person is the measure of all things and that when people differ there is no objective standard by virtue of which one person is right and the other one is wrong.
Pragmatists nevertheless infer from this apothegm that while one opinion may not be truer than another, one opinion nevertheless may be better than another. Russell cites the case of a person suffering from jaundice. There is no point in arguing with such a person that the world isn't really yellow. To that person it is instantly obvious that the world is yellow. However, since health is better than sickness, we can argue that the opinion of the healthy person concerning the color of the world is better than that of the person with jaundice.
Even so, I have difficulty understanding exactly what Protagoras meant. The above interpretation seems to imply Protagoras was saying that humans are the measurers, and thus individual differences may bias judgment of what is true about the world. But perhaps he meant exactly what he said: that humans are the measure, i.e., the standard, by which the value of all things is to be assessed.
Interpreted in this fashion, we would be dealing with a system in which things are valuable in proportion to the degree to which they preserve individuals or humanity in general, increase their happiness, help them advance to their goals, etc. In adopting such a system for myself, I would find actions and things—including people—of positive value if they help me to experience beauty, to know love, and to exercise my creative powers. (This is what I often refer to as my "hedonic triad.")
As you probably have perceived, this would be quite compatible with Utilitarianism à la Jeremy Bentham and John Stuart Mill, although I'm not certain it would logically entail the principle of "the greatest good for the greatest number." It also would seem to be compatible with a more selfish, individualistic focus for "the greatest good." About the only thing of which we can be certain at this point in our analysis is that spooks have nothing to do with values—at least not as the source of values.
People are the source of values, the creators of values, and the evaluators. Of course spooks can be very valuable—or at least the idea of spooks can be valuable. The spook trade can be very lucrative, as Jerry Falwell, Pat Robertson, Robert Schuller, and our dear friend Karol Wojtyła demonstrate so well. In fact, the specter-speculation market is the richest one on the planet—but the spooks are the commodities being bought and sold. They have no inherent value and cannot be an evaluation standard. It is human beings who both create and evaluate the spooks and specters, paying higher or lower prices for them in proportion as they think they can be enriched, made happier, or longer preserved by owning one.
As a Humanist, unlike the theists, I find the spooks and specters to be of negative value—things which in the broad perspective are leading humanity to disaster, destruction, and quite possibly, extinction. Often, the trade in spooks and spookish spin-off items is hedged about by restrictions which have baleful consequences for humanity. For example, when Karol Wojtyła—a.k.a. John Paul II (I guess we can't really call him JP-Jr., since JP-I didn't have any children at all—at least, the Catholic Church has never claimed or admitted to any)—sells his Fantasyland franchises, he requires that the Playland section be permanently barred shut: no birth control, no sex for pleasure, no abortion, no homosexuality. Only sex that will lead to an overpopulated planet is encouraged.
But of course, all the spooks that are still being actively traded in local or world markets—not just those manufactured in Rome—have their specially licensed spook-estate agents who try to sell not only their particular spirits but spook-associated celestial apartments and streets-of-gold easements. Without fail, these spook-estate agents tell buyers that the properties they will own after they die will be more valuable than the ones they own at the time they are forking over money to pay the mortgage in advance for promised heavenly un-real estate. Buying swampland real estate in Florida never looked so good as when compared to the spook-estate scams that are a daily occurrence in this country! When you pay rent in advance for heaven, you have less money to pay your earthly rent, or perhaps you won't be able to pay it at all. It may make you happy in the short run, but in the long run you and your kind will suffer and be harmed.
A Humanist will eschew such dealings not so much because they are silly, but because in the Utilitarian hedonic calculus they are of negative value. While you all will probably agree with everything I have said so far—unless there's someone out there who did a Ph.D. thesis on Protagoras—there is one troubling loose end in the fabric of my reasoning.
Throughout, I have pretended that you and I know what is meant by the term "human." It is the heart of the word "Humanism." But do we know what it really means? Before answering this question, I must digress for a moment.
The world as continuum
One of the great insights provided by science during the last several centuries is the realization that macroscopic reality is a continuum, unlike the quantized reality of the subatomic world. Hot and cold are not simple opposites like on and off: from absolute zero up past the temperatures of the stars there is a smooth continuum of thermal energies. It is not possible to state absolutely that such-and-such a temperature is cold and such-and-such a temperature is hot. It is all relative to the purposes of the person evaluating the temperature.
To an apartment dweller complaining to a landlord about a furnace, two below zero Fahrenheit is dangerously cold. To the operator of a cryonics facility storing Walt Disney's body, two below zero is ominously hot.
Black and white, in the real world in which Humanists dwell, are connected by a finely graded continuum of shades of gray. They may even be connected by colors. Life and death too are not simply the principles of on and off applied to organisms. There is no absolute instant that can be identified as "the moment of death" in the disintegration of any particular living thing. Practical matters determine how far the disintegration can proceed before we throw up our hands and say, "Dead." There is no moment when a soul escapes and leaves a corpse behind.
The fact that the macroscopic world in which our conscious lives exist is indiscrete rather than discrete ("indiscrete" is not the same thing as "indiscreet"!)—continuous rather than quantized—has profound implications for people trying to create value systems or trying to evaluate the realia of their world. This extends into the world of ethical values as well.
Where the godmongers' systems will treat the world as quantized, black or white, good or bad, shalts and shalt-nots, Humanistic systems will try to devise ways to divide the rainbow of reality that will be useful, practical, and satisfying to human beings. They will realize the relativity and ad hoc nature of their divisions of the colors of the cosmos. They will know that their divisions must be provisional. What works today to our satisfaction may not work in the new world of tomorrow. The only absolute we can count on is that we can't have absolutes.
What does "human" mean?
Let me return to my rhetorical question: What does the word "human" mean?
By virtue of the fact that humans exist in a world of continua, they too must be identified as occupying some position in some spectrum. Exactly which color in the rainbow of being is it that we wish to identify as human? Can there be more than one color we can call human? How broad can the band-width be? Are humans to be defined in terms of a one-dimensional rainbow of some sort, or must we try to pinpoint humanity with the coordinates of a multi-dimensional rainbow of myriad qualities? Will humanity be a pinpoint in such a multidimensional space, or will it constitute a volume? How large a volume do we need to contain all that we wish to admit to the domain we shall label "humanity"?
Let me muzzle my metaphors for a moment and consider a few particular problems relating to the defining of the word "human."
We all know that an acorn is not an oak tree, even though in some nontrivial sense an acorn is oaken. In like manner, the fertilized egg or zygote of Homo sapiens is not a person—even though the single-celled object is in some non-trivial sense human. How do we decide in the course of ontogeny—individual development—when a human being or person begins?
Since the zygote contains the chromosomal database needed to form an adult human being, we can consider it to be a potential human being. But potentiality is not actuality, and actuality itself seems to unfold as the single-celled zygote divides repeatedly to form an embryo, the embryo evolves into a fetus, the fetus develops to term, a baby is born, etc. Exactly where, between Mrs. Schicklgrüber's fertilized oocyte and the main speaker at the banner-draped rally at Nürnberg, did Adolf Hitler become actual? When did he cease to be potential? Did it ever happen at all? Was there such a point in time?
I think not. I think it is arbitrary where we draw the line between the potential Hitler and the actual Hitler. We will draw the line at different points depending upon the reason for our desire to define the point at which Adolf Hitler became an actual human being. (Of course, some people might want to exclude him from the definition of human altogether!) A lock of hair saved by Mrs. Schicklgrüber from the newborn Adolf might suffice if our purpose is a DNA test to identify remains found in a bunker in Berlin. But no one would suppose the newborn from which the hair derived could be the Adolf Hitler who could be tried for war crimes.
When discussing the potential Adolf Hitler, we must remember that Mrs. Schicklgrüber's fertilized egg was not the beginning stage of our potential Führer. Her unfertilized egg also carried part of the potential—as did the sperm from papa Schicklgrüber/Hitler. But of course, a potential Adolf Hitler existed in the fertilized egg that became Mrs. Schicklgrüber and the fertilized egg that became papa Hitler. Where do you draw the line?
Not only do we have difficulty deciding when a human being begins, we often have trouble deciding when one ends as well. Does Ronald Religion—I mean, Ronald Reagan—still exist? Do you think the Alzheimer's specimen that carries his fingerprints can identify itself as the former Evangelist-in-Chief? I'm not sure. If it doesn't know it's Ronald Reagan, should we believe that Ronald Reagan nevertheless still exists? Is it a full-fledged human being? Was it a full-fledged human being when it presided in the Oval Office?
The phylogenetic problem
The problem of deciding when during embryonic and postnatal development a human being—a person—begins is paralleled by the problem of deciding where, in the course of evolution, humanity itself began. As we progress from the tiny insectivore cowering in the shadow of T. rex to the first prosimian, to the first monkey-grade primate, to the first ape-grade primate, to Australopithecus, Homo erectus, and Homo sapiens—where does humanity begin? Is it not going to be an arbitrary carving-up of the rainbow that decides that humanity begins at a certain color—a certain wave-length in the spectrum?
Fortunately, I know of no pressing reasons that could force us to draw such a line across the path of phylogeny (evolutionary development). Unfortunately, there are pressing reasons for trying to draw such a line in the case of ontogeny (individual development). Is aborting a hundred-celled conceptus murder or an act of hygiene?
When I said that I know of no pressing reason why we should have to decide when our ancestors first became human, I was referring only to Atheists and Humanists. Theists, on the other hand, do have to figure out how to draw such a line. For you see, theists—at least the kind who are trying to restrict our personal freedoms here in America—believe that the definition of "human being" is very simple: a human being is an organism that contains an immortal soul. As a consequence, they must decide if Australopithecus, with a brain the size of a chimp's brain, had a soul, or whether souls first became embodied with the advent of Homo erectus—or even later with the first "archaic" Homo sapiens individuals.
What they might make of Neanderthal Man is also of interest. If, as a small majority of physical anthropologists now conclude, Neanderthal Man was actually an off-shoot of the line leading to all living people—a remote cousin instead of a great-great-granddaddy—could Neanderthal Man have had a soul? Should we be politically correct and speak of Neanderthal Persons?
Could there have been two fully human species? Are all the Neanderthals now burning in hell for lack of baptism?
(Note added in 2020: National Geographic's Genographic Project recently informed me that 1.5% of my DNA is Neanderthal DNA. Since my DNA is also 98.5% chimpanzee, does that make me 100% non-human?)
The consequences of having souls
I might digress again to note that belief in the existence of an immortal soul has some very funny evolutionary consequences. We can dismiss the young-earth creationists as being too far below our level of discourse to bother with. But there are the so-called theistic evolutionists—many of whom even have had college courses in biology. They believe in both evolution and souls. Consider the implications of this.
Remember that we trace our ancestry back, generation by generation, to primate ancestors that looked less and less like us the farther back we go. Remember also, that no generation along this path differed any more from its parent generation than do we from our parents. Consider now the problem of the entry of the soul into primate evolution.
Somewhere along this rainbow, before reaching us as the pot of gold at its end (maybe pot of violets would be more in keeping with my metaphor), the spirit-believing theist must draw the line: before this line are mere animals; after the line are humans. In practical terms this means that at some point there was a generation which could tell its parents: "Look ma and pa! You are just a couple of animals. We are fully human. When you die, you will rot like rutabagas. When we die, because of this soul we just received, we're going to live again—'somewhere over the rainbow, way up hiiiigh...'."
The Humanist is not out of the woods on the phylogeny problem, however, no matter how abysmally the theist's nose is rubbed in absurdities because of it. For we wish to understand just what it is that makes us human, and studying our evolutionary lineage might give us a strong clue even if not a definitive answer.
Was it the evolution of self-recognition, personal identity, that made us human? Wouldn't that probably have included Australopithecus as well as us? Was it the beginning of a sense of time, especially a sense of futurity that would allow planning? Would not that include Homo erectus as well as us? If H. erectus were able to hunt effectively, would they not have had to be able to plan and deal with future time as well as the ordinary eternal-present of other animals?
If it was the beginning of language that first made us human, how do we figure out when that began? Does the shouted warning "Cave Bear!" suffice to fill the language criterion? Or do you need relational communication such as "Me Tarzan, You Jane"? Or do we have to have the ability to express more abstract ideas such as "Every 28 days the moon is eaten by a black dragon"?
As we study the human genome, we hope to discover further information with which to answer Shakespeare's question, "What is a man ...?" We are unraveling the chemical recipe, as it were, that provides the instructions for making a human being. Interestingly, similar analysis has been applied to the genomes of our evolutionary cousins the chimpanzee and the gorilla. To the consternation of Baptists at the zoo—I've always wondered if Baptists should be in the zoo—it has been discovered that the DNAs of chimps and humans are almost 99% identical. Gorillas and humans are almost 98% identical.
Does this mean the chimp is 99% human? Or we are 99% ape? Should chimps be allowed to vote? Or might this give us justification to put the Baptists in the zoo?
We will very soon know exactly which genes account for that one-percent difference, and then we will have new questions to answer. I believe it has been estimated that we and chimps may differ in approximately a hundred specific gene functions. That is, the DNA for those genes is spelled out slightly differently in chimps and humans, and those genes are expressed differently.
What if, by generally accepted methods of genetic engineering, one at a time we then began to substitute human genes into chimpanzees? Would we reach a point where we would have to consider chimps full-fledged human beings? What if, after the lucky substitution of only a few particular genes, the chimp acquired the capacity for articulate, abstract speech—while still looking like a typical chimp on the outside? What if the chimp told you it was human? Would you recruit it for membership in the American Humanist Association? Do we have to admit to the ranks of humanity everything that on its own can claim the position?
This may be the best point at which to bring up a question for which I have no confident answer. I think you all will agree with me that chimpanzees are of value. But, from a Humanistic point of view, why should a chimpanzee be of value? And, in human equivalents, how valuable? Are chimps of value to us because by studying them we can gain insight into our own nature, to the extent that the chimp can serve as a surrogate human?
Are chimps of much greater value to us than rats and mice because there is so much more we can learn about ourselves by studying chimps than by studying rodents? Are chimps valuable because genetically they are 99% human? Would this make one chimp worth 0.99 humans? If you kill a chimp, should you be tried for 0.99th-degree murder? Or would you suppose that not all gene-percents are of equal value, and that the chimp's lack of the most valuable one-percent places its human value much lower than 0.99 human equivalents? How much lower?
Is a chimp slightly more valuable than a gorilla, which genetically is about a half-percent less human than the chimp? What about the fact that chimps and gorillas are endangered species? Would the last chimpanzee alive on the planet be more valuable than a single chimpanzee in the zoo today? Why or why not? Would the last chimpanzee be worth a human being? Would it be worth more than one human being? Our planet is now grossly overpopulated by our species. If the last chimpanzee is worth more than a chimpanzee that is part of a large group, does individual value decrease as population increases? Are humans worth less on our overpopulated planet than they were, say, a century ago?
You may have been thinking that there is an important distinction between the last chimpanzee—or rather, the last small group of chimpanzees that could preserve the existence of the species—and a regulation-issue chimp living in a non-threatened population somewhere in Africa. The former represents an entire species, whereas the latter is just an individual. Does the extinction of a species represent a greater loss than the death of an individual? I think it does, but I'm not sure I can prove why that is. We run afoul of the potentiality-actuality problem.
The last chimp—keep in mind, at this point we are actually considering a small group of reproductively capable individuals, not literally one animal—carries the potential for preservation of the species. But of course, we could say the same thing about every chimpanzee in the woods today. So what's the difference?
A possible answer might be that the importance of potentiality seems to be inversely proportional to the importance of actuality. That is, when there are lots of actual chimps, the potentiality of a particular chimp is not of great value. But as the number of actual chimps decreases, the potential value of each remaining one increases.
Without trying to prove this, I wish to extend this query to the human sphere—even though I'm still nowhere near finding a successful definition for the term human. I've already asked if humans are worth less on our overpopulated planet today than they were a century ago. Perhaps the size of the human population a century ago was already so large that the value of individuals would be difficult to assess. Let's go back to twenty thousand years ago, when the entire human population of the planet may not have exceeded four to five hundred thousand individuals.
Keeping in mind the fact that at any moment a plague might have greatly reduced even that number, what can we say of the potential value of one of those prehistoric people as compared to the potential value of the average person alive today? Even apart from the fact that many of those individuals were my ancestors—and thus of inestimable potential value—would you not agree that their potential value, their potential significance, was much greater than the potential value of people alive today?
Every one of those people carried the future of our species in his or her loins. You might say that the same is true today, but in reverse: the more the potentialities in our loins become actualities, the greater the danger they pose for the survival of our species—the greater the chances we will go extinct due to the ecocidal consequences of overpopulation. Again, is not the survival of our species of greater value than the survival of any of us as individuals? Although I feel strongly that that is so, I know of no way to prove it.
Remember that we trace our ancestry back, generation by generation, to primate ancestors that looked less and less like us the farther back we go. Remember also, that no generation along this path differed any more from its parent generation than do we from our parents. Consider now the problem of the entry of the soul into primate evolution.
Somewhere along this rainbow, before reaching us as the pot of gold at its end (maybe pot of violets would be more in keeping with my metaphor), the spirit-believing theist must draw the line: before this line are mere animals; after the line are humans. In practical terms this means that at some point there was a generation which could tell its parents: "Look ma and pa! You are just a couple of animals. We are fully human. When you die, you will rot like rutabagas. When we die, because of this soul we just received, we're going to live again—'somewhere over the rainbow, way up hiiiigh...'."
The Humanist is not out of the woods on the phylogeny problem, however, no matter how abysmally the theist's nose is rubbed in absurdities because of it. For we wish to understand just what it is that makes us human, and studying our evolutionary lineage might give us a strong clue even if not a definitive answer.
Was it the evolution of self-recognition, personal identity, that made us human? Wouldn't that probably have included Australopithecus as well as us? Was it the beginning of a sense of time, especially a sense of futurity that would allow planning? Would not that include Homo erectus as well as us? If H. erectus were able to hunt effectively, would they not have had to be able to plan and deal with future time as well as the ordinary eternal-present of other animals?
If it was the beginning of language that first made us human, how do we figure out when that began? Does the shouted warning "Cave Bear!" suffice to fill the language criterion? Or do you need relational communication such as "Me Tarzan, You Jane"? Or do we have to have the ability to express more abstract ideas such as "Every 28 days the moon is eaten by a black dragon"?
As we study the human genome, we hope to discover further information with which to answer Shakespeare's question, "What is a man ...?" We are unraveling the chemical recipe, as it were, that provides the instructions for making a human being. Interestingly, similar analysis has been applied to the genomes of our evolutionary cousins the chimpanzee and the gorilla. To the consternation of Baptists at the zoo—I've always wondered if Baptists should be in the zoo—it has been discovered that the DNAs of chimps and humans are almost 99% identical. Gorillas and humans are almost 98% identical.
Does this mean the chimp is 99% human? Or we are 99% ape? Should chimps be allowed to vote? Or might this give us justification to put the Baptists in the zoo?
We will very soon know exactly which genes account for that one-percent difference, and then we will have new questions to answer. I believe it has been estimated that we and chimps may differ in only about a hundred specific gene functions. That is, the DNA for those genes is spelled out slightly differently in chimps and humans, and those genes are expressed differently.
What if, by generally accepted methods of genetic engineering, one at a time we then began to substitute human genes into chimpanzees? Would we reach a point where we would have to consider chimps full-fledged human beings? What if, after the lucky substitution of only a few particular genes, the chimp acquired the capacity for articulate, abstract speech—while still looking like a typical chimp on the outside? What if the chimp told you it was human? Would you recruit it for membership in the American Humanist Association? Do we have to admit to the ranks of humanity everything that on its own can claim the position?
This may be the best point at which to bring up a question for which I have no confident answer. I think you all will agree with me that chimpanzees are of value. But, from a Humanistic point of view, why should a chimpanzee be of value? And, in human equivalents, how valuable? Are chimps of value to us because by studying them we can gain insight into our own nature, to the extent that the chimp can serve as a surrogate human?
Are chimps of much greater value to us than rats and mice because there is so much more we can learn about ourselves by studying chimps than by studying rodents? Are chimps valuable because genetically they are 99% human? Would this make one chimp worth 0.99 humans? If you kill a chimp, should you be tried for 0.99th-degree murder? Or would you suppose that not all gene-percents are of equal value, and that the chimp's lack of the most valuable one-percent places its human value much lower than 0.99 human equivalents? How much lower?
Is a chimp slightly more valuable than a gorilla, which genetically is about a half-percent less human than the chimp? What about the fact that chimps and gorillas are endangered species? Would the last chimpanzee alive on the planet be more valuable than a single chimpanzee in the zoo today? Why or why not? Would the last chimpanzee be worth a human being? Would it be worth more than one human being? Our planet is now grossly overpopulated by our species. If the last chimpanzee is worth more than a chimpanzee that is part of a large group, does individual value decrease as population increases? Are humans worth less on our overpopulated planet than they were, say, a century ago?
You may have been thinking that there is an important distinction between the last chimpanzee—or rather, the last small group of chimpanzees that could preserve the existence of the species—and a regulation-issue chimp living in a non-threatened population somewhere in Africa. The former represents an entire species, whereas the latter is just an individual. Does the extinction of a species represent a greater loss than the death of an individual? I think it does, but I'm not sure I can prove why that is. We run afoul of the potentiality-actuality problem.
The last chimp—keep in mind, at this point we are actually considering a small group of reproductively capable individuals, not literally one animal—carries the potential for preservation of the species. But of course, we could say the same thing about every chimpanzee in the woods today. So what's the difference?
A possible answer might be that the importance of potentiality seems to be inversely proportional to the importance of actuality. That is, when there are lots of actual chimps, the potentiality of a particular chimp is not of great value. But as the number of actual chimps decreases, the potential value of each remaining one increases.
Without trying to prove this, I wish to extend this query to the human sphere—even though I'm still nowhere near finding a successful definition for the term human. I've already asked if humans are worth less on our overpopulated planet today than they were a century ago. Perhaps the size of the human population a century ago was already so large that the value of individuals would be difficult to assess. Let's go back to twenty thousand years ago, when the entire human population of the planet may not have exceeded four to five hundred thousand individuals.
Keeping in mind the fact that at any moment a plague might have greatly reduced even that number, what can we say of the potential value of one of those prehistoric people as compared to the potential value of the average person alive today? Even apart from the fact that many of those individuals were my ancestors—and thus of inestimable potential value—would you not agree that their potential value, their potential significance, was much greater than the potential value of people alive today?
Every one of those people carried the future of our species in his or her loins. You might say that the same is true today, but in reverse: the more the potentialities in our loins become actualities, the greater the danger they pose for the survival of our species—the greater the chances we will go extinct due to the ecocidal consequences of overpopulation. Again, is not the survival of our species of greater value than the survival of any of us as individuals? Although I feel strongly that that is so, I know of no way to prove it.
A paradox
There seems to be a paradox here. We have held the working assumption that it is people who make values and people who are the standards of value. If one person is worth X units of value, would not two people be worth 2X? Would not 100 people be worth 100X? But have we not just about convinced ourselves that six billion people must be worth less than 6-billion-X? Have we come to a self-contradiction in the Humanistic system of values? Is this contradiction due to a mistake in logic I have made somewhere in the course of this discussion, or is there some deficiency in the premises of Humanism? Is there some standard anterior to humanity that I have not identified?
Is it perhaps not the number of humans alone that we must consider, but rather the total amount of human happiness it represents? Or potential happiness? As more and more of the six billion of us begin to starve and descend into ignominious death and destruction, is not the sum of human happiness less than when there were only three billion people, with a smaller percentage starving?
Before you answer this question, consider one more thing: what was the sum of human happiness twenty thousand years ago, when it seemed the potential value of each person was vastly greater than that of people today? As they were scavenging the carcasses left by lions and tigers and bears, cowering beneath the thunderclaps, and facing the fearful shadows of an incomprehensible natural world, how happy do you think they were? Is the value of people individually or in groups dependent also on qualitative factors—factors that themselves might help define that which is human?
Are intelligent, creative individuals more human—thus more valuable—than dull or witless individuals that rely upon little more than the autonomic nervous system? Is an Albert Einstein or a Marie Curie worth more in our Humanistic calculus than an anencephalic individual who has—against all odds—survived several months past birth and who will never be able to deal with intellectual problems above the level of sphincter control? How do we assess the value of individuals with Down's Syndrome—individuals who can range in ability from near-normal to severe deficiency? Are all of these human? Are they equally human?
I think I must bring this quest for the essence of humanity to a premature halt. As you can see, I am coming up with questions far faster than I can suggest answers. And so, I must end with a confession: When I call myself a Humanist, I don't really know what I'm talking about. But I do know why I don't know what I'm talking about. It's because I don't know how to define the term "human." I'm not sure you do either.
If I have made you conscious of an important problem you had not considered before, perhaps this inquiry has not been in vain. If there are any of you who had not thought about this before, and if I have now made you aware of this fundamental problem of Humanism, then perhaps my presentation will not have been a negative exercise or a waste of your time. I end with the simple hope that some of you might be better able than I to deal with the questions I have raised this evening. They are, after all, the most important questions facing us as human beings—even if for the moment we cannot prove that we are human.
OF FREE WILL AND FLUSH TOILETS
The most astonishing logical paradox ever to be cherished by man is presented in the circumstance that the theologists, convinced that God in his omnipotence had predetermined the fate of every man, and in his omniscience had from the beginning of time foreseen that fate, should yet hold to the belief that he nevertheless holds every man responsible for his action, rewarding him either with eternal beatitude or eternal punishment. For theology the invention of free will to which culpability could be assigned only formalized the complete abandonment of reason in order to keep the system in operation.
—Homer W. Smith, Man And His Gods
Long after they have cast aside their belief in gods and angels and fairies, many people still cling tenaciously to a stubborn belief in one last will-o'-the-wisp—the phantom of free will. To challenge the doctrine of free will is to stir up a storm of protest so intense as to make one think, by comparison, that publicly denying the existence of the gods themselves could not produce a breeze strong enough to dislodge the dew from a lightning rod on a church steeple. Many otherwise rational people think it immoral—or at least un-American—to deny that people act "freely" when acting willfully. Although I risk inciting to disaffection many of the people who have expressed admiration for some of my previous articles, I must now focus my 'Probing Mind' upon the question, "Can will be free?"
Let me answer the question straightaway with a firm "no," and then attempt to support my conclusion. But to reassure my horrified readers that at least I was not born a free-will miscreant, and to fix the blame for my present debauched state, I must note that for a year or so after I had become an Atheist (at Kalamazoo College), I still defended the idea of free will in my disputes with theists and Atheists alike. It was only after I had transferred to the University of Michigan that I became convinced that the idea of free will was indefensible. The blame for my fall from grace rests firmly upon Homer W. Smith, whose book Man And His Gods, with a foreword by Albert Einstein [Grosset's Universal Library, 1952, 1956], ranks along with Alfred Jules Ayer's Language, Truth, and Logic [Dover Publications, Inc., 1946] as one of the two most influential books I have ever read.
Although Smith devoted 444 pages to a summary of the evolutionary interaction between philosophy, religion, and science, he needed only several paragraphs to dispatch from my mind forever the notion that will can be free. I would like to quote the relevant paragraphs here—and leave it to my readers to estimate how many philosopher's clocks have been cleaned by Smith's lucid logic.
To challenge "free will" was to challenge the foundations not only of orthodox theology, but in large measure of all transcendentalism. If human decisions, however directly or deviously arrived at, were 'determined' solely by pre-existing knowledge, predilections, predispositions, emotions, memories, desires, by any or all of the multiplicity of mental images afforded to consciousness by the external and internal organs of sense, then it followed that an individual elects one course of action in preference to another, not by 'willful choice,' but simply because consciousness presents a balance positively weighted on the side of the selected action. Hence personal culpability would cease to exist, divine punishment and reward would be both monstrous and absurd, morality would be a convention, sin would be an arbitrary condemnation, the grace of the church would be superfluous, and that institution could better devote itself to liberal education.
For the naturalists, free will was a countersense, a verbal contradiction. To 'will' is to choose a course of action in which more than one course is potentially presented, and to choose one course of action as opposed to another requires not only knowledge of alternatives, but reason for the choice. Decision (de + caedere, to cut off) without reference to cause or consequence of that which is rejected or accepted could only refer to an act occurring in a referential vacuum, and if such could be conceived it could only be designated as an action issuing from nothing at all, ab nihilo, from absolute ignorance. Since willing can never be free of knowledge of either cause or consequence, it can never be free at all. (Man And His Gods, pp. 409-410)
Theoretically, I could end this article right here, and let Homer Smith's argument stand by itself. But so much has happened in the realm of science since 1952 when Smith wrote the above, and the implications of the argument are so far-reaching, that I must expand the discussion.
Before we consider specific problems inherent in the notion that will can be free (i.e., uncaused or indeterminate), it is worth noting that psychology as a science would be impossible if behavior could occur without causation. If I may be permitted to ignore the complex and confusing case of quantum physics, it can be said that all of science is a quest seeking to relate effects to their causes. Psychology would either cease to exist altogether, or it would be an artificial discipline dealing with the behavior of all animal species except Homo sapiens. What would psychologists do, if they couldn't seek the causes of human behavior? Everyone's behavior would be a case by itself, and psychologists could do no more than catalog the examples of human behavior which have been observed.
Let us first pretend that there be such a thing as free will, and consider two questions: (1) how could it have originated? and (2) when would it operate? Then let us consider the practical meaning of the fact that there is no such thing as free will.
When Could We Have Gotten It?
There are few things in science so well established as the fact that humans are animals and that they are related by descent to all the other species of living things now populating our planet. Our brains are extremely similar to those of the great apes (human DNA, after all, is almost 99 percent identical to that of chimpanzees), and the chemical code by which the genetic instructions are written for the making of a person is identical to that employed for the construction of the lowliest bacterium. If humans have evolved from "lower" forms, we must ask how it came about that our will is now free.
Either free will is a characteristic humans share with the rest of the animal kingdom, or it arose as an emergent, qualitatively unique characteristic in the course of human evolution. The former alternative would seem to be ruled out, at least in the case of primitive animals such as jellyfish. Although much still remains to be learned about the nerve-net which, along with a number of simple sensory receptors, constitutes the entire nervous system of these humble creatures, it is too much to suppose that creatures completely lacking brains could display anything describable as free will. While the nerve-net of the jellyfish is capable of decision-making, no one could seriously suggest that jellyfish behavior is any freer or less determined than a thermostatically controlled furnace, a hydrostatically controlled flush toilet, or the behavior of the hypothetical photophobic insect robot of Figure 1. Can it seriously be supposed that the behavior of a real cockroach—although modifiable by many more factors than those affecting our robot—is any less determined?
It seems clear, if we compare a variety of primitive animals with our insect robot, that free will is not to be counted as part of their behavioral repertoire. While we cannot rule it out absolutely, it is more parsimonious to suppose their behavior is completely determined than to postulate the additional factor of free will. Occam's razor, which advises us not to multiply basic assumptions beyond necessity, cuts off most (if not all) of the animal kingdom from the family tree of free-willed beings.
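The sense in which such a device's behavior is fully determined can be made concrete with a minimal sketch (hypothetical, since Figure 1 is not reproduced here; the function name and light-sensor arrangement are my own illustration): the robot's every action is a pure function of its sensor readings, so identical inputs always yield identical behavior, with no residue left over for "will" to decide.

```python
# A hypothetical photophobic robot in the spirit of Figure 1: two light
# sensors, one fixed rule. Its entire "behavioral repertoire" is a pure
# function of its inputs, just like a thermostat or a flush-toilet valve.

def photophobic_step(left_light: float, right_light: float) -> str:
    """Turn away from the brighter side; go straight if both sides match."""
    if left_light > right_light:
        return "turn right"   # light stronger on the left, so veer right
    if right_light > left_light:
        return "turn left"    # light stronger on the right, so veer left
    return "go straight"      # uniform light leaves no reason to turn

# Given the same readings, the robot always does the same thing:
print(photophobic_step(0.9, 0.1))  # light on the left -> "turn right"
print(photophobic_step(0.2, 0.2))  # uniform light -> "go straight"
```

A real cockroach is modifiable by vastly more inputs, but the argument in the text is that this is a difference of degree, not of kind: more sensors and more rules, not an escape from determination.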
If free will is not a characteristic shared with the rest of the animal kingdom, then—if there be such a thing at all—it must have arisen at some specific instant in the course of human evolution. Can it be that the one-percent difference between human and chimpanzee genes is the factor giving freedom to the human will? If, by means of genetic engineering, we could clone the free-will genes and insert them into the ape genome, would they confer free will upon the apes? Would their free will then be determined by those genes? Modern humans are connected to primitive ancestral animals of the arbitrarily remote past by a finite series of generations, each of which differed no more from its parents than do we from ours. Is it conceivable that suddenly, say on the 29th of February of the first leap-year after the termination of the third glacial period, a generation was begun which miraculously had acquired free will—and was thus completely human—but the members of this generation sprang from parents who were animals still enslaved by causality? Simply to ask the question is to reject the idea. Unless will can be partially free, and thus have evolved by degrees, it is impossible to conceive how we could have acquired free will in the course of evolution. The notion of "partially free will," however, seems to be an oxymoron on a par with "approximately even numbers."
Leaving unanswered the question of how free will could have evolved, we turn to the problem of how free will could arise in the course of development of individual humans. Does the fertilized human egg have free will? Does the fetus? Since the reader is not likely to suppose that prenatal humans either possess or exercise free will, we proceed to ask if this faculty is acquired at the 'moment' of birth. Does the human newborn appear to be any less an automaton than the robot insect of Figure 1?
If it is, in fact, impossible for free will to be partial, and if we do in fact have the faculty, there must be some magic moment when we acquire it, and we must wonder if all people acquire it at the same age—in fact, we must wonder if there might be some people, arrested in development, who never acquire the faculty at all. In the case of two-headed monsters, for instance, would the two heads acquire free will simultaneously if one head started out bigger than the other?
Before giving up our quest for the beginning of free will in the course of individual development, we should read what Hiram Elfenbein, a lamentably deceased friend of mine, had to say about the problem.
Only a moment's contemplation is needed for a mature adult to realize that on coming into the world and continuing for several months there is no imaginable situation in which an infant could or would exercise free will. Free will, you see, requires its possessor to know that it has a choice among alternatives. The infant must therefore be able to count at least to two. It must also be able to make an intellectual distinction between two ideological propositions, such as to do or not to do. Actually, under the religionist's concept of free will, if the child is responsible for his sins, it must be able to entertain, enumerate, and evaluate at least three separate ideas: (1) to do a certain thing; (2) not to do it or to do an alternate thing; and (3) to decide whether the first two acts are sins or not.
Either free will is a characteristic humans share with the rest of the animal kingdom, or it arose as an emergent, qualitatively unique characteristic in the course of human evolution. The former alternative would seem to be ruled out, at least in the case of primitive animals such as jellyfish. Although much still remains to be learned about the nerve-net which, along with a number of simple sensory receptors, constitutes the entire nervous system of these humble creatures, it is too much to suppose that creatures completely lacking brains could display anything describable as free will. While the nerve-net of the jellyfish is capable of decision-making, no one could seriously suggest that jellyfish behavior is any freer or less determined than a thermostatically controlled furnace, a hydrostatically controlled flush toilet, or the behavior of the hypothetical photophobic insect robot of Figure 1.Can it seriously be supposed that the behavior of a real cockroach—although modifiable by many more factors than those affecting our robot—is any less determined?
It seems clear, if we compare a variety of primitive animals with our insect robot, that free will is not to be counted as part of their behavioral repertoire. While we cannot rule it out absolutely, it is more parsimonious to suppose their behavior is completely determined than to postulate the additional factor of free will. Occam's razor, which advises us not to multiply basic assumptions beyond necessity, cuts off most (if not all) of the animal kingdom from the family tree of free-willed beings.
If free will is not a characteristic shared with the rest of the animal kingdom, then—if there be such a thing at all—it must have arisen at some specific instant in the course of human evolution. Can it be that the one percent difference between human and chimpanzee genes is the factor giving freedom to the human will? If, by means of genetic engineering, we could clone the free-will genes and insert them into the ape genome, would they confer free will upon the apes? Would their free will then be determined by those genes? Modern humans are connected to primitive ancestral animals of the arbitrarily remote past by a finite series of generations, each of which differed no more from its parents than do we from ours. Is it conceivable that suddenly, say on the 29th of February of the first leap-year after the termination of the third glacial period, a generation was begun which miraculously had acquired free will—and was thus completely human—but the members of this generation sprang from parents who were animals still enslaved by causality? Simply to ask the question is to reject the idea. Unless will can be partially free, and thus have evolved by degrees, it is impossible to conceive how we could have acquired free will in the course of evolution. The notion of "partially free will," however, seems to be an oxymoron on a par with "approximately even numbers."
Leaving unanswered the question of how free will could have evolved, we turn to the problem of how free will could arise in the course of development of individual humans. Does the fertilized human egg have free will? Does the fetus? Since the reader is not likely to suppose that prenatal humans either possess or exercise free will, we proceed to ask if this faculty is acquired at the 'moment' of birth. Does the human newborn appear to be any less an automaton than the robot insect of Figure 1?
If it is, in fact, impossible for free will to be partial, and if we do in fact have the faculty, there must be some magic moment when we acquire it, and we must wonder if all people acquire it at the same age—in fact, we must wonder if there might be some people, arrested in development, who never acquire the faculty at all. In the case of two-headed monsters, for instance, would the two heads acquire free will simultaneously if one head started out bigger than the other?
Before giving up our quest for the beginning of free will in the course of individual development, we should read what Hiram Elfenbein, a lamentably deceased friend of mine, had to say about the problem.
Only a moment's contemplation is needed for a mature adult to realize that on coming into the world and continuing for several months there is no imaginable situation in which an infant could or would exercise free will. Free will, you see, requires its possessor to know that it has a choice among alternatives. The infant must therefore be able to count at least to two. It must also be able to make an intellectual distinction between two ideological propositions, such as to do or not to do. Actually, under the religionist's concept of free will, if the child is responsible for his sins, it must be able to entertain, enumerate, and evaluate at least three separate ideas: (1) to do a certain thing; (2) not to do it or to do an alternate thing; and (3) to decide whether the first two acts are sins or not.
Figure 1. A hypothetical 'insect' wired up in such a way that it will flee from light and approach food. Light falling, say, on the left eye will cause excitation to be transmitted to the ganglia (G) controlling the legs on the left side of the body. As a result, the left legs will be more active than the right legs, and the insect will 'decide' to turn away from the light source until the light strikes both eyes equally from behind and the animal proceeds in a straight line away from the light. (In the unlucky event that both eyes are simultaneously overstimulated by a strong light straight ahead, however, the poor robot will act just like a moth drawn to a flame—and for the same reason.) In the case where the insect comes within range of food, again somewhere to the left of the beast, odor molecules will strike the left antenna more intensely than the right one. Because of the cross-over design of the wiring, excitation in the left antenna will activate the legs on the right side, and the animal will 'decide' to orient itself facing the food. With both antennae being stimulated equally, the animal will march with increasing speed toward the food.
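The 'decisions' of this wired-up insect can be reproduced in a few lines of code. The sketch below is a minimal simulation of the figure's wiring scheme; the particular gain values, input magnitudes, and function names are my own illustrative inventions, not part of the original figure:

```python
def leg_drive(light_left, light_right, odor_left, odor_right):
    """Compute leg activity for the hypothetical insect of Figure 1.

    Light excites the legs on the SAME side (so the insect turns away
    from light); odor excites the legs on the OPPOSITE side (so the
    insect turns toward food).  Unit gains are an arbitrary assumption.
    """
    left_legs = light_left + odor_right
    right_legs = light_right + odor_left
    return left_legs, right_legs

def turn_direction(left_legs, right_legs):
    """More activity on the left legs swings the body to the right, and
    vice versa: the 'decision' is just a difference of two numbers."""
    if left_legs > right_legs:
        return "right"
    if right_legs > left_legs:
        return "left"
    return "straight"

# Light shining from the left: the insect turns right, away from it.
l, r = leg_drive(light_left=1.0, light_right=0.2, odor_left=0.0, odor_right=0.0)
assert turn_direction(l, r) == "right"

# Food smelled on the left: the insect turns left, toward it.
l, r = leg_drive(light_left=0.0, light_right=0.0, odor_left=1.0, odor_right=0.1)
assert turn_direction(l, r) == "left"
```

Nothing in these few lines leaves room for the robot to 'choose' otherwise, which is precisely the point of the figure.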
At any rate, the question arises:
When is this ability to decide to behave independently of any outside force bestowed on the human being? A clever defender of this theory could contend that the power is acquired in later stages just as the sex urge develops and manifests itself in adolescence. But such an assertion takes for granted that a person must first undergo physical and mental development before he achieves freedom of choice. Consequently, this means that the individual can exercise free will only after his mind and body have been pre-conditioned for it.
Such preparation of the individual necessarily brings in the outside world as an element of free will, thereby destroying the contention that it is an internally isolated decision.
(Organized Religion: The Great Game of Make-Believe, p. 39. New York, NY: Philosophical Library, 1968).
When Could We Use It?
Leaving as an insoluble riddle the question of how we might have acquired 'free will,' we proceed to the question of just when it might be that we exercise the faculty. Again, Hiram Elfenbein's observations are helpful:
Where, exactly, is free will at work?
In the formation of the person's ideas? Or in the action or speech resulting from the formation of those ideas?
Thus, in its origin, the notion of free will as being an individual's uncontrolled opportunity to do one thing or not to do it or to do an alternate thing is a conception that is inconsistent with reality and impossible to express. No person can have an idea—any idea—that is not connected with and hence dependent upon his past. Nothing is separable from its beginning. And nothing "begins" spontaneously in the sense of having no prior foundation. Every person acts because in part his preceding career makes that act the next step in his journey through life, and his decision to take that step is the culmination of all his previous steps, not to mention the effect on him of his surroundings.
It is utterly impossible—it is more, it is incomprehensible—for any person to decide, for example, that today he will have for dinner whatever his free will alone will choose and that he will not be influenced by anything in existence or in his past in choosing his menu. Or stated otherwise, it is inconceivable for one to think of his "free will" deciding what he will eat tonight, uninfluenced in any way by outer matter or force.
In truth, his own physical existence and his own physical and mental history, not to speak of the outer world, will assiduously work on him and on his choosing despite his most strenuous efforts to ignore them. Among other things his hunger, his palate, his capacity for food are all parts of his being and his life, as is his knowledge of food, no matter how much he wills himself to disregard them, and they will determine for him almost everything he selects to eat. (Organized Religion, pp. 41-42.)
Figure 2. The stretch-reflex arc, the simplest nerve circuit known. When the stretch-receptor (A) buried among the muscle fibers (B) is stretched, say, by the force of gravity pulling downward on the arm, the sensory nerve (S) will fire. The sensory nerve will carry its excitation to the gray matter (G) in the spinal cord, where it will pass its excitation directly on to a motor nerve (M). The motor nerve, running back to the muscle being stretched, will stimulate muscle fibers to contract, canceling the stretch induced by the force of gravity and allowing the arm to maintain its position with great precision.
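The precision of this little loop is easy to demonstrate numerically. Here is a minimal sketch of the reflex arc as a discrete feedback loop; the gravity, gain, and step-count values are invented for illustration:

```python
def simulate_stretch_reflex(steps=200, gravity=0.5, gain=0.8):
    """Toy model of the stretch-reflex arc of Figure 2.

    Each time step, gravity pulls the arm down and stretches the muscle;
    the stretch receptor reports how far the muscle has been stretched,
    and the motor neuron commands a contraction proportional to that
    report.  All constants here are invented for illustration.
    """
    position = 0.0                      # 0.0 is the arm's held position
    for _ in range(steps):
        position -= gravity             # gravity stretches the muscle
        stretch = -position             # receptor firing ~ amount of stretch
        position += gain * stretch      # reflex contraction cancels it
    return position

# With the reflex intact, the arm sags only slightly below its target...
held = simulate_stretch_reflex()
# ...but cut the reflex arc (gain = 0) and the arm drops without limit.
dropped = simulate_stretch_reflex(gain=0.0)
```

The loop settles into a small, fixed sag determined entirely by the gain and the load: a completely caused equilibrium, with no volition anywhere in sight.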
I leave it to my readers to decide if free will could possibly act at the stage of formation of a person's ideas. Can people decide (freely or otherwise) which ideas to generate?
It might be a good idea before going further to point out that although we deny that there can be such a thing as free, i.e., undetermined, will, we do not deny the fact that there is such a thing as 'will.' Unfortunately, the dictionary is not too helpful in defining this term, allowing it to indicate both desire, inclination, or appetite on the one hand, and choice, or determination on the other. In the present context, however, it seems that a critical part of the meaning of the term involves decision-making. Desire-induced decision-making might be a useful way to define the term.
Now the biological roots of both desire and decision-making are well understood, and we must digress for a moment to study them.
A Digression Into Biological Cybernetics
The nervous systems of humans and all higher animals are composed of three subdivisions: sensory receptor cells and sensory neurons, motor neurons which control the muscles and glands, and interneurons—cells which intervene between sensory and motor neurons and serve as modulators and integrators of behavior. In the simplest nerve circuit known, the two-cell stretch-reflex arc (see Fig. 2), interneurons are absent. Causing the sensory nerve to fire (say, by stretching the muscle in which it is located) results in the firing of a motor neuron—with the resultant contraction of the stretched muscle. As simple as this circuit may be, it makes it possible to maintain constant tension in a muscle when it must sustain a weight against the pull of gravity—as when you hold an arm out straight at your side. The two-part decision-making system in Fig. 2 is really no different than the simple negative feedback loop in the reservoir of a flush toilet, where movement of the float shuts off the water valve that opened when the toilet was flushed.
By adding an interneuron (actually, lots of them in most real situations), we can make the firing of the motor neuron a much less inevitable consequence of the firing of the sensory nerve (see Fig. 3). For example, if the motor neuron be connected to a muscle to be used in food-getting behavior, we may find that the firing threshold for the interneuron may be greatly decreased when blood-sugar levels are low, and increased when blood-sugar levels are high (as just after having eaten a meal). When interneuron firing thresholds are low, it will not take very intense firing of the sensory nerve (say, a nerve stimulated by the smell of food) to fire the interneuron and, in turn, the motor nerve which will trigger the food-getting behavior.
If the nerve circuit in question be in a human brain, many other factors may increase or decrease the firing threshold of the interneuron—thus increasing or decreasing the likelihood of food-getting behavior. For example, very intense sensory inputs—say visual and olfactory stimuli revealing the presence of an "irresistible" culinary delight—may "gang up" on the interneuron and force it to fire. Or, memory circuits laid down the last time the food in question made the person sick may inhibit the interneuron and prevent it from firing. Altered levels of circulating hormones (as in pregnant women, for example, or diabetics) may alter the likelihood of interneuronal firing also, and there are many other factors which will affect the nerve circuit's "decision" of whether to fire or not. Make no mistake about it: the nerve circuits of Figures 2 and 3 are decision-making systems.
It should not be supposed, however, that nerve cells are a prerequisite for decision-making. At night, as one sleeps, the system of endocrine glands is making countless numbers of decisions: decisions regulating the level of blood sugar, what to excrete in the urine, what levels of sex hormones to maintain in the blood, etc. Nor should it be supposed that decision-making requires living cells at all. A flush toilet may be highly adept at "deciding" when to start and stop refilling the reservoir after it has been "stimulated" by being flushed. Indeed, decision-making is a characteristic of a very common category of machines—the so-called cybernetic devices. Cybernetic devices take their name from the Greek word kybernetes, "steersman" or "governor."
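Indeed, the flush toilet's entire 'deliberative' apparatus fits comfortably in a dozen lines. The following sketch, with made-up water levels and flow rates, shows the float-and-valve loop making its decisions:

```python
def valve_open(water_level, full_level=10.0):
    """The toilet's whole decision-making apparatus: the float holds the
    valve open exactly as long as the water sits below the full mark."""
    return water_level < full_level

def flush_and_refill(full_level=10.0, inflow=1.0):
    """Flush the reservoir to empty, then let the float-and-valve
    feedback loop 'decide', over and over, whether to keep refilling."""
    level = 0.0                 # reservoir just flushed
    decisions_to_refill = 0
    while valve_open(level, full_level):
        level += inflow         # valve open: water flows in
        decisions_to_refill += 1
    return level, decisions_to_refill

level, decisions = flush_and_refill()
```

Each pass through the loop is a genuine decision in the cybernetic sense, and each one is wholly determined by the water level that preceded it.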
Figure 3. A hypothetical decision-making nerve circuit containing an interneuron. Even in people who cannot add two and two, interneurons are busy doing integral calculus. The interneuron shown here, like all interneurons, is busy integrating excitatory (+) and inhibitory (-) factors. Only when the grand total of the inputs shown is greater than the firing threshold for the given cell will the interneuron fire and transmit its excitation to the motor neuron which, in turn, will transmit the excitation to a muscle. The 'decision' to fire may result from such simple conditions as a sudden burst of intense inputs from the sensory receptors, or it may result from extremely subtle causes such as increased excitation from memory-circuit neurons. But whether the causes be simple or subtle, all decisions—and the muscle behaviors resulting from them—are the inevitable mathematical resultant of the factors acting upon the interneuron.
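The caption's point, that every decision is the inevitable mathematical resultant of the factors acting on the interneuron, can be stated as executable arithmetic. In the sketch below the interneuron is reduced to a simple threshold unit; the input magnitudes and the threshold value are illustrative assumptions of mine, not measurements:

```python
def interneuron_fires(inputs, threshold=1.0):
    """The interneuron of Figure 3 reduced to arithmetic: sum the
    excitatory (+) and inhibitory (-) inputs and fire exactly when the
    total exceeds the firing threshold.  All values are invented."""
    return sum(inputs) > threshold

# The smell of food alone fails to reach threshold...
weak = interneuron_fires([+0.6])
# ...but sight and smell together 'gang up' on the cell and force it to fire...
ganged = interneuron_fires([+0.6, +0.7])
# ...unless an inhibitory memory trace ("that food made me sick") vetoes it.
vetoed = interneuron_fires([+0.6, +0.7, -0.5])
```

Raise or lower the threshold (as blood sugar does) and the same inputs yield a different 'choice'; at no point does anything other than the arithmetic decide.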
The thermostat in your living room is a cybernetic device, and so is the cruise control in your car. Like all cybernetic devices, the function of these humble decision-makers is to maintain some condition as close to constant as possible. In the case of the thermostat, it is temperature which is regulated. In the case of the cruise control, it is the speed of the automobile. In flush toilets, water levels are regulated.
Every living cell is a cybernetic device, and every living plant or animal is a super-cybernetic system composed of cybernetic subsystems. In living things, the overall state of balance resulting from the functioning of all the cybernetic parts of a body is known as homeostasis. If we may be allowed to use a teleological phrase, the 'purpose' of the various systems of the body is the maintenance of homeostasis—maintaining all physiological factors at the levels optimal for survival.
As we have seen in the case of glandular regulation of blood sugar, not all decision-making in living things involves overt 'behavior.' As a matter of fact, there is a hierarchy of cybernetic systems in the body involved in the maintenance of homeostasis. It would appear that the 'natural state' of mammals is the relatively safe condition of sleeping curled up in a hole somewhere safely out of the reach of predators and falling meteorites. During sleep, the glands can regulate blood sugar levels by gradually removing glucose from the liver, where it was stored in the form of insoluble glycogen after the last meal. If the animal slept forever, however, there would come a time when the liver would run out of stored sugar, and the animal would die because its cells had become deprived of fuel.
When glandular regulation is no longer adequate to maintain homeostasis, the nervous system takes over. If the sugar stores of the liver are to be replaced, the animal will have to wake up, go outside, and rustle up some breakfast. This will involve overt behavior including not only locomotion and food-getting behavior, but a wide variety of decision-making activities. Where to hunt? What to hunt for? What risks to take? To the behavior scientist, decision-making of this sort is merely the activity of the highest cybernetic system, involved in last-ditch efforts to maintain physiological homeostasis—in this case, maintaining blood sugar levels at a certain value. (There may be a higher level still: social systems may sometimes function to regulate behavior in a manner optimizing the number of individuals capable of maintaining individual homeostases.)
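This hierarchy, glandular regulation first and overt behavior only when the glands run out of resources, can be caricatured in a short simulation. Every number below (hours, glycogen stores, fuel usage, hunger threshold) is invented purely for illustration:

```python
def simulate_days(hours=48, glycogen=12.0, usage=1.0, hunger_threshold=3.0):
    """Two-level homeostatic hierarchy, with invented numbers.

    Level 1 (glandular, no overt behavior): while the liver still holds
    glycogen, blood sugar is quietly topped up from the store each hour.
    Level 2 (nervous system, overt behavior): once the store runs dry
    and blood sugar sinks to the hunger threshold, the animal is driven
    out of its hole to forage, and a meal restocks the liver.
    """
    blood_sugar = 5.0
    foraging_trips = 0
    for _ in range(hours):
        blood_sugar -= usage                    # the cells burn fuel
        if glycogen >= usage:                   # level 1: glands suffice
            glycogen -= usage
            blood_sugar += usage
        elif blood_sugar <= hunger_threshold:   # level 2: behavior is caused
            foraging_trips += 1                 # wake up and hunt breakfast
            glycogen += 12.0                    # a meal restocks the liver
            blood_sugar = 5.0                   # and restores blood sugar
    return blood_sugar, foraging_trips

sugar, trips = simulate_days()
```

The animal's 'decision' to go hunting is simply what happens when the lower-level regulator is exhausted: the foraging trips fall out of the bookkeeping, on schedule, with no freedom required.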
Nowhere among all these cybernetic systems, from flush toilets to flexing muscles, do we see any evidence of freedom. Indeed, as we have seen, animal behavior itself—including decision-making behavior—would seem to be induced (caused!) by the growing inadequacy of lower-level systems of regulation.
Back To The Question
Let us now return to the question of just when it is that a person might use free will. Some years ago, a man strangled his wife as he lay in bed sleeping beside her. He had had a nightmare, he said, and when he thought he had strangled a monster, he awoke to discover he had killed his wife. Although the musculature is normally inhibited during dreaming sleep, it occasionally happens that people can walk and do other things while dreaming. Why not wife-killing? Assuming the story to be true, can anyone suppose the man was exercising free will when his neuromuscular system 'decided' to strangle his wife?
In May of 1985, the prestigious journal Science reported that certain types of psychiatric disorders are actually caused by viral infections of the brain. The study began with an investigation of Borna disease, a viral disease of horses and sheep. A form of encephalitis, it has been known popularly as 'crazy disease.' When virus cultures taken from the brains of diseased horses were injected into various species of laboratory animals, these animals too developed behavioral abnormalities, indicating that the crazy behavior observed in horses is indeed caused by the virus in question.
Studies of human psychiatric patients have revealed that some of those suffering from cyclic depressive illness carry antibodies to Borna virus, although no psychiatrically normal persons have been found who carry such antibodies. This evidence, together with a number of other recent studies, indicates that a substantial percentage of human mental illness results from virus infections of the brain.
For religionists who believe in demonic possession as the cause of mental illness, this news will not be very welcome. After all, Jesus drove demons, not viruses, out of the crazy man at Gadara. Demonic possession as a cause of mental illness was still a reasonable explanatory option, as long as psychiatrists couldn't come up with anything better than Freud's fantasies about toilet training and penis envy. But increasingly, modern medical science is revealing the biochemical—and now, virological—basis of mental illness. The demonological interpretation itself can no longer be considered as anything other than a form of mental illness. Is it possible that religiosity and religious thinking are sometimes caused by viruses?
All this is bad news for churches which preach the existence of free will, for it shows that a simple virus infection can control a person's behavior—thus negating any putative free will existing prior to infection. In the Catholic Church, for example, it is a sin to commit suicide. But what if patients suffering from Borna virus-induced depression kill themselves? Can a person whose behavior is controlled by viruses commit a sin? Can a person whose behavior is partly controlled by a virus commit a sin? And what if the pope should contract a virus that alters his behavior, making him even crazier than he is at the moment? Would he still be considered infallible?
Since free will is believed by most people to be an all-or-nothing commodity, it is amusing to consider when, in the course of a virus infection, a person might lose it. Is it when 50 percent of all brain cells are infected? 75 percent? 99 percent? Or does every last cell in the brain have to be infected? Or are there certain 'free will cells' in the brain, and free will is lost only when these particular cells are infected?
Although the discovery that viruses can control human behavior is certainly a stunning development in the history of free-will studies, the most significant scientific discovery to occur since the 1950s, when Homer Smith wrote, has been the finding that it is possible to control animal and human behavior by physical means—by implanting electrodes in various centers of the brain.
In the late 1960s, Jose M. R. Delgado electrified the world—as well as experimental animals—when he stepped into a bull ring, incited a bull to charge him, and then, when the fierce animal was about to toss him with its horns, pressed a button on a radio transmitter and stopped the creature dead in its tracks. Prior to this electrifying demonstration (I can't get away from that word!), Delgado had implanted an electrode in the animal's brain and a radio receiver in its skull. By transmitting a radio stimulus to the appropriate part of the animal's brain, Delgado was able to override and inhibit the expression of the bull's 'intended' behavior. In 1969, when Delgado published his book Physical Control of the Mind [Harper & Row], the question of free will vanished as a topic of serious scientific debate.
For what could be done with animals has also been done with humans. In 1977, Robert G. Heath, a neuropsychiatrist at Tulane University School of Medicine, published a study titled "Modulation of emotion with a brain pacemaker: Treatment for intractable psychiatric illness" [J. Nervous and Mental Disease, Vol. 165: 300-317]. Among the extraordinary cases reported in that article was the case of a nineteen-year-old man who had been in and out of mental hospitals since the age of thirteen. He had slashed his wrists and arms during numerous episodes of violent behavior, and had tried to kill his sister. Despite the use of potent antipsychotic drugs, he still had to be kept in physical restraints much of the time.
All this changed after Heath's colleague, neurosurgeon Raeburn C. Llewellyn, implanted a 'pacemaker' in the man's brain. Consisting of a radio receiver implanted under the skin on the upper edge of the man's chest, with wires running up under the skin at the back of the neck to connect with electrodes implanted in the vermis region of the cerebellum at the back of the man's brain, the device made possible external control of the man's violent behavior. Writing of the man's improvement at the time the paper was published, Heath commented,
Psychological tests, including intelligence quotient, have shown significant improvement, and he is able to cope adequately with the vicissitudes of everyday life. Clinically, the patient has had a complete remission and requires no medication. He was enlisted in a vocational rehabilitation course and he is now ready for job placement.
This case is interesting for the questions it raises with regard to the doctrine of free will. Did the man use free will when he launched into his fits of violence? Does he use free will when he now refrains from violence under the influence of radio stimulation? When it is he himself who presses the radio transmitter button which shuts down fits of violence before they erupt, is he exercising free will?
It has long been supposed that the insane do not exercise free will. But what of persons who are temporarily insane because of drugs such as alcohol or LSD? How many shots of alcohol, and how many molecules of LSD, does it take to wipe out free will? If someone were to slip some LSD into the pope's communion wine, would the pope exercise free will in formulating his 'infallible' ex cathedra updates on the bodily assumption into heaven of the Virgin Mary? If he should proclaim the doctrine that the Blessed Virgin was rapted into heaven with a full bladder and galloping diarrhea, I would want to know if I have to take him seriously—especially if I ever find myself outside and a flying lady in a blue robe invades my airspace.
In May of 1985, the prestigious journal Science reported that certain types of psychiatric disorders are actually caused by viral infections of the brain. The study began with an investigation of Borna disease, a viral disease of horses and sheep. A form of encephalitis, it has been known popularly as 'crazy disease.' When virus cultures taken from the brains of diseased horses were injected into various species of laboratory animals, these animals too developed behavioral abnormalities, indicating that the crazy behavior observed in horses is indeed caused by the virus in question.
Studies of human psychiatric patients have revealed that some of those suffering from cyclic depressive illness carry antibodies to Borna virus, although no psychiatrically normal persons have been found who carry such antibodies. This evidence, together with a number of other recent studies, indicates that a substantial percentage of human mental illness results from virus infections of the brain.
For religionists who believe in demonic possession as the cause of mental illness, this news will not be very welcome. After all, Jesus drove demons, not viruses, out of the crazy man at Gadara. Demonic possession as a cause of mental illness was still a reasonable explanatory option, as long as psychiatrists couldn't come up with anything better than Freud's fantasies about toilet training and penis envy. But increasingly, modern medical science is revealing the biochemical—and now, virological—basis of mental illness. The demonological interpretation itself can no longer be considered as anything other than a form of mental illness. Is it possible that religiosity and religious thinking are sometimes caused by viruses?
All this is bad news for churches which preach the existence of free will, for it shows that a simple virus infection can control a person's behavior—thus negating any putative free will existing prior to infection. In the Catholic Church, for example, it is a sin to commit suicide. But what if patients suffering from Borna virus-induced depression kill themselves? Can a person whose behavior is controlled by viruses commit a sin? Can a person whose behavior is partly controlled by a virus commit a sin? And what if the pope should contract a virus that alters his behavior, making him even crazier than he is at the moment? Would he still be considered infallible?
Since free will is believed by most people to be an all-or-nothing commodity, it is amusing to consider when, in the course of a virus infection, a person might lose it. Is it when 50 percent of all brain cells are infected? 75 percent? 99 percent? Or does every last cell in the brain have to be infected? Or are there certain 'free will cells' in the brain, and free will is lost only when these particular cells are infected?
Although the discovery that viruses can control human behavior is certainly a stunning development in the history of free-will studies, the most significant scientific discovery to occur since the 1950s, when Homer Smith wrote, has been the finding that it is possible to control animal and human behavior by physical means—by implanting electrodes in various centers of the brain.
In the late 1960s, Jose M. R. Delgado electrified the world—as well as experimental animals—when he stepped into a bull ring, incited a bull to charge him, and then, just as the fierce animal was about to toss him with its horns, pressed a button on a radio transmitter and stopped the creature dead in its tracks. Prior to this electrifying demonstration (I can't get away from that word!), Delgado had implanted an electrode in the animal's brain and a radio receiver in its skull. By transmitting a radio stimulus to the appropriate part of the animal's brain, Delgado was able to override and inhibit the expression of the bull's 'intended' behavior. In 1969, when Delgado published his book Physical Control of the Mind [Harper & Row], the question of free will vanished as a topic of serious scientific debate.
For what could be done with animals has also been done with humans. In 1977, Robert G. Heath, a neuropsychiatrist at Tulane University School of Medicine, published a study titled "Modulation of emotion with a brain pacemaker: Treatment for intractable psychiatric illness" [J. Nervous and Mental Disease, Vol. 165: 300-317]. Among the extraordinary cases reported in that article was the case of a nineteen-year-old man who had been in and out of mental hospitals since the age of thirteen. He had slashed his wrists and arms during numerous episodes of violent behavior, and had tried to kill his sister. Despite the use of potent antipsychotic drugs, he still had to be kept in physical restraints much of the time.
All this changed after Heath's colleague, neurosurgeon Raeburn C. Llewellyn, implanted a 'pacemaker' in the man's brain. Consisting of a radio receiver implanted under the skin on the upper edge of the man's chest, with wires running up under the skin at the back of the neck to connect with electrodes implanted in the vermis region of the cerebellum at the back of the man's brain, the device made possible external control of the man's violent behavior. Writing of the man's improvement at the time the paper was published, Heath commented,
Psychological tests, including intelligence quotient, have shown significant improvement, and he is able to cope adequately with the vicissitudes of everyday life. Clinically, the patient has had a complete remission and requires no medication. He was enlisted in a vocational rehabilitation course and he is now ready for job placement.
This case is interesting for the questions it raises with regard to the doctrine of free will. Did the man use free will when he launched into his fits of violence? Does he use free will when he now refrains from violence under the influence of radio stimulation? When it is he himself who presses the radio transmitter button which shuts down fits of violence before they erupt, is he exercising free will?
It has long been supposed that the insane do not exercise free will. But what of persons who are temporarily insane because of drugs such as alcohol or LSD? How many shots of alcohol, and how many molecules of LSD, does it take to wipe out free will? If someone were to slip some LSD into the pope's communion wine, would the pope exercise free will in formulating his 'infallible' ex cathedra updates on the bodily assumption into heaven of the Virgin Mary? If he should proclaim the doctrine that the Blessed Virgin was rapted into heaven with a full bladder and galloping diarrhea, I would want to know if I have to take him seriously—especially if I ever find myself outside and a flying lady in a blue robe invades my airspace.
The Problem of Personal Responsibility
While the discussion so far has been neither scientifically nor philosophically exhaustive, it seems as though we have said enough on the subject of free will to convince most people that there exists no such faculty, and we have reached the point where the wrath of readers—alluded to at the beginning—is likely to be kindled.
"No free will?!" I can hear them exclaim. "What happens to human responsibility? If there is no such thing as free will, how can we hold others responsible for their acts?"
After having discussed these questions with many people, I have come to the conclusion that the problem is not really a question of whether or not we can hold people 'responsible' for their acts, but rather what can we do if there be no longer any justification for punishing people when they do things we dislike. I can see no conceivable way in which the 'loss' of free will should change our day-to-day dealings with other people. We will continue to reward behavior of which we approve, if we wish to increase the probability of its repetition in the future, and we will not reward behavior we wish to see extinguished. We will lose, however, the rationalizations which allow us to become angry and hold wrathful grudges against people who annoy us.
We know it is foolish to get angry at a flush toilet when it misbehaves and won't shut off at the correct point. We are quite properly embarrassed when we find ourselves kicking a car which has broken down. So why shouldn't we be embarrassed when we find ourselves getting emotionally overwrought because a person has done something to displease or harm us? Is not the remedy the same in the case of the faithless toilet, the frustrating car, and the misbehaving person?
When the toilet leaks, we neither hold grudges against it nor try to punish it: we fix it. If it is the case, as I have argued, that human beings too are elaborate cybernetic systems, is it not more reasonable to try to 'fix' them than to punish them when they malfunction? A whole array of tools exists for the repair of misbehaving human systems. These range from the techniques of applied behaviorist psychology to the brain-implant techniques mentioned above. When people do things which are harmful, they do so for definite physiological reasons. It should be the task of society to find ways of repairing behavior, not punishing it. It is silly to spend more than one day kicking a flat tire. To be sure, this will require a rethinking of our laws concerning crime and punishment. It will require that our 'reformatories' become exactly what their name has falsely advertised: places where behavior can be re-formed, i.e., repaired.
To the charge that I am treating people as 'objects,' I plead guilty. People are objects; they are highly organized systems of matter and energy. Far from this demeaning us or making us in any way less valuable, it simply shows what marvelous things 'objects' can be. In the universe known to me, I know of no objects as wonderful or precious as my fellow humans. Their importance can hardly be diminished by my suggestion that they should not be treated in a way that would be unreasonable even vis-a-vis a flush toilet.
It remains for us to ask if it not be contradictory for an Atheist to deny the existence of free will, yet fight for social justice and political liberty. Is it not unwarranted, then, to call for freedom of the press, freedom of religion, freedom of speech—and, freedom of the mind?
The answer once again is "no." It is not contradictory to pursue political liberties even though behavior has its causes. These liberties have nothing whatever to do with the question of whether or not behavior is determined. The liberties which we seek are nothing more than conditions wherein our decision-making (free or otherwise) can benefit from the widest range of choices possible, conditions wherein we have the greatest chance of finding happiness. We do not need free will to experience love, but we might very well need liberty to find it.
Fallacies for the Faith
Despite the nearly ubiquitous prejudice that still faces Atheists in their daily lives, life really is much simpler for an Atheist than for a believer. Whenever it is necessary to learn what is true about the world—or of any part of it that assumes importance in the life of an Atheist—truth can usually be discovered effectively and efficiently simply by applying the canons of science, logic, and common sense.
Of course, there are situations where truth may be reluctant to reveal itself—as in the case of finding answers to questions such as, "How can we cure cancer?" or "How can we live to be two hundred years old?" But despite the existence of inherently difficult problems such as these, it is still easier for an Atheist than for a believer to discover truth.
Indeed, believers are often precluded from discovering truth because of the restrictive nature of their belief systems and the impediments they place in the way of free inquiry. The Amish, for example, do not allow their children to be educated beyond the eighth grade, and fundamentalists of all stripes do not allow their children to learn anything of importance in the areas of biology and geology. Up until recent times, the Roman Catholic church had its Index of Forbidden Books. Adding to this the fact that true believers have to waste endless hours trying to justify the unjustifiable—why does a loving god allow suffering, why are there so many "apparent" contradictions in a supposedly perfect Bible, etc.—we see that believers are in trouble every time they try to think.
As a matter of fact, there is a very basic reason why religionists have difficulty in discovering, justifying, and demonstrating truth: Their "truth" is alleged to derive from revelation, not observation.
Consider the problem facing the person who, in a state of hypoglycemia, "learns" that god exists as three separate persons and yet is but a single individual. How will he demonstrate the truth of his claim to the person who, after eating an ergot-infested piece of rye bread, receives an "inspiration" which indicates that god exists as π-persons—and that the godhead is a pi-ety, not a trinity? How will the trinitarian counter the heretical argument that a pi-ety is more appropriate than a trinity because π is an irrational number?
Unlike scientists, who, when faced with a controversy, can appeal to the witness of nature itself for corroboration or disqualification of their various competing hypotheses, religionists have no objective, material standard to which they can appeal for support of their often psychoceramic (crackpot) notions. In order to advance their peculiar ideas, there are basically only two methods they can employ: They can trick people into believing as they do, or they can suppress opposing points of view—even going so far as to exterminate their opponents when possible.
Alas for the cause of "true" religion: It has been a long time since god's anointed and appointed representatives in the North American market were able to convert their opponents to carbon dioxide and ashes in order to demonstrate the superiority of their theological ruminations. To be sure, they are still fairly adept at suppressing books, films, and broadcast media productions which threaten their ability to hoodwink the gullible. Nevertheless, it is becoming harder and harder for religionists to close the floodgates of information emanating from religion-free sources. More and more, the religious have to put their eggs in the deception and disinformation basket.
If people are to be successful in avoiding the traps of logic and misinformation set for them by the purveyors of preposterosity, they must study not only science but logic. They must learn to identify and deal with the more common forms of fallacy used by the faithful to further their cause. It is for this reason that I concentrate "The Probing Mind" upon some of the faulty arguments advanced by defenders of the faith. Readers will forgive me, I hope, for discussing only a few major types of fallacy. The religious mind not only continues to employ old fallacies, it creates new ones! I cannot keep up with them.
Pascal's Wager:
The Fallacy of Excluded Middle
Although Blaise Pascal (1623-1662) came honestly to the logical fallacy which bears his name, it is routinely used by dishonest religionists to trap people into making out their wills to Jesus. As the word wager implies, this fallacy appeals to the gambling instinct in us all. Stated briefly, Pascal's wager argues that although we cannot really know for sure if a god exists or not, we are wise to gamble on god.
If we live our lives as observant Christians, the argument goes, and it turns out that when we die there is nothing out there—no god, no life after death—we have not really lost anything. We got through life pleasantly enough and died in peace. But if we lead our lives as Atheists, die, and come to find that god and heaven and hell are realities—we are in a heap of trouble! So, it is better to bet on god: We may not win anything, but at least we won't end up spending an eternity polishing the chandeliers over god's gaming tables.
The problem with this argument is that it ignores the fact that there is a middle ground between the Roman Catholic god imagined by Pascal and the no-spooks-at-all position taken by Atheists. For example, there may be a god and she's Chinese—and highly resentful of people who masculinize the deity. In this case, Pascal would be in more trouble after death than would an Atheist! There may be two gods, one goddess, one celestial eunuch, and an advisory committee of quasi-omnipotent animals. It is easy to imagine theological situations in which both Christians and Atheists would find themselves equally in trouble after death—or in no trouble at all, if reality turns out to involve a committee of divine experimenters who, lacking complete omniscience, have created the human comedy for their amusement and are simply watching to see how it comes out.
Considering the thousands of gods, goddesses, and combinations thereof who have been worshiped by humans in their known history—and considering the fact that no convincing evidence exists for any one of them, including Jesus—it is obvious that the person betting on Jesus has even less chance of "winning" than I have of winning the New York State lottery on the same day I win the Ohio lottery. Moreover, "believing" in Jesus isn't as easy as it sounds. Every sect has its own rules as to just how and what to believe, and their rules often are mutually exclusive.
Indeed, the Atheist can now completely reverse Pascal's wager. The fact that thousands of deities have been worshiped—and the fact that in thousands of years not one shred of evidence has been found to prove the existence of any one of them—tells the intelligent gambler that there isn't a trump card in the whole deck. By considering the middle ground ignored by Pascal and his successors, the Atheist concludes that it is foolish to waste life preparing for death. Carpe diem!
The Black or White Fallacy
This fallacy is really another aspect of the excluded middle fallacy. It involves treating the world and its phenomena as though everything were black or white, on or off, good or bad, with no intermediate conditions existing. While it is true that at the subatomic level of quantum mechanics nature does appear to behave in an all-or-nothing manner, above the atomic level reality appears most often to be a continuum joining arbitrarily demarcated extremes. How hot is hot? How cold is cold? Is a brain-dead individual "completely" dead? How often does anyone act in a way that is absolutely good or absolutely bad?
It is this inability to handle continua that prevents creationists from accepting the fact of evolution. This is the reason fundamentalists cannot see the utility of "situation ethics." This is why Right-to-Single-Celled-Lifers think zygotes are people, and acorns are oak trees. When creationists insist that humans are a unique creation separate from the "animals"—despite the fact that their genome is 98.5 percent identical to that of chimpanzees—they are carving a continuum into black-or-white compartments. When the Mandatory Motherhood people claim that a single cell containing forty-six chromosomes is a "human being," they argue falsely that two stages in a developmental continuum are identical simply because they are connected. This is like saying that Martin Luther died a Roman Catholic because he started out as one!
The Ad Hominem Fallacy
This is probably the fallacy encountered most frequently in the effusions of the faithful. It comes in two varieties: the abusive species and the circumstantial species. The term ad hominem means "to the man" and refers to the fact that this type of argument attacks one's opponent rather than his argument. The abusive species substitutes name-calling for logic or evidence:
"So you're an Atheist, are you? Stalin was a mass murderer and an Atheist too. Do you really want to be like that? Why don't you repent of your error and get saved?"
Of course, even if every Atheist in the world were a mass murderer, it would have no bearing whatsoever upon the question, "Do gods exist?"
Since the days of old Joe McCarthy, the most popular ad hominem argument used against Atheists is the one that links Atheism with communism. The effort continues to prevent people from openly calling themselves Atheists by making them fearful of being called communists as well. The abusive ad hominem here blends into another type of fallacy, the guilt by association fallacy. The fact that repressive regimes in Eastern Europe happen to be run by Atheists is used to brand American Atheists as communists and—by a leap of illogic—wrong in their philosophy. Of course, the fact that Franco and Hitler and Pope Innocent III (who sent a special crusade to annihilate a million people in southern France) were Christians should not be used as an indication of what kinds of folk the Christians be!
The circumstantial species of the ad hominem fallacy is more subtle and difficult for beginning logicians to detect. It attacks the opponent by appealing to the special circumstances of his life. Professor Irving Copi, in his textbook Introduction to Logic [New York: Macmillan, 1953, p. 55] gives a very good example of this fallacy:
The classical example of this fallacy is the reply of the hunter when accused of barbarism in sacrificing unoffending animals to his own amusement. His reply is to ask the critic, "Why do you feed on the flesh of harmless cattle?" The sportsman here is guilty of an argumentum ad hominem because he does not try to prove that it is right to sacrifice animal life for human pleasure, but merely that it cannot consistently be decried by his critic because of the critic's own special circumstances, in this case his not being a vegetarian. Arguments such as these are not correct; they do not present good evidence for the truth of their conclusions but are only intended to win assent to the conclusion from one's opponent because of his special circumstances. This they frequently do; they are often very persuasive.
Creationists are very adroit in the use of this fallacy. In their fight against Einstein's theory of relativity (creationists hate Einstein every bit as much as Darwin), they frequently claim that relativity in physics leads straight to relativity in morals: Once you give up the absolutes in physics, the absolutes in ethics fly out the window too! This appeals to the special circumstance that most of the people who come to listen to creationists are adherents of strict, inflexible codes of authoritarian morality. The same special circumstance that prevents them from accepting relativity in physics will prevent them from accepting the idea that humans evolved from "lower animals." Teaching our children that they came from animals will give them license (a particularly fearsome word) to behave as animals - whatever that means. (Perhaps they fear that humans might pair-bond for life, as doves do, or practice altruistic behavior like that of ants, bees, or baboons.)
The Appeal to Authority
Known to logicians as the argumentum ad verecundiam, the appeal to authority is fallacious only when it is implied that such and such is true simply because some famous "authority" says so. The fact that a famous astronaut believes there is a battleship-sized boat on the top of a Turkish volcano does not prove the existence of Noah's Ark. The fact that the president of the United States thinks the earth is only six thousand years old does not make the delusion reality. The creationists are especially skilled at quoting famous scientists in such a way as to array their authority against Darwinism. To be sure, they are usually guilty of quoting out of context—itself a type of fallacy involving dishonesty more than illogic. Even so, when their quotes are in context, they rarely give the necessary evidence to support the point in question; they merely appeal to the authority of a famous scientist as a reason for accepting their argument.
Now, of course, scientists routinely cite authorities when they write articles and give lectures. So what is the difference between citing an authority—the way real scientists do—and appealing to an authority, the way creationists and other religious apologists do? The difference is profound. When the religionist appeals to an authority, it is because he hopes the appeal will be sufficient to compel belief. When a scientist cites an authority, it is for reasons very different from those of the religionist. First of all, a scientist cites other authorities for the sake of efficiency. It is impossible for any one person to repeat all the experiments and observations that have been done in an area of interest. Secondly, citing authorities is a way of indicating who should get the credit (or blame!) for providing parts of the background information needed to do a particular piece of research. It is a way of giving credit where credit is due—and tells readers whom to blame if some critical part of the background information should prove incorrect and invalidate the new piece of research.
It is always expected that any piece of research cited could be repeated if necessary. Citation of authorities is merely a time- and money-saving procedure which gives credit where credit is due. Appealing to authorities seeks only to compel belief without proof.
Thirty Million Frenchmen ...
I am sure that all of my readers have had the experience of arguing with someone about something, only to be told that "Everybody knows that …" or "It has always been known that …" Readers who have ever listened to the spiel of a professional creationist will remember being told that "Hundreds of scientists every year now are giving up the myth of Darwinism." Then there is Augustine's "proof" that Christianity is true because it has been "believed everywhere, always, and by all men."
Known popularly as the "Thirty-Million-Frenchmen-Can't-Be-Wrong Fallacy," this fallacy is known to logicians as the argumentum ad populum. For proof it substitutes an appeal to the masses, the majority, or simply a large number of people—thirty million Frenchmen, for example. Of course, thirty million Frenchmen were wrong back in the days when they believed the sun went around the earth! Remember this the next time someone challenges your doubting ways by saying, "Who are you to suggest that Jesus never existed when everyone who is anyone ...”
At the beginning of this article I begged my readers to forgive me for presenting an article which could not be comprehensive in its survey of all the fallacies used in defense of the faith. My very words were, "Readers will forgive me, I hope, for discussing only a few major types of fallacy. The religious mind not only continues to employ old fallacies, it creates new ones! I cannot keep up with them."
Readers may find it of interest to know that I wrote the quoted words two days before I wrote the present paragraph. In the period between making my excuse and quoting it, I tried out my argument against Pascal's wager on my Dial-an-Atheist service in Columbus, Ohio. To my great delight, one local fundamentalist "refuted" me by employing several old fallacies (such as begging the question and using a false analogy)—and by creating a new one as well! I call his new fallacy "The Thirty-Million-Dead-Frenchmen-Must-Have-Been-Wrong Fallacy."
"Most of the thousands of 'other' gods that have been worshiped," he argued, "are worshiped no longer because their devotees are dead. These other deities are now but history," he claimed, "and can be ignored. They really are not part of a 'middle ground' between" the three gods plus-or-minus a goddess worshiped by Christians and the spiritless vacuum "believed in" by Atheists.
Well, what about this? If beliefs no longer believed are ipso facto false, and if cessation of belief is all it takes to wipe out the deities previously believed in, it will be interesting to learn whether our fundamentalist apologist expects that Jesus and Jehovah also would cease to exist if there were no one left on earth who believed in them! Were they nonexistent in the early days before anyone on the planet had come to believe in them?
The bottom line of this discussion of the ad populum fallacy and its would-be obverse, just coined in Columbus, is this: Truth cannot be determined by voting. Everyone alive may be wrong about a particular issue, and all the people who ever knew the correct answer to a question may be dead.
The Irrelevant Conclusion
In his book Answers to 200 of Life's Most Probing Questions, Pat Robertson tries to answer the question, "What duty do I owe the government, and what duty do I owe to God?" To do this he quotes scripture (the old appeal-to-authority fallacy):
That question was asked of Jesus: "Is it lawful to pay taxes to Caesar, or not?" He said, "Show Me the tax money." They gave Him the denarius, on which was a picture of Caesar. Then He asked, "Whose image and inscription is this?" They said, "Caesar's." He answered, "Render therefore to Caesar the things that are Caesar's, and to God the things that are God's." And that has become the standard for what we owe the government. [New York: Bantam Books, 1987, p. 186.]
Now of course, the fact that Caesar's picture is on a coin does not mean it is Caesar's coin! The conclusion does not follow from the "argument": It is a non-sequitur. The conclusion is "irrelevant" in the sense that it does not relate to the argument given.
The fact that this argument does not prove what Pat Robertson wanted it to prove does not, however, mean that it doesn’t prove anything at all. It might be cited as evidence that Jesus—if in fact, he ever existed—could not have been a member of the Zealot Party, as some liberal New Testament scholars have alleged. How so? The denarius mentioned in the quote would have been a coin of Tiberius Caesar. Not only would it have born the image of Tiberius Caesar (something forbidden by the no-graven-images commandment so important to Jews, Muslims, and some early Christian fanatics), it would have borne a version of the inscription “Ti[berius] Caesar Divi Aug(usti) F[ilius] Augustus”—“Caesar Augustus Tiberius, son of the Divine Augustus.” Tiberius claims to be the son of a god?! Blasphemy! If Jesus had been a Zealot, he would have thrown down the coin and stamped it into the dust. He would not have told the story thought by many preachers today to have been so wise a saying.
To better appreciate the irrelevant conclusion fallacy, it may be a good idea to consider another example. Let us consider the case of the Moral Majority city council member who is presenting a bill to close adult book stores. He foams and froths at the mouth about how women must be protected from violence and that "the streets of this city have to be made safe again for women." He inflames the rest of the council by telling them that the incidence of rape is rising, and rape has to be stopped. The council closes the book stores.
Since no evidence was given to show any connection whatsoever between adult book stores and the incidence of rape, the conclusion that the stores should be closed was irrelevant to the argument actually presented, namely, that women should be protected against rape. As so often happens, the speaker succeeded in evoking an attitude of approval for himself and his views and managed to get his listeners to transfer this positive attitude to his final conclusion - more by psychological association than by logical implication, as Prof. Copi shows in his discussion of this fallacy. [Copi, opcit., p, 52.]
The Fallacy of False Cause
This fallacy is the stuff of which superstitions are made. Because bad luck befell someone soon after a black cat crossed his path, he falsely assumed a cause-and-effect relation between the cat and the bad fortune. Because solar eclipses never fail to end if savages beat their drums long enough, they conclude their drumming "saved" the sun. This fallacy is often called the post hoc ergo propter hoc ("after this, therefore because of this") fallacy. Just because event B followed event A, it does not follow that A was the cause of B. Even so, this fallacy can be employed quite lucratively, as Pat Robertson and other televangelists have proven.
In his book Beyond Reason: How Miracles Can Change Your Life, Pat Robertson tells the story of a very wealthy young man who lost everything after a bank failure. The man was reduced to "two dollars in cash and a shotgun." Before blowing his brains out, however, he turned the television on (since he had only two dollars and a gun, must we assume he borrowed the television?) to Pat Robertson's show—perhaps to prepare his brain for total wipe-out.
But the shotgun blast was never heard because God had other plans for Leon Hooten. The image on the screen was my associate, Ben Kinchlow . . . urging the viewers to give their burdens to Jesus Christ. As Ben invited the viewers to pray with him, Leon Hooten quietly laid the shotgun on the floor and gave his life, his shame, and his financial problems to Jesus Christ.
Then in a burst of generosity, he called our counseling center, and like the widow in Jesus' day, gave God all he had. Leon pledged his last two dollars to help us help others ...
Before that evening was over, the telephone rang. At the other end of the line was a friend who said, "Leon, I have some money for you." In fact, it was quite a bit of money—enough to pay Leon's pressing bills and feed his family.
And shortly after that, a friend called to say that he had an idea for a business and wanted Leon to be a part of it.[New York: Bantam Books, 1986, p. 133-4.]
Needless to say. the man who gave his last two dollars to Robertson's empire is now a millionaire—and all because he ...
Miscellaneous Fallacies
There are, as has been mentioned, many fallacies used to defend the faith. Limitations of space have allowed me to elaborate on only a few of them, however, and I can only catalog here a few of the remainder.
Begging the question. This assumes the truth of what is to be proved. It is pretty much the same thing as circular reasoning. A common example of this is the religious argument that "Everything that is in the Bible is true because the Bible is the font and source of all truth."
Argumentum ad baculum. This is an argument which appeals to force: "Support parochial schools with tax money, or else the bishop will close them all and dump the kids into the already inadequate public schools." "Believe on the Lord Jesus Christ and be saved—or you will be tortured for eternity in hell for your unbelief." "Become a Mormon and save your job at the gym."
Argumentum ad ignoranti nam. This is the argument from ignorance (a particularly appropriate fallacy for religionists). A proposition is said to be true simply because it has not yet been proven false: "God does exist. No one has ever been able to prove he does not exist!" "Undetectable gremlins inhabit the rings of Saturn. I know this because you will never be able to disprove it."
Appeal to pity or other emotion. This is often very effective in raising money for one or another rip-off ministry. "Send your faith-dollars to (insert name of scam operation) to save the starving children in (insert name of place from which pathetic pictures have just been received)." This fallacy works by using emotion to overpower reason.
The complex question. Here there is a hidden question which is presumed to be already answered and is hidden inside another: "Have you stopped beating your wife?" "Why is it that Atheists such as evolutionists have never yet come up with a reason for human existence?" (Three hidden questions are to be found here!)
Various other techniques may be employed which are not really logical errors, but nevertheless may be extremely effective. Among these are the following:
• Simple falsification of data. Inventing imaginary "references."
• Merely making powerful assertions and then offering no proof—or saying one is offering proof but actually providing none.
• Taking references out of context. Quite often one may find, upon examining the original sources from which quotations are "lifted," that the original has material damaging to the proposition being "proven," but which is conveniently let out of the quotation. Often the original may have nothing to do with the question at hand.
·Setting up a "Straw Man" and then knocking him down. The periodically recurring canard that Madalyn O'Hair is trying to get the FCC to ban all religious broadcasting often serves as a straw man which even the least resourceful preachers can bash to chaff—increasing thereby their resources.
• Changing the meaning of words as one proceeds through a lengthy argument.
***
For scientists and all other people whose lives are devoted to the quest for truth, the use of fallacious reasoning is a disability, an embarrassment, and something to be avoided at all cost. One need only to tum on the radio or the television, however, or examine the sales statistics of creationist presses to see that for religionists, on the contrary, fallacy not only is a way of life, it's the way to "the good life." Shouldn't Atheists be doing a lot more to correct this sorry state of affairs?
Of course, there are situations where truth may be reluctant to reveal itself—as in the case of finding answers to questions such as, "How can we cure cancer?" or "How can we live to be two hundred years old?" But despite the existence of inherently difficult problems such as these, it is still easier for an Atheist than for a believer to discover truth.
Indeed, believers are often precluded from discovering truth because of the restrictive nature of their belief systems and the impediments they place in the way of free inquiry. The Amish, for example, do not allow their children to be educated beyond the eighth grade, and fundamentalists of all stripes do not allow their children to learn anything of importance in the areas of biology and geology. Up until recent times, the Roman Catholic church had its Index of Forbidden Books. Adding to this the fact that true believers have to waste endless hours trying to justify the unjustifiable—why does a loving god allow suffering, why are there so many "apparent" contradictions in a supposedly perfect Bible, etc.—we see that believers are in trouble every time they try to think.
As a matter of fact, there is a very basic reason why religionists have difficulty in discovering, justifying, and demonstrating truth: Their "truth" is alleged to derive from revelation, not observation.
Consider the problem facing the person who, in a state of hypoglycemia, "learns" that god exists as three separate persons and yet is but a single individual. How will he demonstrate the truth of his claim to the person who, after eating an ergot-infested piece of rye bread, receives an "inspiration" which indicates that god exists as π-persons—and that the godhead is a pi-ety, not a trinity? How will the trinitarian counter the heretical argument that a pi-ety is more appropriate than a trinity because π is an irrational number?
Unlike scientists, who, when faced with a controversy, can appeal to the witness of nature itself for corroboration or disqualification of their various competing hypotheses, religionists have no objective, material standard to which they can appeal for support of their often psychoceramic (crackpot) notions. In order to advance their peculiar ideas, there are basically only two methods they can employ: They can trick people into believing as they do, or they can suppress opposing points of view—even going so far as to exterminate their opponents when possible.
Alas for the cause of "true" religion: It has been a long time since god's anointed and appointed representatives in the North American market were able to convert their opponents to carbon dioxide and ashes in order to demonstrate the superiority of their theological ruminations. To be sure, they are still fairly adept at suppressing books, films, and broadcast media productions which threaten their ability to hoodwink the gullible. Nevertheless, it is becoming harder and harder for religionists to close the floodgates of information emanating from religion-free sources. More and more, the religious have to put their eggs in the deception and disinformation basket.
If people are to be successful in avoiding the traps of illogic and misinformation set for them by the purveyors of preposterosity, they must study not only science but logic. They must learn to identify and deal with the more common forms of fallacy used by the faithful to further their cause. It is for this reason that I concentrate "The Probing Mind" upon some of the faulty arguments advanced by defenders of the faith. Readers will forgive me, I hope, for discussing only a few major types of fallacy. The religious mind not only continues to employ old fallacies, it creates new ones! I cannot keep up with them.
Pascal's Wager:
The Fallacy of Excluded Middle
Although Blaise Pascal (1623-1662) came honestly to the logical fallacy which bears his name, it is routinely used by dishonest religionists to trap people into making out their wills to Jesus. As the word wager implies, this fallacy appeals to the gambling instinct in us all. Stated briefly, Pascal's wager argues that although we cannot really know for sure if a god exists or not, we are wise to gamble on god.
If we live our lives as observant Christians, the argument goes, and it turns out that when we die there is nothing out there—no god, no life after death—we have not really lost anything. We got through life pleasantly enough and died in peace. But if we lead our lives as Atheists, die, and come to find that god and heaven and hell are realities—we are in a heap of trouble! So, it is better to bet on god: We may not win anything, but at least we won't end up spending an eternity polishing the chandeliers over god's gaming tables.
The problem with this argument is that it ignores the fact that there is a middle ground between the Roman Catholic god imagined by Pascal and the no-spooks-at-all position taken by Atheists. For example, there may be a god and she's Chinese—and highly resentful of people who masculinize the deity. In this case, Pascal would be in more trouble after death than would an Atheist! There may be two gods, one goddess, one celestial eunuch, and an advisory committee of quasi-omnipotent animals. It is easy to imagine theological situations in which both Christians and Atheists would find themselves equally in trouble after death—or in no trouble at all, if reality turns out to involve a committee of divine experimenters who, lacking complete omniscience, have created the human comedy for their amusement and are simply watching to see how it comes out.
Considering the thousands of gods, goddesses, and combinations thereof who have been worshiped by humans in their known history—and considering the fact that no convincing evidence exists for any one of them, including Jesus—it is obvious that the person betting on Jesus has even less chance of "winning" than I have of winning the New York State lottery on the same day I win the Ohio lottery. Moreover, "believing" in Jesus isn't as easy as it sounds. Every sect has its own rules as to just how and what to believe, and their rules often are mutually exclusive.
Indeed, the Atheist can now completely reverse Pascal's wager. The fact that thousands of deities have been worshiped—and the fact that in thousands of years not one shred of evidence has been found to prove the existence of any one of them—tells the intelligent gambler that there isn't a trump card in the whole deck. By considering the middle ground ignored by Pascal and his successors, the Atheist concludes that it is foolish to waste life preparing for death. Carpe diem!
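The reversed wager lends itself to a small expected-value sketch. Every number below (the count of candidate deities, the prior probabilities, the payoffs, the cost of a lifetime of observance) is invented purely for illustration; the point is only that once the excluded middle ground of many possible jealous gods is restored, betting on any one of them no longer dominates abstaining.

```python
# A toy expected-value table for the reversed wager. All numbers here
# are arbitrary assumptions made for illustration, not claims of fact.

N_GODS = 1000          # mutually exclusive candidate deities
P_NO_GODS = 0.5        # assumed prior that no god exists
P_EACH_GOD = (1 - P_NO_GODS) / N_GODS   # each god equally likely

REWARD, PUNISH = 100, -100   # afterlife payoffs (arbitrary units)
LIFE_COST = 10               # this-life cost of observant worship

def ev_bet_on_one_god():
    # Pick the right god: reward. Pick wrong, and any of the other
    # N - 1 jealous gods punishes you. Life cost is paid either way.
    return (-LIFE_COST
            + P_EACH_GOD * REWARD
            + (N_GODS - 1) * P_EACH_GOD * PUNISH)

def ev_bet_on_no_gods():
    # Every jealous god punishes the unbeliever, but no life cost is paid.
    return N_GODS * P_EACH_GOD * PUNISH

print(f"bet on one god: {ev_bet_on_one_god():.2f}")
print(f"bet on no gods: {ev_bet_on_no_gods():.2f}")
```

Under these assumed numbers the abstainer comes out ahead: the tiny chance of picking the one correct god out of a thousand is swamped by the cost of worship and by the wrath of the other nine hundred ninety-nine.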
The Black or White Fallacy
This fallacy is really another aspect of the excluded middle fallacy. It involves treating the world and its phenomena as though everything were black or white, on or off, good or bad, with no intermediate conditions existing. While it is true that at the subatomic level of quantum mechanics nature does appear to behave in an all-or-nothing manner, above the atomic level reality appears most often to be a continuum joining arbitrarily demarcated extremes. How hot is hot? How cold is cold? Is a brain-dead individual "completely" dead? How often does anyone act in a way that is absolutely good or absolutely bad?
It is this inability to handle continua that prevents creationists from accepting the fact of evolution. This is the reason fundamentalists cannot see the utility of "situation ethics." This is why Right-to-Single-Celled-Lifers think zygotes are people, and acorns are oak trees. When creationists insist that humans are a unique creation separate from the "animals"—despite the fact that their genome is 98.5 percent identical to that of chimpanzees—they are carving a continuum into black-or-white compartments. When the Mandatory Motherhood people claim that a single cell containing forty-six chromosomes is a "human being," they argue falsely that two stages in a developmental continuum are identical simply because they are connected. This is like saying that Martin Luther died a Roman Catholic because he started out as one!
The Ad Hominem Fallacy
This is probably the fallacy encountered most frequently in the effusions of the faithful. It comes in two varieties: the abusive species and the circumstantial species. The term ad hominem means "to the man" and refers to the fact that this type of argument attacks one's opponent rather than his argument. The abusive species substitutes name-calling for logic or evidence:
"So you're an Atheist, are you? Stalin was a mass murderer and an Atheist too. Do you really want to be like that? Why don't you repent of your error and get saved?"
Of course, even if every Atheist in the world were a mass murderer, it would have no bearing whatsoever upon the question, "Do gods exist?"
Since the days of old Joe McCarthy, the most popular ad hominem argument used against Atheists is the one that links Atheism with communism. The effort continues to prevent people from openly calling themselves Atheists by making them fearful of being called communists as well. The abusive ad hominem here blends into another type of fallacy, the guilt-by-association fallacy. The fact that repressive regimes in Eastern Europe happen to be run by Atheists is used to brand American Atheists as communists and—by a leap of illogic—wrong in their philosophy. Of course, the fact that Franco and Hitler and Pope Innocent III (who sent a special crusade to annihilate a million people in southern France) were Christians should not be used as an indication of what kinds of folk the Christians be!
The circumstantial species of the ad hominem fallacy is more subtle and difficult for beginning logicians to detect. It attacks the opponent by appealing to the special circumstances of his life. Professor Irving Copi, in his textbook Introduction to Logic [New York: Macmillan, 1953, p. 55], gives a very good example of this fallacy:
The classical example of this fallacy is the reply of the hunter when accused of barbarism in sacrificing unoffending animals to his own amusement. His reply is to ask the critic, "Why do you feed on the flesh of harmless cattle?" The sportsman here is guilty of an argumentum ad hominem because he does not try to prove that it is right to sacrifice animal life for human pleasure, but merely that it cannot consistently be decried by his critic because of the critic's own special circumstances, in this case his not being a vegetarian. Arguments such as these are not correct; they do not present good evidence for the truth of their conclusions but are only intended to win assent to the conclusion from one's opponent because of his special circumstances. This they frequently do; they are often very persuasive.
Creationists are very adroit in the use of this fallacy. In their fight against Einstein's theory of relativity (creationists hate Einstein every bit as much as Darwin), they frequently claim that relativity in physics leads straight to relativity in morals: Once you give up the absolutes in physics, the absolutes in ethics fly out the window too! This appeals to the special circumstance that most of the people who come to listen to creationists are adherents of strict, inflexible codes of authoritarian morality. The same special circumstance that prevents them from accepting relativity in physics will prevent them from accepting the idea that humans evolved from "lower animals." Teaching our children that they came from animals will give them license (a particularly fearsome word) to behave as animals, whatever that means. (Perhaps they fear that humans might pair-bond for life, as doves do, or practice altruistic behavior like that of ants, bees, or baboons.)
The Appeal to Authority
Known to logicians as the argumentum ad verecundiam, the appeal to authority is fallacious only when it is implied that such and such is true simply because some famous "authority" says so. The fact that a famous astronaut believes there is a battleship-sized boat on the top of a Turkish volcano does not prove the existence of Noah's Ark. The fact that the president of the United States thinks the earth is only six thousand years old does not make the delusion reality. The creationists are especially skilled at quoting famous scientists in such a way as to array their authority against Darwinism. To be sure, they are usually guilty of quoting out of context—itself a type of fallacy involving dishonesty more than illogic. Even so, when their quotes are in context, they rarely give the necessary evidence to support the point in question; they merely appeal to the authority of a famous scientist as a reason for accepting their argument.
Now, of course, scientists routinely cite authorities when they write articles and give lectures. So what is the difference between citing an authority—the way real scientists do—and appealing to an authority, the way creationists and other religious apologists do? The difference is profound. When the religionist appeals to an authority, it is because he hopes the appeal will be sufficient to compel belief. When a scientist cites an authority, it is for reasons very different from those of the religionist. First of all, a scientist cites other authorities for the sake of efficiency. It is impossible for any one person to repeat all the experiments and observations that have been done in an area of interest. Secondly, citing authorities is a way of indicating who should get the credit (or blame!) for providing parts of the background information needed to do a particular piece of research. It is a way of giving credit where credit is due—and tells readers whom to blame if some critical part of the background information should prove incorrect and invalidate the new piece of research.
It is always expected that any piece of research cited could be repeated if necessary. Citation of authorities is merely a time- and money-saving procedure which gives credit where credit is due. Appealing to authorities seeks only to compel belief without proof.
Thirty Million Frenchmen ...
I am sure that all of my readers have had the experience of arguing with someone about something, only to be told that "Everybody knows that …" or "It has always been known that …" Readers who have ever listened to the spiel of a professional creationist will remember being told that "Hundreds of scientists every year now are giving up the myth of Darwinism." Then there is Augustine's "proof" that Christianity is true because it has been "believed everywhere, always, and by all men."
Known popularly as the "Thirty-Million-Frenchmen-Can't-Be-Wrong Fallacy," this fallacy is known to logicians as the argumentum ad populum. For proof it substitutes an appeal to the masses, the majority, or simply a large number of people—thirty million Frenchmen, for example. Of course, thirty million Frenchmen were wrong back in the days when they believed the sun went around the earth! Remember this the next time someone challenges your doubting ways by saying, "Who are you to suggest that Jesus never existed when everyone who is anyone ..."
At the beginning of this article I begged my readers to forgive me for presenting an article which could not be comprehensive in its survey of all the fallacies used in defense of the faith. My very words were, "Readers will forgive me, I hope, for discussing only a few major types of fallacy. The religious mind not only continues to employ old fallacies, it creates new ones! I cannot keep up with them."
Readers may find it of interest to know that I wrote the quoted words two days before I wrote the present paragraph. In the period between making my excuse and quoting it, I tried out my argument against Pascal's wager on my Dial-an-Atheist service in Columbus, Ohio. To my great delight, one local fundamentalist "refuted" me by employing several old fallacies (such as begging the question and using a false analogy)—and by creating a new one as well! I call his new fallacy "The Thirty-Million-Dead-Frenchmen-Must-Have-Been-Wrong Fallacy."
"Most of the thousands of 'other' gods that have been worshiped," he argued, "are worshiped no longer because their devotees are dead. These other deities are now but history," he claimed, "and can be ignored. They really are not part of a 'middle ground' between" the three gods plus-or-minus a goddess worshiped by Christians and the spiritless vacuum "believed in" by Atheists.
Well, what about this? If beliefs no longer believed are ipso facto false, and if cessation of belief is all it takes to wipe out the deities previously believed in, it will be interesting to learn whether our fundamentalist apologist expects that Jesus and Jehovah also would cease to exist if there were no one left on earth who believed in them! Were they nonexistent in the early days before anyone on the planet had come to believe in them?
The bottom line of this discussion of the ad populum fallacy and its would-be obverse, just coined in Columbus, is this: Truth cannot be determined by voting. Everyone alive may be wrong about a particular issue, and all the people who ever knew the correct answer to a question may be dead.
The Irrelevant Conclusion
In his book Answers to 200 of Life's Most Probing Questions, Pat Robertson tries to answer the question, "What duty do I owe the government, and what duty do I owe to God?" To do this he quotes scripture (the old appeal-to-authority fallacy):
That question was asked of Jesus: "Is it lawful to pay taxes to Caesar, or not?" He said, "Show Me the tax money." They gave Him the denarius, on which was a picture of Caesar. Then He asked, "Whose image and inscription is this?" They said, "Caesar's." He answered, "Render therefore to Caesar the things that are Caesar's, and to God the things that are God's." And that has become the standard for what we owe the government. [New York: Bantam Books, 1987, p. 186.]
Now of course, the fact that Caesar's picture is on a coin does not mean it is Caesar's coin! The conclusion does not follow from the "argument": It is a non sequitur. The conclusion is "irrelevant" in the sense that it does not relate to the argument given.
The fact that this argument does not prove what Pat Robertson wanted it to prove does not, however, mean that it doesn't prove anything at all. It might be cited as evidence that Jesus—if, in fact, he ever existed—could not have been a member of the Zealot Party, as some liberal New Testament scholars have alleged. How so? The denarius mentioned in the quote would have been a coin of Tiberius Caesar. Not only would it have borne the image of Tiberius Caesar (something forbidden by the no-graven-images commandment so important to Jews, Muslims, and some early Christian fanatics), it would have borne a version of the inscription "Ti[berius] Caesar Divi Aug[usti] F[ilius] Augustus"—"Tiberius Caesar Augustus, son of the Divine Augustus." Tiberius claims to be the son of a god?! Blasphemy! If Jesus had been a Zealot, he would have thrown down the coin and stamped it into the dust. He would not have told the story thought by many preachers today to have been so wise a saying.
To better appreciate the irrelevant conclusion fallacy, it may be a good idea to consider another example. Let us consider the case of the Moral Majority city council member who is presenting a bill to close adult book stores. He foams and froths at the mouth about how women must be protected from violence and that "the streets of this city have to be made safe again for women." He inflames the rest of the council by telling them that the incidence of rape is rising, and rape has to be stopped. The council closes the book stores.
Since no evidence was given to show any connection whatsoever between adult book stores and the incidence of rape, the conclusion that the stores should be closed was irrelevant to the argument actually presented, namely, that women should be protected against rape. As so often happens, the speaker succeeded in evoking an attitude of approval for himself and his views and managed to get his listeners to transfer this positive attitude to his final conclusion, more by psychological association than by logical implication, as Prof. Copi shows in his discussion of this fallacy. [Copi, op. cit., p. 52.]
The Fallacy of False Cause
This fallacy is the stuff of which superstitions are made. Because bad luck befell someone soon after a black cat crossed his path, he falsely assumed a cause-and-effect relation between the cat and the bad fortune. Because solar eclipses never fail to end if savages beat their drums long enough, they conclude their drumming "saved" the sun. This fallacy is often called the post hoc ergo propter hoc ("after this, therefore because of this") fallacy. Just because event B followed event A, it does not follow that A was the cause of B. Even so, this fallacy can be employed quite lucratively, as Pat Robertson and other televangelists have proven.
In his book Beyond Reason: How Miracles Can Change Your Life, Pat Robertson tells the story of a very wealthy young man who lost everything after a bank failure. The man was reduced to "two dollars in cash and a shotgun." Before blowing his brains out, however, he turned the television on (since he had only two dollars and a gun, must we assume he borrowed the television?) to Pat Robertson's show—perhaps to prepare his brain for total wipe-out.
But the shotgun blast was never heard because God had other plans for Leon Hooten. The image on the screen was my associate, Ben Kinchlow . . . urging the viewers to give their burdens to Jesus Christ. As Ben invited the viewers to pray with him, Leon Hooten quietly laid the shotgun on the floor and gave his life, his shame, and his financial problems to Jesus Christ.
Then in a burst of generosity, he called our counseling center, and like the widow in Jesus' day, gave God all he had. Leon pledged his last two dollars to help us help others ...
Before that evening was over, the telephone rang. At the other end of the line was a friend who said, "Leon, I have some money for you." In fact, it was quite a bit of money—enough to pay Leon's pressing bills and feed his family.
And shortly after that, a friend called to say that he had an idea for a business and wanted Leon to be a part of it. [New York: Bantam Books, 1986, pp. 133-4.]
Needless to say, the man who gave his last two dollars to Robertson's empire is now a millionaire—and all because he ...
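The post hoc trap is easy to demonstrate numerically. In the sketch below (all probabilities are invented for illustration), black-cat crossings and bad-luck days are generated completely independently, so by construction the cats cause nothing; yet "bad luck soon after a cat" still turns up again and again.

```python
# Simulate independent "cat crossing" and "bad luck" events and count
# how many purely coincidental cat-then-bad-luck sequences occur.
# All probabilities are arbitrary assumptions for illustration.
import random

random.seed(42)
DAYS = 10_000
P_CAT, P_BAD = 0.1, 0.1   # daily probabilities, drawn independently

cats = [random.random() < P_CAT for _ in range(DAYS)]
bad = [random.random() < P_BAD for _ in range(DAYS)]

# Count days on which bad luck follows a cat crossing within three days.
coincidences = sum(
    cats[i] and any(bad[i + 1:i + 4])
    for i in range(DAYS - 3)
)
print(coincidences, "cat-then-bad-luck coincidences, with zero causation")
```

Anyone reasoning post hoc from such a record would "discover" hundreds of confirmations of the superstition; the only cure is to compare against the base rate of bad luck on cat-free days, which here is identical by construction.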
Miscellaneous Fallacies
There are, as has been mentioned, many fallacies used to defend the faith. Limitations of space have allowed me to elaborate on only a few of them, however, and I can only catalog here a few of the remainder.
Begging the question. This assumes the truth of what is to be proved. It is pretty much the same thing as circular reasoning. A common example of this is the religious argument that "Everything that is in the Bible is true because the Bible is the font and source of all truth."
Argumentum ad baculum. This is an argument which appeals to force: "Support parochial schools with tax money, or else the bishop will close them all and dump the kids into the already inadequate public schools." "Believe on the Lord Jesus Christ and be saved—or you will be tortured for eternity in hell for your unbelief." "Become a Mormon and save your job at the gym."
Argumentum ad ignorantiam. This is the argument from ignorance (a particularly appropriate fallacy for religionists). A proposition is said to be true simply because it has not yet been proven false: "God does exist. No one has ever been able to prove he does not exist!" "Undetectable gremlins inhabit the rings of Saturn. I know this because you will never be able to disprove it."
Appeal to pity or other emotion. This is often very effective in raising money for one or another rip-off ministry. "Send your faith-dollars to (insert name of scam operation) to save the starving children in (insert name of place from which pathetic pictures have just been received)." This fallacy works by using emotion to overpower reason.
The complex question. Here there is a hidden question which is presumed to be already answered and is hidden inside another: "Have you stopped beating your wife?" "Why is it that Atheists such as evolutionists have never yet come up with a reason for human existence?" (Three hidden questions are to be found here!)
Various other techniques may be employed which are not really logical errors, but nevertheless may be extremely effective. Among these are the following:
• Simple falsification of data. Inventing imaginary "references."
• Merely making powerful assertions and then offering no proof—or saying one is offering proof but actually providing none.
• Taking references out of context. Quite often one may find, upon examining the original sources from which quotations are "lifted," that the original has material damaging to the proposition being "proven," but which is conveniently left out of the quotation. Often the original may have nothing to do with the question at hand.
• Exaggeration.
• Setting up a "Straw Man" and then knocking him down. The periodically recurring canard that Madalyn O'Hair is trying to get the FCC to ban all religious broadcasting often serves as a straw man which even the least resourceful preachers can bash to chaff—increasing thereby their resources.
• Changing the meaning of words as one proceeds through a lengthy argument.
***
For scientists and all other people whose lives are devoted to the quest for truth, the use of fallacious reasoning is a disability, an embarrassment, and something to be avoided at all cost. One need only turn on the radio or the television, however, or examine the sales statistics of creationist presses to see that for religionists, on the contrary, fallacy not only is a way of life, it's the way to "the good life." Shouldn't Atheists be doing a lot more to correct this sorry state of affairs?