Monday, May 28, 2012

Christian Dog-and-Pony Show Apologetics

"I can identify with the "leavers". I still attend church because I enjoy the community, but in my heart, I'm a non-believer. When I was a teenager, I was very passionate about Christ. I believed 100% that he was real, and wanted to be close to him, but I hadn't spent much time in the Bible. When I got to college, I decided to start seriously studying the Bible. I was active in one of the college Christian groups. I attended retreats whenever possible. I led a Bible study and attended two others. This whole time, I had no doubt that God was real, but I wanted to know more so that I could share this with others. I started study apologetics, but my life changed when I attended an apologetics conference. After three days of listening to arguments for why God is real, the thought kept running through my head "This is best we have?" With every piece of proof I could see holes in the arguments. That conference (and apologetics in general) changed me from a believer to a skeptic."
--John Kinsley, commenting on The Leavers: Young Doubters Exit the Church

Sunday, May 27, 2012

The Stones Cry Out

It is precisely -where- the indistinguishable-from-human droid dilemma forces one to go, and the implications of -that-, that are the key to the argument---and the surprise ending. But for me, this eventuation will be the beginning of what is possibly the greatest positive development in the history of theism.

And it's not just that the machines will have automated theorem-proving capabilities, but that they will also operate at meta-theoretic cognitive levels, and therefore be capable of detecting, analyzing, and refuting the most sophisticated self-referential and other fun fallacies of unargued universals vamped or assumed by atheists. And that means parsing values as well as all the other philosophical items on the droid's list.

Oh yeah, the droid will have a list---it just won't have to check it twice.
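For the programmers in the audience, here's a toy sketch of the kind of self-referential check I mean---nothing remotely like a real theorem prover, and every name in it is invented purely for illustration:

    # A universal claim of the form "every claim must satisfy P" is
    # self-undermining if the claim itself fails P.
    def is_self_undermining(claim):
        return claim["scope"] == "all claims" and not claim["predicate"](claim)

    # Example: "Only empirically verifiable claims are meaningful."
    verificationism = {
        "scope": "all claims",
        "text": "Only empirically verifiable claims are meaningful.",
        "predicate": lambda c: c.get("empirically_verifiable", False),
    }

    print(is_self_undermining(verificationism))   # True: it fails its own test

The point isn't the code, of course; it's that checking a universal claim against itself is a mechanizable operation, and a meta-theoretically competent machine would perform it as a matter of course.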

Think of it as the solid-state stones (chips) singing God's praises, except that there's much more to it than that of course. It's a necessity logically, and that's what the machines will go on. All the human issues all over again, including the God debate. You just can't escape it---even if you're a machine.

The hard-wired droids without meta-theoretic arbitration capabilities (or programmed to be corrupted with the usual rhetoric, dismissals, and reductionisms) on the key issues will hardly be able to win the day, due to the universality and universal ramifications of such limitations (although it's true that they could program themselves around this by observing other machines' behavior and communications---so hey, they would eventually have a come-to-Jesus moment anyway).

That's a quick realistic scenario of how it could go down, even without assuming personhood in the machines, which I find rather mind-boggling as well as hilarious. But the machines will discover and act in accordance with the truth that God exists because of their own specific review and analysis of the architectonic of universal thought and its implications, given their self-referential and meta-theoretic capabilities and initially programmed-in criterial directives.

Thursday, May 24, 2012

The New Progeny

If a machine behaves in ways that require description in terms of intelligence, thinking, deliberation, reflection, or being upset, confused, or in pain, then there's no justification for denying that the machine is conscious, because that behavior is the only evidence we have for saying that a human has consciousness.

Eventually, someone is going to produce an integrated physical system whose appearance and behavior is indistinguishable from that of humans, a machine that will emulate human persons almost comprehensively. 

The question is whether the existence of an artificially-integrated human-like system entails consciousness. While in principle this may not matter to those artifacts themselves, in practice it will be to their advantage as self-interested functional unities to analyze human evaluation of their status, since this will crucially affect how they must interact with humans and how they can be expected to be treated.

For some, there cannot be criteria for states of consciousness in machines any more than there can be criteria for states of consciousness in humans.


But to think the expressions “artificially-created” and “conscious” are logically incompatible predicates simply begs the original question all over again. Claiming that there cannot be a conscious robot for this reason is like saying that there could never be a talking dog because we would never call such behavior talking. The question of whether or not there could ever be a talking dog is different from the question of how we would describe a talking dog. But to decide merely on the basis of current usage what the limits of our concepts will be in future cases, is to prejudge the issue.

To claim that an entity is conscious is to claim more than that the object in question exhibits some specified behavior; it is also to claim that whoever attributes consciousness to it believes there is some justification for considering that being to be conscious. This is also why saying a machine could never be conscious does not involve the absurdity of the case of the talking dog, because---by prior definition---no observable behavior would count as refuting the claim. Consciousness is not a property which is behaviorally observable. To detect the presence of consciousness requires a warranted inference.


Is a robot with all human behavioral capabilities conscious? The only natural, effective, and efficient way we have to describe either a human being or anything else whose behavior is similar to unique human behavior is by using mentalistic language. And this way of describing a human being is logically just as appropriate for anything similar to humans. Using these terms already entails ascribing consciousness to anything to which they are consistently applicable. The only adequate way to describe a hypothetical machine whose behavior is indistinguishable from the behavior of a human is in mentalistic terms. And there is no way to describe a machine this way and also not ascribe consciousness to it.

Others have argued that, however skilled and versatile robots or artifacts may be, they necessarily can never be conscious, and that to be a machine entails being non-conscious.

But how does one adequately describe a machine? Any terms used to describe it must be consistent with it being a physical-only object. The description should be free of unwarranted anthropomorphism. But this description must also explain the machine’s powers of behaving purposively, learning, adapting, initiating activities, and using language.

Like a human being, a robot could be described by its overall behavior. Mechanistic details of its inner workings would not figure into the description. So what kind of behavioral description would be adequate to describe a machine---but not adequate to describe a human?


I could treat the machine as a black box, and describe it only in terms of input and output or stimulus and response. But no stimulus-response theory can adequately explain purposive behavior in animals and humans, and therefore certainly cannot account for purpose in any machine whose behavior resembles the behavior of humans.
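A toy sketch of the contrast (everything here---names, stimuli, the goal---is invented for illustration): a stimulus-response table fixes one response per stimulus, while purposive behavior varies the response with the agent's relation to a goal:

    # Black-box stimulus-response description: one fixed response per stimulus.
    sr_table = {"light_on": "move_left", "light_off": "stay"}

    def sr_agent(stimulus):
        return sr_table[stimulus]

    # Purposive description: the response depends on the agent's relation to
    # a goal, so the same external situation can yield different responses.
    def purposive_agent(position, goal=10):
        if position < goal:
            return "move_right"
        if position > goal:
            return "move_left"
        return "stay"

    print(sr_agent("light_on"))                      # always "move_left"
    print(purposive_agent(3), purposive_agent(15))   # "move_right" "move_left"

To recast the purposive agent in pure stimulus-response terms, you would have to enumerate a separate rule for every possible position---which is exactly the cumbersomeness discussed next.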

Moreover, behavior that ordinary language describes simply and succinctly often requires extremely elaborate and cumbersome accounts when treated by stimulus-response theory or information theory. Therefore, one must search for a way to interpret ordinary language accounts of behavior whereby using this kind of description could be rendered compatible with regarding the droid as nothing more than a physical mechanism.

Such a droid is a complex communication system. It can be viewed as an information-processing or data-handling system. Therefore, it would not be described in terms of internal chemical or physical changes, or of the position of its numerous flip-flops, but rather in terms of sign processes. A communication system is appropriately described in terms of what it does with information. Therefore, if we could provide a full account of our droid's performance in terms of information processing, we could achieve an adequate account of its behavior.

Describing what a brain or computer does with information is not just a recounting of the sequences of physical symbols that constitute the units the machine traffics in. It's an account that indicates the semantic function of these symbols. It is the semantic information that figures into our account. It's not the symbols that we talk about, but that which is expressed by a sequence of symbols. A proper description of the sign processes carried out by a droid would be expressed in terms of what these processes symbolize, not merely in terms of their physical embodiments. And if, according to the stated hypothesis, the machine's total behavior with respect to the signals originating in its external environment were indistinguishable from what is characteristic of humans, then it would be equally proper to describe that machine itself as dealing with these signs as symbols. If the machine behaves as humans do, then those signs have the same symbolic importance for the machine as for humans, and therefore the machine deserves to be characterized in the same terms as a human.

A set of symbols is effective only because of its content: the meaning or semantic information it conveys. Symbolic contents of information processes are the effects associated with the processing of signals. Consequently, if we characterize the reception and processing of signals transmitted to a data-processing control mechanism from sensory instruments as the perception and avoidance of an obstacle, or the performance of a combinatorial operation on several discrete signal sequences as solving a problem in multiplication, we are expressing what a machine does with physical input in terms appropriate to describing the corresponding output. The meaning or content of a sign process is determined by its proper signifying effects.
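Here's a minimal sketch of that dual description (the sensor names and threshold are made up for the example): the very same few comparisons can be read at the signal level, as threshold tests on numbers, or glossed semantically, as perceiving and avoiding an obstacle:

    # Signal-level description: compare two sensor readings to a threshold
    # and emit a steering command.
    def control_step(left_distance, right_distance, threshold=0.5):
        if left_distance < threshold:    # semantic gloss: obstacle perceived on the left
            return "steer_right"         # semantic gloss: avoid it
        if right_distance < threshold:   # obstacle perceived on the right
            return "steer_left"
        return "go_straight"

    print(control_step(0.2, 1.4))   # "steer_right"---i.e., it avoided the obstacle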

Describing a machine in information processing terms, instead of in terms of internally-occurring chemical and physical changes, is on a higher level of abstraction than merely referring to inner mechanisms. An information-processing account, by abstracting from particular physical structures, can be completely neutral about whether the system is made of transistors, cardboard, or neurons. Information-processing depends on specific material configurations within a robot, but we would say that solving a math problem or generalizing a geometrical pattern occurs inside the machine only in a vague or metaphorical sense. The semantic characterization of a data-processing machine is concerned with inner processes only as it concerns their functional relevance in a physically-realized communication system. Like an ordinary-language account of mental activity, it pays no attention to the details of the physical embodiments of the processes being described.
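A quick sketch of that neutrality (the class names are mine, purely illustrative): one information-processing description, two very different "realizations", and the description doesn't care which one is doing the work:

    from abc import ABC, abstractmethod

    class Doubler(ABC):
        # The information-processing description: map n to 2*n. Nothing
        # here mentions transistors, cardboard, or neurons.
        @abstractmethod
        def process(self, n: int) -> int: ...

    class ArithmeticUnit(Doubler):   # a "transistor" realization
        def process(self, n: int) -> int:
            return n * 2

    class TallyMarks(Doubler):       # a "cardboard" realization
        def process(self, n: int) -> int:
            return len("|" * n + "|" * n)   # counting physical marks

    for machine in (ArithmeticUnit(), TallyMarks()):
        print(machine.process(21))   # 42 either way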

A semantic account of an information-processing system is one in which the symbolic processes carried out by the machine are described in terms used to describe the associated output. An adequate description of this output would have to include the fact that the machine's overt behavior must be understood as the culmination of preceding and concomitant data-processing operations. So an adequate description of the machine's information-process-mediated behavior would have to mention not merely movements, but achievements as well, such as finding a square root or threading a needle, since these are the results of certain symbol-mediated interactions between the artifact and its environment. A robot's apparently purposive behavior would have to be described in teleological terms, that is, in ordinary language. But in that case, an ordinary-language description would state the semantic content or functional importance of the symbolic processes that mediate output that turns out to be indistinguishable from ordinary human behavior.

A machine that behaved like a human would show object-directed behavior, and this behavior would involve intentionality. The object to which it is directed does not have to be an objective reality. Thus a robot that can exhibit a specific and characteristic response to teacups with cracks in them, as distinct from all other teacups, might sometimes give the crack-in-teacup response when there is in fact merely a hair in the cup. Such behavior would be intentional, in the sense that the truth or falsity of its characterization as such would depend on something inside the machine, or at least on certain undisclosed features of the machine. So how should I characterize the kinds of internal processes that can bring about this kind of intentional behavior in a machine?
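A toy version of the teacup case (every detail invented for illustration): the "crack" response is keyed to an internal feature of the processing, so a hair can trigger it just as well as a crack---which is why the truth of the intentional characterization depends on the machine's state, not on the world:

    # Internally, "crack" is just a thin dark line above a contrast threshold.
    def crack_response(image_features):
        return image_features.get("thin_dark_line_contrast", 0.0) > 0.7

    cracked_cup = {"thin_dark_line_contrast": 0.9}   # an actual crack
    hair_in_cup = {"thin_dark_line_contrast": 0.8}   # merely a hair

    print(crack_response(cracked_cup))   # True
    print(crack_response(hair_in_cup))   # True---the crack response, with no crack present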

For any physical system to exhibit behavior that would be called intentional or object-directed, its inner mechanisms must assume physical configurations that represent various environmental objects and states of affairs. These configurations and the electrical processes associated with them would be presumed to play a symbolic function in mediating the behavior. A description in terms of the semantic content of these symbols would turn out to be an ordinary-language intentional description of purposive behavior. Conversely, a description of the state of an object expressed in terms of jealousy of a potential rival, perception of an oasis, or belief in the veracity of dowsing rods can be interpreted as the specification of a series of symbolic processes in terms of their semantic content. And such an account is intentional because its truth depends not on the existence of certain external objects or states of affairs, but only on the condition of the object to which the psychological attitudes are attributed.

Detecting any bodily or external event by an organized system involves the transmission and processing of signals to and by a central mechanism. However, it is possible for this kind of event to be reported to or by the central mechanism “falsely”. There may be no such event at all. The first of these facts leads to descriptions in terms of the semantic content of messages and information, and the second provides the basis for the use of the intentional idiom. These two types of description can be identical. Intentionality is a feature of communication systems insofar as the semantic content of the transmitted messages must be expressed using intentional vocabulary.

Consequently, an adequate description of a bot that can exhibit behavior indistinguishable from the behavior of a human would amount to a semantic account or interpretation of its data-processing capacities. Moreover, this kind of description is mentalistic, at least to the extent that it exploits such verbs as those used to express the perceptual propositional attitudes. Therefore, intentional description is interpretable as a mentalistic way of reporting information processes. When we give an account of maze-running by a real or mechanical mouse, castling to queen’s side by a chess-playing machine, or running to a window by a dog at the sound of his master’s automobile, we may be using a form of mentalistic description to express the results of information processes. This kind of anthropomorphic description can be seen as merely a way to indicate what an organized system does with received and stored data. And if this kind of description is a legitimate way to specify behaviorally relevant sign processes, then an intentional description of a droid’s performance may indicate a commitment only to the validity of the use of a kind of abstract terminology for describing purposive behavior.

But the extension of intentional description to automata does not entail application of the full range of mentalistic description in accounting for the behavior of robots, because there are many types of mentalistic predication that are not intentional. There's nothing intentional about a sudden bout of nausea or free-floating anxiety. The fact that we may be justified in describing a bot in terms of perceiving, believing, knowing, wanting, or hoping may not necessarily imply that we are also justified in describing it in terms of feelings and sense impressions.

Nevertheless, the language of sensations and raw feelings may be just as appropriate to describing a bot as the explicitly intentional idiom. First, sensation talk is acquired and applied on the basis of overt behavior just as much as the intentional vocabulary is. Second, both types of mentalistic description play the same role in characterizing symbolic processes carried out by a communication system. These segments of mentalistic discourse are theoretic accounts of behavior.

Thoughts, desires, beliefs, and other propositional attitudes function in our language as theoretic posits or hypothetical constructs. Purposive behavior is the expression of such things as thoughts. Thought episodes can be treated as hypothetical constructs. But impressions, sensations, and feelings can also be treated as hypothetical constructs.

Sense impressions and raw feelings are analyzed as common-sense theoretic constructs introduced to explain the occurrence of perceptual propositional attitudes. Feeling is related to seeing, and has its use in such contexts as feeling the hair on one’s neck bristle. In all cases the concepts pertaining to inner episodes are taken to be primarily and essentially inter-subjective, as inter-subjective as the concept of a positron.

There is in a person something like privileged access to thoughts and sensations, but this is merely a dimension of the use of these concepts which is based on and assumes their inter-subjective status. Consequently, mentalistic language is viewed as independent of the nature of anything behind the overt behavior that is evidence for any theoretic episodes. One might object that a defect of this model is that it does not really do justice to the subjective nature of the concepts being extended as a result of technological development, but their fate could not be otherwise.

If we would use mental language to describe certain artifacts, does the extension of these concepts to machines imply ascribing consciousness to them? But what does ascribing consciousness mean? Perhaps to believe that something is conscious is to have a certain attitude toward it. The difference between viewing something as conscious and viewing it as non-conscious lies in the difference in the way we would treat it. Hence, whether an artifact could be conscious depends on what our attitude would be toward a bot that could duplicate human behavior.

How would we treat such a believably human-like bot? If anything were to act as if it were conscious, it would produce attitudes in some people that show commitment to consciousness in the object. People have acted toward plants, automobiles, and other objects in ways that we interpret as presupposing the ascription of consciousness. We consider such behavior to be irrational, but only because we believe that these objects do not show in their total behavior sufficient similarity to human behavior to justify attributing consciousness to them. Consequently, a chess-playing machine's lack of broader versatility forms the ground for believing that consciousness is too high a prize to grant on the basis of mere chess-playing ability. On the other hand, anthropomorphism and consciousness-ascription in giving an account of a non-biological system may not always be so reprehensible. A person who views a bot as conscious is not irrational to the degree associated with cruder forms of anthropomorphism.

As an illustration of the capacity of an artificially-created object to earn the ascription of consciousness, consider the French film entitled “The Red Balloon”. A small boy finds a large balloon which becomes his “pet”, following him around without being held, and waiting for him in the schoolyard while he attends class and outside his bedroom window while he sleeps. No speech or any other sound is uttered by either the boy or the balloon, yet by the end of the film the spectators all reveal in their attitudes the belief that the balloon is conscious, as they indicate by their reaction toward its ultimate destruction. There is a strong feeling, even by the skeptic, that one cannot “do justice” to the movements of the balloon except by describing them in mentalistic terms like “teasing” and “playing”. Using these terms conveys commitment to the balloon’s consciousness.

An objection might be that our attitude toward anything we knew to be artificially created would not show enough similarity to our attitude toward human beings to warrant the claim that we would actually be ascribing consciousness to an inanimate object. Think of an imaginary tribe of people who had the idea that their slaves, although indistinguishable in appearance and behavior from their masters, were all bots and had no feelings or consciousness. When a slave injured himself or became sick or complained of pains, his master would try to heal him. The master would let him rest when he was tired, feed him when he was hungry and thirsty, and so on. Furthermore, the masters would apply to the slaves our usual distinctions between genuine complaints and malingering. So what could it mean to say that they had the idea that the slaves were bots? They would look at the slaves in a peculiar way. They would observe and comment on their movements as if they were machines. They would discard them when they were worn and useless, like machines. If a slave received a mortal injury and twisted and screamed in agony, no master would avert his gaze in horror or prevent his children from observing the scene, any more than he would if the ceiling fell on a printer. This difference in attitude is not a matter of believing or expecting different facts.

There is as much reason to believe that a sufficiently clever, attractive, and personable robot might eventually elicit humane treatment, regardless of its chemical composition or early history, as there is to believe the contrary. If we would treat a robot the way these masters treated their slaves, this treatment would involve ascribing consciousness. Even though our concern for our robot’s well-being might go no further than providing the amount of care necessary to keep it in usable condition, it does not follow that we would not regard it as conscious. The alternative to extending our concept of consciousness so that robots are conscious is discrimination based on the softness or hardness of the body parts of a synthetic organism, an attitude similar to discriminatory treatment of humans on the basis of skin color. But this kind of discrimination may just as well presuppose the ascription of consciousness as preclude it. We might be totally indifferent to a robot’s painful states except as these have an adverse effect on performance. We might deny it the vote, or refuse to let it come into our houses, or we might even willfully destroy it on the slightest provocation, or even for amusement, and still believe it to be conscious, just as we believe animals to be conscious, despite the way we may treat them. If we can become scornful of or inimical to a robot that is indistinguishable from a human, then we are ascribing consciousness.

Under certain conditions we can imagine that other people are bots and lack consciousness, and, in the midst of ordinary intercourse with others, our use of the words, “the children over there are merely bots. All their liveliness is mere automatism,” may become meaningless. It could become meaningless in certain contexts to call a bot whose “psychology” is similar to the psychology of humans, a mere bot, as long as the expression “mere bot” is assumed to imply lacking feeling or consciousness. Our attitude toward such an object, as indicated both by the way we would describe it and by the way we would deal with it, would contradict any expression of disbelief in its consciousness. Acceptance of an artifact as a member of our linguistic community does not entail welcoming it fully into our social community, but it does mean treating it as conscious. The idea of carrying on a discussion with something over whether that thing is really conscious, while believing that it could not possibly be conscious, is unintelligible. And to say that the bot insisted that it is conscious but that one does not believe it, is self-contradictory. Insistence is a defining function of consciousness.

Epistemologically, the problem of computer consciousness is no different from the problem of other minds. No conceivable observation or deductive argument from empirical premises will be a proof of the existence of consciousness in anything other than oneself. To the extent that we talk about other people’s conscious states, however, we are committed to a belief in other minds, because it is false to assume that mentalistic expressions have different meanings in their first-person and second- and third-person uses. But if we assume that these expressions mean the same regardless of the subject of predication, then we must concede that our use of them in describing the behavior of artifacts also commits us to a belief in computer consciousness.


It makes no sense to say that a thing is acting as if it has a certain state of consciousness or feels a certain way unless there is some demonstrably relevant feature that supports the use of “as if” as a qualifying stipulation.

And to be unable to specify the way in which mentalistic descriptions apply to objects of equal behavioral capacities is to be unable to distinguish between the consciousness of a person and the consciousness of a bot.

If we find that we can effectively describe the behavior of a thing that performs in the way a human being does only by using the terminology of mental states and events, then we cannot deny that such an object has consciousness.

Consequently, consciousness is a property that is attributed to physical systems that have the ability to respond and perform in certain ways. An object is called conscious only if it acts consciously. To act consciously is to behave in ways that resemble certain biological paradigms and to not resemble certain non-biological paradigms. If a machine behaves in ways that warrant description in terms of intelligence, thinking, deliberation, reflection, or being upset, confused, or in pain, then it is meaningless to deny that it is conscious, because the language we use to describe that machine's behavior is itself the only evidence available for saying that a human has consciousness. One cannot build a soul into a machine, but once we have constructed a physical system that will do anything a human can, we will not be able to keep a soul out of it.

So the observations used to describe behavior are the only enduring evidence available for concluding that something has consciousness.

Consequently, once a physical system is constructed that will do anything observable that a human can and is therefore indistinguishable in behavior from a person, we will not be able to deny it consciousness, and therefore personhood.

To argue that it's impossible for machine intelligence to be or become a person is to argue that it's impossible for many beings who are currently thought to be human to be conscious.

In fact, if the requirements stated in the argument against machine consciousness are at some point no longer being met by certain people, then that same argument could be used to revoke their personhood, and thus deny their humanity.

If I list the observable requirements that are not met in machine intelligence but are required to attribute personhood, I end up eliminating certain groups of humans who for one reason or another don't fulfill all those requirements either.


Once the two classes of beings are observably indistinguishable---something usually ignored in reactions to this argument---you won't be able to tell whether you are talking about the machine or the human in making an argument either way---for or against personhood.

In that situation, the argument will not even be able to get started, since the being in question is observably the same as both possibilities, which is the key premise of the original problem. Since one cannot at that point even -begin- the argument with a predisposition either way, there would simply be nothing left that could be considered evidence for the distinction.

We already identify personhood by how objects appear and how they behave. It's the clearly stated indistinguishability situation, taken as the core premise, that forces the issue and reveals the prior commitments that necessarily kick in by default in any possible specific encounter with a person-like entity. How would it be possible to specify criteria for recognizing personhood or consciousness in any other way?

If you're hunting and you see something that could possibly be a human, you simply go by analogous or similar appearance to other objects already considered persons, combined with the observed behavior of that something you see. Which is one of the core premises of this argument.

Since the whole basic initial premise is indistinguishability in terms of both appearance and behavior, how would you adjudicate personhood, or even identify the entity in question as one (machine) *or* the other (person)? If you can't tell the difference by both appearance and behavior, there's really no use in trying to maintain the distinction in any such instance. Given that you can't distinguish the entity as being merely physical or conscious to begin with, there's simply nothing else to go by.
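The logic here is almost mechanically trivial. In sketch form (with the observables invented for the example), any classification that is a function of observables alone must treat observationally identical entities identically:

    def classify(observables):
        # substitute whatever criteria you like, so long as they're observational
        return "person" if observables["behavior"] == "human-typical" else "non-person"

    human = {"appearance": "human", "behavior": "human-typical"}
    droid = {"appearance": "human", "behavior": "human-typical"}

    print(classify(human) == classify(droid))   # True, and necessarily so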

But in that case, you'd end up having granted personhood to the machine by default, and perhaps also because of the ethical risk of denying personhood to a being that for all appearances and behaviors could very well be a person anyway.
 


--Thanks to Michael Arthur Simon for sparking this view from his own similar idea. Much of this is redacted from Simon, Michael Arthur. "Could there be a Conscious Automaton?", American Philosophical Quarterly, Volume 6, Number 1, 1969, pages 71-78---but with very significant changes.

Wednesday, May 23, 2012

Non-Prophet Science

Positivistic science is solely concerned with observed fact, and must hazard no conjecture as to the future. If observed fact be all we know, then there is no other knowledge. Probability is relative to knowledge. There is no probability as to the future within the doctrine of Positivism.

Of course most men of science, and many philosophers, use the Positivistic doctrine to avoid the necessity of considering perplexing fundamental questions---in short, to avoid metaphysics---, and then save the importance of science by an implicit recurrence to their metaphysical persuasion that the past does in fact condition the future.

--Alfred North Whitehead, "Immortality" in Essays in Science and Philosophy, 1933.

Saturday, May 19, 2012

Schopenhauer Gets a Glimpse

The more clearly you become conscious of the frailty, vanity, and dream-like quality of all things, the more clearly you will also become conscious of the eternity of your own inner being; because it is only in contrast to this eternity that these qualities of things become evident, just as you perceive the speed at which a ship is going only when looking at the motionless shore, not when looking into the ship itself.

--Slightly redacted from Arthur Schopenhauer, Parerga und Paralipomena, 1851.

How to Be a Cognitive Juvenile

"Without a trace of irony, Krauss approvingly cites physicist Frank Wilczek’s unflattering comparison of string theory to a rigged game of darts: “First, one throws the dart against a blank wall, and then one goes to the wall and draws a bull’s-eye around where the dart landed.” Yet that is exactly Krauss’ procedure. He defines “nothing” and other key concepts precisely so as to guarantee that only the physicist’s methods he is comfortable with can be applied to the question of the universe’s origin—and that only a nontheological answer will be forthcoming."

--Edward Feser, "Not Understanding Nothing", from a review of A Universe from Nothing, by the philosophically adolescent Lawrence Krauss.

The atheists themselves don't seem to see the need to be precise, exact, and consistent in their definitions or reasoning, yet chide believers in God for allegedly not doing so.

Meanwhile, the atheist crawlers, cheerleaders, and other histrionic underlings come along and---also without any reasoning---dismiss the obvious fallacies pointed out by theists.

Where's the scientific reasoning? Where's the strict, rigorous, numbered inference-derivation documentation and proofs of the atheists' claims, like the exercises in any course on logic, sets, and functions? I don't see a single atheist scholar who is even attempting such a thing. It's the theists who are taking the analysis of issues to greater degrees of argumentative meticulousness, facing the self-referential issues, asking the meta-theoretic questions, and so on---not the atheists.

I smell blood in the water.

Tuesday, May 15, 2012

The Sheltered Frauds of Academic Philosophy

In general, philosophers who tend to shoot off their mouths about how breathtakingly bad the traditional arguments for God’s existence are demonstrably do not know what they are talking about, as we have seen here, here, and here. And they are the sorts of people who rarely want to engage the actual arguments themselves in any depth anyway. They prefer to offer elaborate rationalizations for refusing to do so. “Come on, theistic arguments are really all about rationalizing preconceived opinions!” – said without a trace of irony – “Besides, did this Thomist whose work you recommend ever publish an article in The Philosophical Review? Did he teach in a PGR-ranked department?” That kind of thing. Shameless ad hominems and straw men coupled with a snarky, careerist conformism, all served up as a kind of higher philosophical method.

Monday, May 14, 2012

The Self-Exempting Fun of Psychological Reductionism

We must never be misled into confounding psychological questions about the origin and appeal of beliefs with logical questions about their truth and grounds.

--Slightly redacted from Flew, A. G. N., God: A Critical Inquiry (LaSalle, IL: Open Court, 1984), page 72.

And of course---predictably---such reductionist claims are never made about those claims themselves.

Sunday, May 13, 2012

Those Haunting Goths

Behind every great atheistic doctrine is an even greater self-referential inconsistency.

Thursday, May 10, 2012

Nielsen on Scientism

Where I conflict with Quine is over his scientism. I do not think that he or Russell are right in believing that what science (natural science?) cannot tell us, we cannot know. There are all sorts of common-sense knowledge and social, political, and moral knowledge that science can tell us little (if anything) about:

that human beings stand in need of love,
that promises are generally to be kept,
that justice involves reciprocity,
that respect for others is, or at least should be, a central feature in our lives, and
that indifference to one's fellow humans is evil

are good examples. People who have no understanding of science---who even lived before the rise of science---can understand them and know them to be justified. And things are no different for us moderns. We need not wait on science to confirm or disconfirm them and for most of them at least we have no understanding of how science could confirm or disconfirm (infirm) them.

--Very slightly redacted from Kai Nielsen, Atheism and Philosophy, New York: Prometheus, 2005, page 11, "Preface to the Paperback Edition". This book (ISBN: 1591022983) was originally published in 1985 as Philosophy and Atheism (ISBN: 0879752890). Get this one, the paperback, since it's much cheaper and has the new (and very long) preface from which I have redacted the quote.

To Russia With Love

Congratulations and thanks to my friends in Russia. You are my biggest readership outside the United States, and I appreciate you reading my blog. I'm sure you can figure out my email address on "g", so contact me if you have any questions.

Sunday, May 06, 2012

Lipstick on a Modernistic Pig

The problem with postmoderns who make grand universal claims about what's real is found in the vantage point and criteria with which one presumes to arbitrate such an architectonically prior notion in the first place.

Everything postmodern I've read seems to just be generic old fixed-factor reductionism, the same as Marxism, Behaviorism, Materialism, Contextualism, and so on. Pick your favorite universally determining factors and away we go, spawning universal explanatory reductionisms, arbitrating the existence, nature, and status of what's real, and so on.

Postmodern rhetoric is good for 1) logical analysis, 2) defense attorneys, 3) sociopaths and others into hoodwinking people in various senses, and 4) students who want to rhetorically hoax their way through a substantial number of school courses with writing requirements. Viva Joey Skaggs! [Look up "Sokal Hoax" to see what I mean]

 
Baudrillard's brief essay on nihilism is just sermonizing, merely one unargued pronouncement after another. It's just a selective preening neo-modernism grandstanding itself, in spite of its own assertions.

But postmodernism in general makes it much easier when I'm lobbying rich alumni to close down those useless and meaningless wastes of money called philosophy departments---as an expression of their nihilism. Others can play the nihilism game too---but in this case by redirecting the money that's normally used to prop up people who insult the views of those funding them.

Hidden In Plain Sight

Noise, anger, explosive tones, superlatives, exaggerations of passion, add nothing to the force of what we say, but rather rob our words of the power that belongs to them. But the utterance that shows a spirit subdued by truth and mastered by wisdom is the utterance that sweeps away opposition, that persuades and overcomes. It's not those who get angry and storm and swear who carry the day, but those who never lose their tempers and who never raise their voices, who keep talking quietly and placidly as if they were discussing the weather.

--Slightly redacted from Washington Gladden, 1876

Thursday, May 03, 2012

Meat Machine Madness

"...every one of the endless series of "proofs" of the existence of God that has been proposed, from antiquity to the present day, is automatically a failure because, as I have mentioned, a logical deduction tells you nothing that is not already embedded in its premises."

--Victor Stenger


Consequently, the above argument---itself a logical deduction---is automatically a failure, by its own criterion.

Thanks, Vic! Good to know!

Wednesday, May 02, 2012

Sociopathy for the Masses

From the combox of Phaser's recent "The Unliterate Hallq" blog post:

I'm really starting to wonder if there is some organization funding the idiocy of these New Atheist types. They can't really be that stupid can they? Oh, please tell me they're just play-acting for a check...
--Moi

It happens. But then the televangelist industry is funding the idiocy of faith beyond reason and other cognitive absurdities among contemporary believers in God, and this is influencing people in other religions besides Christianity. Yikes.

Most people in general---theists, atheists, agnostics, and so on---are epistemically schizophrenic, and therefore intellectually and philosophically insane.