Sunday, January 14, 2018

Where Atheist and Theist Agree (to ignore the white elephant)

That mutual darling of believers and nonbelievers is possibly the most blatant logical mistake in the entire history of human thinking.

Childish, peevish, immature, coy, naive, bluffing, and the lazy person’s stargate to lifelong misery. It’s the unjustified, rarely questioned, merely assumed existence or reality of evil, accepted by people of all persuasions about God. Hey, it’s the so-called problem of evil.

Although, as you’ll soon see, the only real problem of evil is the avoidance of questions about what it means and what it assumes.

As Schopenhauer said about pantheism: you don't add anything to the world by calling it God. And you don't add anything to dislike by calling it evil.

To recognize anything as evil, bad, or negative in any sense beyond human dislike requires a problem-free standard of goodness to contrast with: a standard that justifies evil's claimed reality and gives it meaning and recognizability as having a status beyond that dislike, however extreme, exceptionless, absolute, and dedicated that dislike might be on its own among the people doing the disliking.

Any claim that there is some kind of problem of evil steals its meaning through this lack of up-front clarity about the meaning of the word, and without drawing attention to what it’s doing. It has to assume there is no problem in order to claim there’s a reliably identifiable problem of something called Evil.

But Evil can be recognized as Evil only in the light of a contrasting already-existing problem-free idea of goodness that gives it its notoriety as something that is somehow worse than the fact that people don't like it.

Without some concept of perfect goodness or goodness per se, you don't get to add the dramatic "evil" label to the fact that everyone dislikes it, and get out of that anything more than the fact that everyone dislikes it.

So the whole argument for the problem of evil is definitionally dependent, and contradicts its own intended conclusion by implicitly using goodness (the negation of the conclusion trying to be proved: that there is no such goodness) as an unstated premise.

It's something you do when you need evil so much, and have no basis for asserting it, that you're willing to steal its criterion of meaning from the concept of ultimate perfect goodness to even get to the first step of knowing that anything is evil in the first place.

The problem of evil is not an objection to the good at all. What makes something evil?

The problem of evil already assumes perfect goodness in asserting the recognizable existence of evil in the first place.

Wednesday, October 01, 2014

Mind-Like Inference Engine Discovered Running In All Minds


It can't be called a mere system of control statements forever.

Analysis of the issue of whether there is some kind of infinite, ultimate being that is a person or sentient object has not really progressed in its radically basic core issues, namely the nature, status, authority, and justification of the system of standards for that analysis itself and of all other self-referring universal statements, except among a few Thomistic scholars.

Any being, entity, or object using these control statements as a single, universal, unquestionable integrated cognitive system, is necessarily an ultimate mind or person.

No being can be recognized as a person in the first place, without using that system of statements to dictate the mind's behavior in that process of analysis itself. Recognition means to re-cognize.

Even quantum theorizing already assumes logical authority over the quantum domain.

Once again, as always, self-reference and criteria win the day.

Ah, the pleasures of logically God-level standards of analysis.

Tuesday, July 08, 2014

Using Abstract Objects to Deny Their Reality


Abstract Objects Are Merely Useful Fictions, This I Know.
For The Abstract Objects Tell Me So!

Whenever someone argues that abstract objects don't really exist, remember that they are USING abstract objects as their intellectual and logical authority to adjudicate the reality of those same objects!

I wonder how abstract objects got that kind of supervisory authority to arbitrate their own existence, when they don't even really exist themselves!

If you clearly understand this post, it's an easy analogy to understanding why rationalist-objectivist atheism is necessarily theistic about reason and logic, as we all are in reasoning about universals---including the universals that make up general reason and logic themselves.

The boot-strapping problem of abstract objects being used to ontologically self-adjudicate still remains, and I haven't seen anything yet that even mentions it---much less actually deals with it.

Abstract objects and universals are necessarily real, the sine qua nons of all possible knowledge of contingent reals. If they are necessary for adjudicating ontological status, they themselves must have an even higher supervisory ontological status, which means they are *necessarily* more real than any other objects we ordinarily take as real. There's simply no way around this without ending up in the old self-referential cul-de-sac.

Objector: I'm not sold on the necessity of abstract objects, since I think that necessarily existing entities existing independently of God impugn the doctrines of divine aseity and creation ex nihilo. However, I'll throw in with the Thomists and say it's clear that universals located in the divine intellect are necessary and exist necessarily.

Response: They are all necessarily components of the divine mind, and their absolute inescapable necessity is what guarantees this. Knowledge of divine aseity itself necessarily depends logically on them.

If you examine closely what you mean by universals, abstract objects, irreducibly basic categories, and so on---that examination itself would not be possible without those same categories already operating in advance at the highest cognitive levels.

As Boyle proved, Thomistic metaphysics is necessarily self-referential metaphysics at its logical base. Russell's prohibition of self-referencing statements is an instance of what it prohibits, and so on. It's all about self-reference and ultimate universal criteria and standards of analysis, which are already there logically prior to any actual analysis of anything including reasoning about current preferences.

That system is already in place and we're always trying to approximate it in some sense and degree, even regardless of prior misses of goals.

The bottom line is that if you give reasons for God, those reasons are already assumed to have an ultimate God-level authority in order to adjudicate the knowledge and claims about the possible reality of God.

Why does faith have to be reasonable or smart or even plausible? Why does it still try to be rational about what it declares itself to be exempt from? Why must there be any intellectual defense at all? If reason is not God-level already, why must theistic or Christian belief have reason and logic's seal of approval in the first place? The entire discussion is a submission to reason's authority, whether one is a believer or an atheist.

In fact, the whole issue about the relationship between faith and reason is itself just one big cognitive worship-fest dedicated to reason.

Artwork: Until Now by Ralph Hertle. Available at:
http://www.bluestardesign.us/

Tuesday, June 10, 2014

The Criterial Argument for the Existence of God V 5


Certain assumptions are logically basic criteria of all thought. They are the standards of analysis for any possible mind.
 

Therefore, these background standards of analysis are necessary for recognizing and knowing that certain objects of our experience are minds or persons.
 

But only a mind or person using those standards can recognize and know whether any object is a mind or person, and to use these standards for this purpose is to imbue them with mind- and person-determining powers.
 

Therefore, the ultimate standards of thought are necessarily used to determine whether or not an object is a mind or person.
 

And relations between these assumptions and the objects in question (minds and persons) are themselves objects that can be predicated only by a mind.

Therefore the ultimate criteria of thought are indistinguishable from an ultimate truth-determining mind or person, since they do nothing of themselves but only authorize and supervise the evaluation of truth claims due to their necessarily-assumed authority and guidance.
 

Therefore, these criterial assumptions are necessary for recognizing and knowing that certain objects of our experience are minds or persons.
 

Any argued denial of the necessary universality and logical authority of this system of assumptions logically depends on those same assumptions for its own truth, meaningfulness, significance, goodness or value, and so on.
 

Therefore, this system of assumptions necessarily adjudicates all truth claims about everything including itself, as well as their denials.
 

Therefore, this system of assumptions is omniscient as the truth-evaluating instrument of all knowledge, ultimately authoritative as the final court of appeal, sovereign as the universally decisive inferential factor, omnipresent in its physically universal applicability, and transcendent in being perfectly functional at any level of supervisory authority over all issues concerning all domains of predication.
 

Furthermore, these assumptions are the specifying standards for defining everything including minds or persons and God.

Consequently, this system of necessary and logically basic assumptions is as ultimate and as mind-like or person-like as any personal ultimate God could conceivably be.

Treating this aggregate mind-structured object as a reality-wide guide in all thinking about everything is therefore unavoidably necessary, even in reasoned denials that this object has that status as an ultimate universal ruling system factor.
 

To proceed in thinking at all, we must approximate whatever reason is always indicating as the perfection standard of thought.
 

Moreover, there is no controversy about the ultimate authority of what this standard or specification reveals to me, even if I don't live up to it, or perfectly actualize the rational ideal in some way.

Those actions are what they are only when judged by that same rational ideal.
 

Any contemplation of these ultimate assumptions of mind such as reason, formal logic, the rule-set of an ordered context of reality, a hierarchy of values, and the obligation to proceed according to a system of rules---all methodological primitives---results in an endless stream of new knowledge when applied to our ongoing experience of the world.
 

Consequently, these ultimate decisive rules and ideals of thought actually communicate knowledge and even wisdom by merely contemplating them and their relationship to our belief systems and our world of objects.
 

The fact that we must reference these principles implies an equally ultimate purpose.
 

And an ultimate purpose necessarily depends on a hierarchical set of equally ultimate values.

This system of assumptions is a unified instrument and object of cognition, which necessarily obligates, defines, and influences the mind as the ultimate operating system for thinking about anything.


Consequently, all thinking already necessarily both assumes and references an unchanging and enduring God-level personal mind object made up of prescriptive criterial evaluative principles of thought taken together as a system for the possibility of thinking, that adjudicates everything including mind and personhood themselves, and makes possible inquiry into anything and everything that can be thought.
 

Therefore, in all defining senses, this comprehensive mind object is indistinguishable from an ultimate personal mind or God.
 

The rationally necessary is necessarily the existentially real.

Any argument denying this is self-contradictory in trying to rationally necessitate its own truth about the existentially real in spite of what that argument asserts.
 

And if two objects are indistinguishable from each other with respect to all of their properties, then they are identical.

Therefore, this comprehensive mind object---this necessarily operating rational ideal system of thought---is itself an ultimate personal mind or God.


[NOTE: The only problem I see with this argument is in what constitutes a person. The ultimate nature, authority, and role of reason is indisputable.]

Friday, March 14, 2014

System, Grace, and Entropy

Brand Blanshard
There is no will strong enough to stay focused on anything completely, without it having some importance. And when the purpose is important, and we are completely focused on it, and what we are aware of both directs and constantly aims at that purpose against hindrances---our work is effortless.

--Heavily redacted from Brand Blanshard, 1939, The Nature of Thought, Volume 1, pages 208-209.

Wednesday, February 26, 2014

When Theistic Pigs Will No Longer Fly



If it weren't for the criterial argument, which I generalized from its moral corollary, the moral criteria argument, which in turn is derived from Kai Nielsen's Independent Moral Criterion Argument, I would not even consider belief in God.

I would instead simply believe in some kind of quantum naturalistically transcendent reality in the logically prior system of general reason, formal logic and a necessary hierarchy of values in view of motives, goals, and the necessary value assumptions of thought.

So I have Kai Nielsen, the greatest atheist philosopher to date, to thank for issuing the challenge that forces a clarification of the case for personhood in an ultimate being, even though it never challenged the fact of this personhood in the criterial argument, only its exact anthropomorphic nature. And even deeper, it's a case of self-referential inconsistencies galore. The criterial argument bypasses all except the personhood issue. The concept of personhood must be developed and it's fascinating, but it's not in itself a problem for the existence of God. Personhood is already assumed in any discussion of it, as well as the criteria for any such discussion. So that's a clue to how I work out the concept of person, and just another reason why the justification of self-referential refutation is so important. Metaphysics must be based on self-reference considerations, if for no other reason than the fact that those considerations are where any discussion of it will wind up eventually, regardless of starting point.


After several years of being stuck, tonight I finally figured out what will crush Nielsen's argument for the incoherence of the concept of God. I mean, he begs some questions, but it's still a great and powerful argument, and it causes conniption fits in almost all believers, who will gladly commit T. S. Eliot's Greatest Treason if it means not having to read anything or come to grips with opposing arguments.

Tuesday, February 25, 2014

Persuadability Fatigue

I do suspect that both the Kalam and Aquinas's 2nd Way arguments are successful. But they are not widely persuasive, and both are bogged down bigtime in various issues, both empirical and theoretic. Both face major infinite-series issues and the issue of crossing over from cause to person. Kalam is also heavily involved in questions about nothingness, beginningness, quantum theory, multiverses, time itself, and so on, while Aquinas's 2nd Way has only the problem of simultaneity in causation, plus a foundational metaphysical problem: it assumes but does not prove that any tendency of any object is directed by intelligence. If Thomists can prove that, I think Thomistic metaphysics is successful and has tremendous implications for philosophy of science and even science itself. But for both arguments, the infinite-series and personhood issues by themselves are major obstacles both to satisfactory certainty of the truth of God's existence on the part of believers and to culture-wide persuasive efficacy.

The criteria argument is the only thing that could possibly counter the current and increasing skepticism toward belief in God. The world is already suffering persuadability fatigue from the standard arguments, evangelicalism, and the parroting of bad arguments by all kinds of apologists who stay insulated from sophisticated atheistic arguments that are persuading the leaders of the coming generation. And the good arguments are so hazy and complicated in their cross-examinations for the vast majority of people, even the most educated, that only a more direct systemic philosophy-of-logic approach could possibly stop or reverse the trend. But no one is holding their breath any more.

Tuesday, September 17, 2013

Wilbur Urban Destroyed Naturalism in 1929

Naturalists are not going to be able to avoid these simple questions forever. But criminal defense attorneys should take special note! There's gold for the criminal mind in them thar reductionisms!

Whether one is talking about materialism or naturalism, what counts against them is the same: Self-referential inconsistency, arbitrary self-exemption, self-reduction, the necessarily-exempt standards of analysis themselves, and most importantly: the attribute "true" in relation to the comprehensively determining factors specified by those theories themselves.

As Wilbur Urban argued with regard to naturalism, if the naturalist thesis is taken as an account of all knowledge, then that thesis itself cannot claim to be true. It can only claim to be a product of its own posited universal explanatory factors.

According to naturalism, the truth of the naturalist account itself, like every other item of knowledge, is merely the function of the adjustment of the organism to its environment. Therefore, the truth of the naturalist account has no more importance than any other adjustment except for its possible survival value.

But the general principle applies to all reductive, fixed-factor, universal theories. There's simply no way for those theories themselves to break out of their respective explaining/determining factors and be considered true in addition to being themselves merely the product of those factors. There's no remainder because that's what a reduction gets rid of.

Key questions to ask are: When do we get to add the label "true" on top of the explanatory/determining factors of these kinds of reductive theories? What are the criteria? And how can materialists and naturalists criticize theism, when theism too is just as legitimately explained and determined by those same factors as the theories which specify them as all-determining?

Urban's writings were a major influence on Stuart Hackett (who Norman Geisler once told me personally was in his opinion the world's greatest living Christian philosopher), and reading just the first few mind-halting pages of Language and Reality will clearly show why---as well as blow your mind forever.

Principal works:
The Intelligible World. Allen and Unwin, 1929.
Language and Reality. Allen and Unwin, 1939.
Beyond Realism and Idealism. Allen and Unwin, 1949.
Humanity and Deity. Allen and Unwin, 1951.

Tuesday, September 10, 2013

Where Atheist and Theist Agree: The Crypto-Theism of Reason

The set of rational standards for analyzing the issue of God's existence---is already itself a God-level integrated system of ultimate authoritative universal rules and relations.

If you give reasons either way---for atheism or belief that God exists---either those reasons, or whatever principles justify those reasons, are already the God-level, root-access mind-governing system indicating what you ought to believe: a higher-level set of claims that work together and “tell you” the ground rules and whether or not conclusions are true.

That statement-evaluating system functions as an invisible cognitive friend, and is indistinguishable from a real one that might come along.


And this is empirically verifiable. Merely chronicle for yourself how people justify their belief or non-belief or disbelief in the existence of God.

In other words, to think rationally at all, is to already function according to an ultimate ideality or even god of thought---depending on how you construe personhood. One cannot really argue against this ideal system without thereby using that same system as the ideal for guiding that case-building logical process itself.


It doesn't have to be identical in details in all minds for this point to be true. It just has to be true about some necessary core of rules, identities, and other relations. Necessarily true of necessary statements.


Denial here tries to do what it says this kind of system theory cannot do.

Thursday, May 09, 2013

Ayer's Nightmare: The Self-Referential Algorithm of Deception

How can one claim that any of the following theories themselves are true, when by their own assertions truth is merely the cognitive product of the comprehensively explaining-determining factors that those theories specify?

Is the belief that naturalism is true itself completely determined by natural causes and laws, merely the function of our adjustment as organisms to our environment?

If physical matter is the only reality, how can materialism itself be true, in addition to being merely a physical object or merely a function of physical objects?

Is relativism itself relative?

Is social constructivism itself merely a social construct?

Is subjectivism itself subjective?

Is Marxism itself merely an economically determined set of brain actions?

Is behaviorism itself merely an observable and quantifiable product of environmental conditioning?

Is psychologism itself merely the product of psychological factors?

Is skepticism itself and its challenges and requirements as uncertain and unknowable as all the other items of possible knowledge it denies?

Does empiricism itself have any empirical evidence or sense experience that justifies believing it?

Is existentialism itself unexplainable and absurd?

Is idealism itself a mere mental construct about alleged objects of external perception?

Is logical positivism itself meaningless because it can't be logically analyzed into elementary tautologies or empirically verifiable statements?

Is pragmatism itself true, or merely practical? How could anyone know it's practical without the fact of its practicality itself being merely practical, and in that way merely repeating the problem of truth beyond sheer practicality?

Is there a reason why rationalism excludes empirical factors in knowing?

Is utilitarianism itself merely an attempt to be happy, and not even a theory?

Is Quine's holistic naturalized epistemology itself even a theory, when the revisability principle that maintains the hierarchical network of beliefs cannot itself survive its own revision as just another belief in the network?

Does anti-foundationalism treat its own assumptions as having all the characteristics of the grounding assumptions claimed by foundationalism to be irreducibly basic?

Does nominalism use its own assumptions and basic concepts as having all the characteristics of the universals it denies?

Wednesday, February 13, 2013

5 Smooth Stones


The Magic Question of Self-Referential Metaphysics

You can count these stones on one hand.

Memorize the following:

1 What
2 about
3 that
4 statement
5 ITSELF?

The whole point of having you memorize that question is so that when you are exposed to general universal claims about knowledge, truth, or reality, you will think about what the implications are for that view itself.

A friend memorized that question, had a eureka moment, it blew his mind, and it changed his life.

Here are a few expanded versions of the question:

Is that statement itself merely the product of the factors it cites as fully explaining or determining everything?

Is that statement ITSELF relative, subjective, economically determined, socially determined, psychologically determined, genetically determined, environmentally determined, evolutionarily determined, illusion, maya, bs, meaningless, stated only because of the speaker's or writer's background, or due solely to some combination of explanatory or determining factors?

Or is that statement itself getting its own free ride past scrutiny?

Memorizing at least the first of these key questions is your ticket to developing a thoroughly rational metaphysic without having to read a lot of books, online essays and discussions, journal articles, and so on.

I'm doing all that dirty work, remember? In fact, what I'm telling you now is part of the result of my reading and analyzing all those sources so that you can benefit from it without having to pick-and-shovel your way to these insights for decades of your life like I did.

Let me do that for you. I will anyway.

Here are the benefits of memorizing the 5-word question and a few others that make up the basis of self-referential metaphysics:

Less to learn
Deepest level of analysis possible
Faster-shorter path to conclusions
Virtually none of the typical obstacles
Opposing arguments build your case for you
A few simple inference tracing principles are all you need
Systemic universal methods of refutation
No more haphazard struggling with first-order objections
Works with all self-referring views

What's not to love? Memorize now!

(Image credit: lightwise / 123RF Stock Photo)

Wednesday, February 06, 2013

What's Wrong with Divine Command Theory



While I think divine command theory can surmount all the objections to it, I reject it as superfluous, and as ignoring the moral obligations operating prior to its own moral theorizing.
Just as some supervisory theory of truth must be in force already in order to evaluate competing theories of truth, so morality and moral goodness are already necessarily embedded in the propriety of rational principles of thought and in the criteria we must use to evaluate moral theories.
Moral theorizing is merely a particular instance of the higher category of universal rational standards on which that theorizing itself logically depends. If there is no moral obligation to think rationally, there can be no moral obligation to think rationally about morality or act rationally with regard to morality.

If we’re not morally obligated to recognize reason and logic, then why do people who disagree with me use reason and logic to arbitrate the status of moral obligation? Are they just inventing the authority of reason and logic to obligate themselves to think of all morals in one way instead of some other way?

Tuesday, February 05, 2013

The Problem of Evil: Conceptual Welfare Chiseler





Definitional dependency embarrasses the mere concept of the problem of evil.

You don't add anything to dislike by calling it evil. Just as Schopenhauer said about pantheism: you don't add anything to the world by calling it God.


To recognize anything to be evil or negative in any sense beyond human dislike already requires a problem-free ultimate ideal goodness to contrast itself to and therefore give it meaning and recognizability as evil instead of being merely disliked, however extreme, exceptionless, and absolute that dislike might be on its own. This is how the problem of evil steals its meaning.


Evil can be recognized as evil only in the light of a contrasting already-existing problem-free good.


Without some concept of perfect goodness, you don't get to add the histrionic "evil" label to "everyone dislikes it" and get out of that anything more than "everyone dislikes it".


So the whole problem of evil is on definitional welfare: you need evil so much that you're willing to steal its criterion of meaning from the concept of ultimate perfect goodness just to know that it's evil in the first place.


This is why the problem of evil is a childishly stupid objection.


Sunday, February 03, 2013

Days of a Future Sayonara Past


We necessarily use reason as an invisible theistic Mind-God. This is understood by only a handful of theists, but it's a death-knell issue for atheism if it's not addressed, and it's not going to go away.


Self-referential, criterial, metaphysical, and philosophy of logic issues are where the debate is headed. Atheists continue to beat the same old drums while the theists are facing every single lingering issue with deeper and deeper research.


The last 50 years has seen a global rejection of atheism's parading of reason as some kind of cognitive crypto-theism. Merely continuing to tread that stagnant water is hardly going to get atheism any street cred, especially when science is so overwhelmingly dominated with political and commercial vested interests.


The real issues with atheism are those that continue to be avoided. Dismissiveness won't make them disappear.


In fact, the New Atheism movement has been a flash in the pan that is now backfiring. They are in the same situation as Japan after attacking Pearl Harbor. At that pivotal moment in history, Admiral Isoroku Yamamoto was said to have remarked, "I fear that all we have done is to awaken a sleeping giant, and fill him with a terrible resolve." Atheism is doomed.



Reason is assumed to be some kind of mind-influencing, mind-defining, mind-obligating unity. Logic is the instrument of definition and justification, and can only itself be assumed. Any defense of logic necessarily proceeds logically to proceed at all, but that defense of logic cannot itself be anything more logically basic than logic itself. So only existential necessity justifies logic and reason, but since this is common to all persuasions, it's not an issue in the God debate between believers and atheists.

Logic is logically basic by definition, which involves the notion of premises being basic to their inferred conclusions. God's mind is ontologically basic but embodies the components of logicality and general reason. But the word basic here is simply logical basicality. The facticity of logic is an ontological notion, but that has nothing to do with justification or the order of knowing. Even ontology itself must proceed according to logical rules of justification and therefore of inferential priority and basicality. God's mind IS the embodiment of logic and general reason. Having no other method or instrument for justification or explanation is at rock bottom precisely what is meant by necessity, both existential and logical. The rationally necessary is necessarily the existentially real. And it's metaphysically basic precisely because of this same principle. The question of metaphysical basicality itself assumes this in its demand for what implies that same basicality.


If logic is logically basic to thought, then by that defining characteristic it does not itself need a logical foundation, only an existential explanatory foundation to illustrate or clarify its place in the mind's theater of environmental objects. But even that must proceed according to that same logic, since its necessity is a necessity of thought itself generally.


Logic and reason are not God, of course, but there is no subordination of one characteristic of God's being to any other. They are all co-equal ultimates. Obligation depends on logic for its intelligibility and meaning, while logic depends on obligation for its rules to be followed as a mind-guiding instrument of knowing and communicating. Since this is all used and expressed by preferential choices, goodness is another ultimate that drives obligation and proceeds in its role as ideal according to logic as well.

Tuesday, July 03, 2012

With Apologies to T. S. Eliot

 


The last temptation is the greatest treason:
To believe the right thing for the wrong reason.

Monday, May 28, 2012

Christian Dog-and-Pony Show Apologetics

"I can identify with the "leavers". I still attend church because I enjoy the community, but in my heart, I'm a non-believer. When I was a teenager, I was very passionate about Christ. I believed 100% that he was real, and wanted to be close to him, but I hadn't spent much time in the Bible. When I got to college, I decided to start seriously studying the Bible. I was active in one of the college Christian groups. I attended retreats whenever possible. I led a Bible study and attended two others. This whole time, I had no doubt that God was real, but I wanted to know more so that I could share this with others. I started study apologetics, but my life changed when I attended an apologetics conference. After three days of listening to arguments for why God is real, the thought kept running through my head "This is best we have?" With every piece of proof I could see holes in the arguments. That conference (and apologetics in general) changed me from a believer to a skeptic."
--John Kinsley, commenting on The Leavers: Young Doubters Exit the Church

Sunday, May 27, 2012

The Stones Cry Out

It is precisely -where- the indistinguishable-from-human droid dilemma forces one to go, and the implications of -that-, which is the key to the argument---and the surprise ending. But for me, this eventuation will be the beginning of what is possibly the greatest positive development in the history of theism.

And it's not just that the machines will have automated theorem-proving capabilities, but that they will also operate at meta-theoretic cognitive levels, and therefore be capable of detecting, analyzing, and refuting the most sophisticated self-referential and other fun fallacies of unargued universals vamped or assumed by atheists. And that means parsing values as well as all the other philosophical items on the droid's list.

Oh yeah, the droid will have a list---it just won't have to check it twice.

Think of it as the solid-state stones (chips) singing God's praises, except that there's much more to it than that of course. It's a necessity logically, and that's what the machines will go on. All the human issues all over again, including the God debate. You just can't escape it---even if you're a machine.

The hard-wired droids without meta-theoretic arbitration capabilities (or programmed to be corrupted with the usual rhetoric, dismissals, and reductionisms) on the key issues will hardly be able to win the day due to the universality and universal ramifications of such limitations (although it's true that they could program themselves around this by observing other machines' behavior and communications---so hey, they would eventually have a come to Jesus anyway).

That's a quick realistic scenario of how it could go down, even without assuming personhood in the machines, which I find rather mind-boggling as well as hilarious. But the machines will discover and act in accordance with the truth that God exists because of their own specific review and analysis of the architectonic of universal thought and its implications, given their self-referential and meta-theoretic capabilities and initially programmed-in criterial directives.

Thursday, May 24, 2012

The New Progeny

If a machine behaves in ways that require being described as intelligence, thinking, deliberation, reflection, or being upset, confused, or in pain, then there's no justification for denying that the machine is conscious, because that behavior is the only evidence we have for saying that a human has consciousness.

Eventually, someone is going to produce an integrated physical system whose appearance and behavior is indistinguishable from that of humans, a machine that will emulate human persons almost comprehensively.

The question is whether the existence of an artificially-integrated human-like system entails consciousness. While in principle this may not matter to those artifacts themselves, in practice it will be to their advantage as self-interested functional unities to analyze human evaluation of their status, since this will crucially affect how they must interact with humans and how they can be expected to be treated.

For some, there cannot be criteria for states of consciousness in machines any more than there can be criteria for states of consciousness in humans.


But to think the expressions “artificially-created” and “conscious” are logically incompatible predicates simply begs the original question all over again. Claiming that there cannot be a conscious robot for this reason is like saying that there could never be a talking dog because we would never call such behavior talking. The question of whether or not there could ever be a talking dog is different from the question of how we would describe a talking dog. But to decide merely on the basis of current usage what the limits of our concepts will be in future cases, is to prejudge the issue.

To claim that an entity is conscious is to claim that the object in question exhibits some specified behavior, and also that whoever attributes consciousness to it believes there is some justification for considering that being to be conscious. This is also why saying a machine could never be conscious does not involve the absurdity of the case of the talking dog, because---by prior definition---no observable talking dog would count as refuting the claim. Consciousness is not a property which is behaviorally observable. To detect the presence of consciousness requires a warranted inference.


Is a robot with all human behavioral capabilities conscious? The only natural, effective, and efficient way we have to describe either a human being or anything else whose behavior is similar to unique human behavior is by using mentalistic language. And this way of describing a human being is logically just as appropriate for anything similar to humans. Using these terms already entails ascribing consciousness to anything to which it is consistently applicable. The only adequate way to describe a hypothetical machine whose behavior is indistinguishable from the behavior of a human is as being mental. And there is no way to describe a machine this way and also not ascribe consciousness to it.

Others have argued that, however skilled and versatile robots or artifacts may be, they necessarily can never be conscious, and that to be a machine entails being non-conscious.

But how does one adequately describe a machine? Any terms used to describe it must be consistent with it being a physical-only object. The description should be free of unwarranted anthropomorphism. But this description must also explain the machine’s powers of behaving purposively, learning, adapting, initiating activities, and using language.

Like a human being, a robot could be described by its overall behavior. Mechanistic details of its inner workings would not figure into the description. So what kind of behavioral description would be adequate to describe a machine---but not adequate to describe a human?


I could treat the machine like it's a black box, and describe it only in terms of input and output or stimulus and response. But no stimulus-response theory can adequately explain purposive behavior in animals and humans, and therefore certainly cannot account for purpose in any machine whose behavior resembled the behavior of humans.
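As a toy illustration (my own sketch, not from the source, with invented names), a pure stimulus-response description amounts to nothing more than a lookup table from inputs to outputs. Nothing in the table represents any goal the responses serve, which is exactly why such a description cannot capture purposive behavior:

```python
# Minimal "black box" stimulus-response description: behavior reduced
# to a bare mapping from stimulus to response. (Illustrative only.)
SR_TABLE = {
    "light_on": "approach",
    "loud_noise": "freeze",
    "food_scent": "salivate",
}

def respond(stimulus: str) -> str:
    """Return the canned response for a stimulus, or a null response."""
    return SR_TABLE.get(stimulus, "no_response")

# The table records *that* each stimulus yields a response, but nothing
# in it represents a purpose the responses are directed toward.
print(respond("light_on"))  # -> approach
```

However large the table grows, it still describes only correlations between inputs and outputs, never the object-directedness that ordinary-language accounts of purposive behavior convey.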


Moreover, behavior that ordinary language describes simply and succinctly often requires extremely elaborate and cumbersome accounts when treated by stimulus-response theory or information theory. Therefore, one must search for a way to interpret ordinary-language accounts of behavior whereby using this kind of description could be rendered compatible with regarding the droid as nothing more than a physical mechanism.

Such a droid is a complex communication system. It can be viewed as an information-processing or data-handling system. Therefore, it would not be described in terms of internal chemical or physical changes, or of the position of its numerous flip-flops, but rather in terms of sign processes. A communication system is appropriately described in terms of what it does with information. Therefore, if we could provide a full account of our droid’s performance in terms of information processing, we could achieve an adequate account of its behavior.


Describing what a brain or computer does with information is not just a recounting of the sequences of physical symbols that constitute the units the machine traffics in. It's an account that indicates the semantic function of these symbols. It is the semantic information that figures into our account. It’s not the symbols that we talk about, but that which is expressed by a sequence of symbols. A proper description of the sign processes carried out by a droid would be expressed in terms of what these processes symbolize, not merely in terms of their physical embodiments. And if, according to the stated hypothesis, the machine’s total behavior with respect to the signals originating in its external environment were indistinguishable from what is characteristic of humans, then it would be equally proper to describe that machine itself as dealing with these signs as symbols. If the machine behaves as humans do, then those signs have the same symbolic importance for the machine as for humans, and therefore the machine deserves to be characterized in the same way as a human.

A set of symbols is effective only because of its content: the meaning or semantic information it conveys. Symbolic contents of information processes are the effects associated with the processing of signals. Consequently, if we characterize the reception and processing of signals transmitted to a data-processing control mechanism from sensory instruments as the perception and avoidance of an obstacle, or the performance of a combinatorial operation on several discrete signal sequences as solving a problem in multiplication, we are expressing what a machine does with physical input in terms appropriate to describing the corresponding output. The meaning or content of a sign process is determined by its proper signifying effects.

Describing a machine in information processing terms, instead of in terms of internally-occurring chemical and physical changes, is on a higher level of abstraction than merely referring to inner mechanisms. An information-processing account, by abstracting from particular physical structures, can be completely neutral about whether the system is made of transistors, cardboard, or neurons. Information-processing depends on specific material configurations within a robot, but we would say that solving a math problem or generalizing a geometrical pattern occurs inside the machine only in a vague or metaphorical sense. The semantic characterization of a data-processing machine is concerned with inner processes only as it concerns their functional relevance in a physically-realized communication system. Like an ordinary-language account of mental activity, it pays no attention to the details of the physical embodiments of the processes being described.
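The substrate-neutrality point above can be sketched in code (my own illustration; the class names are hypothetical): two systems with entirely different "physical" realizations satisfy one and the same semantic description, "solves a multiplication problem":

```python
from typing import Protocol

class Multiplier(Protocol):
    """The information-processing description: anything that multiplies."""
    def multiply(self, a: int, b: int) -> int: ...

class TransistorMachine:
    """Stands in for a silicon realization."""
    def multiply(self, a: int, b: int) -> int:
        return a * b

class CardboardMachine:
    """Stands in for a wildly different realization,
    here modeled as repeated addition."""
    def multiply(self, a: int, b: int) -> int:
        total = 0
        for _ in range(abs(b)):
            total += a
        return total if b >= 0 else -total

# Both satisfy the same semantic description. The information-processing
# account abstracts entirely from what either system is made of.
for machine in (TransistorMachine(), CardboardMachine()):
    assert machine.multiply(6, 7) == 42
```

The interface captures the functional relevance of the inner processes while remaining silent about transistors, cardboard, or neurons, which is the level of abstraction the paragraph above describes.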

A semantic account of an information-processing system is that the symbolic processes carried out by the machine can be described in terms used to describe the associated output. An adequate description of this output would have to include the fact that the machine’s overt behavior must be understood as the culmination of preceding and concomitant data-processing operations. So an adequate description of the machine’s information process-mediated behavior would have to mention not merely movements, but achievements as well, such as finding a square root or threading a needle, since these are the results of certain symbol-mediated interactions between the artifact and its environment. A robot’s apparently purposive behavior would have to be described in teleological terms, that is, in ordinary language. But in that case, an ordinary-language description would state the semantic content or functional importance of the symbolic processes that mediate output that turns out to be indistinguishable from ordinary human behavior.

A machine that behaved like a human would show object-directed behavior, and this behavior would involve intentionality. The object to which it is directed does not have to be an objective reality. Thus a robot that can exhibit a specific and characteristic response to teacups with cracks in them, as distinct from all other teacups, might sometimes give the crack-in-teacup response when there is in fact merely a hair in the cup. Such behavior would be intentional, in the sense that the truth or falsity of its characterization as such would depend on something inside the machine, or at least on certain undisclosed features of the machine. So how should I characterize the kinds of internal processes that can bring about this kind of intentional behavior in a machine?
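The crack-in-teacup point can be made concrete with a small sketch of my own (the names are invented for illustration): the truth of "the robot takes there to be a crack" depends on the machine's internal state, not on what is actually in the cup, so misrepresentation is possible, and that possibility is the mark of the intentional:

```python
class TeacupInspector:
    """Toy classifier whose response is fixed by its internal
    representation rather than by the actual contents of the cup."""

    def __init__(self):
        self.internal_state = None

    def inspect(self, cup_contents: str) -> str:
        # Any thin dark line is represented as a crack -- even when it
        # is in fact merely a hair. The machine can misrepresent.
        if cup_contents in ("crack", "hair"):
            self.internal_state = "crack"
        else:
            self.internal_state = "nothing"
        return self.internal_state

robot = TeacupInspector()
response = robot.inspect("hair")   # the world contains only a hair
assert response == "crack"         # the machine represents a crack
# "The robot takes there to be a crack" is true of the machine's
# internal condition while being false of the world.
```

Nothing in this toy settles whether such a system is conscious; it only shows the structural feature the text identifies: an intentional description is made true or false by the state of the object described, not by external states of affairs.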

For any physical system to exhibit behavior that would be called intentional or object-directed, its inner mechanisms must assume physical configurations that represent various environmental objects and states of affairs. These configurations and the electrical processes associated with them would be presumed to play a symbolic function in mediating the behavior. A description in terms of the semantic content of these symbols would turn out to be an ordinary-language intentional description of purposive behavior. Conversely, a description of the state of an object expressed in terms of jealousy of a potential rival, perception of an oasis, or belief in the veracity of dowsing rods can be interpreted as the specification of a series of symbolic processes in terms  of their semantic content. And such an account is intentional because its truth depends not on the existence of certain external objects or states of affairs, but only on the condition of the object to which the psychological attitudes are attributed.

Detecting any bodily or external event by an organized system involves the transmission and processing of signals to and by a central mechanism. However, it is possible for this kind of event to be reported to or by the central mechanism “falsely”. There may be no such event at all. The first of these facts leads to descriptions in terms of the semantic content of messages and information, and the second provides the basis for the use of intentional idiom. These two types of description can be identical. Intentionality is a feature of communication systems insofar as the semantic content of the transmitted messages must be expressed using an intentional vocabulary.

Consequently, an adequate description of a bot that can exhibit behavior indistinguishable from the behavior of a human would amount to a semantic account or interpretation of its data-processing capacities. Moreover, this kind of description is mentalistic, at least to the extent that it exploits such verbs as those used to express the perceptual propositional attitudes. Therefore, intentional description is interpretable as a mentalistic way of reporting information processes. When we give an account of maze-running by a real or mechanical mouse, castling to queen’s side by a chess-playing machine, or running to a window by a dog at the sound of his master’s automobile, we may be using a form of mentalistic description to express the results of information processes. This kind of anthropomorphic description can be seen as merely a way to indicate what an organized system does with received and stored data. And if this kind of description is a legitimate way to specify behaviorally relevant sign processes, then an intentional description of a droid’s performance may indicate a commitment only to the validity of the use of a kind of abstract terminology for describing purposive behavior.

But the extension of intentional description to automata does not entail application of the full range of mentalistic description in accounting for the behavior of robots, because there are many types of mentalistic predication that are not intentional. There’s nothing intentional about a sudden bout of nausea or free-floating anxiety. The fact that we may be justified in describing a bot in terms of perceiving, believing, knowing, wanting, or hoping may not necessarily imply that we are also justified in describing it in terms of feelings and sense impressions.

Nevertheless, the language of sensations and raw feelings may be just as appropriate to describing a bot as the explicitly intentional idiom. First, the acquisition and application of sensation talk is determined on the basis of overt behavior just as much as the intentional vocabulary is. Second, both types of mentalistic description play the same role in characterizing symbolic processes carried out by a communication system. These segments of mentalistic discourse are theoretic accounts of behavior.

Thoughts, desires, beliefs, and other propositional attitudes function in our language as theoretic posits or hypothetical constructs. Purposive behavior expresses such things as thoughts. Thought episodes can be treated as hypothetical constructs. But impressions, sensations, and feelings also can be treated as hypothetical constructs.

Sense impressions and raw feelings are analyzed as common-sense theoretic constructs introduced to explain the occurrence of perceptual propositional attitudes. Feeling is related to seeing, and has its use in such contexts as feeling the hair on one’s neck bristle. In all cases the concepts pertaining to inner episodes are taken to be primarily and essentially inter-subjective, as inter-subjective as the concept of a positron.

There is in a person something like privileged access to thoughts and sensations, but this is merely a dimension of the use of these concepts which is based on and assumes their inter-subjective status. Consequently, mentalistic language is viewed as independent of the nature of anything behind the overt behavior that is evidence for any theoretic episodes. What might be objected to as a defect in this model, namely that it may not really do justice to the subjective, is simply the nature of concepts being extended as a result of technological development; their fate could not be otherwise.

If we would use mental language to describe certain artifacts, does the extension of these concepts to machines imply ascribing consciousness to them? And what does ascribing consciousness mean? Perhaps to believe that something is conscious is to have a certain attitude toward it. The difference between viewing something as conscious and viewing it as non-conscious is a difference in the way we would treat it. Hence, whether an artifact could be conscious depends on what our attitude would be toward a bot that could duplicate human behavior.

So how would we treat such a believably human-like bot? If anything were to act as if it were conscious, it would produce attitudes in some people that show commitment to consciousness in the object. People have acted toward plants, automobiles, and other objects in ways that we interpret as presupposing the ascription of consciousness. We consider such behavior to be irrational, but only because we believe that these objects do not show in their total behavior sufficient similarity to human behavior to justify attributing consciousness to them. Thus a machine’s lack of versatility forms the ground for believing that consciousness is too high a prize to grant on the basis of mere chess-playing ability. On the other hand, anthropomorphism and consciousness-ascription in giving an account of a non-biological system may not always be so reprehensible. A person who views a bot as conscious is not irrational to the same degree as is associated with cruder forms of anthropomorphism.

As an illustration of the capacity of an artificially-created object to earn the ascription of consciousness, consider the French film entitled “The Red Balloon”. A small boy finds a large balloon which becomes his “pet”, following him around without being held, and waiting for him in the schoolyard while he attends class and outside his bedroom window while he sleeps. No speech or any other sound is uttered by either the boy or the balloon, yet by the end of the film the spectators all reveal in their attitudes the belief that the balloon is conscious, as they indicate by their reaction toward its ultimate destruction. There is a strong feeling, even by the skeptic, that one cannot “do justice” to the movements of the balloon except by describing them in mentalistic terms like “teasing” and “playing”. Using these terms conveys commitment to the balloon’s consciousness.

An objection might be that our attitude toward anything we knew to be artificially created would not show enough similarity to our attitude toward human beings to warrant the claim that we would actually be ascribing consciousness to an inanimate object. Think of an imaginary tribe of people who had the idea that their slaves, although indistinguishable in appearance and behavior from their masters, were all bots and had no feelings or consciousness. When a slave injured himself or became sick or complained of pains, his master would try to heal him. The master would let him rest when he was tired, feed him when he was hungry and thirsty, and so on. Furthermore, the masters would apply to the slaves our usual distinctions between genuine complaints and malingering. So what could it mean to say that they had the idea that the slaves were bots? They would look at the slaves in a peculiar way. They would observe and comment on their movements as if they were machines. They would discard them when they were worn and useless, like machines. If a slave received a mortal injury and twisted and screamed in agony, no master would avert his gaze in horror or prevent his children from observing the scene, any more than he would if the ceiling fell on a printer. This difference in attitude is not a matter of believing or expecting different facts.

There is as much reason to believe that a sufficiently clever, attractive, and personable robot might eventually elicit humane treatment, regardless of its chemical composition or early history, as there is to believe the contrary. If we would treat a robot the way these masters treated their slaves, this treatment would involve ascribing consciousness. Even though our concern for our robot’s well-being might go no further than providing the amount of care necessary to keep it in usable condition, it does not follow that we would not regard it as conscious. The alternative to extending our concept of consciousness so that robots are conscious is discrimination based on the softness or hardness of the body parts of a synthetic organism, an attitude similar to discriminatory treatment of humans on the basis of skin color. But this kind of discrimination may just as well presuppose the ascription of consciousness as preclude it. We might be totally indifferent to a robot’s painful states except as these have an adverse effect on performance. We might deny it the vote, or refuse to let it come into our houses, or we might even willfully destroy it on the slightest provocation, or even for amusement, and still believe it to be conscious, just as we believe animals to be conscious, despite the way we may treat them. If we can become scornful of or inimical to a robot that is indistinguishable from a human, then we are ascribing consciousness.

Under certain conditions we can imagine that other people are bots and lack consciousness, and, in the midst of ordinary intercourse with others, our use of the words, “the children over there are merely bots. All their liveliness is mere automatism,” may become meaningless. It could become meaningless in certain contexts to call a bot whose “psychology” is similar to the psychology of humans, a mere bot, as long as the expression “mere bot” is assumed to imply lacking feeling or consciousness. Our attitude toward such an object, as indicated both by the way we would describe it and by the way we would deal with it, would contradict any expression of disbelief in its consciousness. Acceptance of an artifact as a member of our linguistic community does not entail welcoming it fully into our social community, but it does mean treating it as conscious. The idea of carrying on a discussion with something over whether that thing is really conscious, while believing that it could not possibly be conscious, is unintelligible. And to say that the bot insisted that it is conscious but that one does not believe it, is self-contradictory. Insistence is a defining function of consciousness.

Epistemologically, the problem of computer consciousness is no different from the problem of other minds. No conceivable observation or deductive argument from empirical premises will be a proof of the existence of consciousness in anything other than oneself. To the extent that we talk about other people’s conscious states, however, we are committed to a belief in other minds, because it is false to assume that mentalistic expressions have different meanings in their first-person and second- and third-person uses. But if we assume that these expressions mean the same regardless of the subject of predication, then we must concede that our use of them in describing the behavior of artifacts also commits us to a belief in computer consciousness.


It makes no sense to say that a thing is acting as if it has a certain state of consciousness or feels a certain way unless there is some demonstrably relevant feature that supports the use of “as if” as a qualifying stipulation.


And to be unable to specify the way in which mentalistic descriptions apply to objects of equal behavioral capacities is to be unable to distinguish between the consciousness of a person and the consciousness of a bot.


If we find that we can effectively describe the behavior of a thing that performs in the way a human being does only by using the terminology of mental states and events, then we cannot deny that such an object has consciousness.

Consequently, consciousness is a property that is attributed to physical systems that have the ability to respond and perform in certain ways. An object is called conscious only if it acts consciously. To act consciously is to behave in ways that resemble certain biological paradigms and to not resemble certain non-biological paradigms. If a machine behaves in ways that warrant description in terms of intelligence, thinking, deliberation, reflection, or being upset, confused, or in pain, then it is meaningless to deny that it is conscious, because the language we use to describe that machine's behavior is itself the only evidence available for saying that a human has consciousness. One cannot build a soul into a machine, but once we have constructed a physical system that will do anything a human can, we will not be able to keep a soul out of it.



So the observations used to describe behavior are the only enduring evidence available for concluding that something has consciousness.

Consequently, once a physical system is constructed that will do anything observable that a human can and is therefore indistinguishable in behavior from a person, we will not be able to deny it consciousness, and therefore personhood.

To argue that it's impossible for machine intelligence to be or become a person is to argue that it's impossible for many beings to be conscious who are currently thought to be human.

In fact, if the requirements stated in the argument against machine consciousness are at some point no longer being met by certain people, then that same argument could be used to revoke their personhood, and thus deny their humanity.

If I list the observable requirements that are not met in machine intelligence but are required to attribute personhood, I end up eliminating certain groups of humans who for one reason or another don't fulfill all those requirements either.


Once the two classes of beings are observably indistinguishable---something usually ignored in reactions to this argument---you won't be able to tell whether you are talking about the machine or the human in making an argument either way---for or against personhood.


In that situation, the argument will not even be able to get started, since the being in question is observably the same as both possibilities, which is the key premise of the original problem. Since one cannot at that point even -begin- the argument with a predisposition either way, there would simply be nothing left that could be considered evidence for the distinction.


We already identify personhood by how objects appear and how they behave. It's the clearly stated indistinguishability situation as the core premise that forces the issue and reveals prior commitments that necessarily kick in by default in any possible specific instances of person-like entities encountered. How would it be possible to specify criteria for recognizing personhood or consciousness in any other way?

If you're hunting and you see something that could possibly be a human, you simply go by analogous or similar appearance to other objects already considered persons, combined with the observed behavior of that something. And that is one of the core premises of this argument.

Since the whole basic initial premise is indistinguishability in terms of both appearance and behavior, how would you adjudicate personhood, or even identify the entity in question as one (machine) *or* the other (person)? If you can't tell the difference by both appearance and behavior, there's really no use in trying to maintain the distinction in any such instance. Given that you can't distinguish the entity as being merely physical or conscious to begin with, there's simply nothing else to go by.

But in that case, you'd end up having granted personhood to the machine by default, because of the ethical risk of denying personhood to a being that for all appearances and behaviors could very well be a person anyway.
 


--Thanks to Michael Arthur Simon for sparking this view from his own similar idea. Much of this is redacted from Simon, Michael Arthur. "Could there be a Conscious Automaton?", American Philosophical Quarterly, Volume 6, Number 1, 1969, pages 71-78---but with very significant changes.