Is it possible to build a sapient machine ?

For the love of the pursuit of knowledge


User avatar
Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am
Been thanked: 2 times

Is it possible to build a sapient machine ?

Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general, often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance: to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
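For the programmers in the audience, here's a minimal sketch (in Python) of how a Turing-style evaluation could be scored. The transcripts, the labels, and the coin-flipping examiner are all made up for illustration; a real experiment would use many human examiners reading real chat logs.

[code]
import random

# Hypothetical transcripts: each entry is (chat_log, author_is_human).
# The logs and labels are invented purely for illustration.
transcripts = [
    ("A: How do you do?  B: I'm fine, thank you.", True),
    ("A: What's 7 times 8?  B: 56 -- why do you ask?", False),
]

def examiner_guess(chat_log):
    """Stand-in for the examiner E: guess whether the author is human.
    A real examiner would actually read the log; this one flips a coin."""
    return random.choice([True, False])

# Turing's criterion is statistical: if E's accuracy is no better than
# chance (50%), both participants should be treated as human.
correct = sum(examiner_guess(log) == is_human for log, is_human in transcripts)
print(f"Examiner accuracy: {correct / len(transcripts):.0%}")
[/code]

The point is only that the criterion is behavioural and statistical: if the examiner can't beat a coin flip, both subjects get the benefit of the doubt.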

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you any less human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you still human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)

User avatar
Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am
Been thanked: 2 times

Is it possible to build a sapient machine ? Part 2

Post #2

Post by Bugmaster »

Con: Searle's Chinese Room

John Searle devised a thought experiment, called the Chinese Room, specifically to shut down the Turing Test. The argument goes something like this:

Let's say we assemble a cubicle, enclosed on all sides, with no doors or windows, but only two slots: an "Input" slot, and an "Output" slot. Inside the cubicle, we put a hapless intern who doesn't speak a word of Chinese. We give the intern some paper, some pencils, and a thick book of instructions. The instructions say things like this: "when the following squiggle gets pushed through the Input slot, write down the number 17 on the piece of paper, add it to the numbers you have already written there (if any), and go to the page given by the sum". And on that page, the rulebook will instruct the intern to write down some more numbers, do some more math, open up some more pages... and so on, until, eventually, the rules tell him to write some squiggles on a clean sheet of paper, and to push it through the Output slot.

The intern executes the rules flawlessly, but he has no idea what the squiggles mean -- he just looks at the input squiggles, and produces output squiggles, that's it. Unbeknownst to him, the input squiggles are questions in Chinese, such as "how do you do ?", and the output squiggles are Chinese responses, such as, "I'm fine, thank you". However, the intern obviously does not speak Chinese. We could go one step further, and have the intern memorize the rules and learn to recognize and pronounce some Chinese sounds (assuming he's a very diligent intern, of course). Now, he can walk around and talk in Chinese, but he still has no idea what any of it means.

Searle's argument is that, just as the intern does not speak Chinese, a computer that executes a program designed to mimic human behavior (such as posting on this forum) does not think. It's merely pushing symbols around.
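To make the mechanism concrete, here's a toy version of the room as a pure lookup table (Python). The phrases and rules are invented for illustration; Searle's point is that a real rulebook, however astronomically larger, would still be nothing but this kind of symbol-shuffling.

[code]
# A toy "Chinese Room": input squiggles map to output squiggles by pure
# table lookup, with no understanding anywhere in the code. The phrases
# are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How do you do?" -> "I'm fine, thank you."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "The weather is fine."
}

def chinese_room(input_squiggles: str) -> str:
    # The intern just follows the rules; he never learns what they mean.
    return RULEBOOK.get(input_squiggles, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
[/code]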

There are at least two major problems with Searle's argument, however.

Firstly, the fact that the person inside the Chinese room does not speak Chinese is irrelevant. The rulebooks don't speak Chinese either, and neither do his pencils or scraps of paper. However, when you put all these parts together, you get a system that does speak Chinese; the intern is just a component of this system.

Think about it this way: your CPU cannot, by itself, run Quake (or Unreal Tournament, if you prefer it). Neither can your video card, or your RAM, or your power supply, or your mouse, etc. However, when you assemble a computer out of all these components, you get a system that can run Quake. The system is greater than the sum of its parts.

Secondly, Searle is basically asserting that, since we know exactly how the Chinese Room operates -- down to each individual line in the rulebook -- it can't possibly be sapient; all it's doing is pushing symbols around. Human minds have this mysterious quality known as "consciousness", or "semantics", and that's what makes us human. That's a pretty weak argument, though... what happens when we map out the human brain, and discover how it works ? Will everyone suddenly turn into a mindless zombie ? I doubt it.

Another version of Searle's argument states that human behavior is partially chaotic, due to some quantum effects in the brain, whereas computers are deterministic. Personally, I don't believe that non-deterministic quantum effects have any significant bearing on our brains, but it doesn't actually matter. We could easily hook up a white noise generator -- an electronic circuit that creates random numbers based on quantum effects -- to our computer (or to Searle's Chinese room), thus making it as "quantum" as the human brain.
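Here's a sketch of that idea in Python: feed an otherwise deterministic program from an external entropy source. os.urandom stands in for a true quantum noise circuit -- that substitution is an assumption for illustration -- but the principle is the same.

[code]
import os

def noisy_bit() -> int:
    """Stand-in for a hardware white-noise generator. os.urandom draws
    entropy from the operating system rather than a quantum circuit --
    an assumption for illustration -- but the principle is the same:
    an external, non-deterministic input feeding a deterministic program."""
    return os.urandom(1)[0] & 1

def choose_reply(candidates):
    # Let hardware noise pick among equally acceptable replies, making
    # the program's behaviour non-deterministic.
    index = sum(noisy_bit() << i for i in range(8)) % len(candidates)
    return candidates[index]

print(choose_reply(["I'm fine, thank you.", "Can't complain.", "Splendid -- and you?"]))
[/code]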

Con: Biological Naturalism

I think this argument also belongs to Searle, but I'm not sure. It goes something like this: "Human (or, perhaps, human and animal) brains are uniquely capable of producing sapience; computers are not brains, therefore computers will never become sapient". That's kind of an empty argument, though. What is it, exactly, that makes human brains uniquely capable of producing sapience ? The mere fact that they're wet and squishy ? Well, what's stopping us from modeling a wet and squishy brain in software, assuming that we have enough processing power to do this ?

Another version of this argument simply states, "computers can't think, they can only do what they're programmed to do". But, if that's true, then we should be able to program the computer to live, grow and learn just as humans do. Other versions of this argument state that a human body -- along with eyes, ears, hands, and other organs -- is required in order to produce a functioning human brain, and, since computers don't have bodies, they'll never learn to think. But, again, nothing is stopping us from eventually learning to build entire artificial bodies, so this point is moot; furthermore, it denies basic humanity to one-eyed, peg-legged, yet nonetheless human pirates, as I'd mentioned above, and that's just uncalled for.

Con: Theism

I think that this is the only defensible argument against Strong AI. According to theists, human beings have immaterial yet very real souls. Only God (or gods, or the Universuum, or the wheel of death and rebirth, whatever) has the power to infuse souls into living creatures; humans do not have this power. Therefore, any machine that humans produce, no matter how clever it acts, will nonetheless be soulless and inhuman.

The major problem with this argument is that it requires faith in God (or gods, etc.); thus, it's not very convincing to atheists such as myself. Another problem is that souls are immaterial, and totally undetectable by any of our senses or scientific instruments. How, then, do we know whether any given being -- be it squishy or metallic -- has a soul ? Do white people have souls ? There are quite a few Muslims out there who will answer "no" to this question, and I think you can see how this can create a big problem.

Interestingly enough, the Bible does claim that God created men in his own image. Some people (I forget which sect of Christianity) claim that this can't possibly mean that God has 2 legs, 2 arms, and hairy nostrils. Instead, this must mean that men are capable of creation, just as God himself is... in which case, men could eventually learn to create souls, just as God did.

(to be continued below)
Last edited by Bugmaster on Tue Nov 29, 2005 2:26 am, edited 1 time in total.

User avatar
Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am
Been thanked: 2 times

Is it possible to build a sapient machine ? Part 3

Post #3

Post by Bugmaster »

Implications and conclusions

Interestingly enough, while we waste our time discussing philosophy on these forums, there are some other people who are actively developing AI. Modern computers can already perform many mental tasks that were previously believed to be the sole domain of human beings. In fact, computers perform many of these tasks -- such as searching large volumes of text, filtering spam, playing chess, designing radio antennae, plotting a route through a city -- much better than a human being ever could. As we speak, Google is developing an improved engine that will be able to translate any language to any other language at least as well as an average bilingual human being. When asked about the lawsuits that book publishers raised against the company, a Google employee remarked: "this is silly, it's not like we're scanning all these books to be read by people, anyway". Other people are actively working on developing cars that can drive themselves, ATMs that can recognize the user's face and voice, digital receptionists that can handle the customer's orders (actually, PacBell already has an eerily cheerful version answering their phones), and other similar things. Strong AI is not merely a possibility; it's a distinctly lucrative possibility, which leads me to believe that it will be developed sooner rather than later.
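Just to give a flavour of how mundane some of these "mental tasks" look once mechanized, here's a minimal keyword-weight spam scorer in Python. The words and weights are invented for illustration; real filters (naive Bayes and friends) learn their weights from large collections of labelled mail.

[code]
# Minimal keyword-weight spam scorer. The words and weights are invented
# for illustration; real filters learn their weights from labelled mail.
SPAM_WEIGHTS = {"viagra": 3.0, "lottery": 2.5, "winner": 1.5, "meeting": -1.0}

def spam_score(message: str) -> float:
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())

msg = "Congratulations winner you won the lottery"
print("spam" if spam_score(msg) > 2.0 else "ham")
[/code]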

If you were convinced by my arguments, you might be screaming in fear: "OMGWTFBBQ, what if machines want to take over ?!" I don't think this question is very interesting, though. Humans are trying to take over all the time, after all. But what happens when machines start demanding the right to vote (as women once did), and minimum wage, and the right to worship whatever weird gods they come up with ? This is when, IMO, the arguments I've summarized here will become critically important... Assuming, of course, that humanity doesn't destroy itself before it develops a working Strong AI.

User avatar
QED
Prodigy
Posts: 3798
Joined: Sun Jan 30, 2005 5:34 am
Location: UK

Post #4

Post by QED »

Very interesting topic BM. I think it's also helpful to consider this question from the bottom up. A while ago Harvey introduced me to a paper by Christophe Menant in which he focuses on a simple living organism (the Paramecium) in order to explore the meaning of meaning. The idea behind studying such a simple form of life is to strip away all the potential distractions when exploring the thorny issue of meaning in human subjects.

Ultimately this simple living organism has evolved a motor response to the sensing of acidity in the water surrounding it, in such a way as to move it away from the danger it presents. Needless to say, it would be impossible for the organism to have evolved the opposite behaviour. In this case the meaning of acidity is hardwired into its genes. More complex behaviour is a natural extension of this process, and it should be quite easy to see how meaning might be moderated by additional sensory inputs. Furthermore, more sophisticated organisms would be able to evolve temporary storage for meaning such that it could be dynamically acquired as well as hardwired.
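Here's a toy Python model of that idea: a hardwired reflex plus a crude form of "temporary storage" that lets meaning be acquired during the organism's lifetime. The organism, thresholds and behaviours are invented purely for illustration.

[code]
# Toy model of hardwired vs. acquired meaning. The thresholds and
# behaviours are invented purely for illustration.
class ToyParamecium:
    def __init__(self):
        self.acid_threshold = 5.0        # "hardwired into its genes"
        self.learned_aversions = set()   # temporary storage for acquired meaning

    def sense(self, stimulus: str, intensity: float) -> str:
        if stimulus == "acidity" and intensity > self.acid_threshold:
            return "swim away"           # meaning fixed by evolution
        if stimulus in self.learned_aversions:
            return "swim away"           # meaning acquired during its lifetime
        return "keep drifting"

    def associate_with_harm(self, stimulus: str):
        self.learned_aversions.add(stimulus)

bug = ToyParamecium()
print(bug.sense("acidity", 7.0))   # hardwired response
bug.associate_with_harm("heat")
print(bug.sense("heat", 1.0))      # learned response
[/code]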

So, working from the bottom up, I think it is quite easy to see how natural intelligence would evolve hand-in-hand with the sense of meaning that Searle felt was missing. The only reason that Searle's objection might seem valid to anyone is that his Chinese room belittles the complexity it attempts to model. I would argue that consciousness is only an emergent phenomenon of sufficiently high levels of complex processing and is itself subject to a gradient. If it is otherwise, then we are faced with the question of at what point consciousness "kicks in" on the continuum that lies between the Paramecium and the human being. It sure looks like it's a grey-scale in between. Therefore it seems highly reasonable to me to say that the construction of an artificial consciousness is possible in principle.

User avatar
Scrotum
Banned
Banned
Posts: 1661
Joined: Fri Sep 09, 2005 12:17 pm
Location: Always on the move.

Post #5

Post by Scrotum »

I would love to see the answers to this topic, especially the Christian ones.

Religious people who believe in "souls" and such will have a problem: if they accept that we humans (and presumably animals) have souls, does this not mean that Joe-The-Robot -- who is indistinguishable from a human, and can feel, think and evolve -- has a soul too?

All this would of course crush any "God" argument they would have, as we would be able to create a being out of non-material matter; hence, we would be above any God. So I suspect they would refuse to accept it, and go on and on about how it has no soul, just like the animals...

I want to hear what Harvey or Al have to say about this.

User avatar
Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am
Been thanked: 2 times

Post #6

Post by Bugmaster »

QED wrote:If it is otherwise, then we are faced with the question of at what point consciousness "kicks in" on the continuum that lies between the Paramecium and the human being. It sure looks like it's a grey-scale in between.
Yeah, I should've mentioned that, but I forgot. In reality, consciousness is probably a continuum, not a boolean value; some things are more conscious than others. For example, what about an AI that passes the Turing Test 73% of the time ? I'd say that it's 73% conscious -- or actually about 80%, since even biological human beings fail the test occasionally.
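Here's the back-of-the-envelope version of that adjustment, assuming (purely for illustration) that flesh-and-blood humans pass the same judges about 90% of the time:

[code]
# Back-of-the-envelope version of that adjustment. The 90% human pass
# rate is an assumption made up for illustration.
machine_pass_rate = 0.73
human_pass_rate = 0.90   # even flesh-and-blood humans fail the test sometimes

print(f"Relative to the human baseline: {machine_pass_rate / human_pass_rate:.0%}")  # ~81%
[/code]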

User avatar
harvey1
Prodigy
Posts: 3452
Joined: Fri Nov 26, 2004 2:09 pm
Has thanked: 1 time
Been thanked: 2 times

Post #7

Post by harvey1 »

Unfortunately I feel tapped out on the current discussions, but I'll offer my two cents here. I'll address this issue as phenomenal consciousness (p-consciousness), if you don't mind.

I personally do not know if strong AI is possible. Obviously we're here and we evolved, so I see no "in principle" problem with it. However, that's not to say that we can do everything that evolution can do.

I do have some issues with the strong AI postulation. One of my chief objections is that strong AI is often based on epiphenomenalism, and I have already presented my objections to epiphenomenalism in our other on-going thread on how the immaterial can affect the material. I suppose a non-epiphenomenalist strong AI is conceivable, and so that objection would vanish.

Another objection is that complexity is not always reproducible in a lab. There are many complex situations that evolve, and we may not be able to simulate the key combination that brought about key evolutionary steps on the way to conscious experience. It would seem that we could, but who would have guessed in 1950 that we wouldn't have flying cars in 2005? For example, humans may never discover how life originated. There might be 5,000 element combinations, and 500,000 environmental conditions happening in 50,000 sequences, with each sequence requiring 50 different conditions, over 500 varying time scales for each sequence, and so on. It's in principle possible to discover, but because of the complexity of the interaction, we just cannot discover it. I don't think this will necessarily be the case, but we don't know enough of the world to say that it won't be the case.
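Taking those made-up figures at face value, just to get a feel for the scale:

[code]
# Crude sense of scale: multiply the quoted (hypothetical) figures together.
search_space = 5_000 * 500_000 * 50_000 * 50 * 500
print(f"{search_space:.1e} combinations to sift through")   # ~3.1e+18
[/code]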

My last objection, and the most serious one, is that I don't think the systems reply is right. I think the virtual-mind reply to Searle's Chinese Room is right, and therefore it raises the problem of how we will know whether we have a p-conscious AI program or a zombie AI program. It is out of a principle of charity that we assume that each of us has p-consciousness, but can we extend this principle to AI machines? I'm sure some people will say yes, but is it not possible that we are wrong when we think that we've created a virtual AI mind when in fact we've just created a zombie AI mind that answers as if it has p-consciousness?
Scrotum wrote:if they accept that we humans (and presumably animals) have souls, does this not mean that Joe-The-Robot -- who is indistinguishable from a human, and can feel, think and evolve -- has a soul too?
It all depends on how we define a soul. In my view a soul might be a platonic attractor or some kind of wavefunction that describes "us" in some objective way. If an AI robot has this, then it may have a soul.
Last edited by harvey1 on Tue Nov 29, 2005 3:38 pm, edited 1 time in total.

User avatar
Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am
Been thanked: 2 times

Post #8

Post by Bugmaster »

harvey1 wrote:Unfortunately I feel tapped out on the current discussions, but I'll offer my two cents here.
I'll get back to the other discussion shortly; it requires more than a two-liner response, so I can't do it now... sorry...
However, that's not to say that we can do everything that evolution can do.
This sounds like a weaker form of Biological Naturalism to me. Are you saying that evolution is the only possible means of producing sapience ?
I suppose a non-epiphenomenalist strong AI is conceivable, and so that objection would vanish.
Right: hypothetically, if souls exist, a soul could inhabit a machine body -- unless God or some other principle prevents it for some theistic reason.
My last objection, and the most serious one, is that I don't think the systems reply is right. I think the virtual-mind reply to Searle's Chinese Room is right, and therefore it raises the problem of how we will know whether we have a p-conscious AI program or a zombie AI program. ... is it not possible that we are wrong when we think that we've created a virtual AI mind when in fact we've just created a zombie AI mind that answers as if it has p-consciousness?
How do you know whether I, Bugmaster, am a real intelligence, or a zombie AI mind that behaves exactly as a real intelligence would ? Is there any practical difference ?

User avatar
harvey1
Prodigy
Posts: 3452
Joined: Fri Nov 26, 2004 2:09 pm
Has thanked: 1 time
Been thanked: 2 times

Post #9

Post by harvey1 »

Bugmaster wrote:
My last objection, and the most serious one, is that I don't think the systems reply is right. I think the virtual-mind reply to Searle's Chinese Room is right, and therefore it raises the problem of how we will know whether we have a p-conscious AI program or a zombie AI program. ... is it not possible that we are wrong when we think that we've created a virtual AI mind when in fact we've just created a zombie AI mind that answers as if it has p-consciousness?
How do you know whether I, Bugmaster, am a real intelligence, or a zombie AI mind that behaves exactly as a real intelligence would ? Is there any practical difference ?
There are a few people on this site that I'm willing to extend the principle of charity to. You are one of them. However, if you were abducted and returned by aliens (assuming it was well documented) and they did some funny, unknown stuff to you, I might withdraw that extension of charity if I saw your eyes flickering and heard your head buzzing and stuff like that...
Bugmaster wrote:This sounds like a weaker form of Biological Naturalism to me. Are you saying that evolution is the only possible means of producing sapience ?
No. It just means that attributing sapience to an entity requires having good reason to do so. We don't have good reason to suppose that the complexity issue is surmountable by us, since our knowledge is not yet at the level needed to make this assumption. In our own case, we have good reason to believe that evolution surmounted this barrier, because we are here. We don't know why or how evolution did this, so we cannot assume that this is a trivial barrier to surmount.

User avatar
ST88
Site Supporter
Posts: 1785
Joined: Sat Jul 03, 2004 11:38 pm
Location: San Diego

Re: Is it possible to build a sapient machine ? Part 2

Post #10

Post by ST88 »

The Chinese Room sounds suspiciously like ELIZA, the old program from the 1960s (widely ported to BASIC in the 1970s). Essentially, it was a precursor to natural-language processing. It scanned entries from a human user for key words and phrases (and sometimes syntax) and brought up appropriate responses, such as a therapist might say. It worked by having a relatively large database of key words and responses writ whole. That is, it didn't put sentences together; it put conversations together via sentence units.
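For those who never ran it, here's a stripped-down sketch of the keyword-matching approach in Python. The patterns and canned replies are invented for illustration, not the original ELIZA script.

[code]
import re

# A stripped-down ELIZA-style responder: scan the input for key words and
# return a canned, therapist-flavoured reply. The patterns and responses
# are invented for illustration, not the original ELIZA script.
RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your mother."),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\byes\b", re.I), "You seem quite certain."),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default when no key word matches

print(respond("I feel nobody understands me"))
[/code]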

The best objection to this is to ask: what makes human sentience so special? Like QED, I suspect that consciousness is a matter of scalar complexity. Moore's Law says (roughly) that the complexity of computers doubles every couple of years. If that holds, how long will we have to wait before computers have as many connections as, if not more than, are present in the brain?
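A rough extrapolation, using round numbers that are assumptions rather than measurements (about 10^14 synapses in the brain, about 10^8 transistors in a current CPU, and a two-year doubling time):

[code]
import math

# Rough Moore's-Law extrapolation. All three numbers are round assumptions
# for illustration, not measurements.
brain_synapses = 1e14          # often-quoted order of magnitude
chip_transistors = 1e8         # a circa-2005 CPU, give or take
doubling_time_years = 2.0      # Moore's Law, roughly

doublings_needed = math.log2(brain_synapses / chip_transistors)
print(f"About {doublings_needed:.0f} doublings, i.e. roughly "
      f"{doublings_needed * doubling_time_years:.0f} years")
[/code]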

Another question is how to define Sentience (or "Sapience", a term I hadn't heard before). If a computer achieved such a thing, how would we know? ELIZA can't carry on a conversation forever, but imagine a computer that really could -- and also could get bored with a topic and start a new one, or refuse to speak. It is possible for a computer to "act" human, and to learn new functions and "behaviors"; I am even prepared to accept that a computer would be able to act fully human given an anatomically correct case. However, what would be the difference between the autonomous robot and the human? I am an I because I can think it. How would that be different from the computer that executes that command after writing it, itself?

So far, computers excel at situations where there are a finite number of outcomes (e.g., chess). It does not take particularly high reasoning skills to excel at such tasks, only amazingly fast and complex calculations. (Kasparov even said that part of his strategy against a human was intimidation, something which can't work against Deep Blue.)

However, looking at the world as a human, in any given moment, there are, in fact, a finite number of things that can happen. Even though this finite number is astronomically large, it would stand to reason that a computer would be able to successfully map outcomes to whatever degree the processor was designed for. Is that sentience? Who would know? We can't be sure that other humans exist the same way we do, because we are alone in our own meat puppet; how could we know about the metal puppets?
Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings forgotten. -- George Orwell, 1984
