Is it possible to build a sapient machine ?

For the love of the pursuit of knowledge


Is it possible to build a sapient machine ?

Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance: to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
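To make the setup concrete, here is a rough Python sketch of how such a trial might be scored. This is purely illustrative -- the human_respond, machine_respond, and examiner_guess functions are placeholders I'm assuming, not anything that exists today.

Code: Select all

import random

def run_trial(human_respond, machine_respond, examiner_guess, questions):
    """One trial: seat the human and the machine at A and B at random,
    show the examiner both transcripts, and check whether E guessed right."""
    seats = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        seats = {"A": machine_respond, "B": human_respond}
    transcripts = {seat: [(q, respond(q)) for q in questions]
                   for seat, respond in seats.items()}
    guess = examiner_guess(transcripts)            # E answers "A" or "B"
    truth = "A" if seats["A"] is machine_respond else "B"
    return guess == truth

def machine_passes(human_respond, machine_respond, examiner_guess,
                   questions, trials=1000):
    """Turing's criterion: the machine passes if examiners cannot spot it
    reliably better than a coin flip over many trials."""
    correct = sum(run_trial(human_respond, machine_respond,
                            examiner_guess, questions)
                  for _ in range(trials))
    return correct / trials < 0.55
If machine_passes comes back True, the examiners are effectively guessing, and Turing's claim is that we then have no grounds for treating B any differently than A.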

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinkie is shattered. Not to worry, though -- an experimental procedure is available, and your pinkie is replaced with a robotic equivalent. It looks, feels, and acts just like your old pinkie, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinkie replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #151

Post by Bugmaster »

harvey1 wrote:Why, do you think anyone you know would look at you odd if you only had 5,000 expressions of pain?
Yes, actually. Here's an example:

Bugmaster: Ow ow my pinkie hurts !
Examiner: How did you hurt your pinkie ?
Bugmaster: I was moving my desk to the opposite corner of the room, and accidentally crushed my pinkie with it. Now it hurts like a bitch.

Chances are, that particular expression of pain wasn't on the list.
harvey1 wrote:
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you, that's why he picks certain responses for the animatronic head when the situation is apt.
That seems like a "yes" answer to me. So, ultimately, the SWAT guy would have to emulate my response to any situation, in order to pass as me. I would argue that a system that acts identically to myself in every situation is, functionally, myself.
harvey1 wrote:No. I'm not talking about a Chinese room, you are.
Your robo-head is the exact same thing. Only you've replaced the I/O slot with the head, the room with a SWAT van, the intern inside with the SWAT guy, and the task of speaking Chinese with the task of feeling pain. You've dressed up the argument in different clothing, but it's still the same argument.

If you disagree, then please show me how your robo-head differs substantially from the Chinese Room, as I have presented it in my opening argument.
harvey1 wrote:You are saying the animatronic head is in pain...
Wrong, I have repeatedly said that the entire system (head, SWAT guy, van, connecting wires, whatever) is in pain. There's a big difference. Read my opening statement, please.


Post #152

Post by harvey1 »

Bugmaster wrote:Here's an example... Chances are, that particular expression of pain wasn't on the list.
So, if a conversation was not included -- just the verbal and physical expressions of pain -- are you saying that intelligent observers cannot be fooled? This seems to contradict what you said prior, where you acknowledged that an animatronic system could fool us into believing it is in pain. Which is it? Can the animatronic system fool observers, or can't it?
Bugmaster wrote:
harvey1 wrote:
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you, that's why he picks certain responses for the animatronic head when the situation is apt.
That seems like a "yes" answer to me. So, ultimately, the SWAT guy would have to emulate my response to any situation, in order to pass as me. I would argue that a system that acts identically to myself in every situation is, functionally, myself.
Are you saying the SWAT guy is in pain just because he pushes button "4878"? If so, then why must the SWAT guy be in pain in order to push that button?
Bugmaster wrote:
No. I'm not talking about a Chinese room, you are.
Your robo-head is the exact same thing. Only you've replaced the I/O slot with the head, the room with a SWAT van, the intern inside with the SWAT guy, and the task of speaking Chinese with the task of feeling pain. You've dressed up the argument in different clothing, but it's still the same argument. If you disagree, then please show me how your robo-head differs substantially from the Chinese Room, as I have presented it in my opening argument.
I fail to see the similarities. For one, pain is a feeling and the Chinese room does not talk about the cause of feelings, it is based on what it means to understand something. You already admitted that the animatronic head does not feel pain, but the analogy of a Chinese room argument seems to say that "something" does feel pain--which you say is the "system." How can a system of mechanical parts feel pain? Are you saying that factories, companies, and organizations actually feel pain even though they lack neural connections? Does the term "feeling" have any real meaning to you?
Bugmaster wrote:
You are saying the animatronic head is in pain...
Wrong, I have repeatedly said that the entire system (head, SWAT guy, van, connecting wires, whatever) is in pain. There's a big difference. Read my opening statement, please.
Okay, BM, why do you think the entire system feels pain? Earlier you said that the AI community has not yet discovered the algorithms for the feeling of pain. The system suggested here is rather simple (head, SWAT guy, van, wireless connection, mechanical stuff inside the head), so what mysterious algorithm is present in this rather simple architecture that has eluded the AI community?


Post #153

Post by Bugmaster »

harvey1 wrote:So, if a conversation was not included -- just the verbal and physical expressions of pain -- are you saying that intelligent observers cannot be fooled? This seems to contradict what you said prior, where you acknowledged that an animatronic system could fool us into believing it is in pain.
Um, conversation was always included in all of my examples. So, yes, the animatronic system could "fool" us into believing it is in pain, through conversation. I see no contradiction.
harvey1 wrote:Are you saying the SWAT guy is in pain just because he pushes button "4878"? If so, then why must the SWAT guy be in pain in order to push that button?
This is the same mistake that Searle makes; see below.
harvey1 wrote:I fail to see the similarities. For one, pain is a feeling and the Chinese room does not talk about the cause of feelings, it is based on what it means to understand something.
Hm, I guess I was not clear enough in my opening argument (you did read it, right ?). Searle claims that the Chinese Room does not understand Chinese, even though it appears to speak it; you claim that your robo-head does not feel pain, even though it appears to express it.

Let's compare the room and your robo-head:

Code: Select all

Component    | Chinese Room             | Robo-head
-------------+--------------------------+-------------------------
Input/Output | Notecards through a slot | Animatronic head
Algorithm    | Rulebook                 | Table of responses
CPU          | Hapless intern           | SWAT guy
Tests for:   | Understanding of Chinese | Internal feeling of pain
Appears to:  | Speak Chinese            | Feel pain
-------------+--------------------------+-------------------------
Searle claims that the intern does not understand Chinese, although he appears to speak it; you claim that the SWAT guy does not feel pain, although the robo-head appears to express it. Both of you are right, and both of you are missing the same point: it's the entire system (I/O, CPU, memory, etc.) that speaks Chinese, or feels pain, not any particular component of it. Similarly, you need your entire computer to play Quake, you can't play it on the naked CPU alone.

The only difference between Searle's Chinese Room and yours is that your room is powered by a flat table of responses, whereas Searle's Chinese Room is powered by a read/write rulebook where some rules call for rewriting the rulebook. Searle actually used a flat table such as yours in an earlier version of his example, but he has since abandoned it -- probably because he realized that not even simple computer programs use a fixed lookup table.
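To make that difference concrete, here is a toy Python sketch. It is illustrative only -- every name in it is made up -- but it shows the gap: a flat table can only ever map a stimulus to one canned response, while even a trivially stateful responder can do something no fixed table can, such as remembering what just happened.

Code: Select all

# A flat lookup table: the same stimulus always gets the same canned response,
# and nothing that happens can ever change the table.
PAIN_TABLE = {
    "pinch": "Ow!",
    "desk dropped on pinkie": "Ow ow, my pinkie hurts !",
}

def table_respond(stimulus):
    return PAIN_TABLE.get(stimulus, "...")  # blank stare at anything unlisted

# A still-trivial but stateful responder: it keeps a history, so its answers
# depend on what came before -- the kind of thing a read/write rulebook allows.
class StatefulResponder:
    def __init__(self):
        self.history = []

    def respond(self, stimulus):
        self.history.append(stimulus)
        if stimulus == "what did I just say?":
            if len(self.history) > 1:
                return "You said: %r" % self.history[-2]
            return "You haven't said anything yet."
        if self.history.count(stimulus) > 3:
            return "Stop it, that's getting old."
        return PAIN_TABLE.get(stimulus, "Hm, that's new.")
Ask table_respond "what did I just say?" and you get the same blank stare every time; the stateful responder's answer depends on the conversation so far, which is exactly what a fixed table cannot do.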
harvey1 wrote:How can a system of mechanical parts feel pain?
In the same way that a system of biological parts (i.e., humans) feels pain. Remember, I don't believe in souls, so I don't think that humans are fundamentally different from any other machine.
harvey1 wrote:Are you saying that factories, companies and organizations actually feel pain even though they lack neural connections?
I don't know, do factories, companies and organizations act as though they are in pain (literally, not metaphorically) ?
harvey1 wrote:Okay, BM, why do you think the entire system feels pain?
For the same reason I think humans feel pain: because it acts as though it does.
harvey1 wrote:Earlier you said that the AI community has not yet discovered the algorithms for the feeling of pain. The system suggested here is rather simple (head, SWAT guy, van, wireless connection, mechanical stuff inside the head)...
I challenge you to write a lookup table for the SWAT guy that would allow him to convince most humans that the SWAT/head system is conscious. I claim that you cannot do this, regardless of how "simple" it seems to you. To prove me wrong, show your work :-)


Post #154

Post by harvey1 »

Bugmaster wrote:Um, conversation was always included in all of my examples. So, yes, the animatronic system could "fool" us into believing it is in pain, through conversation. I see no contradiction.
Why must a person in pain speak elaborate sentences to convince you they are in pain? Also, this seems like another contradiction:
Bugmaster wrote:conversation was always included in all of my examples... So, yes, the animatronic system could "fool" us into believing it is in pain, through conversation... I challenge you to write a lookup table for the SWAT guy that would allow him to convince most humans that the SWAT/head system is [in pain]. I claim that you cannot do this, regardless of how "simple" it seems to you.
If this cannot be done then why did you say the animatronic head could fool us that it is in pain?
Bugmaster wrote:
Are you saying the SWAT guy is in pain just because he pushes button "4878"? If so, then why must the SWAT guy be in pain in order to push that button?.. Okay, BM, why do you think the entire system feels pain?
This is the same mistake that Searle makes; see below... you claim that the SWAT guy does not feel pain, although the robo-head appears to express it... it's the entire system (I/O, CPU, memory, etc.) that.. feels pain, not any particular component of it... [a system of mechanical parts feel pain] in the same way that a system of biological parts (i.e., humans) feels pain... [The entire system feels pain] for the same reason I think humans feel pain: because it acts as though it does.
Okay, so the SWAT guy doesn't feel pain, you've already admitted that the animatronic head does not feel pain. The only equipment that is not part of the animatronic head is a few bits sent by an RF signal from the SWAT van. What is the mysterious algorithm that you said that the AI community was still some distance from understanding? Why does pushing button "4878" create pain for gears and mechanical devices? What is the algorithm of pain that the AI community has failed to find, but which you appear to have found in this thought experiment?
Bugmaster wrote:Searle claims that the Chinese Room does not understand Chinese, even though it appears to speak it; you claim that your robo-head does not feel pain, even though it appears to express it...
Searle claims that the intern does not understand Chinese, although he appears to speak it; you claim that the SWAT guy does not feel pain, although the robo-head appears to express it. Both of you are right, and both of you are missing the same point: it's the entire system (I/O, CPU, memory, etc.) that speaks Chinese, or feels pain, not any particular component of it. Similarly, you need your entire computer to play Quake, you can't play it on the naked CPU alone.
So, the Chinese room is discussing feelings? The Chinese room is not about "understanding" as an abstract concept in terms of how meaning is attached to symbols? Also, what does the Chinese room have to do with your Turing test argument? Why does the Systems Reply require a belief in your Turing test argument for pain? I don't see the correlation here.
Bugmaster wrote:I challenge you to write a lookup table for the SWAT guy that would allow him to convince most humans that the SWAT/head system is conscious. I claim that you cannot do this, regardless of how "simple" it seems to you. To prove me wrong, show your work
Why does the SWAT guy need a look-up table? He's very familiar with all 5,000 buttons and can push them instantly whenever he thinks that this is the reaction which would fool those around the animatronic head.


Post #155

Post by Bugmaster »

harvey1 wrote:Why must a person in pain speak elaborate sentences to convince you they are in pain?
I don't know, can you convince me, right now, that you feel pain ? Without conversing ? I doubt it.
harvey1 wrote:
Bugmaster wrote:I challenge you to write a lookup table for the SWAT guy that would allow him to convince most humans that the SWAT/head system is [in pain]. I claim that you cannot do this, regardless of how "simple" it seems to you.
If this cannot be done then why did you say the animatronic head could fool us that it is in pain?
Because lookup tables are an incredibly poor programming tool. We have better ones at our disposal, thanks to Dijkstra et al.

I never claimed that creating a Strong AI is impossible; I merely claimed that it's impossible if a lookup table is your only tool. I also claim that it is a very difficult problem that neither you, Harvey, nor I, Bugmaster, will be able to solve. But that doesn't make it impossible to solve, because there are tons of people out there who are a lot smarter than you and me put together, working on this problem as we speak.
harvey1 wrote:Okay, so the SWAT guy doesn't feel pain, you've already admitted that the animatronic head does not feel pain.
I think you might be confused again, so let me recap:
Bugmaster wrote:1). K-bot: cannot carry on a conversation. Classification: Not human.
2). K-1000, a.k.a. Strong AI: can carry on a conversation as well as humans can. Classification: Human.
3). Stephen Hawking: a system consisting of robotic and human parts which can carry on a conversation as well as humans can. Classification: Human.
With that out of the way, let's proceed:
harvey1 wrote:The only equipment that is not part of the animatronic head is a few bits sent by an RF signal from the SWAT van. What is the mysterious algorithm that you said that the AI community was still some distance from understanding?
It's whatever algorithm the SWAT guy is executing, by looking at his rulebook, or his secret SWAT re-writable videotape, or whatever, in order to push the right buttons in response to the right stimuli. As I said earlier, your lookup table approach won't work, but that's just because lookup tables are not a very good programming tool.
harvey1 wrote:So, the Chinese room is discussing feelings? The Chinese room is not about "understanding" as an abstract concept in terms of how meaning is attached to symbols?
That was precisely Searle's argument: human brains can attach meaning to symbols, computers cannot, therefore computers will never become sentient. However, I do not believe that "symbols" or "meanings" exist as dualistic entities; therefore, the syntax vs. semantics dichotomy doesn't impress me. Furthermore, to answer your next question:
harvey1 wrote:Also, what does the Chinese room have to do with your Turing test argument? Why does the Systems Reply require a belief in your Turing test argument for pain? I don't see the correlation here.
It kind of saddens me that you ask this question, because it means that you haven't read my opening argument:
Bugmaster's Opening Statement wrote:This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). ...

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
The Turing Test argument is just that: an argument. It does not require faith; it's a simple thought experiment (which can be performed as a real experiment as soon as someone builds an AI that can behave as humans do). The entire point of the Turing Test is that you do not have a consciousness detector. You don't know what your fellow humans are thinking inside their heads -- especially not when you're talking over the phone, or typing online. Words are the only evidence you have for a person's sentience; thus, you must either deny humanity to everyone, or grant humanity to anyone who seems human. In your previous posts, you have chosen to deny humanity to everyone, which I think is absurd.

One sure-fire way to defeat the Turing Test argument is to create a consciousness detector. I eagerly await its invention.

Another sure-fire way to defeat the Turing Test argument is to prove that it is logically impossible for anything but a human brain to exhibit consciousness; you have actually denied this in the past, so we at least agree on this point.

A third way is to simply use faith, claim that humans are uniquely capable of having souls (or consciousness, or minds, or whatever), and leave it at that. Faith-based arguments are true by definition, but are notoriously unpersuasive.
harvey1 wrote:Why does the SWAT guy need a look-up table? He's very familiar with all 5,000 buttons and can push them instantly whenever he thinks that this is the reaction which would fool those around the animatronic head.
How does he know which button to push ? Is there something that tells him, "push button N when sensor X is activated" ? Something such as... a table ?


Post #156

Post by harvey1 »

Bugmaster wrote:
Okay, so the SWAT guy doesn't feel pain, you've already admitted that the animatronic head does not feel pain.
It's whatever algorithm the SWAT guy is executing, by looking at his rulebook, or his secret SWAT re-writable videotape, or whatever, in order to push the right buttons in response to the right stimuli. As I said earlier, your lookup table approach won't work, but that's just because lookup tables are not a very good programming tool... How does he know which button to push ? Is there something that tells him, "push button N when sensor X is activated"? Something such as... a table?
As I said, the SWAT guy memorizes the function of each button, and has become intimately familiar with each button. When the SWAT guy presses a particular button he does so because he knows that the animatronic head will show an emotion of pain that will fool the person witnessing the agonizing pain that the head is in. The button sends a few bits of RF data to the chipsets inside the animatronic head, which causes the mechanical reaction that the SWAT guy intended.

Now, the SWAT guy is not in pain. If he was executing a pain algorithm as you suggest, then he would feel pain--but he doesn't. So, why are you suggesting something absurd such as the animatronic head feels pain? It seems that you are ideologically so committed to the Turing Test that you cannot see that the animatronic head is a mechanical device.


Post #157

Post by Bugmaster »

harvey1 wrote:As I said, the SWAT guy memorizes the function of each button, and has become intimately familiar with each button.
What does "intimately familiar" mean ? I don't think you understand the difference between lookup tables and stateful algorithms, so let's play a little game.

Let's try simulating something really simple, such as a dog (which is a lot simpler than a human). Give me a list of buttons, along with the stimuli that activate these buttons. Your buttons will control a robo-dog, similar to the robo-head (maybe it's Sony's AIBO 2.0). I'm looking for something like this:

Code: Select all

Stimulus            |  Button | Response
--------------------+---------+----------
 Kicked in the head |    A    |   Yelp
 Kicked in the shin |    B    |   Stagger
   ...
After you give me this table, I will attempt to devise a series of stimuli which will cause your robo-dog to act in a non-doglike manner, thus exposing its robotic nature.
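For instance, here is one trivial probe, sketched in Python. It is illustrative only -- robo_dog_respond stands in for whatever behavior your button table drives.

Code: Select all

def looks_like_a_lookup_table(robo_dog_respond, stimulus="kicked in the head",
                              repeats=10):
    """A real dog's reaction to the tenth kick (cowering, fleeing, snapping)
    is not the same as its reaction to the first; a stateless table returns
    the exact same response every single time."""
    responses = [robo_dog_respond(stimulus) for _ in range(repeats)]
    return len(set(responses)) == 1  # True => the behavior never varies
Any probe that depends on memory or history will do, because a fixed stimulus-to-response table has neither.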
harvey1 wrote:When the SWAT guy presses a particular button he does so because he knows that the animatronic head will show an emotion of pain that will fool the person witnessing the agonizing pain that the head is in. ...
Now, the SWAT guy is not in pain. If he was executing a pain algorithm as you suggest, then he would feel pain--but he doesn't.
No, that's incorrect. What you have just told me is equivalent to "The intern inside the Chinese room does not understand Chinese", or "My CPU is not playing Quake all by itself". This is true, but irrelevant, because we're evaluating the entire system (guard+head+rulebook, room+intern+rulebook, CPU+RAM+GPU+HD, etc.), and the entire system does exhibit the characteristic we want (feeling pain, understanding Chinese, running Quake).

More specifically, I have never claimed that the SWAT guy feels pain in your scenario, so why do you keep bringing it up ?

I think the problem here is that you are unable to see the SWAT guy as just a component in the system, as opposed to being the entirety of the system. You are so used to your notion that only humans can be sentient, that when you see a human anywhere in the system, you automatically assume that everything else is irrelevant. This might make sense in your worldview, but it does not make sense in mine.


Post #158

Post by harvey1 »

Bugmaster wrote:More specifically, I have never claimed that the SWAT guy feels pain in your scenario, so why do you keep bringing it up?
Because I just want to make it clear that if the feeling of pain is present, it is not present "in" the SWAT guy, or "in" the SWAT van, or "in" the radio frequency signal, or "in" the mechanical assembly of the animatronic head. Now you say the answer lies in the Systems Reply, but even the Systems Reply says that the effect occurs "in" the brain, right? So, where does this feeling of pain occur?
Bugmaster wrote:After you give me this table, I will attempt to devise a series of stimuli which will cause your robo-dog to act in a non-doglike manner, thus exposing its robotic nature.
But, you won't get that table (that the SWAT guy has memorized by heart) since this is behind the metaphysical curtain. We agreed that all you have is behavior to judge whether something feels pain. So, 5000 different behaviors of pain would convince anyone that someone has pain. However, you are attributing pain to a purely mechanical device that in principle is no more sophisticated than a toaster. That is, it operates on the same mechanical principles that allow all mechanical devices to operate. Are you saying that all mechanical devices experience the feeling of pain?
Bugmaster wrote:No, that's incorrect... This is true, but irrelevant
Okay, how is that not a contradiction? How can something be incorrect and true at the same time?
Bugmaster wrote:we're evaluating the entire system (guard+head+rulebook, room+intern+rulebook, CPU+RAM+GPU+HD, etc.), and the entire system does exhibit the characteristic we want (feeling pain, understanding Chinese, running Quake).
Yes, the mechanical assembly takes the bits of information sent over radio frequency, and produces certain mechanical effects. Think of it like a factory arm that places products inside a package on a conveyor belt. How does that translate into the animatronic head feeling pain? You said at one time that the feeling of pain is an unknown algorithm. What is the algorithm that we can easily identify in this scenario (assuming it exists) that lets us say, without any scientific evidence whatsoever, that this machine feels the withering pain that its wheels and pulleys are imitating at the push of a button?
Bugmaster wrote:I think the problem here is that you are unable to see the SWAT guy as just a component in the system, as opposed to being the entirety of the system. You are so used to your notion that only humans can be sentient, that when you see a human anywhere in the system, you automatically assume that everything else is irrelevant. This might make sense in your worldview, but it does not make sense in mine.
You are misinterpreting the Systems Reply by not offering an "in" location for the feeling of pain. Even the Systems Reply says that the "in" location of pain is in the brain, not across the street in some street sign.


Post #159

Post by Bugmaster »

harvey1 wrote:Because I just want to make it clear that if the feeling of pain is present, it is not present "in" the SWAT guy, or "in" the SWAT van, or "in" the radio frequency signal, or "in" the mechanical assembly of the animatronic head.
Bingo !
harvey1 wrote:Now you say the answer lies in the Systems Reply, but even the Systems Reply says that the effect occurs "in" the brain, right?
What brain ? All we have here is a Chinese Room with an intern inside. His brain is just part of the system, just as your CPU is only a part of your computer (though an important one).
harvey1 wrote:So, where does this feeling of pain occur?
This is really the wrong question to ask, because I never claimed to know where the feeling of pain occurs in anything, even in humans -- because I don't believe that the feeling of pain is this spiritual thing that's separate from our bodies. I do not believe that there's this event called "pain", and there's this agent called "self" that observes the event and manufactures the "feeling of pain"; to me, pain is just another process.

The right question to ask is, "are we justified in believing that the system feels pain/is conscious ?" My answer is "yes", because the alternative is to deny humanity to everyone, even humans, as you have done in the past.
harvey1 wrote:But, you won't get that table (that the SWAT guy has memorized by heart) since this is behind the metaphysical curtain. We agreed that all you have is behavior to judge whether something feels pain. So, 5000 different behaviors of pain would convince anyone that someone has pain...
The whole reason I've asked you to give me the lookup table is to demonstrate why I believe that a lookup table -- "5000 different behaviors !" -- is not sufficient to simulate even a dog, let alone a human being. It is a straw-man argument.
harvey1 wrote:However, you are attributing pain to a purely mechanical device that in principle is no more sophisticated than a toaster.
Oh, I think that even TRS-80s are a lot more sophisticated than toasters. Modern computers are more sophisticated than TRS-80s, and human brains are more sophisticated than modern computers (though the gap is narrowing). What's your point ? If you are claiming that a human brain is categorically different from a machine, then you need to show me how that difference can be observed. Otherwise, you're just multiplying entities without need.
harvey1 wrote:That is, it operates on the same mechanical principles that allow all mechanical devices to operate. Are you saying that all mechanical devices experience the feeling of pain?
That's a good question. I think I agree (*) with QED's point of view on this subject: consciousness (or pain, whatever) is not a boolean thing, but a continuum. So, all mechanical devices (including human brains) feel pain, just to different degrees. Keep in mind, though, that when I say "X feels pain", I'm merely describing my model of X. I don't believe that there's an actual, dualistic pain for X to feel.

(*) Originally, I wanted to answer "no" to this question, but QED's examples with the thermostat were very persuasive.
Bugmaster wrote:No, that's incorrect... This is true, but irrelevant
Okay, how is that not a contradiction? How can something be incorrect and true at the same time?
Your reasoning is incorrect. The conclusion you state is true, but irrelevant, and does not prove your argument.
harvey1 wrote:Yes, the mechanical assembly takes the bits of information sent over radio frequency, and produces certain mechanical effects. Think of it like a factory arm that places products inside a package on a conveyor belt. How does that translate into the animatronic head feeling pain?
I don't know, how do chemical reactions and electric signals translate into the feeling of pain for humans ? To me, both this question and the question you ask sound kind of absurd. The mechanical (or electrochemical) effects are the feeling of pain, from my viewpoint.

However, I actually don't think it matters. Let's pretend that dualism is true, for a moment. How would you know whether the AI talking to you is only simulating consciousness, or whether it is conscious ? If there's no way for you to tell the difference, then the difference is irrelevant.


harvey1 wrote:You said at one time that the feeling of pain is an unknown algorithm. What is the algorithm that we can easily identify in this scenario (assuming it exists) that lets us say, without any scientific evidence whatsoever, that this machine feels the withering pain that its wheels and pulleys are imitating at the push of a button?
harvey1 wrote:You are misinterpreting the Systems Reply by not offering an "in" location for the feeling of pain. Even the Systems Reply says that the "in" location of pain is in the brain, not across the street in some street sign.
Sorry, I don't know which Systems Reply you're reading. The one I've read, and, more importantly, the one I've written down in my opening statement (please ! read it already !), does state where the "feeling of pain" occurs -- it occurs in the entire system. Remember, the Chinese Room doesn't have a brain; it has pencils and rulebooks and interns and notecards and such.


Post #160

Post by harvey1 »

Bugmaster wrote:What brain? All we have here is a Chinese Room with an intern inside. His brain is just part of the system, just as your CPU is only a part of your computer (though an important one).
The brain of the person having conscious thought. That's what the Chinese Room is about. You're in effect saying that the feeling of pain does not occur in the brain but can occur with non-embodied minds. Is that what you believe, that non-embodied minds exist once we start making animatronic heads that mechanically imitate expressions of pain?
Bugmaster wrote:I never claimed to know where the feeling of pain occurs in anything, even in humans -- because I don't believe that the feeling of pain is this spiritual thing that's separate from our bodies.
But, this is what you are implying by saying the feeling of pain actually occurs in this thought experiment, but in this case it occurs over a geographic location. When humans feel pain, it is the human that is feeling the pain. In this thought experiment, who in your opinion is feeling the pain? (BM, why must you go to these levels to maintain your beliefs? I don't get it.)
Bugmaster wrote:The whole reason I've asked you to give me the lookup table is to demonstrate why I believe that a lookup table -- "5000 different behaviors !" -- is not sufficient to simulate even a dog, let alone a human being. It is a straw-man argument.
5000 behaviors of the feeling of pain are sufficient to convince someone that an individual (e.g., the animatronic head) is in pain. Your response was, in effect, "give me the table so I can figure out how to defeat the animatronic head's convincing display of pain," but we've already agreed that we can only go by responses, not by what is happening internally (i.e., the premise of the Turing Test). So, here's what it boils down to. If you are fooled because you lack the information from this table, then something in that geographical location feels pain. If you secretly get the table info that the SWAT guy has memorized, then are you saying there is no longer a feeling of pain going on? Doesn't that strike you as absurd?
Bugmaster wrote:
That is, it operates on the same mechanical principles that allow all mechanical devices to operate. Are you saying that all mechanical devices experience the feeling of pain?
That's a good question. I think I agree (*) with QED's point of view on this subject: consciousness (or pain, whatever) is not a boolean thing, but a continuum. So, all mechanical devices (including human brains) feel pain, just to different degrees. Keep in mind, though, that when I say "X feels pain", I'm merely describing my model of X. I don't believe that there's an actual, dualistic pain for X to feel.
But, if you use the Turing Test to establish that something indeed feels pain, then you must also say that the machine feels the same intensity of pain as we do, since you said that the algorithm must be the same if the expressions are the same, right? Are you saying that machines that have been mechanically assembled to duplicate 5000 different expressions of pain have 5000 different feelings of pain, with exactly the intensity that each expression indicates?
Bugmaster wrote:The mechanical (or electrochemical) effects are the feeling of pain, from my viewpoint.
But, you said that the feeling of pain was based on an unknown algorithm that has yet to be discovered. Are you now saying this is incorrect?
