Topic
Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance to resolve this debate once and for all ! Smoke 'em if you got 'em, people; this post is gonna be a long one.
I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.
First, let me go over some of the arguments in favor of my position.
Pro: The Turing Test
Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.
Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).
So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
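The examiner's protocol above can be written down as a toy simulation. This is only a sketch with hypothetical stand-ins: both "subjects" here are trivial scripts that answer identically on purpose, and the examiner has no distinguishing strategy.

```python
import random

def run_trial(subject_a, subject_b, examiner, questions):
    """One trial: the examiner reads both transcripts and guesses which
    subject is the computer. Returns True if the guess is correct."""
    transcript_a = [subject_a(q) for q in questions]
    transcript_b = [subject_b(q) for q in questions]
    # By convention in this sketch, subject A is the computer.
    return examiner(transcript_a, transcript_b) == "A"

def clueless_examiner(transcript_a, transcript_b):
    # If the transcripts are indistinguishable, guessing is all E can do.
    return random.choice(["A", "B"])

questions = ["How are you?", "What did you dream about last night?"]
computer = lambda q: "Hmm, let me think about: " + q.lower()
human    = lambda q: "Hmm, let me think about: " + q.lower()  # same behaviour on purpose

trials = 10_000
correct = sum(run_trial(computer, human, clueless_examiner, questions)
              for _ in range(trials))
# Turing's criterion: an examiner who can't tell the subjects apart
# is right only about half the time.
print(correct / trials)
```

Turing's point falls out of the numbers: once the examiner's success rate drops to chance, there is no observable fact left that separates the two subjects.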
Pro: The Reverse Turing Test
I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.
Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.
Are you any less human than you were before the treatment ?
Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you any less human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you still human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?
Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.
Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.
(to be continued below)
Is it possible to build a sapient machine ?
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Re: Is it possible to build a sapient machine ? Part 2
Post #11
ST88 wrote:Like QED, I would suspect that consciousness is a matter of scalar complexity.
Zero times a million zeros is still zero. So, although waving the wand of complexity might give us reason to believe that p-consciousness can be accounted for, it doesn't actually explain it. I think it is important that we point that out.
ST88 wrote:It is possible for a computer to "act" human, and to learn new functions and "behaviors"; I am even prepared to accept that a computer would be able to act fully human given an anatomically correct case. However, what would be the difference between the autonomous robot and the human?
Qualia.
ST88 wrote:How would that be different from the computer that executes that command after writing it, itself?
A computer processes commands using its instruction set, and there is nothing in the instruction set that attributes qualia to a machine, so the assumption is that it is not there. It would be magical for that property to exist in a computer without a logical cause for it to exist.
ST88 wrote:So far, computers excel at situations where there are a finite amount of outcomes (e.g., chess). It does not take particularly high reasoning skills to excel at such tasks, only amazingly fast and complex calculations. (Kasparov even said that part of his strategy with a human was intimidation, something which can't work against Deep Blue.)
Sure, but we know why computers can do this. It has to do with the electrical properties of certain materials that allow for memory, conduction of electrical current, resistance of electrical current, etc. When you talk about human qualia, however, there is no reason to believe this exists inside a computer any more than there is reason to suspect that the earth is conscious, or the galaxy is conscious. Just having lots of complex stuff going on is not good reason to believe that a computer can attain p-consciousness unless we can show how this is possible.
ST88 wrote:...it would stand to reason that a computer would be able to successfully map outcomes to whatever degree the processor was designed for. Is that sentience?
No. Basically, a computer is not doing anything much more complex than an abacus. Is an abacus a sentient device? If not, then why not?
ST88 wrote:Who would know? We can't be sure that other humans exist the same way we do, because we are alone in our own meat puppet; how could we know about the metal puppets?
Knowledge is a relative term. When we use the term "know" we often mean to say that our knowledge is a justified belief (i.e., based on following certain well-established and well-functioning rules; e.g., logic, the scientific method, trustworthy eyewitnesses, etc.). Can we say that we are justified in believing that humans exist having the same p-conscious mindsets as us? If we can't, then the question reverts to, "what does it mean to know if we aren't justified in that very basic belief about other people?" As you can see, our justified set of beliefs is based on such charitable attributions of others being like ourselves in significant respects (including the fact that there are other people, etc.), so in that sense we know that other people have p-consciousness like ourselves. Now, do we have the same level of justification that abacuses are not p-conscious? I think so. We have no philosophical justification from first principles that abacuses should have consciousness, nor do we have any personal experiences of abacuses sharing a fond moment with us, etc. So, we are justified in believing that abacuses, computers, planets, galaxies, and most other inanimate objects in the universe (acting in a collective) are not p-conscious. That is, we know it.
- Dilettante
- Sage
- Posts: 964
- Joined: Sun Dec 19, 2004 7:08 pm
- Location: Spain
Post #12
I haven't given the topic of AI much thought, so I may be off base here, but I'll offer my two cents (more like one cent perhaps).
Aren't we forgetting that intelligence is an organic thing? We may one day be able to create a computer so powerful that it can fool some people some of the time and pass a few Turing tests, but it would still be plastic and screws and electrical circuits, not flesh and bone. Being human includes having a human body, in my view.
Re: Is it possible to build a sapient machine ? Part 2
Post #13
ST88 wrote:Like QED, I would suspect that consciousness is a matter of scalar complexity.
harvey1 wrote:Zero times a million zeros is still zero. So, although waving the wand of complexity might give us reason to believe that p-consciousness can be accounted for, it doesn't actually explain it. I think it is important that we point that out.
So pointed. But until we can define consciousness in humans, we can't discount the possibility that it is present in non-human objects.
ST88 wrote:It is possible for a computer to "act" human, and to learn new functions and "behaviors"; I am even prepared to accept that a computer would be able to act fully human given an anatomically correct case. However, what would be the difference between the autonomous robot and the human?
harvey1 wrote:Qualia.
That describes the problem, but it doesn't solve it. That there is something which we might call qualia is only evident upon self-reporting. We can't dive into someone else's mind and prove that it exists; we can only assume it does because others report similar mental experiences. My point was that an inner life might be mechanically simulated to such an extent that there would be no way to distinguish human from robot.
ST88 wrote:How would that be different from the computer that executes that command after writing it, itself?
harvey1 wrote:A computer processes commands using its instruction set, and there is nothing in the instruction set that attributes qualia to a machine, so the assumption is that it is not there. It would be magical for that property to exist in a computer without a logical cause for it to exist.
Why can't "qualia" be programmed into the machine? Presumably, we could program a machine with the same mental abilities as a newborn human child: learning capacity, reasoning skills, etc. After all, qualia is not a learned skill; it is an inherent trait. Why wouldn't this be possible to program? If it is possible to program a computer to mimic the OUTWARD features of human behavior, why wouldn't it be possible to program the INNER life of one?
ST88 wrote:So far, computers excel at situations where there are a finite amount of outcomes (e.g., chess). It does not take particularly high reasoning skills to excel at such tasks, only amazingly fast and complex calculations. (Kasparov even said that part of his strategy with a human was intimidation, something which can't work against Deep Blue.)
harvey1 wrote:Sure, but we know why computers can do this. It has to do with the electrical properties of certain materials that allow for memory, conduction of electrical current, resistance of electrical current, etc. When you talk about human qualia, however, there is no reason to believe this exists inside a computer any more than there is reason to suspect that the earth is conscious, or the galaxy is conscious.
Why, harvey1, you are a mechano-qualia atheist! Without the least knowledge of the process or the mechanics involved, you are prepared to disbelieve! But seriously, we know what the brain is made of; we know, roughly, its structure, its hardware. We just don't know how it works. Really, all we can assume is that it has to do with how complex the connections are. For a materialist like myself, this is not a problem. Consciousness is just an expression of the connections.
ST88 wrote:...it would stand to reason that a computer would be able to successfully map outcomes to whatever degree the processor was designed for. Is that sentience?
harvey1 wrote:No. Basically, a computer is not doing anything much more complex than an abacus. Is an abacus a sentient device? If not, then why not?
Again, it is a matter of scale. Obviously, an abacus is not sentient. Neither is a single brain cell.
And, I have to differ with you slightly. A computer operates millions and millions of abaci at once. A cell is made up of millions and millions of elementary particles. Where is the distinction?
ST88 wrote:Who would know? We can't be sure that other humans exist the same way we do, because we are alone in our own meat puppet; how could we know about the metal puppets?
harvey1 wrote:Knowledge is a relative term. When we use the term "know" we often mean to say that our knowledge is a justified belief (i.e., based on following certain well-established and well-functioning rules; e.g., logic, the scientific method, trustworthy eyewitnesses, etc.). Can we say that we are justified in believing that humans exist having the same p-conscious mindsets as us? If we can't, then the question reverts to, "what does it mean to know if we aren't justified in that very basic belief about other people?" As you can see, our justified set of beliefs is based on such charitable attributions of others being like ourselves in significant respects (including the fact that there are other people, etc.), so in that sense we know that other people have p-consciousness like ourselves.
Well now, of course we can make that claim with other humans, because we identify them as other humans. We recognize ourselves as the same because we share common characteristics. Therefore we can justifiably suspect that other human minds behave as ours do. HOWEVER, given another device, another object that behaves the same way but does not share our morphology, how can we be sure what is going on? My point was that Cartesian identity only goes as far as the brain -- I can only state with absolute certainty that I exist. Justifying my belief in other similar beings is not much of a problem -- but it is still a problem. Now how much more of a problem is it when those other beings do not share my physical characteristics?
Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings forgotten. -- George Orwell, 1984
Post #14
harvey1 wrote:There's a few people on this site that I'm willing to extend the principle of charity to. You are one of them. However, if you were abducted and returned by aliens (assuming well-documented) and they did some funny, unknown stuff to you; I might withdraw that extension of charity if I saw your eyes flickering and heard your head buzzing and stuff like that...
I'm flattered, of course, but I think your reasoning is flawed.
Firstly, why are you being charitable to me, and not anyone else ? Why do you think that I'm likely to be human, but other posters here are not ?
Secondly, why does it matter whether my head is buzzing or not ? Again, this sounds like biological naturalism. What about people with fake glass eyes, or pacemakers, or anti-epileptic implants ? Where do you draw the line ?
harvey1 wrote:We don't have good reason to suppose that the complexity issue is surmountable by us since our knowledge is not at that level to make this assumption.
Well, I would of course say that we have good reasons to believe that the complexity issue is surmountable, given that we've surmounted a great deal of it already (Google, Big Blue, etc.). But I understand that you believe in dualistic properties which may make Strong AI impossible a priori; that's a topic for the other thread, though.
harvey1 wrote:We don't know why or how evolution did this...
Er, natural selection would be my first guess. We can actually observe a continuum of intelligence in the animals we have today; turtles are pretty stupid, whereas dolphins and humans are pretty smart, and dogs are somewhere in between.
Re: Is it possible to build a sapient machine ? Part 2
Post #15
ST88 wrote:The Chinese Room sounds suspiciously like the old Basic program from the 1970s called ELIZA. ...
It was actually a LISP program, I think, but you're right. ELIZA would not pass the Turing Test today, but it's actually powerful enough to keep some people occupied for a while.
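For anyone who hasn't seen it, the trick behind ELIZA is just pattern matching with canned templates -- no understanding anywhere. Here is a minimal sketch of the idea (the rules are hypothetical simplifications, not the original DOCTOR script):

```python
import re

# Each rule: a pattern to look for, and a template that echoes the match back.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance: str) -> str:
    """Return the first matching canned response, or a default deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(eliza("I am sad about my job"))  # -> Why do you say you are sad about my job?
print(eliza("The weather is nice"))    # -> Please go on.
```

A handful of rules like these is enough to keep some people chatting for a surprisingly long time, which says more about conversational charity than about the program.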
Post #16
Dilettante wrote:Aren't we forgetting that intelligence is an organic thing? We may one day be able to create a computer so powerful that it can fool some people some of the time and pass a few Turing tests, but it would still be plastic and screws and electrical circuits, not flesh and bone. Being human includes having a human body, in my view.
Right, so, just as Searle and harvey1, you're saying that only brains can be intelligent (or sapient, or sentient, if you prefer). You then point out that, since a computer is not a brain, it cannot be intelligent.
Why, though ? Why are you assuming that only brains can be intelligent ? What's so special about the brain -- merely the fact that it's wet and squishy ? That sounds pretty weak to me.
Basically, you and harvey1 are appealing to mystery:
harvey1 wrote:Sure, but we know why computers can do this. It has to do with the electrical properties of certain materials that allow for memory, conduction of electrical current, resistance of electrical current, etc..
So, since we know exactly how computers work, they can't be intelligent... Is that what you're saying ? Well, as I asked in my post, what happens when we map out the entire human brain, and discover how it works -- will everyone turn into a zombie ? Mysteries sound very profound, but they're a poor way to reason about things.
Post #17
QED wrote:If it is otherwise then we are faced with the question of at what point consciousness "kicks-in" on the continuum that lies between the Paramecium and the human being. It sure looks like it's a grey-scale in between. Therefore it seems highly reasonable to me to say that the construction of an artificial consciousness is possible in principle.
Possible - but only if the continuum is reproduced.
The 'kicking-in' of consciousness does not negate the previous 'levels'.
All that leads to human consciousness: the prehension of atoms and molecules taken up and into cell irritability, which is taken up and into the sensations of neuronal organisms, which is taken up and into the perception of animals with neural cords, which is taken up and into the impulses of animals with reptilian brain stems, which is taken up and into the emotions and feelings of animals with limbic systems, which is taken up and into the symbols and concepts of animals with a neo-cortex, at which point the complex neocortex, in certain human brains, can produce formal operational thinking or logic.
Each and every one of these whole/parts did not replace its predecessor but includes and transcends it. Each is a crucial part of the net result, human consciousness.
In order to produce AI that is truly human-like, we would have to be able to recreate the consciousness of each and every whole/part in the continuum. We would have to create and animate everything from single-cell irritability to neocortex rationality and connectivity.
AI focuses on behaviour and rules - the singular objective - whereas human consciousness is also singular subjective - intentionality. Nor do I think the singular objective will produce intersubjective cultural values.
"Whatever you are totally ignorant of, assert to be the explanation of everything else"
William James quoting Dr. Hodgson
"When I see I am nothing, that is wisdom. When I see I am everything, that is love. My life is a movement between these two."
Nisargadatta Maharaj
Post #18
harvey1 wrote:I personally do not know if strong AI is possible. Obviously we're here and we evolved, so I see no "in principle" problem with it. However, that's not to say that we can do everything that evolution can do.
Except that we can in principle run our own evolution to solve such problems (genetic programming).
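To make "running our own evolution" concrete, here is a minimal genetic-algorithm sketch on a toy problem (maximize the number of 1-bits in a string). It is a deliberately simple stand-in for real genetic programming: hypothetical parameters, truncation selection, one-point crossover, and bit-flip mutation.

```python
import random

LENGTH, POP, GENERATIONS, MUTATION = 32, 40, 200, 0.02

def fitness(genome):
    # Toy fitness: count the 1-bits; the optimum is all ones.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ (random.random() < MUTATION) for bit in genome]

def crossover(a, b):
    # One-point crossover between two parent genomes.
    cut = random.randrange(LENGTH)
    return a[:cut] + b[cut:]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # truncation selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # typically close to LENGTH after selection pressure
```

Nobody designs the winning bitstring by hand; selection pressure finds it. That is the "in principle" point: the same search strategy that produced us can be harnessed as an engineering tool.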
harvey1 wrote:My last objection, the most serious objection, is that I don't think the systems response is right. I think the virtual response to Searle's CA is right, and therefore it raises the problem of how we will know we have a p-conscious AI program versus a zombie AI program. It is out of a principle of charity that we assume that each of us have p-consciousness, but can we extend this principle to AI machines? I'm sure some people will say yes, but is it not possible that we are wrong when we think that we've created a virtual AI mind when in fact we've just created a zombie AI mind that answers as if it has p-consciousness?
OK, so let's say we build a matter transporter that deconstructs and re-creates all of your atoms -- are you worried that in doing so you will turn into a zombie, even though you seem to function as normal to outward observers?
I can understand why people define things like Qualia to capture the experience of seeing through our eyes and so on, but I don't see any reason to suppose that this cannot be recreated in other ways. Why wouldn't the Qualia be reconstructed in an exact molecular copy of my brain? You might guess that I would be thinking in terms of an accumulation of micro-qualia.
Post #19
Bugmaster wrote:Basically, you and harvey1 are appealing to mystery:
I don't know what harvey1 is appealing to. As for myself, I'm not appealing to mystery, I'm appealing to evolution. Intelligence, as we know it, is connected with an organic body; it is a manual intelligence, a hands-on thing, whose evolution is inseparable from the evolution of our manual dexterity. AI is interesting as science-fiction, but the "logos" is evolutionarily linked to manual operations, to the manipulation of things and instruments. Plato already talks about the relationship between speech and manipulation in one of his dialogues.
Post #20
Dilettante wrote:I'm appealing to evolution....
Well, the end result of evolution is a collection of neurons and such that perform a certain function -- namely, acting intelligently. Why do you feel that a mechanism that performs this function can only be created through evolution ?
To give you an analogy: birds have evolved the power of flight over a long period of time. During that time, insects have evolved a completely different way of flying. Today, we have developed several artificial ways of flight; the latest inventions -- airplanes and helicopters -- are actually heavier than air, just like birds and insects.
So, even though flight originally evolved in several different ways, we have learned to create machines that fly without evolving them from scratch. If we can engineer flight, why not intelligence ?
Dilettante wrote:Intelligence, as we know it, is connected with an organic body, it is a manual intelligence, a hands-on thing, whose evolution is inseparable from the evolution of our manual dexterity.
Are you saying that all we need to produce intelligence is to create a robot body ? We're mostly there on that front today; in fact, most of the artificial organs that we can produce perform much better than the biological equivalents (infrared cameras, ultrasonic microphones, etc.).
Personally, I don't believe that a physical body is absolutely required for intelligence, but if that's your only objection, then AI is still possible.