Is it possible to build a sapient machine ?

For the love of the pursuit of knowledge


Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
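
To make the setup concrete, here's a minimal sketch of the protocol in Python. Everything in it is a throwaway stand-in of mine -- the canned replies, the clueless examiner, the number of rounds -- and none of it comes from Turing himself; the only point is the shape of the test: E sees nothing but the transcripts and has to name the machine.

import random

def human_reply(prompt):
    # Placeholder for the human subject: a canned conversational answer.
    return "Honestly, I'd rather talk about the weather."

def machine_reply(prompt):
    # Placeholder for the machine subject: a canned, obviously mechanical answer.
    return "INPUT NOT UNDERSTOOD. PLEASE REPHRASE."

def run_trial(examiner, rounds=5):
    # Randomly seat the human and the machine as subjects A and B.
    repliers = [human_reply, machine_reply]
    random.shuffle(repliers)
    seats = {"A": repliers[0], "B": repliers[1]}
    transcript = {"A": [], "B": []}
    for i in range(rounds):
        prompt = "Question %d: what do you make of all this?" % i
        for seat, reply in seats.items():
            transcript[seat].append((prompt, reply(prompt)))
    guess = examiner(transcript)  # the examiner names the machine: "A" or "B"
    truth = "A" if seats["A"] is machine_reply else "B"
    return guess == truth

def naive_examiner(transcript):
    # An examiner with no way to tell the subjects apart can only guess.
    return random.choice(["A", "B"])

trials = [run_trial(naive_examiner) for _ in range(1000)]
print("machine identified in", 100.0 * sum(trials) / len(trials), "% of trials")

In these terms, Turing's criterion is that if no examiner can reliably beat the roughly 50% this random guesser scores, then A and B have earned the same treatment.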

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you still human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #21

Post by Scrotum »

Bugmaster's simile is a reasonable one. And I can't see why anyone would be against the possibility of AI, unless you're religious (as this destroys the religious concept, as stated earlier).

Otherwise, there is no logical reason to be against it, that is, to believe it's not possible; we have already done it at lower levels.


What are we humans? We are programmed from the start: language (one language for Americans, several for other nationalities), and what is "right" and "wrong" depending on culture (in the United States murder, I mean, the death penalty, is OK, while this is the only Western nation that does it; it's only subjective).

All this is more or less accepted by us; we learn nothing, we just accept it. Now, the next step is contemplation: why, whom, and so forth. If we can ask these questions, why would there be any problem programming a computer to question the answers? The only thing needed is to give him/her/it a tool and the ability to do so.



The plain truth of it is that we can create, or will, something that most likely will be above us in what we claim to be "intelligence". Think about Data in Star Trek: The Next Generation: he is smarter, faster, and stronger than a human, but still a robot. He is above us in every way. If you're religious, does he suddenly get a soul? Whoops, that's a contradiction of earlier dogmas. If not, how could you possibly know the difference?


Post #22

Post by harvey1 »

Bugmaster wrote:Basically, you and harvey1 are appealing to mystery
You must have missed my opening paragraph:
I personally do not know if strong AI is possible. Obviously we're here and we evolved, so I see no "in principle" problem with it.
Bugmaster wrote:So, since we know exactly how computers work, they can't be intelligent... Is that what you're saying ? Well, as I asked in my post, what happens when we map out the entire human brain, and discover how it works -- will everyone turn into a zombie ? Mysteries sound very profound, but they're a poor way to reason about things.
You and QED seem to be appealing to magic. If you don't program them, they don't come. If Kemper Financial writes a financial program, does that automatically mean that the financial program is computing how the universe got here? No. Similarly, just because people are writing programs on chess, and all sorts of interesting AI programs, it doesn't mean the computers are programmed to have p-conscious experiences.
Bugmaster wrote:
harvey1 wrote:There's a few people on this site that I'm willing to extend the principle of charity to. You are one of them. However, if you were abducted and returned by aliens (assuming well-documented) and they did some funny, unknown stuff to you; I might withdraw that extension of charity if I saw your eyes flickering and heard your head buzzing and stuff like that...
I'm flattered, of course, but I think your reasoning is flawed. Firstly, why are you being charitable to me, and not anyone else ? Why do you think that I'm likely to be human, but other posters here are not ?
I should have said I was joking, sorry about saying that you were one of the few. From what I can tell, I think that everyone posting on this site experiences a p-conscious experience. In any case, I'm willing to extend that charitable interpretation to each person until I have reason to doubt it.
Bugmaster wrote:Secondly, why does it matter whether my head is buzzing or not ? Again, this sounds like biological naturalism. What about people with fake glass eyes, or pacemakers, or anti-epileptic implants ? Where do you draw the line ?
When I have good reason to draw a line, I do so. So, for example, I think I have good reason to think that Deep Blue does not have a p-conscious experience. If I saw the pseudo code for p-consciousness, and that study gave me confidence that such a p-conscious experience could emerge from reading that code, then I would probably conclude that Deep Blue could in fact have p-consciousness.
Bugmaster wrote:Well, I would of course say that we have good reasons to believe that the complexity issue is surmountable
Good reason would be to show in principle how it is achievable. So far philosophers have not solved this problem, and are admittedly baffled.
Bugmaster wrote:given that we've surmounted a great deal of it already (Google, Deep Blue, etc.). But I understand that you believe in dualistic properties which may make Strong AI impossible a priori; that's a topic for the other thread, though.
As I stated in my opening paragraph (which I don't see how you missed), I think in principle it is possible to have strong AI. In any case, putting calculating machines in the same category as machines that have qualia is greatly underestimating the difficulty of the problems confronting strong AI. Of course, we can all just gleefully say, "whee, I believe in strong AI--Moore's law shows that computers will be fast enough to process qualia experiences, etc., whee...." However, this is not addressing the problem. Show me how in principle qualia can emerge without waving magic wands of complex programs. If you cannot show it, then it is faith on your part to believe that it is a trivial problem.
Bugmaster wrote:
We don't know why or how evolution did this...
Er, natural selection would be my first guess. We can actually observe a continuum of intelligence in the animals we have today; turtles are pretty stupid, whereas dolphins and humans are pretty smart, and dogs are somewhere in between.
I'm talking about the flow of events, not the overall evolutionary process or its mechanisms.


Post #23

Post by harvey1 »

QED wrote:that we can in principle run our own evolution to solve such problems (genetic programming).
Sure, but then why hasn't genetic programming solved the whole origin of life thing?
QED wrote:OK, so let's say we build a matter transporter that deconstructs an re-creates all of your atoms -- are you worried that in doing so you will turn into a zombie even though you seem to function as normal to outward observers?
Not really, although I wouldn't step into that thing until I checked to make sure that all my life insurance payments were paid up.

Of course the reason I wouldn't have such concerns is because I think that an exact molecular copy of me contains all the hardware needed to be me. However, if that matter transporter left indications of some changes, and there were good reasons to think it had interfered with p-consciousness, then we would be perfectly within our rights to refuse to extend a charitable interpretation of that person being p-conscious. Charity is something we extend because we think we are justified in extending it. If we no longer feel justified, then we naturally do not extend that charity.
QED wrote:I can understand why people define things like Qualia to capture the experience of seeing through our eyes and so on but I don't see any reason to suppose that this cannot be recreated in other ways.
In principle, I agree. However, that is not to say that we won't fail to do so. After all, who in 1950 would have thought that we wouldn't have flying cars in 2005?
QED wrote:Why wouldn't the Qualia be reconstructed in an exact molecular copy of my brain? You might guess that I would be thinking in terms of an accumulation of micro-qualia.
Here's the problem. Suppose that an evil person like Hitler had won, and experimented on people's p-conscious experience. Those Nazi researchers might have started making advanced surgical procedures in their not-so-distant future. Perhaps they would be taking out brain module after brain module, replacing them each time with a robotic mechanical part. After they had completely replaced the whole brain, they would claim success by saying that the patient was completely the same person as they were before the surgeries. However, we don't know that without a full understanding of what p-consciousness is in software terms. We'd have to understand how p-consciousness was formed before we could know if the mechanical replacement parts had merely been constructed to act as if the new brain had p-consciousness. Once we see that significant areas of the brain had been replaced, we should be skeptical that the person's new brain had kept them as the same individual. Our extension of charity should cease.

As of right now, philosophers have not shown how p-consciousness emerges. All the theories have failed to put forth a winning argument. Therefore, why should we believe programmers at Kemper Financial that their computers are conscious because they have made some cool financial programs? We shouldn't. We need philosophical answers to the problems, and until then we can ooh and aah at all the robots and cool programs, but they are just gadgets.


Post #24

Post by QED »

Why would I be appealing to magic here? You know I think it's just a question of scale. A financial program written in a few tens of thousands of lines of code might come close to the qualia experience of a bacterium. If it didn't, then I simply think it would need to be a bigger program. I've played about with pattern recognition software. I agree that it's hard to envisage a simple correlator algorithm as having some sort of experience, but I would suggest that it does to a very small degree. It would be the summation of trillions of processes such as this that would produce the qualia we humans all know about.
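
For what it's worth, here is roughly the kind of "simple correlator" I'm talking about, stripped down to a few lines of Python (the signal and template are made-up numbers, purely for illustration). It slides a template over a signal and reports where the two line up best:

def correlate(signal, template):
    # Score every offset by the plain dot product of the template and the window.
    scores = []
    for offset in range(len(signal) - len(template) + 1):
        window = signal[offset:offset + len(template)]
        scores.append(sum(a * b for a, b in zip(window, template)))
    return scores

signal = [0, 0, 1, 3, 5, 3, 1, 0, 0, 2, 0, 1, 3, 5, 3, 1, 0]
template = [1, 3, 5, 3, 1]
scores = correlate(signal, template)
print("template matches best at offset", scores.index(max(scores)))

That really is the whole algorithm, which is why I agree it is hard to picture it experiencing anything; my suggestion is only that whatever we experience is the summed effect of trillions of operations no grander than this one.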

But why should this be more than a guess? Because we've all agreed that evolution is the force that has produced our physical processing equipment, and we can see it working out from the bottom up. I can't stress enough how an understanding of the evolution of meaning and intelligence in our favourite Paramecium gives us a leg-up in understanding how this can develop into the far more sophisticated biology that is us.


Post #25

Post by QED »

Ah, missed some posts while I was typing!
harvey1 wrote:
QED wrote:that we can in principle run our own evolution to solve such problems (genetic programming).
Sure, but then why hasn't genetic programming solved the whole origin of life thing?
How many molecules were there floating around on the lifeless planet for how long...? The numbers of potential interactions are more than astronomical. Are you really worried by this though? I think we ought to be able to see the principle of the thing.
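
To show the principle rather than just assert it, here is a toy evolutionary search in Python. The target string and every parameter are arbitrary choices of mine, and a fifteen-character string is laughably far from prebiotic chemistry; the point is only that blind variation plus selection, given enough generations, finds things nobody wrote in by hand.

import random
import string

TARGET = "sapient machine"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Count the positions where the candidate already matches the target.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.05):
    # Copy the parent, flipping each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from completely random strings.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]
generation = 0
while max(population, key=fitness) != TARGET:
    generation += 1
    population.sort(key=fitness, reverse=True)
    parents = population[:20]
    # Keep the parents (so progress is never lost) and breed mutated copies.
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]
print("evolved the target string in", generation, "generations")

Nobody types the answer into that program; it gets found. Scaling the same trick up to real chemistry is of course another matter entirely, which is what the numbers above are about.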
harvey1 wrote:
QED wrote:I can understand why people define things like Qualia to capture the experience of seeing through our eyes and so on but I don't see any reason to suppose that this cannot be recreated in other ways.
In principle, I agree. However, that is not to say that we won't fail to do so. After all, who in 1950 would have thought that we wouldn't have flying cars in 2005?
But you clever Americans have!
[image: a flying car]
harvey1 wrote:
QED wrote:Why wouldn't the Qualia be reconstructed in an exact molecular copy of my brain? You might guess that I would be thinking in terms of an accumulation of micro-qualia.
Here's the problem. Suppose that an evil person like Hitler had won, and experimented on people's p-conscious experience. Those Nazi researchers might have started making advanced surgical procedures in their not-so-distant future. Perhaps they would be taking out brain module after brain module, replacing them each time with a robotic mechanical part. After they had completely replaced the whole brain, they would claim success by saying that the patient was completely the same person as they were before the surgeries. However, we don't know that without a full understanding of what p-consciousness is in software terms. We'd have to understand how p-consciousness was formed before we could know if the mechanical replacement parts had merely been constructed to act as if the new brain had p-consciousness. Once we see that significant areas of the brain had been replaced, we should be skeptical that the person's new brain had kept them as the same individual. Our extension of charity should cease.
So you're saying that unless we can find an acceptable way of proving that any given AI has achieved consciousness, we'll never know if we've done it. I'm tempted to say that on the way to creating it we would be inspired to devise an appropriate test, but I see your point.
harvey1 wrote: As of right now, philosophers have not shown how p-consciousness emerges. All the theories have failed to put forth a winning argument. Therefore, why should we believe programmers at Kemper Financial that their computers are conscious because they have made some cool financial programs? We shouldn't. We need philosophical answers to the problems, and until then we can ooh and aah at all the robots and cool programs, but they are just gadgets.
You might have hit upon some legal considerations for the future there Harvey.


Post #26

Post by harvey1 »

QED wrote:
harvey1 wrote:
QED wrote:
harvey1 wrote:I personally do not know if strong AI is possible. Obviously we're here and we evolved, so I see no "in principle" problem with it. However, that's not to say that we can do everything that evolution can do.
Except that we can in principle run our own evolution to solve such problems (genetic programming).
Sure, but then why hasn't genetic programming solved the whole origin of life thing?
How many molecules were there floating around on the lifeless planet for how long...? The numbers of potential interactions are more than astronomical. Are you really worried by this though? I think we ought to be able to see the principle of the thing.
It would seem that you've answered your own objection. We might not be able to do everything that evolution can do, even though in principle there is a solution out there lurking to be uncovered.
QED wrote:But you clever Americans have!
Well, I'm referring to the Back to the Future II kind of flying car... (or the Jetsons' car, take your pick)...
QED wrote:Why would I be appealing to magic here? You know I think it just a question of scale. A financial program written in a few tens of thousands of lines of code might come close to the qualia experience of a bacterium. If it didn't, then I simply think it would need to be a bigger program. I've played about with pattern recognition software. I agree that it's hard to envisage a simple correlator algorithm as having some sort of experience, but I would suggest that it does by very small degree. It would be the summation of trillions of processes such as this that would produce the qualia we humans all know about.
Functionality does not magically come about. You have to show how that function comes about, or you are relying on magical processes. For example, I imagine it would take a while for current computers to calculate the digits of pi out to the 99 trillionth digit; however, I think that complex task does not rely on magic, since we already know how we could reach the 99 trillionth digit of pi if we had the time and the machines to do it. P-consciousness, though, is not like that.
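
To illustrate the contrast (and it is only an illustration, leaning on the off-the-shelf mpmath library and a few hundred digits rather than 99 trillion): the pi calculation is completely specified before we ever switch the machine on, which is exactly what nobody can write down for p-consciousness.

# A fully specified procedure: ask an arbitrary-precision library for pi
# to a chosen number of decimal places. Reaching the 99 trillionth digit
# is then a matter of time and hardware, not of any missing idea.
from mpmath import mp

mp.dps = 300   # decimal places of working precision
print(mp.pi)   # pi printed to that precision

There is no analogous listing that anyone can point to for qualia, and that is the whole complaint.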


Post #27

Post by Bugmaster »

harvey1 wrote:You must have missed my opening paragraph:
I actually did miss it... sorry :-( But you seem to be arguing against Strong AI, despite your reservations, so I'll proceed.
harvey1 wrote:No. Similarly, just because people are writing programs on chess, and all sorts of interesting AI programs, it doesn't mean the computers are programmed to have p-conscious experiences.
Well, from my (and Turing's) point of view, it's behavior that matters, not inner experiences. I understand that, in your dualistic view, computers won't necessarily have "souls" (or mental properties, or qualia, or whatever)... But I see that as irrelevant. If the computer walks like a human and talks like a human, then we should treat it as a human -- especially since we can't detect all those qualia, anyway.
harvey1 wrote:In any case, I'm willing to extend that charitable interpretation to each person until I have reason to doubt it.
It sounds like you agree with Turing, then.
harvey1 wrote:Good reason would be to show in principle how it is achievable. So far philosophers have not solved this problem, and are admittedly baffled.
Er... I don't think Strong AI will be solved by philosophers :-) That's what we have programmers for.
harvey1 wrote:However, this is not addressing the problem. Show me how in principle qualia can emerge without waving magic wands of complex programs. If you cannot show it, then it is faith on your part to believe that it is a trivial problem.
Well, you'd first have to show me that qualia exist at all; so far, I'm not convinced that they do. But, as soon as you agree to extend the principle of charity to everyone you meet (until you have good evidence to believe otherwise), I think the qualia actually become irrelevant.
harvey1 wrote:I'm talking about the flow of events, not the overall evolutionary process or its mechanisms.
Er... You lost me here. What flow of events ?

In any case, I should again point out that evolution is not necessarily the only way to develop intelligence. There are other ways; Deep Blue was not evolved, for example. Right now, you're saying, "the only kind of intelligence we know has evolved, therefore there's no way to develop intelligence other than evolution", but that sounds like a false generalization to me. Remember my flight analogy: we have engineered flying machines without relying on evolution, despite the fact that the only natural flying machines we know of (birds) were, indeed, evolved. And our flying machines are actually better than birds in many ways...


Post #28

Post by QED »

harvey1 wrote: Functionality does not magically come about. You have to show how that function comes about, or you are relying on magical processes. For example, I imagine it would take a while for current computers to calculate the digits of pi out to the 99 trillionth digit; however, I think that complex task does not rely on magic, since we already know how we could reach the 99 trillionth digit of pi if we had the time and the machines to do it. P-consciousness, though, is not like that.
OK, I see why you are not satisfied. Yes, we know for sure (can have faith) that the cumulative steps of the computer program will eventually lead it to reach the 99 trillionth digit of pi. But what you are saying is that we cannot be so sure about the evolutionary steps to full-blown qualia that are taken between the Paramecium and the porpoise or human, for example (don't forget I'm proposing that the Paramecium is experiencing micro-qualia). I am perfectly satisfied with this assumption for the reasons already stated: we can see no discontinuities in the evolution of neural systems all the way from the bottom up. We can examine the brains of sea squirts (so long as we get to them before they eat them!) and we can understand the evolutionary process that has developed the full variety of sensors and their processors.

Another confidence booster is the fact that these sensors (eyes, ears, etc.) are common across all nanotech platforms (oops... animals), and it would be extraordinary to expect that they were somehow producing different results inside the minds of humans than in every other animal. The clues lie within the architecture of the brains of different animals, which are largely similar but banded into identifiable regions that are given names like Cerebrum, Cerebellum, Limbic System and Brain Stem. The presence or absence of particular regions or sub-regions is directly related to the evolutionary lineage of the animal in question. So we can see milestones on the path to consciousness as well.

I'm quite content to say that the only ones appealing to magic are those who think that, despite all the knowledge we've obtained so far, we are still missing some vital ingredient that's needed to breathe life into the nanotechnology.


Post #29

Post by harvey1 »

Bugmaster wrote:If the computer walks like a human and talks like a human, then we should treat it as a human -- especially since we can't detect all those qualia, anyway.
Ouch... You mean if my computer projects the voice of Claire Danes and the image of Claire Danes, then I should treat my computer as if she is Claire Danes? Hmm... Sounds like people might start doing weird stuff to their computers.
Bugmaster wrote:Er... I don't think Strong AI will be solved by philosophers :-) That's what we have programmers for.
Programmers aren't always considering the philosophical issues, if they consider them at all. What they are doing is writing programs that have certain functions. If a programmer writes a program with the function of acting like Claire Danes, then the programmer might well walk away satisfied, but they have no more re-created the person she is than a chemist who has made sodium chloride.
Bugmaster wrote:Well, you'd first have to show me that qualia exist at all; so far, I'm not convinced that they do.
Okay. Try this. Have someone you know drop a large rock on your bare feet from 3-4 feet high. After you see the rock hit your feet, ask yourself if you feel pain. If you do, then qualia are real.
Bugmaster wrote:But, as soon as you agree to extend the principle of charity to everyone you meet (until you have good evidence to believe otherwise), I think the qualia actually become irrelevant.
Why? I extend charity because I have sufficient reason to think that other humans are like myself and have qualia experiences. I might be wrong in this assumption, but it is a very good assumption that works well for me. The only reason I would deny extending it is if I thought the assumption might lead me to make severe mistakes in my judgement. I'd need a lot of evidence to think that. For example, if I tried to befriend Deep Blue (e.g., write it letters of admiration for beating a chess master), I think I would be making a severe mistake in judgement in thinking that I'm actually conversing with a personable entity. I don't think it has any kind of qualia experience, and therefore I would never extend that charity to Deep Blue. However, qualia have not become irrelevant in general, since I experience them, and therefore charity allows me to think they are probably relevant for every individual that I meet. (Of course, if they are sleeping, dead, etc., then I might withhold some level of charity in those instances...).
Bugmaster wrote:
I'm talking about the flow of events, not the overall evolutionary process or its mechanisms.
Er... You lost me here. What flow of events ?
Natural selection is a mechanism by which evolutionary changes occur; however, there is a story behind every adaptation in terms of how a certain trait actually evolved. What was the sequence of events that allowed a particular trait to evolve? The sequence of those events plays a very significant role in how the adaptation emerged. It may not be possible for us to get the sequence right, especially with the brain, since there are no fossils for us to analyze to see how p-conscious experiences were selected for. We have to analyze our brains for such information, but there is no indication that this approach will help us succeed in understanding how p-consciousness emerges from the brain.
Bugmaster wrote:In any case, I should again point out that evolution is not necessarily the only way to develop intelligence.
What do you mean by "intelligence"?
Bugmaster wrote:There are other ways; Deep Blue was not evolved, for example. Right now, you're saying, "the only kind of intelligence we know has evolved, therefore there's no way to develop intelligence other than evolution"
I don't recall saying that. All I'm saying is that we know we have good reasons to believe that p-consciousness evolved, but there are no good reasons to believe that we will trivially solve how p-consciousness works. We can certainly hope that we can figure it out, but I see no reason to believe that this is a trivial issue.
Bugmaster wrote:Remember my flight analogy: we have engineered flying machines without relying on evolution, despite the fact that the only natural flying machines we know of (birds) were, indeed, evolved. And our flying machines are actually better than birds in many ways...
Back before modern engineering, solving the problems of flight was made more difficult because our modern approaches to engineering were not available. There were also very few people working on the problem compared to the large number of engineers who work on LCDs today, for example.

Well, many, many people work on and think about AI. That's not to say that they won't be successful, but right now the brightest are stumped on some basic questions. The problems seem so intractable that more practical approaches to AI are often encouraged. This doesn't lead me to say that strong AI is impossible, far from it. It just leads me to be skeptical that this is a trivial problem. It isn't just a matter of writing a million lines of code on a quantum computer. I'm surprised that you seem to think that it is.


Post #30

Post by harvey1 »

QED wrote:We can see no discontinuities in the evolution of neural systems all the way from the bottom up. We can examine the brains of sea squirts (so long as we get to them before they eat them!) and we can understand the evolutionary process that has developed the full variety of sensors and their processors.
Possibly. We might be able to read their genetic code like a computer program, and have a great many aha experiences along the way. However, if you've ever debugged a large program, you know that it is not always easy to work out how a given function creates the results it is creating. It might be possible, but until we are at the point where we can read genetic code as if it were a computer program, we simply do not know. We could be talking about a level of complexity that is far beyond our ability to analyze in that way. Why don't we just treat this like a scientific hypothesis and put all our faith aside on the issue?
QED wrote:Another confidence booster is the fact that these sensors (eyes, ears, etc.) are common across all nanotech platforms (oops... animals), and it would be extraordinary to expect that they were somehow producing different results inside the minds of humans than in every other animal. The clues lie within the architecture of the brains of different animals, which are largely similar but banded into identifiable regions that are given names like Cerebrum, Cerebellum, Limbic System and Brain Stem. The presence or absence of particular regions or sub-regions is directly related to the evolutionary lineage of the animal in question. So we can see milestones on the path to consciousness as well.
I would agree that the processes going on in the proto-neurological systems of some primitive Archean creatures set the platform for our current neurological systems (and so on all the way up to our immediate ancestors); however, that doesn't tell us that we can succeed at strong AI by simply examining genetic codes. Deciphering protein folds is no easy task, and that is just to understand simple biological functions of the phenotype. Analyzing cognitive behavior from protein folding is way out there, and when you get to complex organisms we might need a few digits added to human IQs in order to keep pace with the complex nature of the protein-folding algorithms. Like I said, it seems like in principle we can get there, but it does not appear to be a trivial problem.
QED wrote:I'm quite content to say that the only ones appealing to magic are those who think that despite all the knowledge we've obtained so far, we are still missing some vital ingredient that's needed to breath life into the nanotechnology.
And, why would you have that kind of confidence? I suppose you think that way about the origin of the universe too? We know everything, is that it?
