Can God make a conscious robot?
What is it about the physical properties of silicon that prevents an omnipotent God from giving a robot consciousness, and what are the physical properties of fatty acids and sugars that mean that God can make conscious brains?
Are conscious robots possible?
- stevencarrwork
- Apprentice
- Posts: 179
- Joined: Wed Jun 30, 2004 5:33 pm
- bdbthinker
- Student
- Posts: 89
- Joined: Thu Jan 20, 2005 11:50 am
- Location: indiana
Re: Are conscious robots possible?
Post #2
stevencarrwork wrote: Can God make a conscious robot? What is it about the physical properties of silicon that prevents an omnipotent God from giving a robot consciousness, and what are the physical properties of fatty acids and sugars that mean that God can make conscious brains?
That's a contradiction. You give this god the omnipotent attribute but then you bring his ability to do something into question. Here is the answer to your question:
Yes. An omnipotent god can make a conscious robot. If he could not, then he wouldn't be omnipotent.
- stevencarrwork
- Apprentice
- Posts: 179
- Joined: Wed Jun 30, 2004 5:33 pm
Re: Are conscious robots possible?
Post #3
bdbthinker wrote: Here is the answer to your question: Yes. An omnipotent god can make a conscious robot. If he could not, then he wouldn't be omnipotent.
So if a conscious robot can be made, and there are no physical properties of silicon that prevent such a thing, could we make a conscious robot?
- bdbthinker
- Student
- Posts: 89
- Joined: Thu Jan 20, 2005 11:50 am
- Location: indiana
Re: Are conscious robots possible?
Post #4
stevencarrwork wrote: So if a conscious robot can be made, and there are no physical properties of silicon that prevent such a thing, could we make a conscious robot?
I don't think we could now. But one day? I don't see why not.
Re: Are conscious robots possible?
Post #5
stevencarrwork wrote: So if a conscious robot can be made, and there are no physical properties of silicon that prevent such a thing, could we make a conscious robot?
If God could make a conscious robot because He's God, why would it follow that humans can make a conscious robot? Some would argue that humans are conscious robots.
Re: Are conscious robots possible?
Post #6
stevencarrwork wrote: Can God make a conscious robot?
God can't, because in order to be able to make anything, you have to exist.

My view is that humans can--or rather, will eventually be able to. But it's very tricky, as this link on consciousness discusses.

spetey
Re: Are conscious robots possible?
Post #7
spetey wrote: God can't, because in order to be able to make anything, you have to exist. My view is that humans can--or rather, will eventually be able to. But it's very tricky, as this link on consciousness discusses.
Yes, but conscious how? Self-conscious?
That would be hard to conceive, in my opinion, but I think it is possible. But how would we be able to tell if robots are indeed self-conscious? If they can talk and interact with people, aren't they just doing what they are programmed to do?
We have already made artificially intelligent computer programs which can carry on decent conversations with humans.
see here
I think self-conscious robots are quite possible in the future...
- The Happy Humanist
- Site Supporter
- Posts: 600
- Joined: Tue Dec 21, 2004 4:05 am
- Location: Scottsdale, AZ
Re: Are conscious robots possible?
Post #8
Bonnie wrote: aren't they just doing what they are programmed to do?
Who's to say that we're any different? (Here we go with "Is free will an illusion" again...)
Jim, the Happy Humanist!
===
Any sufficiently advanced worldview will be indistinguishable from sheer arrogance --The Happy Humanist (with apologies to Arthur C. Clarke)
===
Post #9
I think if we created conscious robots we would probably use something that we already know works -- neural networks. Organic brains are pretty much just a really complex neural network.
This seems to be the route they are going to take to develop AI. We just need a lot more powerful computers to obtain human-like consciousness.
Developers will probably use neural networks with evolution. That is, start with a pretty much random arrangement of artificial neurons and let mutations occur often, mutations in how the neurons are arranged and in how they act. Combine this with some form of selection to choose which AI "brains" get copied or mutated into the next generation, and you will get a neural network well suited to the selection parameters.
If they want an intelligent conscious neural network they would probably set the selection parameters to test for intelligence, consciousness, and thought.
This could take a very long time if they didn't have a very fast computer or group of computers. This is because the computers evolving the neural networks would need to have thousands of simultaneous networks and would have to run thousands of generations. The generations would have to be long enough to allow the neural network to learn and develop, like a human, before it could be subject to the selection parameters. They might be able to get around this by having the network created with the memory of its parent. A computer that runs one of these networks would only need to be a small fraction of the size of the computers that created the networks.
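Something like the following toy Python sketch is roughly what that evolutionary loop would look like. To be clear, everything specific here is invented for illustration: the network sizes, the mutation rate, and especially the fitness function, which just rewards matching a fixed target. Writing a real selection test for intelligence or consciousness is exactly the hard part.

```python
# A toy neuroevolution loop: random networks, mutation, and selection.
# The fitness test is only a stand-in -- in the scenario above, the
# selection parameters would test for intelligent or conscious behaviour.
import numpy as np

rng = np.random.default_rng(0)

def random_network(n_in=4, n_hidden=8, n_out=2):
    """A tiny feed-forward net, represented by two weight matrices."""
    return [rng.normal(size=(n_in, n_hidden)), rng.normal(size=(n_hidden, n_out))]

def forward(net, x):
    hidden = np.tanh(x @ net[0])
    return np.tanh(hidden @ net[1])

def mutate(net, rate=0.1):
    """Copy a parent and perturb its weights -- the 'mutation' step."""
    return [w + rate * rng.normal(size=w.shape) for w in net]

def fitness(net):
    # Placeholder selection parameter: reward networks whose output for a
    # fixed input is close to a fixed target.  A real test would be richer.
    target = np.array([0.5, -0.5])
    return -float(np.sum((forward(net, np.ones(4)) - target) ** 2))

population = [random_network() for _ in range(50)]
for generation in range(200):
    survivors = sorted(population, key=fitness, reverse=True)[:10]     # selection
    offspring = [mutate(survivors[rng.integers(len(survivors))]) for _ in range(40)]
    population = survivors + offspring                                 # next generation

print("best fitness:", max(fitness(net) for net in population))
```

The "memory of its parent" idea mentioned above would correspond to copying more than just the weights into each offspring -- the parent's learned state as well.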
Bonnie wrote: aren't they just doing what they are programmed to do?
Not if they use neural networks with evolution. If a person created a robot this way, they wouldn't be sure what it would do or what it is thinking. They could have some idea based on the selection parameters they set. That would be like having an idea of how a close family member would act in a situation.
Bonnie wrote: We have already made artificially intelligent computer programs which can carry on decent conversations with humans.
Those programs do not apply thought to what they say. They are pretty much programmed with responses and the logic on how to use them. They are impressive, though.
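To give a feel for what "programmed with responses and the logic on how to use them" means, here is a crude ELIZA-style sketch in Python. The patterns and canned replies are invented for the example; the real conversation programs are just much bigger tables of rules of this sort, with no model of what the words mean.

```python
# An ELIZA-style chatbot: canned reply templates selected by pattern
# matching.  There is no thought here, only rules and substitution.
import re
import random

RULES = [
    (r"i feel (.*)",      ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)",     ["What makes you think {0}?"]),
    (r".*\brobots?\b.*",  ["Do robots worry you?"]),
    (r".*",               ["Tell me more.", "I see. Go on."]),   # catch-all rule
]

def respond(utterance):
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Go on."  # unreachable in practice because of the catch-all rule

print(respond("I feel confused about consciousness."))
# e.g. -> "Why do you feel confused about consciousness?"
```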
- FreddieFreeloader
- Student
- Posts: 31
- Joined: Wed Jul 07, 2004 11:09 am
- Location: Denmark
Post #10
By definition, God would be able to make conscious robots, as he is all-powerful (whether he/she/it exists is another debate). Similarly, I do not believe there is anything limiting about silicon that prohibits the construction of conscious robots, at least from a purely scientific point of view. From an engineering point of view, there might very well be practical limitations, such as the size of, or the heat generated by, a silicon brain, but, as said, these are practical obstacles. It seems that constructing or programming an artificial brain is possible in the realm of computers, that is, a digital brain. By extension, with a complicated network of Turing machines, that programming could be recreated in the physical realm. Though Turing machines are not physical machines but mathematical ones, most mathematicians, electrical engineers and computer scientists agree that they are as buildable with mechanical parts as any digital device is. Whether data manipulation is done with electrical charges or with cogs and wheels doesn't really make a difference.
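To make that last point concrete, here is a minimal Turing machine simulator in Python. The machine itself (an invented unary increment machine) is defined entirely by its transition table; nothing in the table says whether it is ultimately realised with transistors, cogs, or neurons.

```python
# A minimal Turing machine simulator.  The example table appends one '1'
# to a unary number; the point is only that the machine is fully specified
# by the table and tape, independent of any physical substrate.
def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol read) -> (symbol to write, head move, next state)
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then stop
}

print(run_turing_machine(INCREMENT, "111"))   # -> 1111
```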
So there are really two obstacles. One is constructing some kind of machine to house the programming responsible for conscious thinking; the other is the actual programming. The first issue could be addressed by computers such as university or military mainframes. And once the second obstacle, the actual programming, has been overcome, it would (in theory) be possible to hardcode that programming into the construction of, perhaps, a silicon brain (or one of some other material).
Then, is programming consciousness possible? Perhaps.
The basis for answering that question is a definition of what consciousness really is. What I believe to be the common definition (and the one I agree with) is that consciousness is being aware of one's surroundings and of one's position (or state) within those surroundings, and in particular of one's knowledge (or knowledge base). To be truly conscious would also imply the ability to reason about everything one is aware of. We must demand some form of introspection or self-reflection (and thus awareness) in any agent in order to call it conscious. Logic programming languages do exist that enable us to formalize logical statements and let machines reason about the consequences of actions expressed in logic programming, provided that the knowledge base is sufficiently large. Applying graph theory to AI has provided tools for dealing with logical paradoxes, such as the Knower paradox (one cannot know something false, only believe it; so I cannot know a sentence such as "This sentence is not known by FreddieFreeloader") and other paradoxes that could effectively render a thinking agent useless. If logic programming succeeds in creating a reasoning, introspective agent, that agent would be able (given adequate senses) to expand its knowledge base about itself and its surroundings, and many would claim that we have achieved the goal of creating a thinking computer.
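As a very rough illustration of that kind of reasoning over a knowledge base, here is a toy forward-chaining loop, written in ordinary Python rather than in a real logic programming language like Prolog. The facts and rules are invented for the example, including the crude "introspective" rule at the end that lets the agent derive a statement about its own knowledge.

```python
# Toy forward-chaining inference: rules map a set of premise facts to a
# conclusion, and the loop applies rules until no new facts can be derived.
facts = {"sensor(front, blocked)", "goal(move_forward)"}

rules = [
    ({"sensor(front, blocked)"},                       "belief(obstacle_ahead)"),
    ({"belief(obstacle_ahead)", "goal(move_forward)"}, "plan(turn)"),
    # A crude form of introspection: the agent concludes something about
    # its own knowledge base.
    ({"plan(turn)"},                                   "knows(self, has_plan)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

for fact in sorted(forward_chain(facts, rules)):
    print(fact)
```

A real logic programming system would add variables, unification and a far larger knowledge base, but the derive-until-fixed-point idea is the same.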
Bonnie wrote: aren't they just doing what they are programmed to do?
As tbpckisa answered, that wouldn't really be the case for neural networks. They aren't really programmed to think, but to train themselves to think by modifying their own states (I think). There is the argument that instead of being programmed, neural networks program themselves. Still, I find it hard to believe that a reasoning, introspective agent created with logic programming would work very differently from one created with a neural network. In any case, doing what one is programmed to do is an argument against free will, not against consciousness. Successfully creating a thinking machine might be.