Singularity!

Argue for and against religions and philosophies which are not Christian


Bugmaster
Site Supporter
Posts: 994
Joined: Wed Sep 07, 2005 7:52 am

Singularity!

Post #1

Post by Bugmaster »

The concept of a Technological Singularity is an interesting one to idly ponder. It's also a very popular concept in science fiction; Cory Doctorow in particular seems to love it. However, I have a feeling that some people tend to have religious faith in Singularity, which would make it the latest modern religion.

Ok, so what is Singularity? Well, to put it briefly, the argument for it goes something like this:

1). Moore's Law dictates that computing power increases exponentially over time.
2). Moore's Law will continue for the foreseeable future.
3). This means that, eventually (and rather soon), computers will achieve enormous computational power, which will dwarf our human brains (see the toy extrapolation just after this list).
3a). Alternatively, the new advances in quantum computing may lead to this computational power in one leap, bypassing Moore's Law entirely.
4a). At this point, computers will achieve intelligence, and that intelligence will dwarf ours by whole orders of magnitude.
4b). Alternatively, we may find a way of merging with computers (through "uploading" our minds, perhaps), thus magnifying our intelligence by whole orders of magnitude and becoming "transhuman".
5). Initially, this new intelligence will allow us to solve the basic problems facing humanity today: hunger, disease, scarcity of luxury goods, etc.
5a). Most likely, this will be achieved through self-replicating nanotechnology.
6). Eventually, our enormous new intelligence will solve whatever problems remain in physics.
7). Thus, we will gain complete control over time and space, becoming de facto gods (or "weakly godlike entities", as Cory Doctorow puts it).
8). The point at which this happens is called the Singularity, and it is inevitable.
9). We should be seriously worried about how Singularity will occur, who gets to participate, whether the super-intelligence would be evil, etc. etc.
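
To see what step (3) amounts to in numbers, here's a toy extrapolation in Python. Both endpoint figures are popular guesses rather than established facts, and the doubling period is just the usual rough statement of Moore's Law:

[code]
import math

# Naive extrapolation of steps (1)-(3): if raw computing power doubles
# every ~2 years, how long until one machine matches a human brain?
machine_ops = 1e9    # assumed ops/sec of a 2006-era desktop CPU
brain_ops = 1e16     # one commonly cited guess for the human brain
years_per_doubling = 2.0

doublings = math.ceil(math.log2(brain_ops / machine_ops))
print("About", doublings * years_per_doubling, "years, if (1) and (2) hold")
# -> about 48 years -- which is why premise (2) carries all the weight.
[/code]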

I have to admit, Singularity is a pretty neat concept, and I for one do hope that it happens. However, I don't think that Singularity is inevitable; I don't even think that it's particularly likely.

While Moore's Law has held so far (1), I see no reason to predict that it will continue indefinitely (2). In fact, there's a very real physical limit on the minimum size of an electronic circuit; shrink the circuit any further, and electrons begin to tunnel all over the place, ruining your computation. Quantum computers are very neat (3a), but they are far from omniscient; they will not magically grant us answers to all our questions. Thus, virtually unlimited computing power is not inevitable (3).
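
For a rough sense of where that physical wall sits, here's a back-of-the-envelope sketch in Python. Every number in it (the circa-2006 process node, the scaling cadence, the size of a silicon atom) is a round-number assumption for illustration:

[code]
# If feature sizes keep halving every ~3 years (a rough corollary of
# Moore's Law), when do we reach the diameter of a single silicon atom?
feature_nm = 65.0        # typical process node circa 2006 (assumed)
atom_nm = 0.2            # rough diameter of a silicon atom
years_per_halving = 3.0  # assumed scaling cadence

years = 0.0
while feature_nm > atom_nm:
    feature_nm /= 2.0
    years += years_per_halving

print(f"Atomic scale reached in roughly {years:.0f} years")
# -> 27 years under these assumptions; electron tunnelling becomes a
#    serious problem well before the single-atom mark.
[/code]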

I personally do believe that Strong AI (4a) and "uploading" (4b) are possible, and even likely; however, they are far from inevitable, as well. It could very well turn out that Strong AI is a very difficult problem to solve, and that merely throwing computing power at it won't achieve much.

Even assuming that we manage to create (or grow, or become, whatever) a super-intelligence, it's somewhat rash to conclude that this super-intelligence will solve all our problems (5). Most of them, such as hunger, disease, overpopulation, war, etc., cannot be solved merely by thinking about them. The solutions would involve a lot of work -- planting fields, building spaceships, gathering viral RNA, etc. -- and work doesn't do itself, no matter how smart you are.

Nanotechnology (5a) would be quite useful here, but I am not convinced that it is even possible. It may be that the laws of physics (such as the Uncertainty Principle) prohibit us from building self-replicating machines that can move individual atoms around, just as they prohibit us from moving faster than the speed of light. Of course, it's always possible that our current understanding of physics is wrong, but there's no indication that it is -- which means that becoming 1000x smarter won't solve anything (6), and that it's quite possible that we will never be able to fully control all of time and space (7). Sadly, the more we know, the more limitations we discover; I personally would like that pesky "speed of light" limit to go away, but it looks like it's here to stay.

Thus, Singularity is neat, but it's far from inevitable, and it's far from likely. I don't think we need to worry about it in the foreseeable future.

methylatedghosts
Sage
Posts: 516
Joined: Sun Oct 08, 2006 8:21 pm
Location: Dunedin, New Zealand

Post #31

Post by methylatedghosts »

I was reading something on the first or second page about creating a machine with the capabilities of the human brain.

I don't believe this is actually possible, because it would imply the machine learning things. Learning in the human brain occurs via a network of neurons: whenever something new is learnt, new synapses form between neurons and the brain grows. Can a machine really learn something? I doubt it.
Ye are Gods

The Persnickety Platypus
Guru
Posts: 1233
Joined: Sat May 28, 2005 11:03 pm

Post #32

Post by The Persnickety Platypus »

Machines with the ability to learn have been around for years. For example:

Computers that program themselves
A computer that can teach itself a new language

There is nothing that the human brain can do that we could not potentially replicate with inorganic components (electric circuits).
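
To make the "circuits can learn" claim concrete, here's a minimal sketch of a single artificial neuron (a perceptron) in Python. It is purely illustrative: the AND rule below is never programmed in; the unit extracts it from examples by adjusting its weights, which play roughly the role of synapse strengths:

[code]
# A single artificial "neuron" learning logical AND from examples.
# Nothing about AND is hard-coded; only the training data is given.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # "synapse" strengths, initially blank
bias = 0.0
rate = 0.1      # learning rate (an assumed tuning constant)

for _ in range(20):  # repeated exposure to the examples
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out       # how wrong was the guess?
        w[0] += rate * err * x1  # strengthen/weaken connections
        w[1] += rate * err * x2
        bias += rate * err

for (x1, x2), target in examples:
    out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", out, "expected", target)
[/code]

A single unit like this is nowhere near a brain, of course, but it is a rudimentary answer to "can a machine really learn something?": the behaviour comes from the data, not from the programmer.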

methylatedghosts
Sage
Posts: 516
Joined: Sun Oct 08, 2006 8:21 pm
Location: Dunedin, New Zealand

Post #33

Post by methylatedghosts »

Ok, I wasn't aware of that.

Can they learn things they were not programmed to learn though? Do they have "revelation" moments, like we do?
Ye are Gods

The Persnickety Platypus
Guru
Posts: 1233
Joined: Sat May 28, 2005 11:03 pm

Post #34

Post by The Persnickety Platypus »

Sort of.

As explained in the first article, Mr. Holland does not program any behavior into the computer, but creates genetic algorithms, allowing the computer to determine on its own the appropriate behavior. The computer, through trial and error, picks the pattern that best fits its given circumstance. Change the circumstance, and the computer will "evolve" a new program to fulfil its requirements, just as a living organism evolves to survive in a changing environment.
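
For the curious, here's a toy genetic algorithm in Python in the spirit of the one described above. It is not Holland's actual code; the target pattern and parameters are invented for illustration. Bit-strings are selected, recombined, and mutated until they fit the imposed "circumstance"; change TARGET and the population re-adapts:

[code]
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the "circumstance" (made up)
POP, GENS, MUT = 30, 100, 0.05     # assumed tuning parameters

def fitness(bits):
    # How well a candidate fits the current circumstance.
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits):
    return [b ^ 1 if random.random() < MUT else b for b in bits]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print("Adapted after", gen, "generations:", population[0])
        break
    parents = population[: POP // 2]  # survival of the fittest
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        children.append(mutate(a[:cut] + b[cut:]))  # crossover + mutation
    population = parents + children
[/code]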

However, there is no way for a computer to evolve capabilities beyond its given constraints. I could program a genetic algorithm into a computer and allow it to create its own programs, but it could never "have a revelation" -- create a program more complicated than the binary digits I gave it to work with.

This does not destroy any possibilities for artificial intelligence, of course. There are evolutionary constraints in nature as well. Humans never would have gotten as smart as we are now if our past environments had not imposed conditions on us that made higher intelligence a more suitable trait. Give a computer such as Holland's a demanding enough requirement and a broad enough algorithm, and there is literally no limit to the things it can do.

The challenge, of course, is creating such an algorithm.
