> Singularity-Transhumanism, a thread dedicated to discussing singularity
Culture
post Sep 14, 2006, 06:22 AM
Post #1


For those who are unsure of what is meant by the singularity, I would recommend Ray Kurzweil's "The Singularity Is Near." The book goes into considerable depth about how Moore's Law will be transcended by three-dimensional chips, and how we will/should achieve processing speeds capable of emulating the functions of the human brain.
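As a rough illustration of the arithmetic behind that claim (a minimal back-of-the-envelope sketch of my own, not a calculation from the book), assume a brain emulation needs on the order of 10^16 calculations per second, that a commodity machine today delivers about 10^11, and that capacity doubles every 18 months; all three numbers are ballpark assumptions chosen only to show how quickly exponential doubling closes the gap.

CODE

import math

# Illustrative assumptions only -- none of these figures is authoritative:
BRAIN_CPS = 1e16        # assumed calculations/second for functional brain emulation
START_CPS = 1e11        # assumed throughput of a commodity machine circa 2006
DOUBLING_YEARS = 1.5    # assumed Moore's-Law-style doubling time

# Doublings needed to close the gap, and the years that implies.
doublings = math.log2(BRAIN_CPS / START_CPS)
years = doublings * DOUBLING_YEARS

print(f"doublings needed: {doublings:.1f}")   # ~16.6
print(f"years at that pace: {years:.1f}")     # ~25

With those assumptions the crossover lands a couple of decades out, which is the general shape of the argument whatever the exact figures turn out to be.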

Now, how does this all fit in with transhumanism, or extropy for that matter?

A quick look at The Proactionary Principle
http://extropy.org/proactionaryprinciple.htm

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.
_________________________________________________________________________

Unpacking the Proactionary Principle

Looking deeper into the Principle, we arrive at these factors to take into account:

1. People’s freedom to innovate technologically is valuable to humanity. The burden of proof therefore belongs to those who propose restrictive measures. All proposed measures should be closely scrutinized.
2. Evaluate risk according to available science, not popular perception, and allow for common reasoning biases.
3. Give precedence to ameliorating known and proven threats to human health and environmental quality over acting against hypothetical risks.
4. Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks. Fully account for the benefits of technological advances.
5. Estimate the lost opportunities of abandoning a technology, and take into account the costs and risks of substituting other credible options, carefully considering widely distributed effects and follow-on effects.
6. Consider restrictive measures only if the potential impact of an activity has both significant probability and severity. In such cases, if the activity also generates benefits, discount the impacts according to the feasibility of adapting to the adverse effects. If measures to limit technological advance do appear justified, ensure that the extent of those measures is proportionate to the extent of the probable effects.
7. When choosing among measures to restrict technological innovation, prioritize decision criteria as follows: Give priority to risks to human and other intelligent life over risks to other species; give non-lethal threats to human health priority over threats limited to the environment (within reasonable limits); give priority to immediate threats over distant threats; prefer the measure with the highest expectation value by giving priority to more certain over less certain threats, and to irreversible or persistent impacts over transient impacts.


Techno-singularity?
link
A large part of technological-singularity theory concerns the idea that the total amount of intelligence, both human and machine, is increasing exponentially. At the heart of this idea is the understanding that as intelligence increases in power, so does its ability to imagine new possibilities and its power to realize those possibilities, including more intelligence for itself. But what is this elusive "intelligence" that is going to become so powerful? And just what does it mean to speak about levels of intellectual power? Furthermore, just what is the difference between the intelligence of a bacterium, a man, and a super-human intelligence?

Well, to start, there is no definitive answer to these questions, but I think I can get very close to the matter at hand. First off, in the most general sense, intelligence is manifested as a special kind of action. This action is teleological, or goal-directed, behavior. Anything that can be called intelligent must have the ability to initiate action based on a desired outcome. This much is true of anything from a microbe to a search engine to yourself.

Now, all intelligent actions exist as a series of choices. Each choice is selected from two or sometimes many options, and that selection is made on the basis of a desired outcome. So, at the very heart of the matter, intelligence is an ability to successfully make choices based on a desired outcome. Having laid this foundation of thought, let us move on to see how it can be used to further understand the difficult questions I have asked.
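To make "choices selected on the basis of a desired outcome" concrete, here is a minimal sketch of a goal-directed chooser; the toy environment, the actions, and the scoring are all invented for illustration and stand in for whatever the organism or machine actually cares about.

CODE

# A minimal sketch of teleological (goal-directed) choice: the agent picks whichever
# action it predicts will land closest to its desired outcome. Everything here
# (actions, predictions, goal) is a made-up toy example.

def choose(actions, predict, distance_to_goal):
    """Return the action whose predicted outcome is closest to the goal."""
    return min(actions, key=lambda a: distance_to_goal(predict(a)))

# Toy world: a creature at position 0 wants to reach food at position 3.
position, food = 0, 3
actions = {"stay": 0, "step_left": -1, "step_right": +1}

predict = lambda a: position + actions[a]          # crude cause-and-effect model
distance_to_goal = lambda outcome: abs(food - outcome)

print(choose(actions, predict, distance_to_goal))  # -> step_right

The same skeleton covers the microbe, the search engine, and the person mentioned above; what changes is the richness of the predictive model and of the goals.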

All organisms have intelligence, and all organisms evolved from a most basic form of intelligence. At the most basic level, an organism must make choices and initiate actions that will allow it to stay alive long enough to reproduce. This means that the organism must be able to identify and avoid threats, get food, and, in the case of sexual reproduction, find a mate. These are the 'goals' that define its intelligence. In order to be successful at accomplishing its goals, it must have a basic 'understanding' of how cause and effect work in its environment. The simpler the environment and the goals, the less intelligence is required. Present-day artificial intelligences are very similar to the kinds of intelligence displayed by primitive organisms. They can only operate in very simple environments, and they have very simple goals that define how they make decisions.

If we were to examine the levels of intelligence in the various organisms, we would find that those levels have also grown exponentially. For a very long time there wasn't much going on in this area, but then the doubling effect began to work its magic and intelligence began to grow incredibly fast. All of this growth can be attributed to the gradual growth in the difficulty of achieving the already established goals. For instance, if I am a predator and one of my goals is to find food, then I have to have enough intelligence to catch my prey. But if my prey has the goal of avoiding being eaten, then it needs to be smart enough to avoid being caught. May the smartest organism win. So you see what I mean. As intelligence increases, these basic goals become more difficult to achieve, and nature begins to aggressively select for intelligence. But what is happening on a fundamental level as these organisms gain in intelligence?
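As a toy illustration of that selection pressure feeding on itself (completely made-up numbers, just to show the ratchet), the sketch below lets whichever side is currently behind improve a little each generation.

CODE

# Toy predator-prey 'arms race' with invented numbers: each generation the side
# that is currently losing gets slightly smarter, so the bar keeps rising for both.

predator, prey = 1.0, 1.0   # arbitrary starting 'intelligence' scores
GAIN = 0.10                 # assumed per-generation improvement of the losing side

for generation in range(1, 31):
    if predator <= prey:
        predator *= 1 + GAIN   # predators that cannot catch prey are selected for smarts
    else:
        prey *= 1 + GAIN       # prey that keep getting caught are selected likewise
    if generation % 10 == 0:
        print(f"gen {generation:2d}: predator={predator:.2f}, prey={prey:.2f}")

# Both scores grow roughly exponentially even though each step is a small ratchet.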

What is happening is a greater and greater ability to understand how cause and effect operate, so that actions may be initiated with the desired outcomes. This is normally accomplished through what is called instinct. The organism may not 'understand' in the way that a human does, but the system that defines the organism 'understands' what kinds of actions are desired in the appropriate circumstances.

So instinct is a kind of intelligence that is more rigid, and it is probably analogous to most of the intelligent algorithms we can produce today. However, as we move up the chain of life to the higher mammals, and especially man, intelligence gets much less rigid and much more powerful. This is because at this stage in the game intelligence no longer needs to wait for nature to select the appropriate instincts; rather, the model of cause and effect in the environment has become so sophisticated and accurate that the organism can begin to be inventive. This means that the organism can begin to 'understand' in the sense we normally intend when using that term. An intelligence on this level now understands how the world works so well that it can predict what the ramifications of its actions will be, and it can even think of new ways to accomplish its goals.
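To contrast the two modes in that paragraph, here is a small sketch (again a toy of my own, not anyone's actual algorithm): an "instinct" that maps situations straight to fixed responses, next to a model-based agent that runs its cause-and-effect model one step ahead before acting.

CODE

# Two toy decision-makers contrasting rigid instinct with model-based 'understanding'.
# All of the situations, actions, and payoffs are invented for illustration.

INSTINCT = {"shadow_overhead": "hide", "smell_of_food": "approach"}  # fixed lookup table

def instinct_act(stimulus):
    # No model of the world: unknown situations fall back on a default reflex.
    return INSTINCT.get(stimulus, "freeze")

def model_based_act(state, actions, model, value):
    # 'Understanding': predict the consequence of each action, then pick the best one.
    return max(actions, key=lambda a: value(model(state, a)))

# Toy world: state is the distance to food; getting closer is valuable.
model = lambda state, a: state - 1 if a == "advance" else state
value = lambda predicted_state: -abs(predicted_state)

print(instinct_act("shadow_overhead"))                        # -> hide
print(instinct_act("novel_situation"))                        # -> freeze (rigid)
print(model_based_act(4, ["advance", "wait"], model, value))  # -> advance

The rigid table can never do better than its pre-selected entries, while the model-based agent can cope with any state its cause-and-effect model covers; that gap is, in miniature, the difference the paragraph is describing.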

It is this power which, when fully realized, gives birth to technology, and it is for this reason that man is the technological animal. He is able to improve on his 'cause-and-effect model of the world' and then transmit that model to other humans through his corresponding acquired faculty of language. With this power man is able to continually improve on his model and transmit that knowledge from generation to generation. As this model (science and metaphysics) becomes more powerful and accurate, so does man's ability to accomplish his goals.

This is where things get interesting, because man no longer has only those original goals discussed above. Man now has the burden of responsibility. This is where the concept of value comes in. Man began with the original goals that made evolution possible in the first place, but somewhere along the way he also came into possession of values. In order for there to be ethical action, certain outcomes must be deemed of more value than others. Human beings have developed sophisticated cultures that deal with the issue of value. This occurred during the great axiological age. It was during this time that most of the great religions appeared, and also when the first systems of codified law appeared. The significance of this is that we have, as a species, created new goals that are still based in the old goals but which transcend them, in the sense that we now understand certain goals to be good and others to be evil. We cannot change what is good and evil; it is hard-wired into our brains. We could change that hard wiring itself, but based on what? The only goals we know are these: good and evil. On what premise would we change this basic part of our humanness? Would we claim that some other good was more good? This is not possible.

For this reason I have come to the conclusion that human-equivalent machine intelligence should not be treated as something other than man. It must have the same goals and values. If a machine with human-level intelligence had precisely the same super-goals as human biological intelligence, then it would be essentially human. However, if these kinds of machines had values or goals that differed from man's, then there would inevitably be conflict, and the machines would win that conflict.

Therefore it is of the utmost importance that we fully master how the brain works, and where human values come from, before Strong AI arrives on the scene.

Cybert
post Sep 14, 2006, 08:10 AM
Post #2


There is no strong AI. There is no weak AI. We'll find the patterns of sentience and that will end this conversation. I think sentience is a quite binary thing. We will become superintelligent. Or we will "birth" superintelligents just like we do when having a baby. It's the same sentience.
Culture
post Sep 14, 2006, 11:34 AM
Post #3


QUOTE(Cybert @ Sep 14, 2006, 08:10 AM) *

There is no strong AI. There is no weak AI. We'll find the patterns of sentience and that will end this conversation. I think sentience is a quite binary thing. We will become superintelligent. Or we will "birth" superintelligents just like we do when having a baby. It's the same sentience.


Cybert, what are you referring to as patterns?

Until the leap is made (which will require some pretty big obstacles to be overcome), conversations like these do not end. It is not solely dependent on advances in science and technology; these are only two parts of the whole. The singularity will need to stand on its own ground philosophically and economically, to mention a few areas.

Cybert
post Sep 14, 2006, 01:57 PM
Post #4


QUOTE(Culture @ Sep 14, 2006, 07:34 PM) *

QUOTE(Cybert @ Sep 14, 2006, 08:10 AM) *

There is no strong AI. There is no weak AI. We'll find the patterns of sentience and that will end this conversation. I think sentience is a quite binary thing. We will become superintelligent. Or we will "birth" superintelligents just like we do when having a baby. It's the same sentience.


Cybert, what are you referring to as patterns?

Until the leap is made (which will require some pretty big obstacles to be overcome), conversations like these do not end. It is not solely dependent on advances in science and technology; these are only two parts of the whole. The singularity will need to stand on its own ground philosophically and economically, to mention a few areas.


No, it doesn't. Technology rules above all else. Just like we discovered DNA, we'll discover the patterns that generate sentience. Do we discuss if certain creatures "sort of" use DNA? No. Same thing with sentience. Maybe the line will be at vertebrates...maybe mammals. We'll see.

The singularity depends only on technology. Philosophy is useless, and economics can only help or hinder it. It is inevitable.
code buttons
post Sep 14, 2006, 02:21 PM
Post #5


QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The singularity depends only on technology. Philosophy is useless, and economics can only help or hinder it. It is inevitable.

Ok, now I'm confused. Which Singularity are you talking about? Technological, Consciousness, cosmic...?
Cybert
post Sep 14, 2006, 02:50 PM
Post #6


QUOTE(code buttons @ Sep 14, 2006, 10:21 PM) *

QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The singularity depends only on technology. Philosophy is useless, and economics can only help or hinder it. It is inevitable.

Ok, now I'm confused. Which Singularity are you talking about? Technological, Consciousness, cosmic...?


The one where I get to be a brain the size of jupiter.
Culture
post Sep 15, 2006, 02:08 AM
Post #7


QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The singularity depends only on technology. Philosophy is useless, and economics can only help or hinder it. It is inevitable.


QUOTE(code buttons @ Sep 14, 2006, 10:21 PM) *

Ok, now I'm confused. Which Singularity are you talking about? Technological, Consciousness, cosmic...?



QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The one where I get to be a brain the size of jupiter.


Cybert, I do not think you are even sure what the singularity (in any form) is about. It goes beyond sentience. There ARE economic implications. There are philosophical implications... philosophy, by its very definition, is something that sentients are able to use.

Unless you ready yourself (which I thought you were trying to do, until the last few posts), you will unfortunately miss the bus. Do you realise how ready (mentally fit) one would have to be to harness the technology that will drive thinking capacity? If you ignore the factors and questions that today still have no answers, then even if you are lucky enough to get "a brain the size of Jupiter," it would be useless if there is no matter inside, if there is no core driving the revolution. A brain the size of Jupiter is useless if it has nothing inside.
vel
code buttons
post Sep 15, 2006, 06:09 AM
Post #8


There will be no "size" issues after CS. And no time or space issues, for that matter, Cybert. These are all illusions of our present, imprisoned state of consciousness. I'll make you a brain the size of Jupiter just for kicks when I get there. Hell, I'll make you as many trillions of Jupiters as you wish, until you say stop. You need to make sure to read up on what the implications of this event will be like. Because from here it looks like you got your thinking on the subject all mixed up.
Cybert
post Sep 15, 2006, 08:10 AM
Post #9


QUOTE(Culture @ Sep 15, 2006, 10:08 AM) *

QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The singularity depends only on technology. Philosophy is useless, and economics can only help or hinder it. It is inevitable.


QUOTE(code buttons @ Sep 14, 2006, 10:21 PM) *

Ok, now I'm confused. Which Singularity are you talking about? Technological, Consciousness, cosmic...?



QUOTE(Cybert @ Sep 14, 2006, 01:57 PM) *

The one where I get to be a brain the size of jupiter.


Cybert, I do not think you are even sure what the singularity (in any form) is about. It goes beyond sentience. There ARE economic implications. There are philosophical implications... philosophy, by its very definition, is something that sentients are able to use.

Unless you ready yourself (which I thought you were trying to do, until the last few posts), you will unfortunately miss the bus. Do you realise how ready (mentally fit) one would have to be to harness the technology that will drive thinking capacity? If you ignore the factors and questions that today still have no answers, then even if you are lucky enough to get "a brain the size of Jupiter," it would be useless if there is no matter inside, if there is no core driving the revolution. A brain the size of Jupiter is useless if it has nothing inside.
vel


There are NO ECONOMIC IMPLICATIONS to a posthuman. When one is self-sufficient, travelling between solar systems, how do economics come into play at all? One is ALONE.
Rick
post Sep 15, 2006, 09:58 AM
Post #10


Suppose a transhuman encounters a being from another star system who is trying to destroy him? Is that economics? Maybe it's politics.
Cybert
post Sep 15, 2006, 03:21 PM
Post #11


QUOTE(Rick @ Sep 15, 2006, 05:58 PM) *

Suppose a transhuman encounters a being from another star system who is trying to destroy him? Is that economics? Maybe it's politics.

Neither. It's weapon offense and defense systems.