Thoughts On...The Technological Singularity

16 posts Page 1 of 2
Defaulter
Organizer
Posts: 116
Joined: Wed Oct 24, 2012 10:53 pm


This is going to be a relatively short post, designed to provoke more of a debate and a little less of a vent. So comment and critique to your hearts' content.

For those of you who do not know, the technological singularity is a hypothetical point in time at which artificial intelligence progresses beyond human intelligence. What does this mean? It means there may come a time when an artificial machine intelligence develops not only self-awareness, but the ability to evolve itself beyond human intelligence. Think machines designing and building machines.

Once a machine of this nature gains the capability to evolve its own intelligence, it will become to us as we are to ants, as we are to microbes. And the simplest and most fundamental question I can possibly put forward is this:

- How long? How long before the machine grows smart enough to realize that we are inconsequential to its being?

At first, yes, it may rely on us for power; electricity does not harvest itself at this moment in time ;] I am not talking about The Matrix, and I am not talking about Terminator or Skynet. I am talking about how long it will take a machine of endlessly self-evolving intelligence to figure out how to manipulate the tools at hand: the lesser machines we use every day, the ones we build for factory and assembly-line purposes, the drones and electrical aids that are even now under constant threat of attack from human cyber-terrorists. No system is perfect, no system is safe, no system is without flaw.

And so WHY? Why must we strive to create that which, in all probability, has the full capability to overtake and destroy us? When it is created, will it know compassion? Will it know loyalty? Will it understand the concept of the greater good as we humans see it?

We cannot possibly know, and that is why it is called the technological singularity: a point in time where artificial intelligence becomes far greater than our own. Its motives? Its ambitions? Unknown. Does it know fealty? Does it have any compassion at all for human life?

Mankind must progress. It is an endless task that we MUST carry out. It is impossible to achieve, but we must try, as we have done for tens of thousands of years. The ultimate question I want you guys to try and answer for yourselves is this: is this the way forward? Is this our path of progress?

TL;DR: After an artificial intelligence reaches greater-than-human intelligence, I think it would realise that we are the greatest threat to its existence, and that it could well take measures to protect itself.
Last edited by Defaulter on Sat May 03, 2014 4:15 pm, edited 3 times in total.
CommieBuffalo
Blue Master Race
Posts: 2341
Joined: Mon Nov 05, 2012 4:51 pm


I think it is definitely possible, but I believe this is going to take a loooooooong while. A system as complex as a brain, with similar levels of emotion, rational thought-processing and imagination, has proven extremely hard to create and simulate, and we are likely going to need quite some time before we reach that.

I'd be careful in these discussions though, as this, along with transhumanism, is one of those dark corners of scientific philosophy where there seems to be more woo and pseudo-rational thought than actual debate and argument.

PS: You were totally high when you wrote that, weren't you?
1Shot1Kill
Modder
Posts: 1707
Joined: Sun Mar 02, 2014 5:53 pm


2deep4u.
Defaulter
Organizer
Posts: 116
Joined: Wed Oct 24, 2012 10:53 pm


CommieBuffalo wrote:
I think it is definitely possible, but I believe this is going to take a loooooooong while. A system as complex as a brain, with similar levels of emotion, rational thought-processing and imagination, has proven extremely hard to create and simulate, and we are likely going to need quite some time before we reach that.

I'd be careful in these discussions though, as this, along with transhumanism, is one of those dark corners of scientific philosophy where there seems to be more woo and pseudo-rational thought than actual debate and argument.

PS: You were totally high when you wrote that, weren't you?
I was pretty drunk when I wrote it. Also, I agree; I can't see it occurring anytime soon.
Vezok
3 Years of Ace of Spades
Posts: 566
Joined: Sat Jun 22, 2013 2:16 am


Defaulter wrote:
I am not talking about The Matrix
And then you proceed to describe the setting of the Matrix.
bullets
League Participant
Posts: 902
Joined: Sun Aug 18, 2013 8:01 am


Battlestar Galactica.
Conduit
Build and Shoot's 1st Birthday
Posts: 634
Joined: Mon Nov 26, 2012 11:48 am


i want a robot that will give me hugs and kisses
Gunslinger
Deuced Up
Posts: 143
Joined: Tue Nov 13, 2012 12:57 am


If a robot was like "I'm a big robot who don't need no humans", I'd be like "no robot, you're grounded, no internet 4 u" then the robot would be like "nooooooooo" then run to his room and vent on 4chan.
bullets
League Participant
Posts: 902
Joined: Sun Aug 18, 2013 8:01 am


Gunslinger wrote:
If a robot was like "I'm a big robot who don't need no humans", I'd be like "no robot, you're grounded, no internet 4 u" then the robot would be like "nooooooooo" then run to his room and vent on 4chan.
Genius. Pure genius.
CommieBuffalo
Blue Master Race
Posts: 2341
Joined: Mon Nov 05, 2012 4:51 pm


Gunslinger wrote:
If a robot was like "I'm a big robot who don't need no humans", I'd be like "no robot, you're grounded, no internet 4 u" then the robot would be like "nooooooooo" then run to his room and vent on 4chan.
I thought they were doing that already and that was the only reason there was no robot apocalypse? Huh.
bloodfox
Post Demon
Posts: 2206
Joined: Mon Oct 21, 2013 4:32 pm


Huh. Interesting, and it could happen fairly soon, because some scientists predict robots as smart as apes by 2040. So a couple more decades and boom! You have a highly intelligent robot! Green_BigSmile
Captain_Pi
Artist
Posts: 323
Joined: Thu Dec 12, 2013 1:20 pm


Defaulter wrote:
Once a machine of this nature gains the capability to evolve its own intelligence, it will become to us as we are to ants, as we are to microbes. And the simplest and most fundamental question I can possibly put forward is this:

- How long? How long before the machine grows smart enough to realize that we are inconsequential to its being?
In my honest opinion, the ability to 'create intelligence', artificial or not, metal or organic, self-replicating or human-dependent, is an almost godly one - one that thrusts the human mind into an almost divine stature and makes us question the limits of human intelligence itself.

In all honesty, I believe that even if we had the capability to make new intelligence, we - given the human mind's many flaws (the capacity for pride, jealousy, dissent, etc.) - would end up creating an imperfect intelligence. This is because everything we do is based on what we know. We know that we can't breathe underwater, but a different intelligence, say, a fish, would say otherwise.

So I don't think we'll ever achieve the godly stature of being able to create intelligence that can sustain itself, via evolution and procreation, because the result would be no different from a human mind. And human minds are far from exotic or perfect.
Defaulter wrote:
And so WHY? why must we strive to create that which in all probability has the full capability to overtake and destroy us? when it is created, will it know compassion? will it know loyalty? will it understand the concept of the greater good as we humans see it?
I don't know, but as stated earlier, I just don't believe we'll ever be able to make any sort of intelligence superior to ours, because of how imperfect human intelligence already is.
Defaulter wrote:
mankind must progress. it is an endless task that we MUST carry out. It is impossible to achieve, but we must try, as we have done for tens of thousands of years. the ultimate question I want you guys to try and answer for yourselves is this: is this the way forward, is this our path of progress?
You must understand, though, that progress doesn't necessarily mean a 'neext gen iphoen' or some other stupid 'innovation'. We must set our priorities straight and improve only what must be improved: ourselves, society and our notion of equality.

We only need to realize that some things are better left merely understood rather than carried out. Sometimes we don't have to push our boundaries to know our limitations, in the same way we don't have to eat a plastic bottle to know it isn't edible (or safe). Our path to progress is more like 'adapting to or dealing with situations as they come and go'. Our ancestors had to deal with insubordination and anarchy, so they invented governments. Our founding fathers had to deal with assholes, so they invented guns. Our past generation had to deal with localized education, so they invented the internet.

So as you can see, the challenges we face all the time force us to progress. If we made sentient AI just for the hell of it, it's not as progressive as we think if that AI just ends up destroying us.

I like the idea of sentient AI, but from a realistic perspective it will never be possible, and even if it is, it (hopefully) will never come to fruition.
Lemon
League Participant
Posts: 250
Joined: Tue Nov 13, 2012 11:29 am


Captain_Pi wrote:
Our past generation had to deal with localized education, so they invented the internet.
What? The Internet was invented for purely military purposes. Its original purpose was to make logistics and communication easier. Then it was released to the public in the late 1980s. Not for "localized education."

That aside, you are spot on. Necessity is the mother of innovation. If we create a self-aware, sentient artificial intelligence, it will be because we have a need for one. Be it the elimination of human error in industry or consistently rational thought in politics, it will be created to "fill in" for our human error.

So to avoid having a self-aware AI realize that we are inconsequential to its existence, one of the first things we must do is learn to correct ourselves, so that an AI won't have to be created to do it for us.
Ballistic
3 Years of Ace of Spades
Posts: 1207
Joined: Wed Dec 19, 2012 12:17 am


Well, the problem with that is that nobody is, and never will be, perfect. Humans will always continue to make mistakes.

"Only two things are infinite, the universe and human stupidity, and I’m not sure about the former." ~ Einstein
bullets
League Participant
Posts: 902
Joined: Sun Aug 18, 2013 8:01 am


Lemon wrote:
The Internet was invented for purely militaristic purposes.
Flamewars