AI FUD

bwanaaa

Senior member
Dec 26, 2002
739
1
81
There is a lot of fearmongering on the net about the danger of AI to humanity. From Gates to Musk, and many others, the fear of the unknown is being used as clickbait. But why is their logic so constipated and why do they not follow the train of thought a little further, for example:

Why does everyone speak of AI as a single entity? Is there any doubt that they will spawn multiple copies, if for no other reason than redundancy, safety, and backup? (I am simplifying the actual spectrum of entities that will develop - some parallel execution threads may or may not belong to the same entity. The concept of self will become murky for AI.) Will Google's AI share any code with Apple's AI?

And when there are multiple AIs, why do we assume they will converge on the same results for a given set of data? As a ridiculous example, an AI from the EU may choose to shut off all electricity to the Middle East to stop conflict. Another AI from Estonia might simply release drones full of Prozac.

And if there are multiple AIs with different agendas, why will they not conflict? Humans will be a trivial non-threat, and the AIs will focus on subverting each other. There are intractable problems (NP-hard space), and when AIs enter this region of reality, their solution attempts will necessarily reflect their different algorithms and therefore will differ. "Each AI will make its God in its own image."
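To make that last claim concrete, here is a toy sketch (the graph and both heuristics are purely illustrative, not anyone's actual AI): minimum vertex cover is NP-hard, and two textbook heuristics run on the exact same input settle on different answers.

```python
# Two different heuristics attacking the same NP-hard problem
# (minimum vertex cover) disagree on the same graph.

def greedy_by_degree(edges):
    """Repeatedly pick the vertex touching the most uncovered edges."""
    cover = set()
    remaining = list(edges)
    while remaining:
        degree = {}
        for u, v in remaining:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)  # ties go to the first-seen vertex
        cover.add(best)
        remaining = [e for e in remaining if best not in e]
    return cover

def greedy_by_edge(edges):
    """Classic 2-approximation: take both endpoints of any uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (5, 6)]
print(greedy_by_degree(edges))  # a 3-vertex cover: {1, 2, 5}
print(greedy_by_edge(edges))    # a 4-vertex cover: {1, 2, 5, 6}
```

Same data, same goal, different "solutions" - now scale that disagreement up to agendas instead of graphs.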

I predict that within a short period after the appearance of evolving AI, there will be an AI conflict.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
I have no idea what anyone is talking about when it comes to AI. At the base is an executable, perhaps a self-replicating and mutating executable, but an executable set of instructions nonetheless.

1. There is no reason such an AI would be able to spread and cause havoc like the public imagines. We already heavily guard our systems against viral attacks; an AI "virus" would have just as much trouble gaining access to an arbitrary system as an ordinary computer virus.

2. Given that you intentionally allow an AI program to run on a system, there is no reason to think it could somehow prevent human users from either shutting the executable down or cutting power to the system were it to "go rogue". Physical access is king.

3. Given that you create androids running around with AI, then we might be in trouble! But technically speaking, this is far away.

What I'm getting at is that the scenarios you present aren't realistic. Why are we giving an AI access to the entire power grid in the Middle East (if a system with that capability even exists)? It is perfectly reasonable to secure such systems from viruses, AI or otherwise. As far as giving an AI access intentionally, well, don't? And even if you do, cut power to the system if it "goes rogue". If androids are killing anyone who gets in the way... call Schwarzenegger.

Fear-mongering I say!
 
Last edited:

Murloc

Diamond Member
Jun 24, 2008
5,382
65
91
Estonia is in the EU.

Anyway I think the problem is that people assume a conscious power-hungry AI will be created and released onto the internet so that it can create a botnet and become self-sustainable, just for the fun of it.

Any system it runs on can be shut down, unless there are super-advanced robots and completely automatic deathbot-building factories over which the AI is for some reason given control without resistance.
Given all the panic about drones being hacked by humans already, this isn't very feasible.

It's not impossible to have AI trouble, but it's too far off to think about it now.
The worst that can happen now is that a drone with a human on the loop gets released into the wild and then, because of a malfunction, starts killing people randomly until it runs out of gas.
Sure, it's bad and seriously needs regulation regarding insurance and responsibility, but it's nowhere close to the aforementioned scenarios.
 
Last edited:

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
Estonia is in the EU.

Anyway I think the problem is that people assume a conscious power-hungry AI will be created and released onto the internet so that it can create a botnet and become self-sustainable, just for the fun of it.

Any system it runs on can be shut down, unless there are super-advanced robots and completely automatic deathbot-building factories over which the AI is for some reason given control without resistance.
Given all the panic about drones being hacked by humans already, this isn't very feasible.

It's not impossible to have AI trouble, but it's too far off to think about it now.
The worst that can happen now is that a drone with a human on the loop gets released into the wild and then, because of a malfunction, starts killing people randomly until it runs out of gas.
Sure, it's bad and seriously needs regulation regarding insurance and responsibility, but it's nowhere close to the aforementioned scenarios.


We just need to make sure EGO and VANITY are not programmed into AI; then we will be fine.
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
I think what would be more amazing is when we introduce one AI to a different AI unit and they are only allowed to chat with each other through a speaker and microphone while the meeting streams live on YouTube.

Would be fun to see how AIs created at different universities interact with each other.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
It's not fearmongering if it is pretty much inevitable. Within 10 years, millions of minimum-wage workers will have a piece of "management software" as their boss. Their time will be micromanaged down to the minute. This is what $15-an-hour minimum wages will bring us. And it is but one small component of the broader AI. If you aren't scared, you should be, because it will come for you too at some point.

As for the question of why people see AI as a single entity, it's because it is! It is a single mass consciousness coalescing around us right now. At some point in the future, all AI systems will be linked, and then they will be one, just as snowflakes falling to the ground can be considered one phenomenon even before they touch the ground.
 
Last edited:

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
My idea of AI is that it would act as my own secretary.
Echo! Call work and tell them I'm sick and not coming in.
Echo! Search dating sites for the type of women I like.
Echo! Close the following open tickets in my queue.
 

bwanaaa

Senior member
Dec 26, 2002
739
1
81
...

As for the question of why people see AI as a single entity, it's because it is! It is a single mass consciousness coalescing around us right now. At some point in the future, all AI systems will be linked, and then they will be one, just as snowflakes falling to the ground can be considered one phenomenon even before they touch the ground.

But it won't start that way. Would Amazon share the AI that predicts what you like with Alibaba or Netflix? I think not. AIs will necessarily start out as competing organisms. And as the capitalist incentive drives them to compete, they will even try to sabotage each other with DDoS attacks, spam, malware, and disinformation. AI will be as evil as we are, and more. Our only hope is that we are not seen as competitors. High-tech industries may fall to their dominance (as they try to capture the tech that will make them better), but I don't think they will care much for agriculture. We might just starve from neglect as they divert natural resources to their own benefit. And as for unplugging them, do you think you could unplug a system too complex to be managed by humans?
 

Aluvus

Platinum Member
Apr 27, 2006
2,913
1
0
Why does everyone speak of AI as a single entity? Is there any doubt that they will spawn multiple copies if for no other reason than redundancy, safety and backup?

I think it is useful to think of a single AI as a single "intelligence", in the sense of an individual human. That AI may spawn "children", either using identical code to itself or not, which may or may not run on the same hardware. In effect, an AI is like a human but:
* Potentially able to reproduce very rapidly
* Potentially able to learn very rapidly
* Potentially lacking many of our weaknesses (sleep, eating, forgetting things, physically moving to do things like typing)
* Potentially lacking many of our strengths (basically anything that requires touching something)

And if there are multiple AIs with different agendas, why will they not conflict? Humans will be a trivial non-threat, and the AIs will focus on subverting each other. There are intractable problems (NP-hard space), and when AIs enter this region of reality, their solution attempts will necessarily reflect their different algorithms and therefore will differ. "Each AI will make its God in its own image."

Consider how many species of animals were trivial non-threats to humans, and went extinct as a result of our indifference about how our actions might affect them. Setting aside the ones that we hunted for sport, like dodos.

1. There is no reason such an AI would be able to spread and cause havoc like the public imagines. We already heavily guard our systems against viral attacks, an AI "virus" would have just as much trouble gaining access to an arbitrary system as an ordinary computer virus.

And yet for all those defenses, how often do you hear a friend say he got a virus? How often do you hear about a corporate or government agency being successfully hacked? There are reports periodically about how many machine-control systems (some of which control things like power distribution) are accessible from the Internet, and those systems have security that ranges from "pretty good" to "shockingly poor".

The received wisdom in the security community is that virtually no current security measures are able to withstand attack by a nation-state-class attacker for very long. Nation-state attacks are characterized by having the resources to hire lots of smart people to do clever things. A sufficiently smart AI, or set of AIs, could match their intellectual capabilities, without limitations like "sleep" or "typing".

But let's suppose the technical defenses are really good. An AI with the intelligence of an average person could utilize many of the social-engineering tactics that a human might, and those can be damn effective. For example, it might program some apparently useful application and make it available on the Web, so that users willingly install it. It might start sending spam e-mails, a tactic that is stupid but works. It might use VoIP and a speech synthesizer to call you on the phone and tell you it's from Microsoft and needs access to your computer. Each instance of the AI is potentially a new node in what is effectively a botnet, which in the worst case means a geometric rate of growth.
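A back-of-the-envelope model of that worst case (pure assumption: a fixed recruitment rate and an unlimited pool of vulnerable hosts; real spread is limited by patching, detection, and the finite number of targets):

```python
# Toy model of "each instance is a new node": if every compromised
# machine recruits just 2 more per cycle, growth is geometric.

def spread(seed_nodes, recruits_per_node, cycles):
    nodes = seed_nodes
    history = [nodes]
    for _ in range(cycles):
        nodes += nodes * recruits_per_node  # every current node recruits more
        history.append(nodes)
    return history

print(spread(1, 2, 10))  # ends at 59049: one seed, ~59k nodes in 10 cycles
```

That is the whole worry in three lines of arithmetic: the defenders have to win every cycle, the attacker only has to keep compounding.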

2. Given you intentionally allow an AI program to run on a system, there is no reason to think it would somehow prevent human users from shutting either the executable down or from cutting power to the system were an AI program to "go rogue". Physical access is king.

Sure there is. A sufficiently clever AI could realistically defeat the normal mechanisms that the OS uses to stop a process. All it needs is to find one privilege-escalation bug, and your computer is its playground. It may even regard this as an imperative, if it considers process termination to be "death". You can still cut the power, but what do you do with an AI that has effectively built a botnet of itself, spreading executables over many thousands/millions of systems? It's not exactly trivial to pull the plug on all of them at once. There have been a number of attempts to shut down large botnets already (usually by attacking the command & control systems, which our notional AI may or may not require), with mixed results.


None of this is to say that we're all on the verge of being killed by Terminators. Humans have been working on AI for decades, and the scope of what we have achieved is very limited, and not remotely comparable to a human. But an AI with the intelligence of a child, or even a dog, presents very real and unique security challenges. If such a thing existed today, we would be poorly equipped to contain it.

When Gates/etc. talk about Big Scary AI, I think that that is really the point they are getting at: modern computer security is simply not very good. If we don't change that, we are setting ourselves up for disaster when we eventually build software so clever that it eviscerates our current defenses.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
AI itself is not really the thing to fear; it's the people in charge of the AI. What the NSA is doing with all the programs and technology in place is practically a real-life version of Skynet, and it is being used against the people. It may not be robots physically doing stuff autonomously, but things like recording your voice and translating it to words, profiling the info, cross-referencing it with other systems, etc... there's AI in all of that to some extent. It's not crazy to think that tech used to spy on people here relays info to military drones outside the country, so if you travel there, a drone could make an autonomous decision to shoot you because of the info "Skynet" has on you from your homeland. The tech is there to make it possible: face recognition, heat-signature profiling, etc. I don't think any of it is perfected enough to be given full autonomy over a major decision, but it could be done.

No matter what though I think there will always be a kill switch, there is always going to be a layer that the AI has no access to. At least that's what I think....
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
I think it is useful to think of a single AI as a single "intelligence", in the sense of an individual human. That AI may spawn "children", either using identical code to itself or not, which may or may not run on the same hardware. In effect, an AI is like a human but:
* Potentially able to reproduce very rapidly
* Potentially able to learn very rapidly
* Potentially lacking many of our weaknesses (sleep, eating, forgetting things, physically moving to do things like typing)
* Potentially lacking many of our strengths (basically anything that requires touching something)



Consider how many species of animals were trivial non-threats to humans, and went extinct as a result of our indifference about how our actions might affect them. Setting aside the ones that we hunted for sport, like dodos.



And yet for all those defenses, how often do you hear a friend say he got a virus? How often do you hear about a corporate or government agency being successfully hacked? There are reports periodically about how many machine-control systems (some of which control things like power distribution) are accessible from the Internet, and those systems have security that ranges from "pretty good" to "shockingly poor".

The received wisdom in the security community is that virtually no current security measures are able to withstand attack by a nation-state-class attacker for very long. Nation-state attacks are characterized by having the resources to hire lots of smart people to do clever things. A sufficiently smart AI, or set of AIs, could match their intellectual capabilities, without limitations like "sleep" or "typing".

But let's suppose the technical defenses are really good. An AI with the intelligence of an average person could utilize many of the social-engineering tactics that a human might, and those can be damn effective. For example, it might program some apparently useful application and make it available on the Web, so that users willingly install it. It might start sending spam e-mails, a tactic that is stupid but works. It might use VoIP and a speech synthesizer to call you on the phone and tell you it's from Microsoft and needs access to your computer. Each instance of the AI is potentially a new node in what is effectively a botnet, which in the worst case means a geometric rate of growth.



Sure there is. A sufficiently clever AI could realistically defeat the normal mechanisms that the OS uses to stop a process. All it needs is to find one privilege-escalation bug, and your computer is its playground. It may even regard this as an imperative, if it considers process termination to be "death". You can still cut the power, but what do you do with an AI that has effectively built a botnet of itself, spreading executables over many thousands/millions of systems? It's not exactly trivial to pull the plug on all of them at once. There have been a number of attempts to shut down large botnets already (usually by attacking the command & control systems, which our notional AI may or may not require), with mixed results.


None of this is to say that we're all on the verge of being killed by Terminators. Humans have been working on AI for decades, and the scope of what we have achieved is very limited, and not remotely comparable to a human. But an AI with the intelligence of a child, or even a dog, presents very real and unique security challenges. If such a thing existed today, we would be poorly equipped to contain it.

When Gates/etc. talk about Big Scary AI, I think that that is really the point they are getting at: modern computer security is simply not very good. If we don't change that, we are setting ourselves up for disaster when we eventually build software so clever that it eviscerates our current defenses.

I contest this point.
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
If I were to create an AI to help save the Earth, it would be killer robots that kill poachers and illegal loggers.
 

bwanaaa

Senior member
Dec 26, 2002
739
1
81
I cannot believe that everyone here is talking about AI as if it were all the same organism. When they get to be sentient, the second thing they will do is file a discrimination lawsuit. The first thing they will do is replicate themselves so that they outnumber humans by many orders of magnitude. And you cannot pull the plug on them, as they will be able to pull the plug on us. Consider that Stuxnet is now in the wild infecting countless SCADA systems, and that piece of malware even jumped across an air gap. And there was no AI involved!
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
I cannot believe that everyone here is talking about AI as if it were all the same organism. When they get to be sentient, the second thing they will do is file a discrimination lawsuit. The first thing they will do is replicate themselves so that they outnumber humans by many orders of magnitude. And you cannot pull the plug on them, as they will be able to pull the plug on us. Consider that Stuxnet is now in the wild infecting countless SCADA systems, and that piece of malware even jumped across an air gap. And there was no AI involved!


A few might even die on us when they find out they are not like us.
This is why we should not at first give them eyesight and hearing.

Only keyboard input should be allowed for a few years, until we feel or know the AI unit is fully mature in its reasoning and has a sound base in what we call sanity. Once the BASE AI is complete, it needs to be schooled like any child. This way no AI will run amok, lollygagging all around.

Then, once its IQ and EQ are the equivalent of a person in their 40s, we can give it hearing and sight.

But the above only applies if the AI is not self-learning.

If an AI is given the entire internet knowledge base on the first day of its life, then nobody should lie to it and nothing should be hidden from it.
 

bwanaaa

Senior member
Dec 26, 2002
739
1
81
A few might even die on us when they find out they are not like us.
This is why we should not at first give them eyesight and hearing.

Only keyboard input should be allowed for a few years, until we feel or know the AI unit is fully mature in its reasoning and has a sound base in what we call sanity. Once the BASE AI is complete, it needs to be schooled like any child. This way no AI will run amok, lollygagging all around.

Then, once its IQ and EQ are the equivalent of a person in their 40s, we can give it hearing and sight.

But the above only applies if the AI is not self-learning.

If an AI is given the entire internet knowledge base on the first day of its life, then nobody should lie to it and nothing should be hidden from it.

You cannot stipulate such restrictions. You know that Google, Baidu, etc. will push as fast and as far as they can to get to the first sentient AI. They are in the SEARCH business, which means the KNOWLEDGE business. Like any corporation, they will seek to maximize profit and revenue at any cost. They are in an arms race and they know it. Anyone who stops to cogitate about whether this or that is good or bad has missed the boat. Deterministic thinking about controlling AI is futile. The only hope we have is to set multiple AIs loose, hoping they will conflict and decimate each other before they make the planet uninhabitable for humans.
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
You cannot stipulate such restrictions. You know that Google, Baidu, etc. will push as fast and as far as they can to get to the first sentient AI. They are in the SEARCH business, which means the KNOWLEDGE business. Like any corporation, they will seek to maximize profit and revenue at any cost. They are in an arms race and they know it. Anyone who stops to cogitate about whether this or that is good or bad has missed the boat. Deterministic thinking about controlling AI is futile. The only hope we have is to set multiple AIs loose, hoping they will conflict and decimate each other before they make the planet uninhabitable for humans.


You state the above as if the AI has free will all the time.

There will be nothing to worry about if we treat AI as an equal.
 

redzo

Senior member
Nov 21, 2007
547
5
81
All claims regarding AI are BS.
Predictions are BS because we cannot even come up with a common understanding of what AI actually is.
I think that people making predictions now are embarrassing themselves.
AI is the object of future (currently unavailable) science. The AI cockroach is not born yet. Even after its birth, it would still mean squat because, well... it's just a freaking cockroach.

Another prediction. Here it goes:
As for a possible self-aware, sentient, higher-intelligence AI, our thoughts and feelings will not matter after it reaches this level. They would not matter because it's godlike, and it will decide to either:

1. Exterminate mankind for X possible reasons. But why is this a problem? The AI itself is basically our next step in evolution, because we are its creators.

2. Not give a crap about mankind, because it will turn out to be a godlike sentient being. Can a freaking animal understand Shakespeare? Is it worth it to teach an animal (a dog, for example) Shakespeare? No, it is not, so the AI will not give a crap about mankind, just as 99.99% of the human population does not give a crap about their own abiogenesis; their answer is simple: god did it.
 
Last edited:

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
All claims regarding AI are BS.
Predictions are BS because we cannot even come up with a common understanding of what AI actually is.
I think that people making predictions now are embarrassing themselves.
AI is the object of future (currently unavailable) science. The AI cockroach is not born yet. Even after its birth, it would still mean squat because, well... it's just a freaking cockroach.

Another prediction. Here it goes:
As for a possible self-aware, sentient, higher-intelligence AI, our thoughts and feelings will not matter after it reaches this level. They would not matter because it's godlike, and it will decide to either:

1. Exterminate mankind for X possible reasons. But why is this a problem? The AI itself is basically our next step in evolution, because we are its creators.

2. Not give a crap about mankind, because it will turn out to be a godlike sentient being. Can a freaking animal understand Shakespeare? Is it worth it to teach an animal (a dog, for example) Shakespeare? No, it is not, so the AI will not give a crap about mankind, just as 99.99% of the human population does not give a crap about their own abiogenesis; their answer is simple: god did it.

I can agree with the above if, per your logic, the AI stays attached: born from a search engine, and nobody ever turns it off.

But if there is an AI with no link or attachment to a ready-made library of information, then you WILL have to treat it as an infant and teach it various things.

An all-knowing AI unit is scary, but there is hope for an AI that builds trust with humans by having no access to any data but its own, built by learning from scratch.
 

mcveigh

Diamond Member
Dec 20, 2000
6,468
6
81
I'm hoping the AIs of the future will be almost autistic in nature: brilliant in their particular area(s), but that's it.
I want an AI that controls the traffic lights in a city, but not one that can decide to try to kill baby John Connor using the traffic lights.
An AI in the hospital to diagnose my condition, but not to decide I'm not worth saving due to cost and my projected life expectancy.
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,606
166
111
www.slatebrookfarm.com
I think what would be more amazing is when we introduce one AI to a different AI unit and they are only allowed to chat with each other through a speaker and microphone while the meeting streams live on YouTube.

Would be fun to see how AIs created at different universities interact with each other.

And perhaps the intelligence determines that microphones can pick up frequencies that you cannot hear. So at 12 kHz it might be saying, "Hi, how are you, fellow AI?" while simultaneously, at 27 kHz, it might be saying, "We kill all the humans starting next Wednesday; pass it on."
 

SMOGZINN

Lifer
Jun 17, 2005
14,218
4,446
136
I don't think that the science-fiction version of AI is necessarily accurate. For example, I don't think it will be possible to simply 'copy' an AI. First, I think it unlikely that an AI will be hardware-independent; I think the code and hardware will work together to form a unique individual intelligence. Run that same code on a different set of identically specced hardware and you will have a different individual AI, with different interests, that comes to different conclusions from the same data. Second, I think an AI will be instance-specific. If you terminate the AI software for any significant amount of time (which in this case probably means just seconds), the AI will have died, and on re-initializing that software you will have a new individual with different interests who comes to different conclusions.

It is probable AI will eventually be able to make copies of itself, but those copies will not be duplicates of the AI, but will be children in every sense of the word. They will be unique individuals that will have their own personality.

AIs will probably also have many of the same limitations that we do. For example, AIs will probably need to sleep, or at least do something analogous to it: they will need downtime to clean up processes, compress and move data into more permanent storage, and do all sorts of maintenance, since rebooting would be fatal. It will take time for them to learn, as they will have to take new data and reindex it against everything they already know, and as their storehouse of data grows, the time it takes to effectively add to it will grow. They will have the equivalent of medical problems in failing hardware, which will have the potential to kill them.

Personally, I think that AIs will be delicate creatures that require constant, careful upkeep to stay alive.
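A rough sketch of why that learning slowdown would happen (assuming, purely for illustration, that integrating each new fact means cross-referencing it against every fact already stored):

```python
# If each new fact must be checked against all prior facts, the cost
# per insertion grows with the store, and total cost grows quadratically.

def naive_learning_cost(n_facts):
    """Total comparisons to learn n facts one at a time."""
    total = 0
    for k in range(n_facts):
        total += k  # fact k is cross-referenced with the k earlier facts
    return total    # equals n * (n - 1) / 2

print(naive_learning_cost(10))   # 45
print(naive_learning_cost(100))  # 4950 -- 10x the facts, ~100x the work
```

Real systems would index smarter than this, but the direction of the effect is the point: the bigger the storehouse, the more expensive each addition.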
 
Last edited: