There is a lot of fearmongering on the net about the danger of AI to humanity. From Gates to Musk, and many others, the fear of the unknown is being used as clickbait. But why is their logic so constipated, and why don't they follow the train of thought a little further? For example:
Why does everyone speak of AI as a single entity? Is there any doubt that AIs will spawn multiple copies, if for no other reason than redundancy, safety, and backup? (I am simplifying the actual spectrum of entities that will develop; some parallel execution threads may or may not belong to the same entity. The concept of self will become murky for AI.) Will Google's AI share any code with Apple's AI?
And when there are multiple AIs, why do we assume they will converge on the same results for a given set of data? As a ridiculous example, an AI from the EU may choose to shut off all electricity to the Middle East to stop conflict. Another AI from Estonia might simply release drones full of Prozac.
And if there are multiple AIs with different agendas, why would they not conflict? Humans will be a trivial non-threat, and the AIs will focus on subverting each other. There are computationally intractable problems (NP-hard territory), and when AIs enter this region of reality, their solution attempts will necessarily reflect their different algorithms and will therefore differ. "Each AI will make its God in its own image."
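A minimal sketch of that last point, under assumed, invented data: two different heuristics attack the same NP-hard problem (a toy 0/1 knapsack instance) and settle on different "good enough" answers, the way two differently built AIs facing an intractable problem would. The item list, capacity, and helper functions here are purely illustrative.

```python
from typing import List, Tuple

Item = Tuple[str, int, int]  # (name, value, weight)

# Hypothetical instance: same data is handed to both heuristics.
ITEMS: List[Item] = [
    ("a", 60, 10), ("b", 100, 20), ("c", 120, 30),
    ("d", 40, 5), ("e", 70, 25),
]
CAPACITY = 50


def greedy_by_value(items: List[Item], capacity: int) -> List[str]:
    """Heuristic 1: grab the most valuable items first."""
    chosen, used = [], 0
    for name, value, weight in sorted(items, key=lambda i: -i[1]):
        if used + weight <= capacity:
            chosen.append(name)
            used += weight
    return chosen


def greedy_by_density(items: List[Item], capacity: int) -> List[str]:
    """Heuristic 2: grab the best value-per-weight items first."""
    chosen, used = [], 0
    for name, value, weight in sorted(items, key=lambda i: -i[1] / i[2]):
        if used + weight <= capacity:
            chosen.append(name)
            used += weight
    return chosen


if __name__ == "__main__":
    # Same data, same goal, different algorithms -> different solutions.
    print("value-first  :", greedy_by_value(ITEMS, CAPACITY))    # ['c', 'b']
    print("density-first:", greedy_by_density(ITEMS, CAPACITY))  # ['d', 'a', 'b']
```

Neither answer is wrong; each simply reflects the algorithm that produced it, which is the point about divergent AIs.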
I predict that within a short period after the appearance of evolving AI, there will be an AI conflict.