Tuesday, April 16, 2019

AI will be dangerous only if it is designed to be.

Prof Nageswaran and Raghuraman wrote that "Artificial Intelligence (AI) would have to make moral choices in the very near future. But can humans do a good job of coaching AI to exercise moral choices? We rely on heuristics or short-cuts for making decisions, and continue to do so even if we are told that the decisions are sub-optimal. To top it all, various biases plague our choices."

"On the morning of 26 September 1983", "Soviet satellites that comprised Russia's early warning system were reporting that five Minuteman intercontinental ballistic missiles (ICBMs) had just been launched from an American base" towards the Soviet Union, wrote R Matthan. The Cold War was at its height and, just three weeks earlier, the Soviets had shot down a Korean Air Lines aircraft, killing US Congressman Larry McDonald. "It was against this background that Colonel Stanislav Petrov, the duty officer at Serpukhov-15, had to take a call." He went with his gut feeling that the alert was a malfunction and saved the world from a nuclear winter. But human beings are not infallible, and it is "because of these human frailties that we decided to build automated alternatives that are fairer".

At present, AI is extremely easy to fool through 'adversarial attacks', wrote S Samuel. A Tesla car was tricked into steering towards oncoming traffic when three stickers were placed on the road to suggest that the lane was veering to the left. By overlaying a 'perturbation image', AI can be made to diagnose a benign mole as malignant. The danger of AI is that it may use means its programmers never anticipated to arrive at its goal, wrote K Piper. Thus, finding a way of cheating at a video game may not be dangerous, but removing its human controllers to enhance its capability may appear entirely logical to it.
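The 'perturbation image' trick that Samuel describes can be sketched in a few lines. The model, weights and feature values below are entirely made up for illustration; real attacks target deep neural networks, but the mechanism, nudging each input feature slightly in the direction that most changes the model's output (the 'fast gradient sign' idea), is the same:

```python
import numpy as np

# Toy linear "diagnosis" model: score = w . x, "malignant" if score > 0.
# All weights and feature values here are hypothetical.
w = np.array([0.5, -1.0, 0.8])   # model weights (illustrative only)

def classify(x):
    return "malignant" if np.dot(w, x) > 0 else "benign"

x = np.array([0.2, 0.3, 0.1])    # features of a "mole image"
print(classify(x))               # benign (score is -0.12)

# Adversarial perturbation: shift every feature a small step
# in the direction that increases the model's score.
eps = 0.1
x_adv = x + eps * np.sign(w)

print(classify(x_adv))           # malignant (score is +0.11)
print(np.max(np.abs(x_adv - x))) # each feature moved by only 0.1
```

To a human the perturbed input looks almost identical to the original, yet the model's verdict flips, which is exactly why such attacks are hard to spot.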
"The problem isn't that AI will suddenly decide we all need to die; the problem is that we might give it instructions that are vague or incomplete and that lead to AI following our orders in ways we didn't intend," wrote D Matthews. And what if some people are gaining from an AI malfunction? They will resist attempts to terminate the program until it is too late.

Technology has been evolving alongside human beings, wrote T Chatfield, and the pace keeps quickening. There is a danger of reaching a point of 'Singularity': "a technological point of no return beyond which, it's argued, the evolution of technology will reach a tipping point where self-design and self-improvement take over, cutting humanity permanently out of the loop." If all this sounds alarming, E Siegel thinks that "Artificial Intelligence is a fraudulent hoax". AI may not want to conquer the world, but rogue nations like China certainly do. A university in China is recruiting students to develop killer robots, which will kill indiscriminately. AI may not be dangerous. Humans certainly are.