Sunday, July 22, 2018

Will AI destroy itself without humans?

"Warnings about the risks posed by artificial intelligence (AI) seem to be everywhere. From Elon Musk to Henry Kissinger, people are sounding the alarm that super-smart computers could wipe us out, like in the film 'The Terminator'," wrote C Crider. Prof Stephen Hawking was not sure if we could be "destroyed by it". "It brings dangers , like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy." Elon Musk goes even further. He wants a public body "that has insight and then oversight to confirm that everyone is developing AI safely" "I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane." Experts have made a list of risks that could be posed by AI. "These warnings matter, but they gloss over a more urgent problem: weaponised AI is already here. As you watch this, powerful interests -- from corporations to State agencies, like the military and police -- are using AI to monitor people, assess them, and to make decisions about their lives." The US uses data-driven targeting to kill its enemies with drones. "The US doesn't have deep human intelligence sources in Yemen, so it relies heavily on massive sweeps of signal data. AI processes this data -- and throws up red flags in a targeting algorithm." These strikes are called "signature strikes" and they "make up the majority of drone strikes". "Meanwhile, civilian airstrike deaths have become more numerous under US President Donald Trump -- more than 6,000 last year in Iraq and Syria alone." "Facial recognition, which the police are currently testing in cities such as London, has been wrong as much as 98% of the time." Prof T Cowen feels that we tend to fear technology because we do not understand it and so feel out of control. 
"We think we can protect our lives against many kinds of risks, perhaps irrationally to some extent, but how do you protect being assassinated by a small, poison-equipped drone." That is why we also fear driverless cars. The problem is that human beings are designed to be irrational, wrote O Goldhill, and that helps in making decisions. Superiority is being defined in terms of intelligence only, but "Being human is not about superintelligence and super capacity. It is about living with others and learning to live within our limitations," wrote S Sarukkai. That is the real danger. AI is an algorithm and an algorithm must be based on logic. An armed autonomous robot will see humans as illogical, therefore inferior, and try to eliminate us. Since robots can have no sense of pleasure or pain eliminating humans will mean AI will have no reason to exist. So, will it self destruct?