Monday, August 03, 2015

Should scientists be allowed to invent whatever they can?

Indian journalists suffer from a delusion of knowing everything, but does that mean people with differing opinions are to be treated with disdain? A journalist who has written a book seems to think so. He thinks that scientists are scared of Artificial Intelligence because they may be stupid: "This fear inspired a letter last week, which was signed by hundreds of eminent people including the theoretical physicist Stephen Hawking, who says many things that would look foolish if you said the same, and the chief of SpaceX and Tesla Motors, the yet to be despised billionaire, Elon Musk." He does not explain why Stephen Hawking is a fool, or why Elon Musk, who may revolutionise how we travel, is to be despised.

The Swedish philosopher Nick Bostrom reasoned that an AI programmed to produce paper clips will go on producing paper clips until all of Earth's resources are used up, thereby ending civilisation. But the journalist then turns his pop psychology on its head by writing that humans are frightened of AI because machines will become more intelligent than us: "The fear of the stupid machine is in reality a respectable front for the fear of the intelligent machine," he writes. Wow.

But what if it is possible to invent robots that are much more intelligent than us? Is it right to do something just because it is possible? Scientists must have been very excited to unleash the power of the atom over Hiroshima and Nagasaki, which has led rogue states like Pakistan and North Korea to build their own bombs, making the world much more dangerous. It is one thing to listen to the universe to answer the question of whether Earth is the only spot where life exists, but it is quite another to advertise our presence. If an advanced civilisation, capable of travelling in space, is able to lock on to our radio signals, it may find us. What is the guarantee that it will be friendly?
The science fiction writer Isaac Asimov devised the Three Laws of Robotics, which would prevent robots from ever harming humans, but one of his robots invented a Zeroth Law based on the 'greater good', an excuse used by politicians to pass restrictive laws. The biggest fear about AI is not that machines will be stupid or intelligent but that they will be totally logical, while humans are illogical. A mother died saving her child in an escalator accident in China. That is surely totally illogical: from a purely logical standpoint, a lot of effort had gone into educating the woman, and she could easily have produced another child, while the child's future is uncertain, especially without a mother. That difference is what makes robots so dangerous for us. A robot may produce good music, but will it be able to appreciate music, or any other beautiful thing? Robots may turn out to be like the Vogons, who obliterated Earth to build a hyperspace bypass. To understand the danger of AI we have to think illogically. Like humans.
