by Staff Writers Paris (AFP) Dec 06, 2014
There was the psychotic HAL 9000 computer in "2001: A Space Odyssey". The humanoids which attacked their flesh-and-blood masters in "I, Robot". And, of course, "The Terminator", where a robot is sent into the past to kill a woman whose son will end the tyranny of the machines in the future.

Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC. "Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.

But experts interviewed by AFP were divided. Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.

"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.

Gains in AI are creating machines that outstrip human performance, Cerqui argued. The trend eventually will delegate responsibility for human life to the machine, she predicted. "It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."

Nick Bostrom, director of a programme on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate. Bostrom pointed to current and near-future applications of AI that were still clearly in human hands -- things such as military drones, driverless cars, robot factory workers and automated surveillance of the Internet.
But, he said, "I think machine intelligence will eventually surpass biological intelligence -- and, yes, there will be significant existential risks associated with that transition."

Other experts said "true" AI -- loosely defined as a machine that can pass itself off as a human being or think creatively -- was at best decades away, and cautioned against alarmism.

Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong. "Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong says in a book, "Smarter than Us: The Rise of Machine Intelligence."

Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top." "Many things in AI unleash emotion and worry because it changes our way of life," he said. "Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion."

"It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France. "Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."

Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI.

- BigDog and WildCat -

Recent years have seen dramatic gains in data-processing speed, spurring flexible software that enables a machine to learn from its mistakes, he said. Balance and reflexes, too, have made big advances. Tucker pointed to the US firm Boston Dynamics as being in the research vanguard.
It has designed four-footed robots called BigDog (https://www.youtube.com/watch?v=W1czBcnX1Ww) and WildCat (https://www.youtube.com/watch?v=dhooVgC_0eY), with funding from the Pentagon's hi-tech research arm.

"These things are incredible tools that are really adaptive to an environment, but there is still a human there, directing them," said Tucker. "To me, none of these are close to what true AI is."

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress."

Despite big strides in recognition programmes and language cognition, robots perform poorly in open, messy environments full of noise, movement, objects and faces, said Cohn. Such situations require machines to have what humans possess naturally and in abundance -- "commonsense knowledge" to make sense of things.

Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are... well, machines. "We've evolved over however many millennia to be what we are, and the motivation is survival," he said. "That motivation is hard-wired into us. It's key to AI, but it's very difficult to implement."