Tuesday, February 24, 2015

Artificial intelligence could kill us

Artificial intelligence will be a threat because we are stupid, not because it is clever and evil, according to experts.

We could put ourselves in danger by creating artificial intelligence that looks too much like ourselves, a leading theorist has warned. “If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits,” writes Benjamin H. Bratton.
The warning comes partly in response to similar worries voiced by leading technologists and scientists including Elon Musk and Stephen Hawking. They and hundreds of other experts signed a letter last month calling for research to combat the dangers of artificial intelligence.
But many of those worries seem to come from thinking that robots will care deeply about humanity, for better or worse. We should abandon that idea, Bratton proposes.
“Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant,” he writes. “Worse than being seen as an enemy is not being seen at all.”
Instead we should start thinking about artificial intelligence as something more than the image of human intelligence. Tests like the one proposed by Alan Turing, which challenges an artificial intelligence to pass as a human, reflect how limited our thinking is about what kinds of intelligence there might be, according to Bratton.
“That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism,” he writes. “The legacy of that conceit helped to steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn’t work that way.”
Other experts in artificial intelligence have pointed out that we don’t tend to build other technology to mimic biology. Planes, for instance, aren’t designed to mimic the flight of birds, and it could be a similar mistake to model machine intelligence on humans.
Retaining our idea that intelligence only exists as it does in humans could also mean that we force robots to “pass” as a person in a way that Bratton likens to being “in drag as a human”.
“We would do better to presume that in our universe, ‘thinking’ is much more diverse, even alien, than our own particular case,” he writes. “The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and for that matter, what being human can be).”

It’s easy to find lots of people who worry that artificial intelligence will create machines so smart that they will destroy a huge swath of jobs currently done by humans. As computers and robots become more adept at everything from driving to writing, say even some technology optimists such as venture capitalist Vinod Khosla, skilled jobs will quickly vanish, widening the income gap even amid unprecedented abundance.
It’s also easy to find lots of people who think those worries are hogwash. Technological advances have always improved productivity and created new jobs to replace those made obsolete, insist smart people such as VC Marc Andreessen.
But it’s rare to find people in the AI field openly fretting about their work resulting in the elimination of millions upon millions of jobs. So it was interesting, indeed alarming, to find not one but two AI and machine intelligence experts raise serious concerns this week about the potential impact of recent advances on the labor market.
One was Andrew Ng, the onetime head of the Google Brain project, a co-founder of the online education startup Coursera, and now chief scientist at the Chinese Internet company Baidu. At two conferences this week, the RE.WORK Deep Learning Summit in San Francisco and the Big Talk Summit in Mountain View, the former Stanford University computer science professor took the opportunity to sketch out AI’s challenges to society as it replaces more and more jobs.
“Historically technology has created challenges for labor,” he noted. But while previous technological revolutions also eliminated many types of jobs and created some displacement, the shift happened slowly enough to provide new opportunities to successive generations of workers. “The U.S. took 200 years to get from 98% to 2% farming employment,” he said. “Over that span of 200 years we could retrain the descendants of farmers.”
But he says the rapid pace of technological change today has changed everything. “With this technology today, that transformation might happen much faster,” he said. Self-driving cars, he suggested, could quickly put 5 million truck drivers out of work.
Retraining is a solution often suggested by the technology optimists. But Ng, who knows a little about education thanks to his cofounding of Coursera, doesn’t believe retraining can be done quickly enough. “What our educational system has never done is train many people who are alive today. Things like Coursera are our best shot, but I don’t think they’re sufficient. People in the government and academia should have serious discussions about this.”

It's a Saturday morning in June at the Royal Society in London. Computer scientists, public figures and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals, and begin chatting. The goal? To determine whether they're talking to a computer program or a real person.
The event, organized by the University of Reading, was a rendition of the so-called Turing test, developed 65 years ago by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent behavior indistinguishable from that of a human. The recently released film "The Imitation Game," about Turing's efforts to crack the German Enigma code during World War II, is a reference to the scientist's own name for his test.
In the London competition, one computerized conversation program, or chatbot, with the personality of a 13-year-old Ukrainian boy named Eugene Goostman, rose above the other contestants. It fooled 33 percent of the judges into thinking it was a human being. At the time, contest organizers and the media hailed the performance as an historic achievement, saying the chatbot was the first machine to "pass" the Turing test.
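To make the setup concrete, here is a minimal, purely illustrative sketch of the judging protocol described above. The `Subject` class, the `fooling_rate` function and the coin-flip stand-in judge are all assumptions for illustration, not the Reading contest's actual harness.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Subject:
    name: str
    is_machine: bool

def fooling_rate(judges: List[str], subjects: List[Subject],
                 ask_judge: Callable[[str, Subject], str]) -> float:
    """Fraction of machine conversations that judges labeled 'human'."""
    fooled = total = 0
    for judge in judges:
        for subject in subjects:
            if subject.is_machine:
                total += 1
                if ask_judge(judge, subject) == "human":
                    fooled += 1
    return fooled / total

# Toy run with 30 judges, mirroring the contest's head count. A real
# judge would chat at a terminal; this stand-in just flips a coin.
rate = fooling_rate(
    judges=[f"judge{i}" for i in range(30)],
    subjects=[Subject("Eugene", True), Subject("Control", False)],
    ask_judge=lambda judge, subject: random.choice(["human", "machine"]),
)
print(f"{rate:.0%} of machine chats judged human")
```

Goostman's reported figure was 33 percent of judges fooled, which the organizers treated as crossing the informal 30-percent bar often read into Turing's original 1950 prediction.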
When people think of artificial intelligence (AI) — the study of the design of intelligent systems and machines — talking computers like Eugene Goostman often come to mind. But most AI researchers are focused less on producing clever conversationalists and more on developing intelligent systems that make people's lives easier — from software that can recognize objects and animals, to digital assistants that cater to, and even anticipate, their owners' needs and desires.
But several prominent thinkers, including the famed physicist Stephen Hawking and billionaire entrepreneur Elon Musk, warn that the development of AI should be cause for concern.
Thinking machines
The notion of intelligent automata, as friend or foe, dates back to ancient times.
"The idea of intelligence existing in some form that's not human seems to have a deep hold in the human psyche," said Don Perlis, a computer scientist who studies artificial intelligence at the University of Maryland, College Park.
Reports of people worshipping mythological human likenesses and building humanoid automatons date back to the days of ancient Greece and Egypt, Perlis told Live Science. AI has also featured prominently in pop culture, from the sentient computer HAL 9000 in Stanley Kubrick's "2001: A Space Odyssey" to Arnold Schwarzenegger's robot character in "The Terminator" films.
Since the field of AI was officially founded in the mid-1950s, people have been predicting the rise of conscious machines, Perlis said. Inventor and futurist Ray Kurzweil, recently hired to be a director of engineering at Google, refers to a point in time known as "the singularity," when machine intelligence exceeds human intelligence. Based on the exponential growth of technology according to Moore's Law (which states that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
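The arithmetic behind that projection is simple exponential compounding. Here is a quick illustrative calculation; the 2045 date is Kurzweil's prediction and the two-year doubling is Moore's Law as stated above, but the numbers are just exponent math, not a claim about real hardware:

```python
# Illustrative Moore's Law compounding: capability doubling every 2 years.
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2.0 ** (years / doubling_period_years)

# From this article's 2015 to Kurzweil's predicted 2045:
print(growth_factor(2045 - 2015))  # 2**15 = a 32768-fold increase
```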
But cycles of hype and disappointment — the so-called "winters of AI" — have characterized the history of artificial intelligence, as grandiose predictions failed to come to fruition. The University of Reading Turing test is just the latest example: Many scientists dismissed the Eugene Goostman performance as a parlor trick; they said the chatbot had gamed the system by assuming the persona of a teenager who spoke English as a foreign language. (In fact, many researchers now believe it's time to develop an updated Turing test.)
Nevertheless, a number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Hawking issued a dire warning about the threat of AI.
"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig's disease, and communicates using specialized speech software.)
And Hawking isn't alone. Musk told an audience at MIT that AI is humanity's "biggest existential threat." He also once tweeted, "We need to be super careful with AI. Potentially more dangerous than nukes."

In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he'd like to "keep an eye on what's going on with artificial intelligence," adding, "I think there's potentially a dangerous outcome there."
But despite the fears of high-profile technology leaders, the rise of conscious machines — known as "strong AI" or "general artificial intelligence" — is likely a long way off, many researchers argue.
"I don't see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm," said Charlie Ortiz, head of AI at the Burlington, Massachusetts-based software company Nuance Communications."Lots of work needs to be done before computers are anywhere near that level," he said.
Machines with benefits
Artificial intelligence is a broad and active area of research, but it's no longer the sole province of academics; increasingly, companies are incorporating AI into their products.
And there's one name that keeps cropping up in the field: Google. From smartphone assistants to driverless cars, the Bay Area-based tech giant is gearing up to be a major player in the future of artificial intelligence.
Google has been a pioneer in the use of machine learning — computer systems that can learn from data, as opposed to blindly following instructions. In particular, the company uses a set of machine-learning algorithms, collectively referred to as "deep learning," that allow a computer to do things such as recognize patterns from massive amounts of data.
For example, in June 2012, Google created a neural network of 16,000 computers that trained itself to recognize a cat by looking at millions of cat images from YouTube videos, The New York Times reported. (After all, what could be more uniquely human than watching cat videos?)
The project, called Google Brain, was led by Andrew Ng, an artificial intelligence researcher at Stanford University who is now the chief scientist for the Chinese search engine Baidu, which is sometimes referred to as "China's Google."
Today, deep learning is a part of many products at Google and at Baidu, including speech recognition, Web search and advertising, Ng told Live Science in an email.
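The underlying idea, a system that infers its rule from data instead of executing one a programmer wrote, is easier to see in miniature. The sketch below is a toy perceptron of my own, assuming nothing about Google's or Baidu's actual systems:

```python
# Toy "learning from data": a perceptron nudges its weights toward
# correct answers on labeled examples rather than following hand-coded
# rules. Deliberately tiny; nothing like the 16,000-machine network.

def train(examples, epochs=100, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Usage: learn "label is 1 when the first feature is large" from examples.
data = [([1.0, 0.2], 1), ([0.9, 0.8], 1), ([0.1, 0.5], 0), ([0.2, 0.9], 0)]
weights, bias = train(data)
```

Deep learning stacks many layers of units like this one and trains them on vastly more data, which is what lets such networks pick out patterns as complex as a cat's face.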
Current computers can already complete many tasks typically performed by humans. But possessing humanlike intelligence remains a long way off, Ng said. "I think we're still very far from the singularity. This isn't a subject that most AI researchers are working toward."
Gary Marcus, a cognitive psychologist at NYU who has written extensively about AI, agreed. "I don't think we're anywhere near human intelligence [for machines]," Marcus told Live Science. In terms of simulating human thinking, "we are still in the piecemeal era."
Instead, companies like Google focus on making technology more helpful and intuitive. And nowhere is this more evident than in the smartphone market.
Artificial intelligence in your pocket
In the 2013 movie "Her," actor Joaquin Phoenix's character falls in love with his smartphone operating system, "Samantha," a computer-based personal assistant who becomes sentient. The film is obviously a product of Hollywood, but experts say that the movie gets at least one thing right: Technology will take on increasingly personal roles in people's daily lives, and will learn human habits and predict people's needs.
Anyone with an iPhone is probably familiar with Apple's digital assistant Siri, first introduced as a feature on the iPhone 4S in October 2011. Siri can answer simple questions, conduct Web searches and perform other basic functions. Microsoft's equivalent is Cortana, a digital assistant available on Windows phones. And Google has the Google app, available for Android phones or iPhones, which bills itself as providing "the information you want, when you need it."
For example, Google Now can show traffic information during your daily commute, or give you shopping list reminders while you're at the store. You can ask the app questions, such as "should I wear a sweater tomorrow?" and it will give you the weather forecast. And, perhaps a bit creepily, you can ask it to "show me all my photos of dogs" (or "cats," "sunsets" or even a person's name), and the app will find photos that fit that description, even if you haven't labeled them as such.
Given how much personal data from users Google stores in the form of emails, search histories and cloud storage, the company's deep investments in artificial intelligence may seem disconcerting. For example, AI could make it easier for the company to deliver targeted advertising, which some users already find unpalatable. And AI-based image recognition software could make it harder for users to maintain anonymity online.
But the company, whose motto is "Don't be evil," claims it can address potential concerns about its work in AI by conducting research in the open and collaborating with other institutions, company spokesman Jason Freidenfelds told Live Science. In terms of privacy concerns, specifically, he said, "Google goes above and beyond to make sure your information is safe and secure," calling data security a "top priority."
While a phone that can learn your commute, answer your questions or recognize what a dog looks like may seem sophisticated, it still pales in comparison with a human being. In some areas, AI is no more advanced than a toddler. Yet, when asked, many AI researchers admit that the day when machines rival human intelligence will ultimately come. The question is, are people ready for it?
In the 2014 film "Transcendence," actor Johnny Depp's character uploads his mind into a computer, but his hunger for power soon threatens the autonomy of his fellow humans.
Hollywood isn't known for its scientific accuracy, but the film's themes don't fall on deaf ears. In April, when "Transcendence" was released, Hawking and fellow physicist Frank Wilczek, cosmologist Max Tegmark and computer scientist Stuart Russell published an op-ed in The Huffington Post warning of the dangers of AI.
"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction," Hawking and others wrote in the article."But this would be a mistake, and potentially our worst mistake ever."
Undoubtedly, AI could have many benefits, such as helping to eradicate war, disease and poverty, the scientists wrote. Creating intelligent machines would be one of the biggest achievements in human history, they wrote, but it "might also be [the] last." Considering that the singularity may be the best or worst thing to happen to humanity, not enough research is being devoted to understanding its impacts, they said.
As the scientists wrote, "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Elon Musk, a chief advocate of cars smart enough to park and drive themselves, keeps escalating his spooky rhetoric when it comes to the next level of computation -- the malicious potential of artificial intelligence continues to freak him out.
"With artificial intelligence, we are summoning the demon," Musk said last week at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium. "You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."
This has become a recurring theme in Musk's public comments, and each time he warns of the AI bogeyman it seems even more dire.
In June, Musk raised the specter of the "Terminator" franchise, saying that he invests in companies working on artificial intelligence just to be able to keep an eye on the technology. In August, he reiterated his concerns in a tweet, writing that AI is "potentially more dangerous than nukes." Just a few weeks ago, Musk half-joked on a different stage that a future AI system tasked with eliminating spam might decide that the best way to accomplish this task is to eliminate humans.
But this is the first time I'm aware of that Musk has kicked up the rhetoric another notch -- perhaps anticipating this week's onslaught of Halloween costumes -- to compare AI to something supernatural like demons.
How to deal with the demonic forces of AI in the future? In a strange move for a tech mogul, Musk suggests it might be a good idea to fight one bogeyman with another (depending on your political perspective) in the form of government regulators.
"If I were to guess at what our biggest existential threat is, it's probably that," he said, referring to artificial intelligence. "I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."
Indeed. Who knows what demonic hellscape could emerge if we ever let artificially intelligent machines get ahold of a Ouija board. 
