If you ask me to say something about artificial intelligence (AI) and my concerns about it, well, this is a very interesting topic. The question of how to build artificial intelligence that isn't going to destroy us is something I've only recently begun to pay attention to, and it's a rather deep and consequential problem. I went to a conference in Puerto Rico focused on this issue, organized by The Future of Life Institute, and I was brought there by a friend, Elon Musk, whom undoubtedly many of you have heard of. And Elon recently said publicly that he thought AI was the greatest threat to human survival, perhaps greater than nuclear weapons, and many people took that as an incredibly hyperbolic statement.
Now, knowing Elon and how close to the details he's apt to be, I took it as a very interesting diagnosis of the problem. But I wasn't quite sure what I thought about it; I hadn't really spent much time focusing on the progress you could make with AI and its implications. So I went to this conference in San Juan, held for the people who were closest to doing this work, and it was not open to the public. There were one, maybe two or three, interlopers there who hadn't been invited but got themselves invited. What was fascinating was that the people there ranged from those who were very worried, like Elon and others, who felt this was something we should pull the brakes on, even though that seemed somewhat hopeless, to the people who were doing this work most energetically and who most wanted to convince everyone else that there was no need to worry or to pull the brakes. And what was interesting is that outside this conference, in what you hear on edge.org, say, or in general discussions about the prospect of making real breakthroughs in artificial intelligence, you hear a time frame of 50 to 100 years before anything terribly scary, or terribly interesting, is going to happen.
At this conference, that was almost never the case. Everyone who was trying to ensure that we do this as safely as possible was still conceding that a time frame of 5 to 10 years admitted of rather alarming progress. And when I came back from that conference, the Edge question for 2015 just happened to be on the topic of artificial intelligence, so I wrote a short piece distilling what my view now was.
Perhaps I'll just read that; it won't take too long, and hopefully it won't bore you.
"Can we avoid a digital apocalypse? It seems increasingly likely that we will one day build machines that have super human intelligence. When you only continue to produce better computers which we will unless we destroy ourselves or meet our end some other way. We already know that it's possible for mere matter to acquire a quote, general intelligence.
The ability to learn new concepts, and employ them in unfamiliar context. Because the 1200 cubic centimeters (cc's) of salty porridge inside our heads is manageable. There's no reason to believe that a suitably advanced digital computer couldn't do the same. It's often said that the near term goal is to build a machine that possesses, quote, human level intelligence but unless we specifically emulate a human brain, with all of it's limitations, this is a false goal. The computer in which I'm writing these words, already possesses super human powers of memory and calculation, it also has potential access to most of the world's information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence known as "AGI" will exceed human performance on every task which is considered a source of intelligence in the first place.
Whether such a machine would necessarily be conscious is an open question, but conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now the subject of much colorful speculation.