On the Future of Machine Intelligence

This is a very thought-provoking read on the future of machine intelligence and how we will cope with its advancement. The author, Douglas Coupland, opens with some hypothetical apps that track data (geolocation, etc.) and then paints a dystopian view:

To summarise. Everyone, basically, wants access to and control over what you will become, both as a physical and metadata entity. We are also on our way to a world of concrete walls surrounding any number of niche beliefs. On our journey, we get to watch machine intelligence become profoundly more intelligent while, as a society, we get to watch one labour category after another be systematically burped out of the labour pool. (Doug’s Law: An app is only successful if it puts a lot of people out of work.)

The darkest thought of all may be this: no matter how much politics is applied to the internet and its attendant technologies, it may simply be far too late in the game to change the future. The internet is going to do to us whatever it is going to do, and the same end state will be achieved regardless of human will. Gulp.

Do we at least want to have free access to anything on the internet? Well yes, of course. But it’s important to remember that once a freedom is removed from your internet menu, it will never come back. The political system only deletes online options — it does not add them. The amount of internet freedom we have right now is the most we’re ever going to get.

I found the notion of Artificial Intuition (as opposed to Artificial Intelligence) worth highlighting:

Artificial Intuition happens when a computer and its software look at data and analyse it using computation that mimics human intuition at the deepest levels: language, hierarchical thinking — even spiritual and religious thinking. The machines doing the thinking are deliberately designed to replicate human neural networks, and connected together form even larger artificial neural networks. It sounds scary . . . and maybe it is (or maybe it isn’t). But it’s happening now. In fact, it is accelerating at an astonishing clip, and it’s the true and definite and undeniable human future.
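As a concrete aside (mine, not Coupland’s): an artificial neural network at its smallest is just a few layers of weighted sums and nonlinearities, and small networks can be chained into larger ones, which is roughly the image the passage evokes. A purely illustrative Python sketch with made-up layer sizes and data:

```python
# Purely illustrative: a tiny feedforward "artificial neural network".
# Nothing here comes from Coupland's essay; sizes and inputs are invented.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One fully connected layer: a weight matrix plus a bias vector."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Pass an input vector through each layer with a tanh nonlinearity."""
    for weights, bias in layers:
        x = np.tanh(x @ weights + bias)
    return x

# Two small networks "connected together" into a larger one, echoing the
# quote's image of networks composing into bigger networks.
net_a = [layer(8, 16), layer(16, 4)]
net_b = [layer(4, 16), layer(16, 2)]

signal = rng.normal(size=8)            # a made-up 8-dimensional input
decision = forward(forward(signal, net_a), net_b)
print(decision)                        # a 2-dimensional output vector
```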

Worth reading in its entirety.

###

Note: I usually don’t link to The Financial Times (because of its stringent paywall), but at the time of this post, the article is free to access.

Computer Program Named Eugene Passes the Turing Test

Some fascinating news in the artificial intelligence world: the Turing test was passed for the first time ever this month, in a contest organised by the University of Reading and held at the Royal Society in London. The news is all the more interesting because the test was passed by a program simulating a 13-year-old boy named Eugene:

The 65 year-old iconic Turing Test was passed for the very first time by supercomputer Eugene Goostman during Turing Test 2014 held at the renowned Royal Society in London on Saturday.


‘Eugene’, a computer programme that simulates a 13 year old boy, was developed in Saint Petersburg, Russia. The development team includes Eugene’s creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian born Eugene Demchenko who now lives in Russia.

The Turing Test is based on 20th century mathematician and code-breaker Turing’s 1950 famous question and answer game, ‘Can Machines Think?’. The experiment investigates whether people can detect if they are talking to machines or humans. The event is particularly poignant as it took place on the 60th anniversary of Turing’s death, nearly six months after he was given a posthumous royal pardon.

If a computer is mistaken for a human more than 30% of the time during a series of five minute keyboard conversations it passes the test. No computer has ever achieved this, until now. Eugene managed to convince 33% of the human judges that it was human.

This historic event was organised by the University’s School of Systems Engineering in partnership with RoboLaw, an EU-funded organisation examining the regulation of emerging robotic technologies.

Professor Kevin Warwick, a Visiting Professor at the University of Reading and Deputy Vice-Chancellor for Research at Coventry University, said: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human. It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British Science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.”
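For what it’s worth, the pass criterion quoted above is simple arithmetic. Here is a quick sketch of the check in Python; the 30% threshold and Eugene’s 33% come from the announcement, while the judge count of 30 is my own assumption, used only to make the numbers concrete:

```python
# Sketch of the quoted pass criterion. The 30% threshold and 33% result are
# from the article; the judge count below is a hypothetical assumption.
PASS_THRESHOLD = 0.30

def passes_turing_test(judges_fooled, total_judges):
    """True if the program was mistaken for a human by more than 30% of the
    judges across the five-minute keyboard conversations."""
    return judges_fooled / total_judges > PASS_THRESHOLD

fooled, total = 10, 30                  # hypothetical split giving ~33%
print(f"{fooled / total:.0%} of judges fooled ->",
      "pass" if passes_turing_test(fooled, total) else "fail")
```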

Read more: What is the Turing Test and why does it matter?

Little Bastard: The Computer Poker Machine

A fascinating piece in The New York Times Magazine on how far artificial intelligence has advanced in machines that play poker:

The machines, called Texas Hold ‘Em Heads Up Poker, play the limit version of the popular game so well that they can be counted on to beat poker-playing customers of most any skill level. Gamblers might win a given hand out of sheer luck, but over an extended period, as the impact of luck evens out, they must overcome carefully trained neural nets that self-learned to play aggressively and unpredictably with the expertise of a skilled professional. Later this month, a new souped-up version of the game, endorsed by Phil Hellmuth, who has won more World Series of Poker tournaments than anyone, will have its debut at the Global Gaming Expo in Las Vegas. The machines will then be rolled out into casinos around the world.

They will be placed alongside the pure numbers-crunchers, indifferent to the gambler. But poker is a game of skill and intuition, of bluffs and traps. The familiar adage is that in poker, you play the player, not the cards. This machine does that, responding to opponents’ moves and pursuing optimal strategies. But to compete at the highest levels and beat the best human players, the approach must be impeccable. Gregg Giuffria, whose company, G2 Game Design, developed Texas Hold ‘Em Heads Up Poker, was testing a prototype of the program in his Las Vegas office when he thought he detected a flaw. When he played passively until a hand’s very last card was dealt and then suddenly made a bet, the program folded rather than match his bet and risk losing more money. “I called in all my employees and told them that there’s a problem,” he says. The software seemed to play in an easily exploitable pattern. “Then I played 200 more hands, and he never did anything like that again. That was the point when we nicknamed him Little Bastard.”
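The anecdote turns on a concrete idea: the program keeps statistics on how an opponent plays and stops reacting to lines it is being exploited with. Here is a toy sketch of that kind of opponent modelling in Python; it is my own illustration, not G2 Game Design’s actual algorithm:

```python
# Toy opponent modelling in the spirit of the anecdote above: notice when an
# opponent only bets after playing passively all hand, and stop folding to it.
# My own illustration, not the actual Texas Hold 'Em Heads Up Poker software.
from collections import defaultdict

class OpponentModel:
    def __init__(self):
        # counts[pattern] = [times the line showed a strong hand, times seen]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, pattern, opponent_had_strong_hand):
        strong, seen = self.counts[pattern]
        self.counts[pattern] = [strong + opponent_had_strong_hand, seen + 1]

    def call_probability(self, pattern):
        """Call more often against betting lines that rarely show strength."""
        strong, seen = self.counts[pattern]
        if seen < 5:                  # too little data: default to a mixed response
            return 0.5
        return 1.0 - strong / seen    # lines that never show strength get called

model = OpponentModel()
# Opponent repeatedly checks to the last card and then bets with nothing.
for _ in range(10):
    model.observe("passive-then-river-bet", opponent_had_strong_hand=0)
print(model.call_probability("passive-then-river-bet"))  # -> 1.0: stop folding
```

The point is only that a pattern which never shows down a strong hand eventually gets called every time, which is roughly the adjustment Giuffria describes the program making against him.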

Read the rest here.

Can an Alligator Run the Hundred Meter Hurdles?

Gary Marcus, writing in The New Yorker, offers a summary of why artificial intelligence isn’t so intelligent (and has a long way to go to catch up with the human brain). He focuses on the research of Hector Levesque, a critic of modern A.I.:

In a terrific paper just presented at the premier international conference on artificial intelligence, Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. He argues that his colleagues have forgotten about the “intelligence” part of artificial intelligence.

Levesque starts with a critique of Alan Turing’s famous “Turing test,” in which a human, through a question-and-answer session, tries to distinguish machines from people. You’d think that if a machine could pass the test, we could safely conclude that the machine was intelligent. But Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. Every year, a number of machines compete in the challenge for real, seeking something called the Loebner Prize. But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence. One program worked by pretending to be paranoid; others have done well by tossing off one-liners that distract interlocutors. The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned toward fixing some sort of arbitrary test.

The crux, it seems to me, is how machines interpret the subtleties of human communication. Marcus offers the following example, in which substituting a single word changes the answer:

The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.)

a) The large ball
b) The table

Continuing, he explains:

These examples, which hinge on the linguistic phenomenon known as anaphora, are hard both because they require common sense—which still eludes machines—and because they get at things people don’t bother to mention on Web pages, and that don’t end up in giant data sets.

More broadly, they are instances of what I like to call the Long-Tail Problem: common questions can often be answered simply by trawling the Web, but rare questions can still stymie all the resources of a whole Web full of Big Data. Most A.I. programs are in trouble if what they’re looking for is not spelled out explicitly on a Web page. This is part of the reason for Watson’s most famous gaffe—mistaking Toronto for a city in the United States.
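To make the example above concrete (my own illustration, not Marcus’s or Levesque’s code): a Winograd-style pair is trivial to write down as data, yet the single changed word flips the correct referent of “it,” which is exactly why surface statistics alone don’t settle the question. The answers encoded below just follow common-sense physics:

```python
# The Winograd-style pair Marcus quotes, encoded as data. The two sentences
# differ by one word, yet the referent of "it" flips; the answers here follow
# common-sense physics, not any system's output.
SCHEMA = {
    "template": "The large ball crashed right through the table "
                "because it was made of {material}.",
    "question": "What was made of {material}?",
    "candidates": ("the large ball", "the table"),
    "answers": {
        "Styrofoam": "the table",    # a flimsy table gives way to the ball
        "steel": "the large ball",   # a steel ball smashes through the table
    },
}

for material, answer in SCHEMA["answers"].items():
    print(SCHEMA["template"].format(material=material))
    print(" ", SCHEMA["question"].format(material=material), "->", answer)
```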

Levesque’s paper is short and easily accessible to the layman.