On the Future of Machine Intelligence

This is a very thought-provoking read on the future of machine intelligence and how we will cope with its advancement. The author, Douglas Coupland, opens with some hypothetical apps that track data (geolocation and the like) and then paints a dystopian view:

To summarise. Everyone, basically, wants access to and control over what you will become, both as a physical and metadata entity. We are also on our way to a world of concrete walls surrounding any number of niche beliefs. On our journey, we get to watch machine intelligence become profoundly more intelligent while, as a society, we get to watch one labour category after another be systematically burped out of the labour pool. (Doug’s Law: An app is only successful if it puts a lot of people out of work.)

The darkest thought of all may be this: no matter how much politics is applied to the internet and its attendant technologies, it may simply be far too late in the game to change the future. The internet is going to do to us whatever it is going to do, and the same end state will be achieved regardless of human will. Gulp.

Do we at least want to have free access to anything on the internet? Well yes, of course. But it’s important to remember that once a freedom is removed from your internet menu, it will never come back. The political system only deletes online options — it does not add them. The amount of internet freedom we have right now is the most we’re ever going to get.

I found the notion of Artificial Intuition (as opposed to Artificial Intelligence) worth highlighting:

Artificial Intuition happens when a computer and its software look at data and analyse it using computation that mimics human intuition at the deepest levels: language, hierarchical thinking — even spiritual and religious thinking. The machines doing the thinking are deliberately designed to replicate human neural networks, and connected together form even larger artificial neural networks. It sounds scary . . . and maybe it is (or maybe it isn’t). But it’s happening now. In fact, it is accelerating at an astonishing clip, and it’s the true and definite and undeniable human future.

Worth reading in its entirety.

###

Note: I usually don’t link to The Financial Times (because of its stringent paywall), but at the time of this post, the article is free to access.

Computer Program Named Eugene Passes the Turing Test

Some fascinating news in the artificial intelligence world: the Turing test was reportedly passed for the first time ever, in an event organised by the University of Reading this month. The news is all the more interesting because the test was passed with a program simulating a 13-year-old boy named Eugene:

The 65-year-old iconic Turing Test was passed for the very first time by supercomputer Eugene Goostman during Turing Test 2014, held at the renowned Royal Society in London on Saturday.


‘Eugene’, a computer programme that simulates a 13-year-old boy, was developed in Saint Petersburg, Russia. The development team includes Eugene’s creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian-born Eugene Demchenko, who now lives in Russia.

The Turing Test is based on 20th-century mathematician and code-breaker Alan Turing’s famous 1950 question-and-answer game, ‘Can Machines Think?’. The experiment investigates whether people can detect if they are talking to machines or humans. The event is particularly poignant as it took place on the 60th anniversary of Turing’s death, nearly six months after he was given a posthumous royal pardon.

If a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. No computer had ever achieved this until now. Eugene managed to convince 33% of the human judges that it was human.
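The pass criterion quoted above reduces to simple arithmetic: a program passes if the share of judges it deceives exceeds 30%. A minimal sketch (the function name and the judge counts below are illustrative, not the actual figures from the 2014 event):

```python
def passes_turing_test(deceived_judges: int, total_judges: int,
                       threshold: float = 0.30) -> bool:
    """Return True if the fraction of judges who mistook the
    program for a human strictly exceeds the threshold."""
    return deceived_judges / total_judges > threshold

# A reported 33% deception rate clears the 30% bar:
print(passes_turing_test(33, 100))   # True
# Exactly 30% would not, since the criterion is "more than 30%":
print(passes_turing_test(30, 100))   # False
```

Note that by this reading, hitting the threshold exactly is not enough; the quoted rule says "more than 30%".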

This historic event was organised by the University’s School of Systems Engineering in partnership with RoboLaw, an EU-funded organisation examining the regulation of emerging robotic technologies.

Professor Kevin Warwick, a Visiting Professor at the University of Reading and Deputy Vice-Chancellor for Research at Coventry University, said: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human. It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British Science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.”

Read more: What is the Turing Test and why does it matter?

Little Bastard: The Computer Poker Machine

A fascinating piece in The New York Times Magazine on the advancement of artificial intelligence, as seen in machines that play poker:

The machines, called Texas Hold ‘Em Heads Up Poker, play the limit version of the popular game so well that they can be counted on to beat poker-playing customers of most any skill level. Gamblers might win a given hand out of sheer luck, but over an extended period, as the impact of luck evens out, they must overcome carefully trained neural nets that self-learned to play aggressively and unpredictably with the expertise of a skilled professional. Later this month, a new souped-up version of the game, endorsed by Phil Hellmuth, who has won more World Series of Poker tournaments than anyone, will have its debut at the Global Gaming Expo in Las Vegas. The machines will then be rolled out into casinos around the world.

They will be placed alongside the pure numbers-crunchers, indifferent to the gambler. But poker is a game of skill and intuition, of bluffs and traps. The familiar adage is that in poker, you play the player, not the cards. This machine does that, responding to opponents’ moves and pursuing optimal strategies. But to compete at the highest levels and beat the best human players, the approach must be impeccable. Gregg Giuffria, whose company, G2 Game Design, developed Texas Hold ‘Em Heads Up Poker, was testing a prototype of the program in his Las Vegas office when he thought he detected a flaw. When he played passively until a hand’s very last card was dealt and then suddenly made a bet, the program folded rather than match his bet and risk losing more money. “I called in all my employees and told them that there’s a problem,” he says. The software seemed to play in an easily exploitable pattern. “Then I played 200 more hands, and he never did anything like that again. That was the point when we nicknamed him Little Bastard.”

Read the rest here.