Computers need not think to pull off parlour tricks

The big brains of the artificial intelligence world, meeting this week for the discipline’s annual conference, are enjoying a long-awaited moment in the sun.

A technician working in a Google data centre in Oregon. Google, Facebook, Amazon and Baidu are buying up talent and investing to build in-house AI groups. Photo: AP

In the 59 years since the term was coined, the idea of machines that think has gone in and out of style. Memories of the “AI winter” of the 1970s and 1980s still haunt the field.

With companies such as Google, Facebook, Amazon and Baidu buying up talent and investing to build in-house AI groups, it is back with a vengeance. Yet differences of opinion over the technology can still feel almost like religious schisms. The very term artificial intelligence suggests an attempt to replicate intelligence of the human kind, with all the philosophical implications that carries.

This, for instance, is Dr Oren Etzioni, head of an ambitious research programme backed by Microsoft co-founder Paul Allen, talking about a corner of the AI landscape that has become particularly fashionable: “There are plenty of exciting individual applications. But if you scratch a little deeper, the technology goes off a cliff.”

The object of his put-down is a technique known as deep learning. This is currently in the ascendant. Indeed, when technologists talk about the amazing revival in AI, it is usually this they have in mind.

BIG LEAPS IN DEEP LEARNING

The success of deep learning is a product of the times. The idea is decades old: That a batch of processors, fed with enough data, could be made to function like a network of artificial neurons. Grouping and sorting information in progressively more refined ways, they could “learn” how to parse it in something akin to the way the human brain is believed to function.
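
To make the idea concrete, here is a minimal sketch of such a network in Python, trained on a toy problem. It is illustrative only: the tiny architecture, the learning rate and the XOR task are assumptions made for the example, not details of any system mentioned in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR labels: a classic toy task that a
# single layer cannot solve but a layered network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases: one hidden layer, one output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: each layer re-represents the input a little more usefully.
    h = sigmoid(X @ W1 + b1)    # hidden-layer "features"
    out = sigmoid(h @ W2 + b2)  # the network's guess

    # Backward pass: nudge the weights to shrink the error (gradient descent).
    err = out - y
    g_out = err * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as the network "learns"
```

Scaled up to many layers and millions of examples, this same loop of guessing, measuring error and adjusting weights is what the computing power described below makes practical.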

It has taken the massive computing power concentrated in cloud data centres to train neural networks enough to make them useful. It sounds like a dream of artificial intelligence as conjured up by Google: Ingest all the world’s data and apply enough processing power, and the secrets of the universe will reveal themselves to you.

Deep learning has produced some impressive results. In a project known as DeepFace, Facebook recently reported that it had reached 97.35 per cent accuracy in identifying the faces of 4,000 people in a collection of four million images, far better than had been achieved before.

Such feats of pattern recognition come naturally to humans, but they are hard for computer scientists to copy. Even trite-sounding results can point to important advances. Google’s report two years ago that it had designed a system that identified cats in YouTube videos still reverberates around the field.

Using the same techniques to “understand” language or solve other problems that rely on pattern recognition could make machines far better at interpreting the world around them. By analysing what people are doing and comparing it to what they (and thousands of others) have done in similar situations in the past, they could also anticipate what they might do next.
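
As a hypothetical sketch of that pattern-matching idea, the few lines of Python below predict a plausible next action by counting what usually followed similar action sequences in the past. The session data, the action names and the predict_next helper are all invented for illustration.

```python
from collections import Counter

# Past sessions from many users: sequences of actions (invented data).
histories = [
    ["open_app", "search_flights", "compare_prices", "book_flight"],
    ["open_app", "search_flights", "compare_prices", "save_for_later"],
    ["open_app", "search_hotels", "compare_prices", "book_hotel"],
    ["open_app", "search_flights", "compare_prices", "book_flight"],
]

def predict_next(recent, histories, window=2):
    """Match the user's last `window` actions against every past moment
    with the same preceding actions; return the most common follow-up."""
    context = tuple(recent[-window:])
    followups = Counter(
        session[i + window]
        for session in histories
        for i in range(len(session) - window)
        if tuple(session[i:i + window]) == context
    )
    return followups.most_common(1)[0][0] if followups else None

print(predict_next(["open_app", "search_flights", "compare_prices"], histories))
# -> "book_flight", the most common next step after this pattern
```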

The result could be systems that truly understand your behaviour, and recommendation engines capable of suggesting things you actually want. That may sound eerie. But done properly, such machines could come to anticipate our needs and act as lifetime guides.

But there is a risk of equating the output of systems such as these with the products of actual human intelligence. In reality, they are parlour tricks, albeit impressive ones. The important thing will be to know where to apply their skills — and how far to trust them.

Deep learning systems do not employ the sort of transparent reasoning involved in classical AI, where computers are fed with defined bodies of knowledge and rules about how to interpret them. That, say sceptics, makes their output inherently mysterious.

“If you seek advice and someone makes a recommendation, and that person can’t list their reasoning, then you’ll distrust their recommendation,” said Mr Raul Valdes-Perez, a computer scientist whose start-up, OnlyBoth, uses classical techniques to respond to queries in full sentences, which he says is beyond the ability of deep learning.
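
For contrast, here is a minimal sketch of the classical style Mr Valdes-Perez is describing: knowledge encoded as explicit if-then rules, so the system can list its reasoning step by step. The rules below are toy inventions, not any real system's knowledge base.

```python
# Toy classical-AI rule engine: explicit facts and rules, so every
# conclusion comes with a human-readable chain of reasoning.
rules = [
    # (premises, conclusion) -- invented toy rules, not medical advice
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    reasoning = []
    changed = True
    while changed:  # forward chaining until no rule adds anything new
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                reasoning.append(f"{sorted(premises)} -> {conclusion}")
                changed = True
    return facts, reasoning

facts, reasoning = infer({"has_fever", "has_cough", "short_of_breath"})
for step in reasoning:
    print(step)  # the system can "list its reasoning", rule by rule
```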

Deep learning's opacity is what Dr Etzioni had in mind when warning that the technique, if pushed beyond its limits, will eventually fall off a cliff. He gave the example of a diagnostic system based on this technique that recommends removing a patient’s kidney. The machine would not understand what a kidney is and what it means to remove it, he said. “We don’t want a doctor or decision maker who doesn’t understand what it’s talking about.”

If it takes a leap of faith to trust such systems when it comes to serious medical interventions, how far should they be trusted when applied to everyday life?

The big leaps being made in this field suggest that the results of deep learning will soon be felt more widely. Under the control of human experts who know when to use it and how to interpret the conclusions, it could bring about a revolution in machine-assisted decision making.

But, as always with artificial intelligence, do not expect too much.

The Financial Times

ABOUT THE AUTHOR:

San Francisco-based Richard Waters writes for the Financial Times’ Tech blog.
