Wednesday 4 October 2017 10:30pm

In an era of burgeoning artificial intelligence, fears that machines will one day be more intelligent than the humans who created them are also escalating. In the first of a pair of opinion pieces by experts from two Otago departments, Information Science's Professor Michael Winikoff argues against such concerns.

Is superintelligence possible? How close are we to achieving human-level machine intelligence? And should we be worried?

Professor Michael Winikoff.

Recently the Otago Daily Times re-published an article titled "Smarter than us". The article warns of the danger of machines becoming "super-intelligent". That is, a "massive expansion in intelligence" would result in "superintelligence, a singularity that we can only guess at".

This is clearly an alarming prospect, and a number of high-profile people have expressed concern.

However, what is usually not well explained in such articles is the relevant expertise of these people. For instance, the ODT article begins with the words "Artificial intelligence expert Max Tegmark".

But Professor Tegmark is a physicist. Other high-profile people who are warning about super-intelligent machines include entrepreneurs, physicists, and philosophers.

I would not go to a dentist or an engineer for health advice. Nor would I ask a physicist or a GP for advice on engineering a bridge. Being an expert in one domain does not mean someone has expertise in other (even related) domains. To be clear, I'm not saying that experts are necessarily right, or that a non-expert is necessarily wrong (and there is much in the article that I agree with). However, it is crucial to understand who has relevant expertise, and to take that into account.

"Aggregating the responses, they found that human-level intelligence was believed to be 50 per cent likely within 45 years."

So, what do the people who actually work in Artificial Intelligence (AI) think? And what do they know that ought to be considered? Is superintelligence possible? How close are we to achieving human-level machine intelligence?

As the article notes, there is an important distinction between specialised and general AI. There is a whole range of specific tasks where machines can equal or outperform humans: playing chess (and, more recently, Go), winning at Jeopardy, recognising images. But these are specialised. A general AI would not only be able to play chess, but also to recognise faces, hold a conversation, and learn as effectively as humans.

One thing that AI experts know is that there is a huge gap between being able to perform well in a particular task and being able to perform well in all tasks.

A second thing that AI experts know is that general AI is hard. Really hard. We've had over 60 years of really smart people working on it, and we're still a long way from achieving it. How long? A recent paper (Grace et al., 2017, "When Will AI Exceed Human Performance?") surveyed researchers who publish in the AI sub-area of machine learning. Aggregating the responses, it found that human-level intelligence was believed to be 50 per cent likely within 45 years.

So, general human-level machine intelligence is still a long way away. But what about superintelligence?

We know that human-level intelligence is possible. However, for superintelligence we do not even know that! Is there an upper limit to intelligence? We simply do not know.

There are limits to computation. There are physical limits, such as the speed of light. There are also limits from computing theory: some problems are known to be impossible to solve (the halting problem, for example), and some are provably impossible to solve efficiently. These limits suggest that there are quite possibly also limits on intelligence.
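As an illustration of the first kind of limit, here is a minimal sketch, in Python, of Turing's classic argument that no general halting-detector can exist. This is not from the article, and the function names halts and paradox are hypothetical, invented for the sketch.

    # Sketch of Turing's halting-problem argument. The hypothetical
    # function halts(program, data) would decide whether program(data)
    # ever finishes. The paradox below shows it cannot exist.

    def halts(program, data):
        """Hypothetical oracle: True iff program(data) halts.
        No such algorithm can exist -- that is the point."""
        raise NotImplementedError("undecidable in general")

    def paradox(program):
        # Ask the oracle about the program run on itself, then do the opposite.
        if halts(program, program):
            while True:     # the oracle says we halt, so loop forever
                pass
        else:
            return          # the oracle says we loop, so halt immediately

    # Now consider paradox(paradox): if it halts, then halts() said it
    # loops; if it loops, then halts() said it halts. Either way halts()
    # is wrong, so no correct halts() can be written.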

"The problem with the media raising the alarm about superintelligence is that it diverts attention from real and pressing problems."

A simple analogy is falling. If you jump out of a plane, you do not continue to accelerate to "super speeds". Instead, you reach terminal velocity due to air resistance.
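To make the analogy concrete, here is a minimal sketch (again not from the article) that integrates a falling body with quadratic air drag; the mass and drag numbers are rough, assumed values. The velocity climbs at first and then flattens out at a terminal value instead of growing without bound.

    # Minimal sketch: a falling body with quadratic air drag reaches a
    # plateau (terminal velocity) instead of accelerating forever. All
    # physical constants below are rough, illustrative assumptions.

    G = 9.81      # gravitational acceleration, m/s^2
    MASS = 80.0   # skydiver mass, kg (assumed)
    DRAG = 0.24   # lumped drag coefficient k in F_drag = k * v^2, kg/m (assumed)

    def simulate(seconds=60, dt=0.01):
        v = 0.0
        for step in range(int(seconds / dt) + 1):
            if step % int(5 / dt) == 0:
                print(f"t = {step * dt:5.1f} s  v = {v:6.2f} m/s")
            # dv/dt = g - (k/m) * v^2 : drag opposes motion, grows with v^2
            v += (G - (DRAG / MASS) * v * v) * dt
        return v

    # Terminal velocity is where drag balances gravity: v_t = sqrt(m*g/k).
    # With the assumed numbers, v_t = sqrt(80 * 9.81 / 0.24), about 57 m/s;
    # the printed velocities level off there rather than at "super speeds".
    simulate()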

So, we do not know whether superintelligence is even possible, and the people who have relevant expertise are generally sceptical that it is. The same survey also asked about superintelligence, and found that "Explosive progress in AI after HLMI [Human-Level Machine Intelligence] is seen as possible but improbable".

Some might argue that we should worry about superintelligence regardless, just as we might worry about all sorts of risks. But this argument ignores a crucial distinction. Some risks are real: they are known to be possible, usually because they have happened. An asteroid hitting the Earth, for example. But superintelligence is completely speculative. We could equally well worry about cats developing telepathy and taking over the world.

Is there scope to explore the limits of intelligence in order to clarify what is possible? Absolutely. Is it time now to sound the alarm about superintelligence? No.

The problem with the media raising the alarm about superintelligence is that it diverts attention from real and pressing problems: the more immediate consequences of automation and (specialised) AI, which we need to consider by drawing on a broad range of disciplines.

Who believes in superintelligence? Generally not the experts.

Tomorrow – in the second opinion piece in this pair – Computer Science's Associate Professor Alistair Knott will argue that it is important to consider the impact AI could have on our everyday lives.
