Thursday 5 October 2017 10:29pm

In an era of burgeoning artificial intelligence, fears that machines will one day be more intelligent than the humans who created them are also escalating. In the second of a pair of opinion pieces by experts from two Otago departments, Computer Science's Associate Professor Alistair Knott says it is important to consider the impact AI could have on our everyday lives.

Even if it's true that super AI is decades away, it's not too soon to start thinking about it right now, argues Associate Professor Alistair Knott.

Associate Professor Alistair Knott.

Michael Winikoff makes an interesting point in his article Who believes in superintelligence?: the people who are most worried about the prospects for AI are not themselves AI researchers. The highest-profile worriers are physicists (Max Tegmark, Stephen Hawking), philosophers (Nick Bostrom) and entrepreneurs (Elon Musk). These people are certainly not the best informed about AI technology: for predictions about how quickly this technology will advance, and how far it will get, we should look to actual AI researchers. As Michael notes, when AI researchers are polled, they estimate that we have an even chance of achieving human-level AI within 45 years – and that going beyond this to achieve superintelligence is 'possible, but improbable'.

But the question 'How quickly will AI advance?' is only one of the questions we need to consider. There is a second, equally important question: what consequences will AI have for us? How will AI impact on our society in the future, and on our everyday lives? AI technologists certainly aren't the best qualified to answer this question. AI experts are computer programmers, most of whom studied Computer Science. A Computer Science course doesn't contain much about how technology impacts on society, or about the social and political mechanisms through which technology can be regulated. Nor does it cover ethical issues, beyond a few cursory lectures.

"My view is that we need a very broad interdisciplinary discussion about AI, that includes both technologists, philosophers and social scientists."

If we want to know what the social impacts of AI are likely to be, and how this new technology should be managed, we need to consult a different group of experts. Here, we certainly want to hear from entrepreneurs, who have experience in introducing new technology, and from philosophers, who have expertise in discussing ethical questions. In fact, we should cast the net much wider: we want to hear from lawyers and political scientists, who know about regulatory mechanisms, and from psychologists, who know how people interact with machines. My view is that we need a very broad interdisciplinary discussion about AI, one that includes technologists, philosophers and social scientists. This discussion should also include the people whose lives are most likely to be impacted by AI.

When should this discussion start? Polls of AI researchers suggest human-level AI might be 45 years away. But as Michael mentioned, there are AI systems that are already very good at certain human tasks. Even these systems are likely to have big social impacts. Driverless vehicles are a good example. These already exist: the main barriers to their deployment on our roads are legal and regulatory. How we manage the introduction of driverless cars, and other existing AI tools, is a topic for urgent interdisciplinary discussion right now. On this point, Michael and I are in full agreement.

The focus of Michael's article was on types of AI that don't yet exist, but are still a matter for speculation. I'll first consider 'general AI' - that is, a machine that can do everything a person can do. From my perspective, even if it's true that this kind of AI is decades away, it's not too soon to start thinking about it right now. Existing AI technology already has the potential to create huge disruptions. A machine that can do anything a human can do would be vastly more disruptive. We need to work out right now whether we want such a machine - and if we do want it, how it will be used and controlled. Answering these questions is a long-term project - and I think we need to kick it off as soon as possible. In particular, I think we need to start training people with expertise in both technology and public policy, who will ultimately be better qualified than we are to tackle these questions.

"A machine that can do anything a human can do would be vastly more disruptive. We need to work out right now whether we want such a machine - and if we do want it, how it will be used and controlled."

I'll finish with some thoughts about superintelligence. Michael implies we don't need to worry too much about this prospect, because AI experts think it's unlikely, or very far in the future. I agree our focus should be on technologies nearer at hand. But I think it's worth investing some time in thinking about superintelligence, just because its consequences would be so huge. For the same reason, it's worth charting the courses of asteroids in near-Earth orbits: the chances of an asteroid strike may be small, but if one happened, it would be a cataclysm. (There's a sizeable body of work in philosophy and political science about how to develop policy in the face of this kind of uncertainty - so again, these fields can contribute usefully.)

Of course superintelligence is different from an asteroid strike: as Michael notes, it's never occurred before, so we don't even know if it is possible. But this ignorance cuts both ways: we don't know that it's possible, but equally we don't know that it's impossible. Michael suggests it might be impossible, using an argument based on limits: maybe intelligence is like light, in having a limit that can't be exceeded? But this argument isn't strong. Maybe intelligence is like light, in having a limit - but maybe it's not like light, and doesn't have a limit! In any case, if it does have a limit, why would we think that our human abilities are anywhere near that limit? Our human-centred perspective on the world can easily lead us astray here. Arguably, cosmologists like Hawking and Tegmark are among those who are least susceptible to such anthropocentric biases - I wonder if this is why they are a particularly vocal group in discussions of superintelligence.

In practical terms, Michael and I have no disagreement: now is a good moment to initiate an interdisciplinary discussion about AI and its impacts on society. In fact, there are already several initiatives at Otago, and elsewhere in New Zealand, that we both participate in. The AI and Society discussion group has been meeting regularly for over two years, bringing together participants from many disciplines across the sciences, commerce and humanities. The AI and Law in New Zealand project is a research initiative studying how predictive AI techniques are used in the criminal justice system, and the potential effects of AI on employment. Further afield, the AI Forum of New Zealand provides a venue for business people to discuss the opportunities and challenges of AI with policy experts, and with representatives of people who stand to be affected by it.

Yesterday – in the first opinion piece in this pair – Information Science's Michael Winikoff argued that it is too early to become concerned about superintelligent machines.
