
New book decodes AI for citizens

Thursday 25 February 2021 4:29pm

(From left) Associate Professor Alistair Knott (Computer Science), Professor James Maclaurin (Philosophy), Professor Colin Gavaghan (Law) (Photo: Otago Magazine Issue 47/ Alan Dove)

A new book on Artificial Intelligence (AI) looks beyond the hype of the “new algorithmic world” to decode how AI already affects everyday life, and what it will mean in future.

Published by MIT Press and released this month, A Citizen’s Guide to Artificial Intelligence is an accessible book structured around 10 core themes, including how much New Zealanders need to know about how AI works, why it is often biased, where responsibility lies when it causes harm, how best to retain control in an AI-driven world, and how AI will affect our autonomy, our privacy, the business of government and the future of work.


The far-ranging discussion includes contributions from philosopher John Zerilli (University of Oxford), James Maclaurin (Philosophy, University of Otago), Colin Gavaghan (Law, Otago), Alistair Knott (Computer Science, Otago), John Danaher (Law, National University of Ireland), human rights and technology lawyer Joy Liddicoat, and Merel Noorman (AI and robotics, University of Tilburg).

Compiled as the world adjusted to the lockdown-induced realities of COVID-19, the book also covers the vocational implications of increased AI use, posing tantalising questions such as ‘could we do without work altogether?’

The pandemic, the authors suggest, provides many examples of why AI use needs to be analysed and understood.

“As the world emerges from the crisis, hard questions about privacy, safety, equity and dignity will have to be asked as governments and police assume new powers and technology is rapidly pressed into service to track and monitor our movements.”

Dr John Zerilli

Co-author and Otago Philosophy Professor James Maclaurin explains the human thinking behind A Citizen’s Guide to Artificial Intelligence:

Why did you undertake research in this area, and the publication project?
A group of us in the Centre for AI and Public Policy at Otago have been working for several years on a New Zealand Law Foundation project looking at how the law might have to change to accommodate the rapid deployment of AI, which will affect many aspects of how we live and work. This has resulted in us writing large technical reports for government and business.

At the same time, we’ve been very struck by how difficult it is for members of the public to learn about this technology that is about to change our lives. Lots of books and articles present very simplistic pictures of how AI will affect us: “AI is a gold rush / arms race, so we should adopt it as quickly as possible”, “robots are going to take away many people’s jobs”, “AI will be fine as long as we always put a person in charge of it”, and so on. The reality is both more complicated and much more interesting.

What is it about?
Rather than focusing on applications (e.g. AI that can translate languages, write websites, drive cars, etc.), we organised the book around the sorts of practical, philosophical, and legal questions that experts think about. That allows us to give plain-English explanations of really challenging problems:

  • If AI gets to be much better than humans at curing diseases or detecting crime, would it matter if it was so complex that almost none of us knew how it worked?
  • What is ‘algorithmic bias’, and if AI is often biased, does this mean that we shouldn’t use it?
  • How can we effectively supervise machines that are increasingly smarter than us, at least within very specific domains?
  • Could we build an AI that was really responsible for its actions?

What should everyone know about AI use, and what are some of the benefits of knowing how it affects us?
The idea that everyone needs to learn how to write the code used to build AI is no more true than the idea that everyone needs to know how to build a car or wire up a house. What people do need to know is (in simple terms) what AI is, what sorts of tasks it is good at, and how it sometimes goes wrong.

I hope reading this book makes people think about . . .
Rather than reacting to ‘scare stories’ about AI going wrong, we hope this book will make people think about how we would like AI to make the world a better place.

You should read this book if . . .
… you want to understand a technology that is going to completely change the world.