
Friday 26 October 2018 4:29pm

Professor James Maclaurin

University of Otago academics applaud the first-ever stocktake report into government agencies' use of algorithms, launched on Thursday by Statistics New Zealand and the Department of Internal Affairs.

Researchers on the Artificial Intelligence and Law in New Zealand Project (AILNZP), funded by the New Zealand Law Foundation, say the stocktake is an excellent initiative that provides important information about the way government develops and deploys such tools. The AILNZP believes it will also inform New Zealanders about the many contexts in which artificial intelligence is directly affecting government decisions about their lives. There appear to be few government agencies not using AI algorithms in some way. The report also makes clear some of the challenges of having humans and machines collaborate in decision-making.

While the level of discussion about particular algorithms is variable, it is clear that some agencies have put a great deal of effort into balancing the effectiveness of algorithmic tools against their potential to invade privacy or appear as unexplainable ‘black boxes’. Nonetheless, as Professor James Maclaurin of Otago’s Centre for AI and Public Policy notes, a list is very different from an audit.

“So while we know much more about government algorithm use, important ethical, legal and technical challenges remain to be addressed,” Professor Maclaurin says.

New Zealanders can also be reassured by the Principles for the Safe and Effective use of Data and Analytics, developed jointly by the Office of the Privacy Commissioner and Statistics New Zealand. These are for the most part sensible principles, and the report contains useful discussions and examples of their application. They also make clear how challenging it will be to meet these standards in difficult cases.

Professor Maclaurin says a pressing question remains about when agencies should compare the accuracy of algorithms with that of non-algorithmic approaches.

“The stocktake contains little discussion of the accuracy of algorithms and of the false negatives and false positives they generate. Much of the report is written in the language of cost benefit analysis. There is almost no consideration of fairness. Should we use a tool to prioritise spending which leaves the average stakeholder better off but leaves some citizens, the statistical outliers, much worse off?” asks Professor Maclaurin.

The Principles for the Safe and Effective use of Data and Analytics are not enforceable regulations, but they serve to remind the public that we have much to learn about issues such as transparency, bias, and human oversight in the use of AI in decision-making tools. These questions need both independent research and informed public debate.

The AILNZP says that, in terms of future direction, the report has detected inconsistency in how different government agencies have developed and deployed algorithms, and in the standards and processes used to assess their ongoing use. It is also interesting to note, says Professor Maclaurin, that some types of information are almost wholly absent.

“There is almost no discussion of the accuracy of the various algorithms discussed, or of what sorts of errors they make. While the possibility of bias is discussed, there’s no discussion of whether the pattern of errors is different for different groups and hence of the fairness of the algorithms in question,” he says.
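
To illustrate the kind of group-level error analysis Professor Maclaurin describes, the sketch below (in Python, using entirely hypothetical predictions and outcomes, not drawn from any agency system) compares false positive and false negative rates between two groups; an algorithm whose overall accuracy looks acceptable can still show a markedly different error pattern for one group.

    # Minimal sketch of a group-wise error check. The records are hypothetical
    # (predicted, actual) pairs; no real agency data is used.
    def error_rates(records):
        """Return (false positive rate, false negative rate) for (predicted, actual) pairs."""
        fp = sum(1 for pred, actual in records if pred and not actual)
        fn = sum(1 for pred, actual in records if not pred and actual)
        negatives = sum(1 for _, actual in records if not actual)
        positives = sum(1 for _, actual in records if actual)
        fpr = fp / negatives if negatives else 0.0
        fnr = fn / positives if positives else 0.0
        return fpr, fnr

    # Two hypothetical demographic groups scored by the same algorithm.
    group_a = [(True, True), (False, False), (False, False), (False, True)]
    group_b = [(True, False), (True, False), (False, False), (True, True)]

    for name, records in (("Group A", group_a), ("Group B", group_b)):
        fpr, fnr = error_rates(records)
        print(f"{name}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")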

The report notes the importance of ‘transparency’ about government algorithms, but it focusses on transparency about the way systems work in general, rather than transparency about how a given system reached a particular decision. The AILNZP believes this could be crucial for individuals wanting to appeal decisions they think are incorrect.

The report champions human oversight, but it is less clear about how this is to be achieved.

“Is this vetting the design of algorithms, monitoring their deployment, assessing each individual decision in which they are used?” asks Dr John Zerilli from Otago’s Department of Philosophy.

“International research suggests that having a ‘human in the loop’ can, in some cases, offer false reassurance if humans become overly dependent on the algorithms' recommendations or predictions,” Dr Zerilli adds.

The report also recommends the establishment of a Centre of Excellence to advise the government on its use of algorithms. Associate Professor Alistair Knott from Otago’s Department of Computer Science applauds this idea, but argues that such a centre should also disseminate information about these algorithms, as well as evaluate each system by answering a standard set of questions:

  • How accurate are the system’s decisions?
  • Is there any evidence it is biased in relation to particular social groups?
  • Is it able to give explanations about its decisions?
  • If human workers use it to inform their decisions, are they trained in its use?

Associate Professor Colin Gavaghan of Otago’s Centre for Law and Emerging Technologies says that despite its lack of detail on evaluation, the report is timely.

“It emphasises the need for better public understanding of governmental algorithm use, and recognises the importance of involving a wide range of stakeholders affected by the use of these tools. This raises the question of whether we should be extending such analysis further; should we be taking stock of algorithms used in government funded institutions like hospitals and schools, should we be looking at strategic rather than purely operational algorithms, and should we also encourage companies to divulge the extent of their algorithm use to the public?” queries Associate Professor Gavaghan.

For more information, contact:

Professor James Maclaurin
Co-director, Centre for Artificial Intelligence and Public Policy
University of Otago
Tel 03 479 8719
Email james.maclaurin@otago.ac.nz

Mark Hathaway
Senior Communications Adviser
University of Otago
Mob 021 279 5016
Email mark.hathaway@otago.ac.nz

