Monday 27 May 2019 2:03pm

New Zealand is a world leader in government algorithm use – but measures are needed to guard against their dangers.

This is the conclusion of a New Zealand Law Foundation-funded report from the University of Otago's Artificial Intelligence and Law in New Zealand Project (AILNZP), which was released this week.

Professor James Maclaurin.

The study points to examples from other countries where algorithms have led to troubling outcomes. In the USA, an algorithm that had been used for years in the youth justice system turned out never to have been properly tested for accuracy.

In other cases, there has been concern about producing racially biased outcomes. The COMPAS algorithm, for instance, has been widely criticised for overstating the risk of black prisoners reoffending, compared with their white counterparts – an outcome that can result in them being kept in prison for longer.

Report co-author Professor James Maclaurin says government agencies' use of AI algorithms is increasingly coming under scrutiny.

“On the plus side, AI can enhance the accuracy, efficiency and fairness of decisions affecting New Zealanders, but there are also worries about things like accuracy, transparency, control and bias.”

Associate Professor Ali Knott.

“We might think that a computer programme can't be prejudiced,” says co-author Associate Professor Ali Knott. “But if the information fed to it is based on previous human decisions, then its outputs could be tainted by historic human biases. There's also a danger that other, innocent-looking factors - postcode for instance - can serve as proxies for things like race.”
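As a toy illustration of Knott's point (this is a minimal sketch, not an example from the report; the postcodes, groups and rates below are all synthetic), the following shows how a model that never sees a protected attribute can still reproduce a historic bias through a correlated feature:

```python
# A minimal sketch (not from the report): synthetic data in which postcode
# correlates with an unobserved group label, and historic human decisions
# were biased against group B. All values are invented for illustration.
import random

random.seed(0)

def make_person():
    group = random.choice(["A", "B"])
    # Postcode correlates strongly with group membership.
    if group == "A":
        postcode = "9001" if random.random() < 0.8 else "9002"
    else:
        postcode = "9002" if random.random() < 0.8 else "9001"
    # Historic (biased) decision: group B was approved far less often.
    approved = random.random() < (0.7 if group == "A" else 0.3)
    return group, postcode, approved

people = [make_person() for _ in range(10_000)]

# "Train" the simplest possible model: the approval rate per postcode.
# The model never sees group membership, only postcode.
rate_by_postcode = {}
for pc in ("9001", "9002"):
    outcomes = [approved for _, p, approved in people if p == pc]
    rate_by_postcode[pc] = sum(outcomes) / len(outcomes)

# Yet its predictions still differ sharply by group, because postcode
# acts as a proxy for the label the model was never given.
for g in ("A", "B"):
    preds = [rate_by_postcode[p] for grp, p, _ in people if grp == g]
    print(f"group {g}: mean predicted approval {sum(preds) / len(preds):.2f}")
```

Simply withholding the sensitive attribute from the model's inputs is not enough, in other words: the bias re-enters through the correlated feature.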

Checking algorithmic decisions for these sorts of problems means that the decision-making needs to be transparent. “You can't check or correct a decision if you can't see how it was made,” says Knott. “But in some overseas cases, that's been impossible, because the companies who design the algorithms won't reveal how they work.”

So far, New Zealand has done better with this, Maclaurin says.

“Unlike some countries that use commercial AI products, we've tended to build our government AI tools in-house, which means we know how they work. That's a practice we strongly recommend our government continues.”

Guarding against unintended algorithmic bias, though, involves more than being able to see how the code works.

Associate Professor Colin Gavaghan.

“Even with the best of intentions, problems can sneak back in if we're not careful,” warns co-author Associate Professor Colin Gavaghan.

For that reason, the report recommends that New Zealand establish a new, independent regulator to oversee the use of algorithms in government.

“We already have rights against discrimination and to be given reasons for decisions, but we can't just leave it up to the people on the sharp end of these decisions to monitor what's going on. They'll often have little economic or political power. And they may not know whether an algorithm's decisions are affecting different sections of the population differently,” he says.

The report also warns against “regulatory placebos” – measures that make us feel like we're being protected without actually making us any safer.

“For instance, there's been a lot of talk about keeping a 'human in the loop' – making sure that no decisions are made just by the algorithm, without a person signing them off.

“But there's good evidence that humans tend to become over-trusting and uncritical of automated systems – especially when those systems get it right most of the time. There's a real danger that adding a human 'in the loop' will just offer false reassurance.”

“These are powerful tools, and they're making recommendations that can affect some of the most important parts of our lives. If we're serious about checking their accuracy, avoiding discriminatory outcomes, promoting transparency and the like, we really need effective supervision,” Gavaghan says.

Co-author Dr John Zerilli.

The report recommends that predictive algorithms used by government, whether developed commercially or in-house, must:

  • Feature in a public register
  • Be publicly inspectable, and
  • Be supplemented with explanation systems that allow lay people to understand how they reach their decisions (a sketch of one possible design follows below).
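The report does not prescribe any particular design for such an explanation system, but as a minimal sketch, a simple additive risk score can report each factor's contribution in plain language. The feature names, weights and baseline below are all hypothetical:

```python
# A hypothetical additive risk score with a plain-language explanation.
# Feature names, weights and the baseline are invented for illustration.

WEIGHTS = {
    "prior offences": 0.30,
    "aged under 25": 0.20,
    "missed appointments": 0.15,
}
BASELINE = 0.10  # hypothetical base score

def score(person: dict) -> float:
    return BASELINE + sum(w * person[name] for name, w in WEIGHTS.items())

def explain(person: dict) -> str:
    lines = [f"Overall score: {score(person):.2f}, from a base of {BASELINE:.2f}."]
    for name, w in WEIGHTS.items():
        contribution = w * person[name]
        if contribution:
            lines.append(f"  '{name}' ({person[name]}) added {contribution:.2f}.")
    return "\n".join(lines)

print(explain({"prior offences": 2, "aged under 25": 1, "missed appointments": 0}))
```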

The algorithms' accuracy should also be regularly assessed, with these assessments made publicly available.
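One way such an assessment might be run (again a minimal sketch; the record format and group labels here are hypothetical) is to measure accuracy both overall and per demographic group, so that differential impacts of the kind Gavaghan describes become visible:

```python
# A minimal sketch of a recurring accuracy audit: compare an algorithm's
# predictions against known outcomes, overall and per demographic group.
# The records below are invented; a real audit would draw on the agency's
# own decision data.
from collections import defaultdict

# (group, predicted outcome, actual outcome)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, predicted, actual in records:
    tallies[group][0] += int(predicted == actual)
    tallies[group][1] += 1

correct_all = sum(c for c, _ in tallies.values())
total_all = sum(t for _, t in tallies.values())
print(f"overall: accuracy {correct_all / total_all:.0%} ({correct_all}/{total_all})")
for group, (correct, total) in sorted(tallies.items()):
    print(f"group {group}: accuracy {correct / total:.0%} ({correct}/{total})")
```

On this toy data the tool is 80% accurate for group A but only 40% accurate for group B – exactly the sort of disparity that would stay invisible without a disaggregated, published audit.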

Law Foundation Director Lynda Hagen says the project received the “lion's share” of funding distributed under the Foundation's Information Law and Policy Project ($432,217).

“We did this because we think artificial intelligence and its impacts are not well-understood in New Zealand, so research was urgently needed.

“We welcome the release of this Phase 1 report which provides the first significant, independent and multi-disciplinary analysis of the use of algorithms by New Zealand government agencies. The information from this work will better inform the development of stronger policy and regulation.”

Co-author Joy Liddicoat.

The report's co-authors were Colin Gavaghan, Alistair Knott, James Maclaurin, John Zerilli and Joy Liddicoat, all of the University of Otago.

For more information, contact

Associate Professor Colin Gavaghan
Email colin.gavaghan@otago.ac.nz

Associate Professor Ali Knott
Email alik@cs.otago.ac.nz

Professor James Maclaurin
Email james.maclaurin@otago.ac.nz

