Established in 2018, the Centre for Artificial Intelligence and Public Policy aims to foster high-quality research into the policy, regulation, ethics and governance associated with the implementation of artificial intelligence, both internationally and within New Zealand. We aim to build links with government and industry to help maximise the benefits and minimise the harms of this fast-growing constellation of technologies.
James Maclaurin is a Professor in the Department of Philosophy at the University of Otago. He took an MA from Victoria University of Wellington in scientific applications of mathematical information theory, then a PhD in philosophy of science from the Research School of Social Sciences at the Australian National University. His research involves the application of philosophy to epistemic, ontological and ethical issues in a wide variety of scientific domains, including public health, economics, ecology and information science. He was one of the founders of Otago's Artificial Intelligence and Society Group and is a PI on the AI and Law in New Zealand project. He is also the Associate Dean for Research in the Division of Humanities.
Tim Dare is a Professor of Philosophy at the University of Auckland. He worked briefly as a lawyer before completing a PhD in the philosophy of law and starting his academic career in the early 1990s. His publications include books and articles on the philosophy of law, legal ethics, immunisation programmes, the significance of judicial disagreement, parental rights and medical decisions, the proper allocation of the burden of proof, and the use of predictive analytics in child protection.
He is employed by New Zealand's Ministry of Social Development to provide data ethics advice and to develop privacy, human rights, and ethical review processes for proposed uses of client data. He has provided ethical reviews of a number of predictive risk modelling tools in New Zealand and the US.
He is principal investigator on a New Zealand Royal Society Marsden Grant (2018-2020) investigating the ethics of using predictive risk modelling tools in social policy contexts.
Lisa Ellis is Professor of Philosophy and Politics and Director of the Philosophy, Politics, and Economics programme at the University of Otago. Her work in environmental political theory investigates how we can make environmental policy decisions that serve our collective interests in flourishing now and in the future. She has written about environmental democracy, the collective ethics of flying, measuring the human value of biodiversity losses, climate adaptation justice, and species extinction, among other topics. Lisa's work has been supported by the Institute for Advanced Study, the National Endowment for the Humanities, the Alexander von Humboldt Foundation, the Andrew W. Mellon Foundation, the Deutscher Akademischer Austausch Dienst, and most recently by the Deep South National Science Challenge.
David Eyers' research seeks to make software and data-driven systems more accountable and their activities easier to audit. His recent publications examine topics such as security enforcement and controlled data dissemination mechanisms within wide-area distributed systems. In particular, he has worked with event-based middleware, role-based access control, and decentralised information flow control, as emerging paradigms that can help develop accountable software systems. The notion of accountability is of growing importance within cloud computing and the Internet of Things, as these systems gain responsibility for processing increasingly large volumes of personally sensitive data.
Colin Gavaghan is Professor of Digital Futures at Bristol Law School, University of Bristol. Colin graduated with an LLM and later a PhD from Glasgow University, where he also held his first lecturing position, before moving to New Zealand in 2009.
Colin specialises in medical and emerging technology law. He is the author of Defending the Genetic Supermarket: The Law and Ethics of Selecting the Next Generation (Routledge, 2008) and of dozens of articles and chapters on a range of topics, including legal questions around genetic and reproductive technologies, end-of-life decisions and applications of neuroscience.
Emily Keddell is an Associate Professor in social work at the University of Otago. Her practice background is in family support and child protection social work. Her research interests include a number of facets of child welfare policy and practice: decision-making, inequalities in system contact, the politics of system design, and the use of predictive analytics. In each of these areas, she is interested in questions relating to the framing of knowledge, equity and discrimination, and the effects on citizens. She is a member of the Re-imagining Social Work collective.
Ali Knott is an Associate Professor in the Computer Science department at Victoria University of Wellington. He studied psychology and philosophy at Oxford University, then took an MSc and PhD in artificial intelligence at the University of Edinburgh. He is an expert on models of human language, and its interfaces to perception, motor control and memory, and has published over 100 papers on these topics.
Ali has been interested in the ethical and social implications of AI throughout his career. At Otago, he pursues this interest by helping to organise the AI and Society discussion group, and by participating in the AI and Law in New Zealand project. Ali also works for the Auckland AI company Soul Machines, where he helps design models of human-computer dialogue and developmental models of language and cognition. Working in an AI company provides a different set of opportunities for advancing the discussion about AI and its impact on society.
Brendan McCane has been an active researcher in artificial intelligence for 25 years. His PhD involved developing a machine learning system for recognising objects in images. He has since published over 90 papers in the general areas of machine learning, computer vision, and biological and medical imaging. He is broadly a techno-optimist and believes that AI is, and will remain, a net positive for humankind. Nevertheless, he also believes that AI systems have the potential to cause harm and therefore need to be examinable and open, so that biases or weaknesses in either data or algorithms are discoverable.