
MWinikoff"When might you trust an autonomous software system?"

Imagine a future with autonomous systems: self-driving cars, robots, or a smart grid that makes rapid decisions to balance supply and demand across a distributed electrical network. Systems like these are already being deployed, and they are likely to become more common.

Autonomous systems have enormous global potential, and the University of Otago has a part to play in their future. Research conducted by University of Otago Information Science Professor Michael Winikoff provides a foundation for engineering such systems.

His recent work addresses trust, a crucial hurdle to the adoption and deployment of such systems. When might people trust them? Would you be comfortable handing over electricity distribution decisions to a piece of software, for instance?

Trust in an autonomous decision-making system relies on software that makes good decisions, with a track record to prove it. But Professor Winikoff argues that this alone is not enough to earn our confidence: people also need to understand how the software makes its decisions.

This means that the decisions the software makes need to be explainable in ways that people can easily understand.

One approach to achieving explainability is to engineer autonomous systems using concepts drawn from human decision-making. For example, software representations of goals and plans allow an autonomous system to explain why it chose a particular course of action, in terms that people find natural.
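To make this concrete, here is a minimal sketch of how recording goal and plan choices can support explanation. All the names here (Plan, Agent, choose, explain) are illustrative assumptions, not Professor Winikoff's actual framework:

    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        name: str
        achieves: str          # the goal this plan achieves
        precondition: str      # the context in which the plan applies

    @dataclass
    class Agent:
        trace: list = field(default_factory=list)  # (goal, plan) choices made

        def choose(self, goal, plans, context):
            # Pick the first applicable plan and record why it was chosen.
            for plan in plans:
                if plan.achieves == goal and plan.precondition in context:
                    self.trace.append((goal, plan))
                    return plan
            return None

        def explain(self):
            # Rephrase each recorded choice in human-oriented terms.
            return [f"I chose '{p.name}' because I had the goal '{g}' "
                    f"and the situation satisfied '{p.precondition}'."
                    for g, p in self.trace]

    agent = Agent()
    plans = [Plan("call ambulance", achieves="get medical help",
                  precondition="person is incapacitated")]
    agent.choose("get medical help", plans, {"person is incapacitated"})
    print("\n".join(agent.explain()))

Because the agent's choices are stored in terms of goals and plans, the explanation falls out of the representation itself, rather than being bolted on afterwards.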

Professor Winikoff has been working on how to engineer such human-inspired autonomous software for the past 15 years. He's developed underlying concepts, a software engineering methodology, and design notations to support human software developers.

Recently, he has been focusing on assurance and trust, aiming to develop the mechanisms that can provide a good basis for trusting autonomous systems.

A key challenge is how to demonstrate that an autonomous system (defined through goals and plans) will always behave in a way that satisfies certain properties. Since autonomous systems exhibit a very large range of complex behaviours, this requires mathematical analysis of all possible behaviours, rather than a conventional testing regime that samples only some situations.

One example comes from the requirements for a nursebot: we need to be able to demonstrate that, no matter what happens, a nursebot that detects an incapacitated person will always seek medical help immediately, regardless of its other active goals.
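As a toy illustration of the difference between testing and exhaustive analysis, the sketch below enumerates every combination of observation and active goal for a hypothetical nursebot policy and checks the property above. The model and names are assumptions for illustration; real verification tools handle vastly larger state spaces:

    from itertools import product

    GOALS = ["deliver medication", "recharge battery"]

    def next_action(detected_incapacitated, active_goal):
        # The policy under verification: seeking help must pre-empt
        # every other goal when a person is found incapacitated.
        if detected_incapacitated:
            return "seek medical help"
        return active_goal

    def verify():
        # Exhaustively enumerate every observation/goal combination,
        # rather than testing a sample of situations.
        for detected, goal in product([True, False], GOALS):
            action = next_action(detected, goal)
            if detected and action != "seek medical help":
                return f"VIOLATION: with goal '{goal}' the action was '{action}'"
        return "Property holds in all enumerated states."

    print(verify())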

A second key challenge is how to systematically identify exactly which properties people need verified. For example, what properties would you want verified about a nursebot before trusting it?

Further information:
Michael's Inaugural Professorial Lecture is available on YouTube (September 2014).
