Monday 3 February 2014 11:26am

As robots and other autonomous systems become more widely used, how can we trust them to do what they should?

Autonomous software is moving out of the labs and into the real world.

Already there are self-driving cars on the open road; robots are being used in applications from tour guides to nursing-home care; and purely automated software is making business decisions in the commercial sector.

Autonomous systems are software systems that make decisions and act on them. While robots are often autonomous, not all autonomous systems are robots. And often, but not always, autonomous systems involve at least some artificial intelligence.

University of Otago research focuses not just on the technicalities of how to develop autonomous software, but on what it takes for people to have confidence that such systems do what they should.

Professor Michael Winikoff, an Otago Business School researcher and Head of the Department of Information Science, is leading the way, investigating what is needed to ensure an appropriate level of trust in autonomous systems.

Winikoff has been working on how to engineer human-inspired autonomous software for the past 15 years. He says his current study isn't about convincing people to trust software that may not be trustworthy, but about how to achieve appropriate levels of trust in an autonomous system.

This is a big issue for the future of autonomous systems.

“Clearly, a person needs to trust that a self-driving car will know what to do in any given situation. The big question is exactly what conditions are needed for that trust to happen, and how do developers engineer their systems to be trustable?” he says.

It's a complex area of research, but one thing is clear: it's much more than blind faith in technology.

Winikoff argues that an essential basis of trust is that users need to be able to understand how a decision-making system reaches decisions.

“People can't open the software to see how it works, so they need to understand the decision mechanisms being used. That means developers in the future may have to not just develop software, but also enable it to offer explanations of how it reached a decision, in a way its users will understand.”

So how might autonomous software be able to explain itself in an understandable way?

Winikoff explains that one approach to developing autonomous systems uses programming based on human-like concepts of goals and plans. These concepts provide a natural basis for human-oriented explanations of a system's decisions.

In such systems, a variety of response plans is programmed in advance, and these plans can be re-combined on the fly, providing flexibility in response to complex situations.

As an example, a nursebot treating a person requiring a specific health procedure might have a range of planned response options to deploy, or a combination of those. It could be developed to explain that it chose to perform an action – such as administering medication – because it was following a plan to achieve a particular goal.
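To make this concrete, here is a minimal, hypothetical sketch in Python of how a goal/plan-based agent might record and explain its decisions. The class names, plans and goals are invented for illustration and are not taken from any real nursebot system.

```python
# A minimal, hypothetical sketch of goal/plan-style decision-making with
# built-in explanations. All names here are illustrative only.

class Plan:
    def __init__(self, name, goal, condition, actions):
        self.name = name            # e.g. "administer-medication"
        self.goal = goal            # the goal this plan can achieve
        self.condition = condition  # predicate: is this plan applicable now?
        self.actions = actions      # the steps the plan carries out

class Agent:
    def __init__(self, plans):
        self.plans = plans
        self.log = []  # record of (action, plan, goal) for explanations

    def pursue(self, goal, situation):
        # Choose the first applicable plan for the goal; plans can be
        # re-combined at run time because selection depends on the situation.
        for plan in self.plans:
            if plan.goal == goal and plan.condition(situation):
                for action in plan.actions:
                    self.log.append((action, plan.name, goal))
                return plan.name
        raise RuntimeError(f"no applicable plan for goal: {goal}")

    def explain(self, action):
        # Human-oriented explanation: which plan and goal led to the action.
        for act, plan, goal in self.log:
            if act == action:
                return (f"I performed '{act}' because it is a step of plan "
                        f"'{plan}', which I chose to achieve goal '{goal}'.")
        return f"I have no record of performing '{action}'."

# Example: a nursebot-like agent explaining a medication decision.
give_meds = Plan("administer-medication", "treat-patient",
                 lambda s: s.get("patient_stable", False),
                 ["check-dosage", "administer-medication"])
agent = Agent([give_meds])
agent.pursue("treat-patient", {"patient_stable": True})
print(agent.explain("administer-medication"))
```

Asked why it administered medication, such an agent can point to the plan and goal behind the action rather than an opaque calculation.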

A second essential basis for trust is being able to provide certain guarantees about the system's behaviour.

Traditionally, a software system's behaviour is assessed by running it through a range of test cases. This isn't effective for autonomous systems because the space of possible situations is far too large to cover with tests.

Alternatives to testing exist and are being adapted for use with autonomous systems. These alternatives, known as formal verification techniques, systematically analyse the system using mathematical reasoning to guarantee that certain properties always hold.

For instance, it is important to know that no matter what happens, a nursebot that detects an incapacitated person will always seek medical help immediately, regardless of its other active goals.

A sophisticated analysis of the system could guarantee an important property like that. However, one challenge is identifying which properties need to be guaranteed in order for the system to be appropriately trusted.
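As a toy illustration of how this differs from testing, the Python sketch below enumerates every state of a deliberately tiny nursebot model and checks the seek-help property in all of them. The states and decision rule are invented for illustration; real verification tools, such as model checkers, work on far richer models.

```python
# A toy, hypothetical illustration of exhaustive analysis: instead of
# sampling test cases, enumerate every reachable state of a tiny nursebot
# model and check a safety property in all of them.

from itertools import product

def next_action(state):
    # The decision rule under analysis: seeking help must pre-empt all
    # other goals whenever an incapacitated person is detected.
    person_incapacitated, goal = state
    if person_incapacitated:
        return "seek-medical-help"
    return goal  # otherwise continue with the current goal

def check_property():
    # Exhaustively enumerate every combination of observation and active
    # goal -- feasible here only because the model is tiny.
    goals = ["deliver-medication", "guide-visitor", "recharge-battery"]
    for person_incapacitated, goal in product([True, False], goals):
        action = next_action((person_incapacitated, goal))
        if person_incapacitated:
            assert action == "seek-medical-help", (
                f"property violated: incapacitated person, goal={goal}")
    print("Property holds in every reachable state of the model.")

check_property()
```

Because the check covers every state rather than a sample of them, a pass is a guarantee about the model, not just evidence from a finite set of tests.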

Funding

  • University of Otago