Wednesday 10 May 2023 9:08am


When something goes wrong, we intuitively want to hold someone accountable and have them pay for, and learn from, their mistake. But what happens when the thing that goes wrong is the result of artificial intelligence (AI)?

This is the question PhD candidate Briony Blackmore has asked herself while writing her thesis on how society could go about assigning moral responsibility when using AI.

Blackmore, who is completing her PhD through the philosophy programme, says it's a salient area to investigate as AI rapidly becomes more complex, learning and producing outputs that can't be anticipated.

“Who can we assign moral responsibility to, or can we not assign it to anyone?”

It is important that regulations and policies are put in place in Aotearoa New Zealand which align with a set of moral values and outline how victims are cared for when something goes wrong, she says.

She looked at a set of cases where there was no responsible actor involved – neither the developer nor the operator – outcomes she says will feel unsatisfying to complainants.

There are going to be some cases in which we can determine that a person involved in development or deployment has done something negligent or intentionally harmful, and in those cases responsibility could lie with them.

But a lot of what she has found goes against people's intuitions – in some cases responsibility can't be assigned – which is why developers and deployers need to consider the ethical impacts of an AI before it is deployed, rather than rushing to get something on the market.

“I guess I care about AI working for the community rather than against it.”

Many members of the public are completely unaware they're interacting with AI on a daily basis and have little understanding about what it means for them.

“It's everywhere.”

Some people will be familiar with Tesla's self-driving cars and ChatGPT, a chatbot that provides answers to an array of questions, but the Facebook algorithm that determines which ads show up in someone's feed is also powered by AI, as are the song recommendations Spotify makes.

Predictive risk models use AI to predict how likely it is someone will repay their mortgage, or the likelihood of a criminal reoffending, Blackmore says.

Amazon developed an AI-powered hiring programme to find the best possible candidate for a job, but because the AI learned from historical hiring data, it only recommended men.

AI is an interesting space to work in as it's moving “very fast, it's hard to keep up with”, she says.

She has had to write entirely new sections on generative AI for her thesis as it did not exist when she started her PhD three years ago.

Once she has completed her PhD, Blackmore would like to go into a role in government or industry to help make ethical recommendations around the use of AI.

-Kōrero by internal communications adviser, Koren Allpress
