
People aren’t ready to let robots and AI decide on euthanasia, study finds


Study participants were told of scenarios where AI or humans could decide on end-of-life care (Picture: Getty)

You’d probably let AI compose an email for you. Check a medical scan for signs of cancer? Most likely that too.

But deciding when life support should be switched off for a patient in a coma?

A new study has shown there is one clear line where we don’t yet want a robot to take control: deciding the time of death.

This may not be entirely surprising, given most would hope for humanity at the end of life.

And so far, no healthcare provider allows AI to decide when to switch off life support.

But as both artificial intelligence and assisted dying become a bigger part of global healthcare systems, the question will only grow more relevant – so researchers have examined our attitudes towards the prospect.

A ‘suicide pod’ known as ‘The Sarco’ is seen in Rotterdam, Netherlands, last year (Picture: AP)

An international study led by the University of Turku reveals that people are significantly less likely to accept euthanasia decisions made by artificial intelligence (AI) or robots compared to those made by human doctors.

Participants in Finland, Czechia, and the UK were told about scenarios where patients were in end-of-life care, often in a coma.

Even when decisions about ending life support were exactly the same, they were accepted less if made by AI than by humans.

In other words, how we feel about a decision is not only about whether it was the right or wrong call, but who made it and how.

Researchers called this phenomenon the ‘Human-Robot Moral Judgment Asymmetry Effect’, saying we hold robots to a higher moral standard.

However, if the decision was to keep life-support switched on, or if patients could request assisted death themselves, there was no judgement asymmetry between the decisions made by humans and AI.

The findings echo similar conclusions by AI experts, who say humans are not yet at a point to accept giving AI responsibility for serious decisions about our lives.

A survey of the future of AI in the workplace by Microsoft found that in decisions which require accountability, we still want humans to be the ones making the call.

Speaking after the report’s release, Alexia Cambon, senior research director at the company, told Metro that there was a ‘primal question’ over how we should manage this new type of intelligence.

A medical robot by SquareMind designed to facilitate cancer screening using artificial intelligence is displayed during the Vivatech fair in Paris last year (Picture: Getty)

She cited a recent paper by AI thinker Daniel Susskind, looking at what work will remain for humans to do once AI has thoroughly integrated into the workplace.

‘One of them is the moral imperatives of society,’ she said. ‘As a society, I can’t see a short-term future anyway in which we will be happy for agents to manage humans.

‘An agent can’t make me feel seen, an agent can’t make me feel connected to another human.’

Mr Susskind said his view was that ultimately the paid work left for humans would be ‘none at all’, but that there are currently ‘moral limits’ where human beings believe they require a ‘human in the loop’.

Michael Laakasuo, the lead investigator in the assisted dying study, said: ‘Our research highlights the complex nature of moral judgements when considering AI decision-making in medical care.

‘People perceive AI’s involvement in decision-making very differently compared to when a human is in charge.

‘The implications of this research are significant as the role of AI in our society and medical care expands every day.

‘It is important to understand the experiences and reactions of ordinary people so that future systems can be perceived as morally acceptable.’

Get in touch with our news team by emailing us at webnews@metro.co.uk.

For more stories like this, check our news page.
