A personalized AI tool might help some reach end-of-life decisions—but it won’t suit everyone

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, I’ve been working on a piece about an AI-based tool that could help guide end-of-life care. We’re talking about the kinds of life-and-death decisions that come up for very unwell people: whether to perform chest compressions, for example, or start grueling therapies, or switch off life support.

Often, the patient isn’t able to make these decisions—instead, the task falls to a surrogate, usually a family member, who is asked to try to imagine what the patient might choose if able. It can be an extremely difficult and distressing experience.  

A group of ethicists has an idea for an AI tool that they believe could make things easier. The tool would be trained on information about the person, drawn from things like emails, social media activity, and browsing history, and from those data it could predict what the patient might choose. The team describes the tool, which has not yet been built, as a “digital psychological twin.”

There are lots of questions to answer before anything like this is introduced into hospitals or care settings. We don’t know how accurate it would be, or how to ensure it wouldn’t be misused. But perhaps the biggest question is: Would anyone want to use it?

To answer this question, we first need to ask who the tool is being designed for. The researchers behind the personalized patient preference predictor, or P4, had surrogates in mind—they want to make things easier for the people who make weighty decisions about the lives of their loved ones. But the tool is essentially being designed for patients: it will be based on patients’ data, and it aims to emulate these people and their wishes.

This is important. In the US, patient autonomy is king. Anyone who is making decisions on behalf of another person is asked to use “substituted judgment”—essentially, to make the choices that the patient would make if able. Clinical care is all about focusing on the wishes of the patient.

If that’s your priority, a tool like the P4 makes a lot of sense. Research suggests that even close family members aren’t great at guessing what type of care their loved ones might choose. If an AI tool is more accurate, it might be preferable to the opinions of a surrogate.

But while this line of thinking suits American sensibilities, it might not apply the same way in all cultures. In some cases, families might want to consider the impact of an individual’s end-of-life care on family members, or the family unit as a whole, rather than just the patient.

“I think sometimes accuracy is less important than surrogates,” Bryanna Moore, an ethicist at the University of Rochester in New York, told me. “They’re the ones who have to live with the decision.”

Moore has worked as a clinical ethicist in hospitals in both Australia and the US, and she says she has noticed a difference between the two countries. “In Australia there’s more of a focus on what would benefit the surrogates and the family,” she says. And that’s a distinction between two English-speaking countries that are somewhat culturally similar. We might see greater differences in other places.

Moore says her position is controversial. When I asked Georg Starke at the Swiss Federal Institute of Technology Lausanne for his opinion, he told me that, generally speaking, “the only thing that should matter is the will of the patient.” He worries that caregivers might opt to withdraw life support if the patient becomes too much of a “burden” on them. “That’s certainly something that I would find appalling,” he told me.

The way we weigh a patient’s own wishes against those of their family members might depend on the situation, says Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine in Houston, Texas. The opinions of surrogates might matter more when a case is medically complex, for example, or when interventions are likely to be futile.

Rahimzadeh has herself acted as a surrogate for two members of her immediate family. She hadn’t had detailed discussions about end-of-life care with either of them before their crises struck, she told me.

Would a tool like the P4 have helped her through it? Rahimzadeh has her doubts. An AI trained on social media or internet search history couldn’t possibly have captured the memories, experiences, and intimate relationships she shared with her family members, which she felt stood her in good stead to make decisions about their medical care.

“There are these lived experiences that are not well captured in these data footprints, but which have incredible and profound bearing on one’s actions and motivations and behaviors in the moment of making a decision like that,” she told me.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full article about the P4, and its many potential benefits and flaws, here.

This isn’t the first time AI has been proposed as a way to make life-or-death decisions. Will Douglas Heaven wrote about a different kind of end-of-life AI—a technology that would allow users to end their own lives in a nitrogen-gas-filled pod, should they wish.

AI is infiltrating health care in lots of other ways. We shouldn’t let it make all the decisions—AI paternalism could put patient autonomy at risk, as we explored in a previous edition of The Checkup.

Technology that lets us speak to our dead relatives is already here, as my colleague Charlotte Jee found when she chatted with the digital replicas of her own parents.

What is death, anyway? Recent research suggests that “the line between life and death isn’t as clear as we once thought,” as Rachel Nuwer reported last year.

From around the web

When is someone deemed “too male” or “too female” to compete in the Olympics? A new podcast called Tested dives into the long, fascinating, and infuriating history of testing and excluding athletes on the basis of their gender and sex. (Sequencer)

There’s a dirty secret among Olympic swimmers: Everyone pees in the pool. “I’ve probably peed in every single pool I’ve swam in,” said Lilly King, a three-time Olympian for Team USA. “That’s just how it goes.” (Wall Street Journal)

When saxophonist Joey Berkley developed a movement disorder that made his hands twist into pretzel shapes, he volunteered for an experimental treatment that involved inserting an electrode deep into his brain. That was three years ago. Now he’s releasing a new suite about his experience, including a frenetic piece inspired by the surgery itself. (NPR)

After a case of mononucleosis, Jason Werbeloff started to see the people around him in an entirely new way—literally. He’s one of a small number of people for whom faces morph into monstrous shapes, with bulging sides and stretching teeth, because of a rare condition called prosopometamorphopsia. (The New Yorker)

How young are you feeling today? Your answer might depend on how active you’ve been, and how sunny it is. (Innovation in Aging)


