
Surrogate decision making has some issues.  Surrogates often either don’t know what patients would want, or think they know but are wrong, or make choices that align with their own preferences rather than the patient’s.  After making decisions, many surrogates experience regret, PTSD, and depressive symptoms.  Can we do better?

Or, to phrase the question for 2024, “Can AI do better?” Follow that path and you arrive at a potentially terrifying scenario: using AI for surrogate decision making.  What?!?  When Teva Brender and Brian Block first approached me about writing a thought piece about this idea, my initial response was, “Hell no.”  You may be thinking the same.  But…stay with us here…might AI help to address some of the major issues present in surrogate decision making? Or does it raise more issues than it solves?

Today we talk with Teva, Dave Wendler, and Jenny Blumenthal-Barby about:

  • Current clinical and ethical issues with surrogate decision making
  • The Patient Preferences Predictor (developed by Dave Wendler) and the Personalized Patient Preferences Predictor (an updated idea from Brian Earp), with commentary from Jenny
  • Using AI to comb through prior recorded clinical conversations with patients to play back pertinent discussions; to predict functional outcomes; and to predict patient preferences based on prior spending patterns, emails, and social media posts (Teva’s thought piece)
  • A whole host of ethical issues raised by these ideas, including the black-box nature of AI, the motivations of private AI algorithms run by for-profit healthcare systems, turning an “is” into an “ought”, defaults and nudges, and privacy.

I’ll end this intro with a quote from Deb Grady in an editor’s commentary to our thought piece in JAMA Internal Medicine about this topic: “Voice technology that creates a searchable database of patients’ every encounter with a health care professional? Using data from wearable devices, internet searches, and purchasing history? Algorithms using millions of direct observations of a person’s behavior to provide an authentic portrait of the way a person lived? Yikes! The authors discuss the practical, ethical, and accuracy issues related to this scenario. We published this Viewpoint because it is very interesting, somewhat scary, and probably inevitable.”

@alexsmithmd.bsky.social

 

** NOTE: To claim CME credit for this episode, click here **

 


Eric 00:10

Listeners, before we start the podcast, we’d just like to say we’re doing an end-of-the-year push. You can make a tax-deductible donation to GeriPal so we can continue to produce these podcasts in 2025. It costs us about $20,000 a year to run this podcast with our producers and all the other costs. Alex and I don’t make anything on it.

Alex 00:28

It’s a hobby.

Eric 00:29

It’s a hobby, and we love it.

Eric 00:56

And if you’d like to donate, go to GeriPal.org and click the donate button to make your tax-deductible donation to the GeriPal Podcast. Thank you.

Eric

Welcome to the GeriPal Podcast. This is Eric Widera.

Alex 01:10

This is Alex Smith.

Eric 01:11

And Alex, we’ve got a full house today.

Alex 01:13

We’ve got a full house. We are delighted to welcome back returning guest Jenny Blumenthal-Barby, who’s a philosopher and bioethicist and associate director of the Center for Medical Ethics and Health Policy at the Baylor College of Medicine. Jenny, welcome back to GeriPal.

Jenny 01:27

Thanks. Good to be here.

Alex 01:29

And we’re delighted to welcome Teva Brender, who’s a third year resident in internal medicine at UCSF. Teva, welcome to the GeriPal Podcast. And finally, our last guest is Dave Wendler, who’s a senior investigator and head of research ethics at the National Institutes of Health Department of Bioethics. Dave, welcome to the GeriPal Podcast.

Dave 01:47

Thanks for inviting me.

Eric 01:49

So, Alex, we’re going to be talking.

Alex 01:52

About AI and surrogate decision making. But before we jump into that, who has a song request for Alex?

Dave 02:03

So I wanted to request Someday Never Comes by CCR.

Eric 02:09

Why did you request this song, Dave?

Dave 02:12

It’s a little bit autobiographical, and it ties into the session. When I was younger, I was the kind of kid who didn’t ask questions of other people. I tried to figure things out on my own. I remember one time I was trying to figure out how vision worked, and I had this whole theory. I’m the youngest kid, and I’d talk to my older siblings about my theories and they’d say, in this case, look, that’s stupid. People figured this out a long time ago, and they’d tell me.

And so I always thought, oh, I see, when you get older and you become an adult, you understand everything. That was my assumption. I got to be an adult and realized that wasn’t true, and then I sort of despaired for a while. And so I think the challenge with surrogate decision making for decisionally incapacitated patients is realizing we’re probably not going to get it right. Not always, anyway. But still, we need to work really hard. It’s important, and we should try to do the best we can. So, someday never comes. We’re never going to get this perfect, but hopefully we can do better in the future.

Alex 03:12

Mm. Yeah. Fun choice. And I think this was the last song that CCR recorded as a single and released. Here’s a little bit of it.

Alex 03:20

(singing)

Dave 04:06

Thank you.

Eric 04:06

That was awesome. Thank you, Alex. Great song request, Dave. So I’m going to turn to you, Dave. We’re going to be talking about AI and surrogacy. What issues are we dealing with right now with surrogacy? You highlighted some of them in the song request. How do you think about that?

Dave 04:24

Right. Well, first, I just want to say I appreciate Alex, I don’t know if this was intentional, adding a little of the John Fogerty twang into that song. That was great. So I’ve been working on this issue for about 20 years. I’m a philosopher, and I’m interested in practical issues that have philosophical implications. And this one is: how do we respect people who can’t make their own decisions, given how dominant respect for autonomy is?

At the time, about 20 years ago, when I started working on this, there were smatterings and, I think, some recognition that although we had had advance directives at that point for over 30 years and were trying to figure this out, we still faced two really significant problems. The first is that a lot of patients, particularly at the end of life, probably the majority of them, can’t make their own decisions. We have to rely on somebody else to make decisions for them. The current process we have, relying on DPAs and next of kin, often doesn’t get patients treated in the way they want to be treated. The people who make those decisions often don’t know the treatment preferences of the patient.

So that’s what we call predictive accuracy, and we’re pretty bad at predictive accuracy. That’s one problem. The second problem, which we looked into with a previous fellow, Annette Rid, is the impact on the surrogates themselves of making these decisions for decisionally incapacitated patients. And for a lot of them, the impact is bad to really bad. Some data suggest a quarter to a third of surrogates, after they make these decisions, experience symptoms of PTSD that can last months or years, feeling like they’re responsible, that they’re the ones who had to make these decisions.

One of the studies has these really compelling quotes, one of which was a surrogate looking back on it and said, yeah, I remember the day the doctor came into the ICU and asked me if I wanted to kill my dad today. And that’s how a lot of these people experience making those decisions. It’s terrible for them.

Eric 06:26

Well, can I, can I? Because I remember like older data looking at how accurate surrogates are. I think they asked both patients and their surrogates about a bunch of end of life questions, including intubation, CPR. I think amputation was on there. It was like 60 to 65% accurate. Then they just looked at those people who were healthcare agents as assigned by durable power of attorney for healthcare and it didn’t really change anything. Is that kind of where we are now? Does that data still kind of ring true? Has anybody kind of followed that up?

Dave 07:03

Yeah, there have been some follow-ups. The data does ring true, but unfortunately even that number of about two-thirds accuracy is probably, in our context, an overestimate, because those data include decisions that anybody who’s a clinician, what you guys do day to day, wouldn’t consider a problem. You have somebody who’s 25 years old and they have a significant bleed. If you stop the bleeding, they’re going to live for another 40 or 50 years and can expect to have a good quality of life. You don’t agonize over those decisions.

You basically don’t even ask the family, you just stop the bleeding. That’s just what you do. It’s when we get to the really hard scenarios, like somebody who has moderate Alzheimer’s disease and needs to be intubated, that things get hard. And in those cases, the accuracy is getting close to 50%. In other words, frighteningly close to random guessing or flipping a coin. So, yep, that data still rings true, and those studies are looking…

Eric 08:02

At people who are thinking, in the future, if I had Alzheimer’s, what would I want?

Alex 08:07

Right.

Eric 08:08

Because we wouldn’t be asking somebody with severe Alzheimer’s disease, what do you want, and then asking their surrogates. I guess the other question, which is probably more interesting, is this idea of decisional regret. Has anybody looked at people who were seriously ill and got an intervention based on what a surrogate thought the patient would want? Is there any regret?

Alex 08:36

Right.

Eric 08:37

That’s what we really want to know. Did what the surrogate recommended to the team, or what was decided by the surrogate and the medical team…

Dave 08:47

Yeah.

Eric 08:48

Was that the right choice once the patient regained capacity?

Dave 08:54

Right, right. Yeah, that’s a great question. There are two aspects to that. One is a philosophical, ethical question of whether you want to prioritize the prospective preferences and values of the patient or the retrospective ones. We could debate that for a couple of hours if we wanted to. The other problem, although I think you’re asking the right question, is that doing that study would always have a huge selection bias, because of the people for whom you made a decision to intubate or not intubate, say half of them die.

So you can’t ask them, did the surrogate make the right decision? Are you glad they didn’t intubate you and you died? You’re only going to ask the people who recover, and recover sufficiently to be able to understand your question. So it’s really unclear how we can even get the data, which I think just makes this even more challenging.

Alex 09:44

Yet another use case for AI. If we had just uploaded people to AI personas, then we could ask the AI persona after the patient dies. That’s kind of tongue in cheek, but I’ll push in here because the future is strange and we don’t know what it’s going to be like. Teva, I want to go to you. You’re in it. You’re a third-year resident. Does this issue ring true to you clinically?

Teva 10:12

Absolutely. Every day in the ICU we face these questions, and I think the way that you and Brian and I structured our viewpoint was to go from the most plausible to the most speculative use cases. Right now we’re talking about predicting what patients would want, which, as we’re saying, is very challenging for humans. It would also be very challenging for AI. We talk about what data we are using to make these predictions. We have to make sure that the…

Eric 10:40

Data is not biased. Before we jump into AI, Alex, was that your question?

Alex 10:44

No, no. I’m wondering whether, in your clinical, everyday reality, this comes up. Does this ring true to you from your work in the ICU, from your work with patients?

Eric 10:57

And when you say this, Alex, with…

Alex 10:58

Dementia? I mean, the challenge of surrogate decision making. Are you encountering family members who are just, like, at a loss for what to do and tortured by it?

Teva 11:11

Absolutely. I think there’s also the question of surrogates who just don’t know the patient well. We see this a lot at the VA: patients who are sort of estranged from family members who then have to make these decisions for somebody they don’t really know. And then there are the surrogates who do know the patient well, but again, they haven’t talked about the specific intervention in question. Really they’re trying to understand what reasonable quality of life this person can expect to achieve, and they’re agonizing over every individual decision. Do we do antibiotics or no antibiotics? It’s not even necessarily the big one, CPR or no CPR. It’s every little decision along the way.

Eric 11:55

And Jenny, your thoughts on this, like, isn’t this just fixable, like, for the people who have someone in their lives that they can turn to, let’s just do an advance directive, let’s get that DPOA for healthcare paperwork in, and let’s just make sure that they have a conversation about what their healthcare preferences are if they got really sick.

Jenny 12:19

Yeah, I mean, I think that is a good start, but there’s a lot of evidence that shows that that doesn’t get people very far. Right. So we can have a conversation, but a conversation is going to have to revolve around so many different cases: well, what would happen if I were in this condition or that condition? And then the condition the patient’s actually in might be a little bit different, and that might cause the surrogates to question the extent to which the situation at hand is analogous to the conversations that they’ve had.

So I think that’s one challenge for surrogates and family members, even if you have conversations. And the other is the challenge, which we know really well from fields like decision psychology and behavioral economics, of people predicting what they would want in some future state. We often mispredict what our preferences would be, what our emotions would be, how we would feel. So I think those are a couple of challenges even in the cases where conversations happen with surrogates.

Eric 13:27

Okay, so I’m feeling that there’s a lot of issues with just relying on surrogates to think about what the patient would want, what their values are, what their preferences would be. Is there potentially another way to do this? Dave, you wrote a little bit about this.

Dave 13:47

Yeah. So let me first emphasize the extent of the negative here. Hopefully we’ll go upbeat towards the end of the broadcast. But just to be a little more negative, two things. When people hear that surrogates’ predictive accuracy is pretty bad, the initial reaction is to think, well, there are ways we could solve this. One, as Teva said, a lot of times the surrogate is somebody who doesn’t know the patient well, so let’s make sure we get somebody who knows the patient well, ideally, say, a first-degree relative.

That’s one thing you could do. Another thing you could do is really encourage discussion ahead of time between the appointed surrogate and the patient, while the patient’s competent, about the patient’s preferences and values. And then, once we have the surrogate, let’s really try to give them the information they need to make decisions. Unfortunately, there’s decent data that none of those things work. It turns out there’s at least pretty good data that actually the better you know somebody, the more intimate your relationship with them, the worse you are, not the better, at predicting their preferences and values.

We could talk about why that is, but it looks like there’s really good empirical data for it. If we care about predictive accuracy, relying on the next of kin might be a bad way to go. There have also been some really nice empirical studies to see whether or not prospective discussions increase predictive accuracy. At least in the ones that have been done so far, which are pretty good ones, they don’t. And so it looks like those aren’t going to be solutions. And given that those are the real solutions we have for advance directives, I think we need to look to other places for possible solutions, or at least ways to do better.

Eric 15:38

Okay, so I’ve got another question then, before we jump into ways that we can do better, because a lot of this is focused on the idea of substituted judgment. Right? Like we’re going to try to guess what somebody else would want, versus the idea of best interests: what’s important to me is that my family, my surrogate, decides what they think is best for me based on what’s going on. And I remember work from Dan Sulmasy looking at this question of what people actually want. Do they want substituted judgment or best interests? And it was kind of like a curve: most people are in the middle, wanting people to use a mix of best interests and substituted judgment, not just exactly what I would want, although there are some people who would want that.

Dave 16:27

Yeah.

Eric 16:27

Just like there are some people who just want best interests used, but most people want a mix. So how important is it that we, as surrogates, make decisions purely based on the preferences they think would be important for…

Dave 16:42

Right.

Eric 16:43

That patient.

Dave 16:44

I’ll give you my view, and then these guys can jump in. First of all, it depends on how strong you think the view is and how confident you are in it. So if the person was somebody who was absolutely committed to never wanting a blood transfusion, they were that way their whole life, and you know it, then it seems to me that respect for them just trumps anything about best interests. And the fact that they’ve now lost decisional capacity doesn’t mean that you jump in there and start giving them two units of blood.

So I think it depends on how confident you are. If you’re not that confident, then I think you should shift to best interests. The problem, and you guys are the clinicians, is that, as I said at the top, when it’s really clear what’s in the person’s best interest, we don’t struggle very much with this. It’s the times when you’ve got somebody who’s really sick and they’ve been in the ICU for three weeks and you’re not sure you can get them through, but you might be able to, but it might take a really long time.

You don’t know what their quality of life is going to be on the other end. And so you could have four intensivists with 30 years of training each, and they could all disagree. We do consults all the time on these, where some people will say, no, it makes sense to try, and others say, no, it doesn’t make sense to try, the chances are too low.

Alex 18:00

Yeah, I think those areas of uncertainty. Yeah, go ahead, Jenny.

Jenny 18:03

Yeah, I was just thinking there’s also a sort of third area between substituted judgment and best interests. This has been developed by Dan Brudney, who’s a philosopher, and some other people: the notion of authenticity. The thinking here is that we’ve gotten too hung up on the framework of substituted judgment, on thinking about what the patient would say they would want in this moment if they could speak, or what they wrote down and said they would want in this future circumstance.

But really, what a lot of patients want, and what families do when they’re engaging in surrogate decision making, is trying to figure out the sort of person that this patient was. Right? Like, what were their values? I remember Dan Brudney has this example of a guy who always wanted to be out riding his Harley and was super independent. That’s what surrogates do: we try to get a sense of what this person’s authentic self would want us to do in that situation. And that’s a little bit different from the question of substituted judgment. It’s also a little bit different from the question of what is in the patient’s best interests.

Eric 19:13

So, if only there were a way we could predict what the authentic self would want.

Alex 19:19

Wait, just before we get there, Teva, did you want to jump in with something?

Teva 19:22

I was just going to say, none of this considers the legal landscape, which I think, Jenny, you talk about in your commentary as well: how courts have interpreted the role of surrogates, and whether this should be best interests or substituted judgment. So just to throw another curveball in there.

Eric 19:38

There, now we should go to the question. Alex.

Alex 19:41

Well, yeah, yeah, let’s go. Eric, let’s get into the PPP, or the personalized PPP.

Eric 19:47

So, Dave, let’s talk about the PPP. Can we predict what patients would potentially want, or what their values are? You came up with the idea of the PPP. What was that?

Dave 20:02

So it started, as I mentioned, about 20 years ago. There were these two problems. Predictive accuracy is bad, in terms of what the patient wants or, as Jenny was saying, in terms of what’s consistent with their values; it’s bad in both ways. And there’s the burden on surrogates. There was also this other data that people had known about for a long time, we didn’t come up with it: people’s treatment preferences are correlated with various aspects of them and their circumstances.

How much aggressive care you want varies depending on whether you’re 25 or 95. There are some really obvious ones, and some less obvious ones: where you were born geographically, at least in the US, can make a difference in how much aggressive care you want. Our thought initially was, what if we harness that data and use it to try to predict what treatments patients would want, what’s consistent with their values, and then give that information to the surrogate to try to help them? The thought was that maybe that could increase predictive accuracy and maybe also reduce some of the burden on them, so address both of the problems. We started out with a very simple, simplistic predictor, and we found that it was basically as accurate as surrogates were, including next of kin and patient-appointed ones.

So we think there’s at least reason to believe, we haven’t proven this yet, that if you do a much better one, for instance, age wasn’t in our algorithm, and if you add that in, it almost undoubtedly is going to get at least a little more accurate, there’s a chance we could do better with this information. Since then, as everybody knows, we’ve gotten into AI, we’ve gotten into ChatGPT, we’ve gotten into machine learning, and so there are now really different, creative approaches. We were thinking of just surveys of the general public; now there’s looking at online behavior. Teva has this really nice paper.

If you look at the recordings of discussions patients have with their doctors, and even the intonation when they talked about resuscitation, maybe that gives you information you could use to predict. Putting all that together, it seems to me there’s a really good chance it’s going to be more accurate than surrogates, but we won’t know until somebody gives us $20 million to test it.
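[Editor’s note: for readers who want a concrete picture of the original, survey-based version of the PPP that Dave describes, here is a minimal, purely illustrative sketch in Python. The data, field names, and matching rule are all invented for this example; it is not the actual PPP algorithm. The real proposal would rest on validated survey instruments (and, in newer versions, machine learning over much richer data), and any output would be offered to surrogates as one input, not a verdict.]

```python
# Illustrative only: tabulate hypothetical survey responses and report how
# often respondents similar to the patient said they would want a treatment.

from dataclasses import dataclass


@dataclass(frozen=True)
class SurveyResponse:
    age_band: str      # e.g. "65-80" (hypothetical field)
    diagnosis: str     # e.g. "moderate dementia"
    scenario: str      # e.g. "mechanical ventilation"
    would_want: bool   # respondent's stated preference


def predict_preference(responses, age_band, diagnosis, scenario):
    """Return (share of similar respondents who would want it, count matched)."""
    matched = [
        r for r in responses
        if r.age_band == age_band
        and r.diagnosis == diagnosis
        and r.scenario == scenario
    ]
    if not matched:
        return None, 0  # no similar respondents; offer no prediction
    share = sum(r.would_want for r in matched) / len(matched)
    return share, len(matched)


if __name__ == "__main__":
    # Hypothetical survey data (invented numbers, for illustration only).
    survey = [
        SurveyResponse("65-80", "moderate dementia", "mechanical ventilation", False),
        SurveyResponse("65-80", "moderate dementia", "mechanical ventilation", False),
        SurveyResponse("65-80", "moderate dementia", "mechanical ventilation", True),
        SurveyResponse("25-40", "healthy", "mechanical ventilation", True),
    ]
    share, n = predict_preference(
        survey, "65-80", "moderate dementia", "mechanical ventilation"
    )
    if share is None:
        print("No similar respondents; no prediction offered.")
    else:
        print(f"{share:.0%} of {n} similar respondents said they would want it.")
```

Even in this toy form, the output is the kind of population-level statement discussed next: “most people in a situation like your dad’s said they would (or would not) want this,” given to the surrogate as information rather than as a decision.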

Eric 22:23

So, potentially using demographics, geography, and survey data to help us get to that question, what would a similar patient want in this situation, to help family members, those surrogates, decide. Yeah, that seems right. Before we talk about AI, Jenny, this also reminds me a little bit of the podcast we did on behavioral economics and nudging, because this feels a little bit like a nudge: oh, we have this data that most people in situations similar to your dad’s would want this, right?

Alex 23:10

What is that called? Is that the norming nudge?

Jenny 23:15

Yeah. Yeah, I guess you’re communicating a kind of social norm, right, to the family about what a patient similar to their father would want.

Eric 23:23

Yeah, I think the interesting thing there is that it’s not a black box; we have all the inputs that are going into it, the survey data. But it is using population-based data to potentially make an individual decision, which is what everything we do in medicine, evidence-based medicine, is based on. But I don’t know, it feels a little bit different here. Teva, Jenny, your thoughts on this?

Teva 23:49

Well, I think what we’re talking about, and Dave went, I think, from the PPP to the personalized PPP, is that there’s the potential that it does become a black box, and that you look at, you know, an individual’s purchasing behavior. So the example that we give in…

Eric 24:05

Our pieces. And real quick, just to highlight, since we’ve been talking about your piece: you have a JAMA IM…

Teva 24:12

Yes.

Eric 24:13

Editorial with Alex on the use of AI in surrogate decision making.

Teva 24:20

That’s right.

Eric 24:21

We’ll have a link to that in our show notes. What did you suggest in that paper? Give our audience who haven’t read it yet an idea of what you were talking about.

Teva 24:29

Yeah, so Brian Block, who can’t be here, was one of the other co-authors, and we approached Alex, whose initial reaction, I don’t know if you remember this, was heck no to using AI to help surrogates. But we kind of steel-manned our argument. And so we started with a vignette. The vignette we give is a patient who has had a cardiac arrest and a surrogate who doesn’t know the patient well. And so we give the example: okay, this person was a member of the Sierra Club, they liked to volunteer at the Humane Society walking the dogs.

This is clearly somebody who would value their functional independence. And then we give three different examples, working from the least speculative to the most speculative, of how AI could help this surrogate. One is ambient voice technology, which exists here at UCSF, they’re piloting it, where you can record interactions between patients and clinicians, and maybe in the future machine learning can pull out those snippets, even if it’s just the doorway conversation, about the values, goals, and preferences that really guide these decisions. The second is predicting functional outcomes.

So right now we have ways to predict 30-day mortality, admission to a SNF, et cetera. But what about the functional outcomes that mean the most to patients? And then the third is predicting what patients would want, and that’s, I think, what we’re talking about here with the PPP. But the problem comes if you start pulling in purchasing behavior and other things like that.

Eric 26:00

Millions of social media posts.

Teva 26:03

Exactly, social media. You start to get to a place where it is a black box. You don’t know how this decision was personalized, or how this prediction was personalized. We’re going beyond just population data, saying many people in your situation would want this, to saying no, specifically you, based on who you are, who your digital avatar is, would want this. And that gets to be even more controversial, I think.

Alex 26:29

Yeah. Am I the sum of my Facebook posts and my purchases on Amazon? Right. This is the kind of the fun stuff that we talked about as we were writing this paper. That sounds terrifying, right?

Teva 26:44

It is, because we put a certain face on our social media. But then you could also flip it and say, well, that’s maybe aspirational, that’s who we aspire to be. Does that say something deep about us, the person, the face that we try to put on in public? Your Amazon purchasing behavior is maybe a little bit more private, who you are when you’re just alone at home.

Dave 27:05

Yeah.

Alex 27:06

I think of the quote by, well, first of all, I think of the song that Dave chose, you know, Someday Never Comes. I also think of the quote by George Box, who said, all models are wrong, some are useful. Right. We are never going to have the perfect model. However, using predictive models, using AI, we may be able to get closer and find some model that is useful, and maybe better than the coin flip of surrogate decision making.

Eric 27:37

Better than a coin flip. Right. It’s not much above 50% correct for surrogates.

Jenny 27:44

I mean, yes, it is better than a coin flip. But I do think there’s a tension here, which is that in order for these algorithms to work, you need to feed them a ton of data. And then you think, well, what kind of data are you feeding it? And this is to Alex’s point: all of our blog posts, all of our purchasing behavior, what we’ve liked, part of that is really constructed by social media as well. Right. So how authentically does that represent myself? So I think there’s going to be this temptation.

On the one hand, we want to feed good data into the algorithm, but we’re going to have a temptation to feed a lot of data in so that we can get more predictions. That’s what these algorithms need. And then I worry about the quality or the source of the data that we’re feeding in and how representative that is of patients’ preferences about medical treatments.

Eric 28:37

Yeah, this is challenging because how much of it is what the algorithm drives people to write and to post versus how much is the algorithm actually influencing who we are as people too. This is getting hard here.

Alex 28:56

Yeah, I know. We think that AI does drive so many of these algorithms already, you know, suggesting the next thing for you to post, the next thing to look at on Facebook. So is the AI already directing our lives? Right. We thought that the Terminator scenario, Skynet, was in the far distant future, but maybe the beginnings of it are already here.

Dave 29:23

I don’t know if you guys talked about this in your session, but I sometimes wonder if we have different standards for humans and AI. I think one of the big worries, as we’ve been discussing with the PPP, as Teva said, is the chance it becomes a black box that’s making this prediction. Let’s imagine it’s even making really accurate ones, but we don’t know exactly on what basis it’s doing it, and that worries us. But now take a human surrogate. It almost seems like we valorize the black box in that case. You say to the next of kin, what would they want? And they say, he wouldn’t want aggressive treatment. I know it. I know him.

Well, on what basis do you predict that? I just know him. I can’t tell you what the basis for it is. It’s just, I know him. In that case, that’s like, great. That’s what we really want. You really know this guy in such a deep way that you can’t even point to the specific data you’re relying on. So there it seems good. And I wonder, are those different, or is it just that we don’t trust the AI, or we haven’t learned to trust it? I don’t know.

Teva 30:23

Or let’s say we have a model that’s 100% accurate but can’t explain why. Does that even help a surrogate to just be told this is what your loved one would want? Or do we want a model that’s maybe 75% accurate, but can at least explain how it came to that decision? I’m thinking about all of this through the lens of how do we help surrogates? And I think giving them the quote unquote right answer without showing them the steps along the way is not going to ultimately be helpful if we’re talking about reducing these symptoms of PTSD and how traumatic this experience can be.

Jenny 30:58

Yeah, yeah, that’s exactly right. And one of the things we bring up in the editorial is that part of the hope for a tool like the PPP is that it can reduce surrogate burden. But imagine a scenario where a surrogate is struggling with the decision, and then you just bring in this tool and say, well, the tool says that your mom would want treatment. And they’re like, but why? And it’s sort of like, well, we can’t tell you why. Or maybe you say, well, it’s based on all her Facebook posts and all these other things, but trust us, it’s accurate. I don’t know how helpful that’s going to be to surrogates, and I think it might even cause them more emotional angst and stress to struggle with what to do with that. Right.

Dave 31:40

And this, I think, is a place where we just need empirical testing. We need the $20 million because we need empirical testing; we don’t know. And I think there are just two possibilities. As you guys say, it might be that surrogates want to know the reasons why it’s predicting certain treatment preferences. But it might be, and my hope is, that we use this as what I call a soft default. You just give the information to the surrogate and say, but if you feel strongly otherwise, don’t do this.

What it could do is take the burden off the surrogate. So then the surrogate feels like, yeah, this is the prediction. I just went with it. It wasn’t my choice. I was confident that there wasn’t compelling evidence to think my dad wanted something else. I’m protecting my dad, but then I don’t feel as responsible and I don’t think we know which way it goes until we test it.

Eric 32:31

Well, I feel part of the role of the surrogate, if the patient doesn’t have capacity, is that it’s a person outside of the healthcare team, outside of the healthcare system you’re in, whom you’re asking: what would this person want, based on everything that you know? So it’s an entity that is removed from what is increasingly a for-profit institution, or even a not-for-profit making a lot of money, the big business of medicine. Somebody outside of that who says, no, this is what this person would want.

My biggest worry with using any of these models, whether it’s an AI machine learning algorithm or the PPP using demographics and survey data, is that it still feels very much part of the institution. Why should I trust it, especially an AI black-box model that could be easily manipulated? Let’s ramp up the part of the algorithm, that part of the AI, that saves our healthcare institution some more money, that nudges people towards stopping expensive stuff, or, if it were a fee-for-service model, towards doing more expensive stuff so we make more money.

Dave 33:55

Eric, I’m glad you brought that up, because in all of the discussions I’ve had and the things I’ve written, that’s actually my biggest worry. I think that’s the biggest worry about a PPP, and it’s not discussed as much. But you’re right: if we’re not careful, if it’s not done in a transparent way with people we can trust looking over it, it could get hijacked. Right.

Eric 34:17

So AI inherently is not transparent. It’s a black box in terms of how it’s making this stuff up based on, you know, millions of data points.

Dave 34:28

Yeah. And so we end up with, huh, it just happens that people who lose decisional capacity all want really expensive back surgery, which makes lots of money for our hospital, but none of them want to be in the ICU on a respirator, because we’re not making as much money off of that. That’s really amazing.

Eric 34:44

Yeah. They also all want to make sure that you give us five stars, outstanding, on our data. Jenny, again, is this going towards behavioral economics and nudging people in particular directions?

Alex 34:58

This is economics. Economics.

Jenny 34:59

I guess maybe I’m more of an optimist than a pessimist. I mean, I do think that generally we’re seeing more discussions about regulation of AI and about things that have to be made transparent and reported by developers of algorithms. A lot of those legal and regulatory requirements are in development now and will soon be released. So it is, I guess, a kind of scary example, but I don’t know how likely something like that really is, given some of the regulatory developments that we’re likely to see. But again, maybe I’m an optimist.

Teva 35:35

How about taking it back to something less conspiratorial, the example that Jenny gave where the algorithm suggests that the patient wants dialysis but the family says, I don’t think so. To me, it’s usually the other way. In adult medicine, it’s usually the other way around, where the clinicians are saying, you know, I don’t think dialysis is going to be within goals for this person who’s critically ill in the ICU, but the family is typically pushing for more of those life-saving interventions.

Alex 36:04

Well, I would just editorialize briefly and say at UCSF, that’s the way it is. It may not be elsewhere. My guess is that out in the community there’s more pressure to initiate dialysis for various reasons. But yes, continue.

Teva 36:18

Obviously, you know, I have limited experience, and it’s only this one institution. But the point being, do the clinicians only pull in these algorithms when they’re getting the answer they don’t want, or does it have to come in every time? Dave, I think you suggested maybe more of an informed assent model, but that’s another area where there could be bias, around when clinicians pull the trigger on using these algorithms.

Dave 36:45

Yeah. And I would add something we haven’t discussed, which is the increasing percentage of unrepresented patients who lose decisional capacity. We’ve been talking about people where there is a surrogate, and how we weigh the burden on surrogates against predictive accuracy. But more and more patients end up in emergency rooms and ICUs unrepresented. We can’t find somebody to make decisions for them.

On the one hand, I think a PPP could be really valuable in that case; it might be the only information we have on them. But Teva, your point then, or as Eric was saying, is that we don’t have this independent person looking out for their interests. And then you could really nudge: yeah, here’s what the PPP says, give them that expensive back surgery, and there’s no relative looking out for them to say, no, that doesn’t make any sense.

Eric 37:31

Yeah. And then I guess the question is, is it better? Are we looking for the perfect, or are we just looking for better than what we currently do? And what we currently do is generally ethics committees for that, which are part of the institution anyway, trying to combine all this data and…

Alex 37:49

Quick plug: AGS is coming out, or has just come out, with a new position statement on care of the unrepresented. And we will have a podcast with Tim Farrell and others on that issue coming up. I guess I’m not an AI, but that’s my nudge to listen to that podcast.

Jenny 38:08

So, I mean, we’ve been talking about nudges a lot, and this notion of something like this becoming a default and having a default effect. I do think that is a really important thing to think about. So you have a predictor, and the predictor says the patient would want treatment. It’s going to be really hard psychologically for the clinical team and the surrogate to override that default. And in our editorial we also talk about the potential legal concern: now you have this idea that substituted judgment says the patient would want this treatment.

And if you depart from that, if we think it’s accurate, you’re departing from like the legal standard of substituted judgment. So I think one of my major concerns is more just how this tool would actually be used. It’s great if it’s just used as an adjunct, as like part of the conversation, part of the decision, part of the information that the surrogate and the clinical team are using. But I worry that it’s just going to just become sort of too weighted or overly weighted.

Alex 39:15

Well, yeah, I guess while we’re venting our concerns, I have many, and I think this one’s been said, but I just want to reiterate it: the turning of an “is” into an “ought”. Many of the decisions people make are in favor of likely non-beneficial, potentially harmful treatment. And I worry that if we just go on the sort of decisions other people have made in similar scenarios, it may take us inexorably further and further down that pathway. If we’re nudging more and more people to make those decisions, well, then we’ll have more and more people who have made those decisions. It gets into this sort of spiral, and I don’t know where that ends up, and I worry about that.

Eric 39:57

Yeah, there’s the other question too. If we’re using past data, if AI is going through all of our notes, past hospitalizations, all of that, to come up with what I would want: how much should our past behaviors be predicting our future ones? I mean, I just think about patients with severe mental health disorders who don’t make the decision not to see a healthcare provider based on something like informed consent, but because they have a significant mental health issue that is interfering with their decision-making capacity.

And how much should we be relying on those past behaviors to predict future wants and desires? We currently see this too, right? When we talk to medical teams: oh, this patient never really interacted much with the healthcare system. Well, they never interacted much with the healthcare system because they were actively psychotic and delusional and having these significant fears. How much should that weigh into how we think about what somebody would want?

Dave 41:02

Yeah, and it’s interesting, because with the initial iteration, which was just surveys, it’s not so much of a worry, because you can assess the people from whom you’re gathering the data and at least get some assessment of their capacity. But once we go more into these AI approaches, where, for instance, we’re playing back a recording of a conversation the patient had with their clinician, then to take what they’re saying seriously we need to make a judgment that at the time they were fairly competent in that discussion. And how do we know that, and…

Eric 41:41

That it’s an authentic viewpoint for them? Because, I mean, even when I talk to my own doctor, it’s not always the authentic Eric that’s coming out. It’s like, oh, yeah, yeah, totally, I will totally do that, yes, when there’s no chance I’m going to be doing that. Is that really the authentic Eric?

Alex 42:02

Yeah, I think another concern that we should discuss, or at least briefly mention because we’re coming to the end, is privacy. Teva and Brian Block and I had this conversation where we were talking about, well, what would the AI say in some of these cases? You’d ask it, well, why did you make that decision? Well, you think that he doesn’t have kids, but actually he does, he has several children, and based on his interactions with them… So there are tremendous privacy concerns. Would people be willing to allow these things to be recorded? Would they be willing to have that information used? What sort of background or privacy concerns would that unearth or raise, et cetera?

Eric 42:53

All right, my last question to all of you: there is no current AI tool that is doing this right now. Dave is talking about getting $20 million to run some trials.

Dave 43:07

That was begging.

Eric 43:08

That was begging. How close do you think we are to this? Is this just pie in the sky? Did we just chat for 50 minutes about something that’s never going to happen? Or do you see this actually being something we’re going to see two years, five years, ten years from now? Teva, I’m going to start off with you.

Teva 43:31

Well, I’m coming here from the Bay Area, where, you know, health tech’s motto is kind of move fast and break things. So I think the end use case we’re talking about, predicting patient preferences, feels pretty far away to me. But some of the other things, like incorporating AI to predict functional outcomes, Alex and Eric, you guys do a lot of work with prognosis, I think are more on the horizon: how can we use AI to predict the functional outcomes that mean the most to patients? I think that is a more realistic scenario. Jenny?

Jenny 44:07

So I think it’s very close, because people are creating digital twins of their parents. I just listened to an episode from MIT Technology Review about a journalist who went through an exercise of creating a digital twin of her parents, interacting with it, and seeing how accurate it was. And you can imagine a future where her actual parents become incapacitated and she asks their digital twin, what would you want in this situation, Mom? So I think it’s quite close, actually, in that sense.

Eric 44:34

I remember Zoom, the makers of the video conferencing tool that we’re using, wanted to create avatars for meetings. So when you join a meeting, Dave right now could be avatar Dave, which Dave would say is his avatar. And I can imagine a future where none of us are the real ones; we just put our avatars up there, and it’s just avatars talking to each other. Like, what’s the point of it?

Dave 45:02

I mean, they’d be more interesting and funny. See, that’s how you know it’s not an avatar.

Eric 45:07

Yeah. Turn up the humor part.

Alex 45:09

And a better guitarist for the Alex avatar. Yeah.

Eric 45:12

Dave, how close do you think we are?

Dave 45:15

I was joking about the money, but I think it’s going to depend on the money, because we could get tech people to do this, but to test it, to validate it in a way that we could use it in medicine, we’re going to need to do a bunch of testing. I think Walter Sinnott-Armstrong is a guy at Duke who’s forming a company to try to do this. If somebody truly gives him the money, I think within 10 years we’ll have it.

Eric 45:39

10 years? 10 years is ages in the tech space.

Dave 45:44

Yeah, in the tech space it is. But in clinical trials? We’re going to put this into medicine; you guys are going to use it in the ICU. We’re going to have to get it tested, get it validated. It’s going to have to be approved; the FDA is going to have to look at it. Lots of stuff is going to have to happen, and that’s not as fast as tech.

Eric 46:02

Oh, they’re not going to like hearing that. Well, I want to thank all three of you. But before we end, Alex, do you want to do a little bit more of Someday Never Comes?

Alex 46:10

(singing)

Eric 47:01

Dave, Jenny, Teva, thank you for joining us on this podcast.

Dave 47:05

Thanks. This is really fun.

Jenny 47:06

Thank you all.

Eric 47:07

And thank you to all of our listeners for your continued support.

***** Claim your CME credit for this episode! *****

Claim your CME credit for EP337 “AI for surrogate decision making?!?”
https://ww2.highmarksce.com/ucsf/index.cfm?do=ip.claimCreditApp&eventID=14382


Note: If you have not already registered for the annual CME subscription (cost is $100 for a year’s worth of CME podcasts), you can register here https://cme-reg.configio.com/pd/3315?code=6PhHcL752r

For more info on the CME credit, go to https://geripal.org/cme/


Disclosures:
Moderators Drs. Widera and Smith have no relationships to disclose.  Guests Dave Wendler, Jenny Blumenthal-Barby, and Teva Brender have no relationships to disclose.

Accreditation
In support of improving patient care, UCSF Office of CME is jointly accredited by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC), to provide continuing education for the healthcare team.

Designation
University of California, San Francisco, designates this enduring material for a maximum of 0.75 AMA PRA Category 1 credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

MOC
Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to 0.75 MOC points per podcast in the American Board of Internal Medicine’s (ABIM) Maintenance of Certification (MOC) program. It is the CME activity provider’s responsibility to submit participant completion information to ACCME for the purpose of granting ABIM MOC credit.

ABIM MOC credit will be offered to subscribers in November, 2024.  Subscribers will claim MOC credit by completing an evaluation with self-reflection questions. For any MOC questions, please email moc@ucsf.edu.
