
Last week we talked about a trial of a nurse and social worker outpatient palliative care intervention published in JAMA.  This week, we talk about the other major palliative care trial in the same issue of JAMA: default palliative care consults for hospitalized older adults with COPD, kidney disease, or dementia. (See also our accompanying editorial, whose first author, Ashwin Kotwal, joins today as a co-host, and a podcast I recorded with JAMA editor Preeti Malani). For context, listen to the prior podcast with Scott on “nudges” and the prior podcast with Kate on who should get palliative care.

Three things I love about this podcast, and why you should listen.  First, in our editorial, we expressed concern that the length of stay metric is not patient centered, though it is important for health systems focused on cost savings.  It was refreshing to hear Scott and Kate express similar sentiments.  Second, we wanted to know how the palliative care clinicians felt about the increased workload, and we had some glimpses into those experiences (and hope for a future publication that fleshes them out further).  Finally, we heard about next steps and lessons learned: though this was the largest pragmatic trial of palliative care to date, it isn’t their last.  Much more to come.  And next time maybe we really will play the game where every time the word pragmatic is mentioned you have to drink 🙂

And I get to play Phish, who Scott has seen about 100 times in concert. I saw them only twice. Once as an undergraduate at Michigan, in 1994.  They played Hill Auditorium and I signed up to be an usher.  Can you imagine trying to usher Phish Heads to stay in their assigned seats?  Yeah, no. I gave up at some point and joined them.  Full electric experience. The second time was with Neil Young at the Bridge School Benefit at the Shoreline Amphitheater in California in 1998.  That concert, entirely acoustic, was impressive in its sheer musical virtuosity.  You’re kind of naked playing acoustic like that.  On today’s podcast you get me, not naked, though still playing with only two left-hand fingers (hand still broken), on “Miss You.”

-@AlexSmithMD

 

Additional links:

Trey Anastasio playing Miss You alone and acoustic; start around 21 minutes for the lead-in

Original article describing the potential for default options to improve health care delivery: https://www.nejm.org/doi/full/10.1056/NEJMsb071595

Scott on goals of care as the elusive holy grail outcome of palliative care trials  (we discussed toward the end): https://www.nejm.org/doi/full/10.1056/NEJMp1908153

The protocol paper for REDAPS: https://www.atsjournals.org/doi/10.1513/AnnalsATS.201604-308OT

Big recently funded PCORI trial comparing specialist PC delivered by default vs. generalist PC following CAPC training + a different EHR nudge:  https://www.pcori.org/research-results/2023/comparative-effectiveness-generalist-versus-specialist-palliative-care-inpatients

Kate’s “Palliative Connect” RCT: https://clinicaltrials.gov/study/NCT05502861?term=katherine%20courtright&rank=1

 

***** Claim your CME credit for this episode! *****

Claim your CME credit for EP296 “RCT of Default Inpatient PC Consults”
https://ww2.highmarksce.com/ucsf/index.cfm?do=ip.claimCreditApp&eventID=12486

Note:
If you have not already registered for the annual CME subscription (cost is $100 for a year’s worth of CME podcasts), you can register here https://cme-reg.configio.com/pd/3315?code=6PhHcL752r

For more info on the CME credit, go to https://geripal.org/cme/


Disclosures:
Moderators Drs Widera and Smith have no relationships to disclose.  Panelists Kate Courtright and Scott Halpern have no relationships to disclose. Ashwin Kotwal reports receiving a research grant from Humana Inc. and consulting for Papa Health.

Accreditation
In support of improving patient care, UCSF Office of CME is jointly accredited by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC), to provide continuing education for the healthcare team.

Designation
University of California, San Francisco, designates this enduring material for a maximum of 0.75 AMA PRA Category 1 credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

MOC
Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to 0.75 MOC points per podcast in the American Board of Internal Medicine’s (ABIM) Maintenance of Certification (MOC) program. It is the CME activity provider’s responsibility to submit participant completion information to ACCME for the purpose of granting ABIM MOC credit.

ABIM MOC credit will be offered to subscribers in November 2024.  Subscribers will claim MOC credit by completing an evaluation with self-reflection questions. For any MOC questions, please email moc@ucsf.edu.

Eric: Welcome to the GeriPal podcast. This is Eric Widera.

Alex: This is Alex Smith.

Eric: Wait, Alex. Is this a coup? Do you have a new co-host?

Alex: That’s right. Ashwin is taking your place and you’re now remote. [laughter]

Eric: Maybe we should talk after this podcast about what’s happening here, but we have a great topic today. We’re going to be talking about default palliative care, a new article that just came out in JAMA this week, paired with another article published in the same issue that we did a podcast on last week. Alex, who are our guests today?

Alex: We’re delighted to welcome some repeat guests; Scott Halpern, who’s a pulmonary and critical care physician, researcher, and director of the PAIR Center at the University of Pennsylvania. And PAIR stands for the Palliative and Advanced Illness Research Center. Scott, welcome back to GeriPal.

Scott: Pleasure to be here.

Alex: And we’re delighted to welcome Kate Courtright, who’s a pulmonary and critical care and palliative care physician researcher. Also a core faculty member at the PAIR Center at the University of Pennsylvania. Kate, welcome back to GeriPal.

Kate: Thank you. Good to see you.

Alex: And as mentioned earlier, Ashwin Kotwal is joining us as a guest host. He’s a geriatrician and palliative care doc/researcher in the UCSF Division of Geriatrics. Ashwin, welcome back.

Ashwin: Thanks for having me.

Eric: Guest host makes me feel a little bit better since I’m not in the room. It’s a better title. Scott, I think you have a song request before we talk about the JAMA piece and default palliative care. What is the song request?

Scott: I would ask that Alex finally consider, after all these years, a song by the best band in the world, Phish, and specifically this song, Miss You.

Alex: Yeah. And why this song?

Scott: A couple reasons. So, Trey Anastasio, the lead singer, wrote this song about his sister Kristy when she died about a decade ago, and that was a couple years after my father died. And when I first heard the song live, I was a pool of tears and it just hits me that way. And, of course, this is what families of patients undergoing palliative care go through. But I also think there’s a cool connection to this study in particular.

So, Alex, imagine you’re a seriously-ill hospitalized patient and I’m your doctor, unluckily for you. But potentially fortunately for you, Kate is the palliative care clinician who’s on service. Absent a default order for palliative care consultation, your myopic doctor, me, might not think that you would benefit from palliative care, and so…

Alex: You might miss me?

Scott: Kate would miss you. [laughter]

Alex: Yeah. Kate would miss me.

Kate: Indeed, I would.

Alex: That’s good. And how many Phish shows have you been to, Scott?

Scott: About 100.

Alex: That’s amazing.

Ashwin: Wow.

Alex: That is amazing. This is a beautiful, wistful, melancholy tune. I’m still in a splint, so I’m going to play it with two fingers. Here’s a little bit of it.

(Singing).

Eric: That was lovely, Alex. Scott, I’ve got a question.

Scott: Sure.

Eric: I recently learned about something called Parrotheads because we had a Jimmy Buffett song. Who was that requested by again, Alex?

Alex: That was the doula podcast, which will be released later. Death Doulas.

Eric: And I know about Deadheads. Is there a Phishheads?

Scott: Phish phans.

Eric: Phish phans?

Scott: PH.

Eric: Yeah. A little bit better than Phishheads.

Scott: Yeah. And the best shows are always on Sundays, so it becomes Sunday Phunday, also with a PH.

Eric: Thank you for the song. Okay. Kate, I’m going to jump to you. I think the last time we had you on was 2022.

Kate: Sounds right.

Eric: I remember in particular one of the things you said is that we’ve had this explosive growth in inpatient palliative care, but the large majority of evidence that we’ve had for palliative care really comes from outpatient settings. Again, we’ll have a link to it in our show notes. This JAMA study, you went from zero to not even 100, like 1,000. This is 24,000 encounters, over 15,000 patients, 11 hospitals, eight states.

Why did you decide to do this study? And I am still trying to wrap my head around how big… This is the largest palliative care study, by far, we’ve ever had. If you tallied up all the patients from all the other trials, you probably wouldn’t come to even 15,000 unique patients.

Kate: The short answer is go big or go home, right? And before we started recording, you asked me what race I was going to run and it was either another 50-miler or a 12-hour timed race. So this must be my personality. I just go for the gusto.

I think the long answer is that it was intentionally designed as a pragmatic trial in which we wanted to roll this out in that kind of setting; many hospitals that look different from each other that are different in geography, and then inclusive of populations of patients that have prevalent diseases that often are underrepresented in palliative care studies.

Admittedly, we used historical samples to project what our sample size would be and we underestimated a little because that’s a challenge in these studies. But that’s really the upshot. Scott may have additional commentary there.

Eric: What do you think, Scott?

Scott: Yeah. You’re right, Eric. My sense is if you add all the palliative care RCTs that have been done to date, you wouldn’t get a sample size that equals this. And importantly, almost all the ones that have been done were with consent, so people had to consent to be in. And that’s often ethically not only appropriate but required, but in minimal-risk studies, it doesn’t have to be that way. And the virtue of not having a consent approach is you get all-comers and there’s no selection effects. So that’s a totally differentiating feature.

But what we really were after is, how do we understand in an experimental way whether inpatient palliative care affects certain outcomes? We can’t randomize people to get inpatient palliative care or not, no IRB is going to approve that and I couldn’t live with myself. But we can randomize whether an intervention that increases the probability of palliative care delivery is or is not administered to a given patient.

So that’s what we did, that’s the way to get at experimentally what previously had been based in just observational data; this whole notion that inpatient palliative care reduces length of stay. Do we know that that’s true? We needed to find out.

Alex: Yeah.

Eric: A couple words that I heard come up too, just from my own memory… Because I know, Kate, when you were last on, we talked about pragmatic trials. What is a pragmatic trial? You mentioned this was a pragmatic trial. Just in a sentence or two, what do you mean by a pragmatic trial?

Kate: I mean that it is a reasonable alternative to what’s happening in usual care and it is embedded or rolled out into routine practice that’s already happening and you aren’t requiring people to administer, be part of, enroll, recruit, do all the things, and that your outcomes are pragmatically captured from data that exists. That’s the far end of the pragmatic spectrum, but it is a spectrum. I think we talked about that on the last podcast.

Eric: Yeah. So you’re already thinking about how do I… Is this implementable, is that even a word, in practices and hospitals that are not just these 11 hospitals?

Kate: By and large, your goal would be to make it as generalizable as possible within the confines of how different hospitals A, B, and C are, even when you try to make the environment as diverse as possible.

Eric: And one more question. I know I’ve got a bunch of researchers with me. I read one sentence that said, “A pragmatic stepped-wedge cluster randomized trial.” I think I got three of those words: stepped-wedge cluster. What is that and why did you decide to do it? Stepped-wedge cluster.

Kate: You want to take that one, Scott, or was that to me?

Eric: Either.

Scott: Go for it.

Kate: Okay. Stepped-wedge is… It’s easier with a visual, but it simply means, at its core with two groups, that every hospital in our case starts in routine care or whatever the usual care definition is and will ultimately, in a randomized fashion, transition to adopt the intervention in steps, meaning over time. And so, for us, it was a two to three month step. 2.7 months, excuse me. So that everybody by the end of the trial has the intervention and is in the intervention phase, if you will, or period.

You didn’t ask me this, but there are virtues and drawbacks to that in contrast to what might be more familiar to people, which is a parallel trial in which you would just assign half the hospitals to keep usual care, in our case, or half the hospitals randomly assigned to adopt the default order.

And then the cluster part is simply our randomization unit, and for us it was a hospital and, again, for a number of reasons. But you could have clustering within clinics, within clinicians who care for a panel of patients, that sort of thing.
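For readers who like to see designs concretely, the crossover schedule Kate describes can be sketched in a few lines of Python. This is a hypothetical illustration (made-up hospital names and an assumed four steps), not the trial's actual randomization procedure:

```python
import random

def stepped_wedge_schedule(hospitals, n_steps, seed=0):
    """Randomly assign each hospital (cluster) to the step at which it
    crosses from usual care to the intervention. Every cluster starts in
    usual care, and all clusters are in the intervention phase by the end."""
    rng = random.Random(seed)
    order = hospitals[:]
    rng.shuffle(order)
    # Split the randomized order as evenly as possible across the steps.
    groups = [order[i::n_steps] for i in range(n_steps)]
    return {hospital: step
            for step, group in enumerate(groups, start=1)
            for hospital in group}

# 11 hypothetical hospitals crossing over in 4 steps (e.g., ~2.7 months apart).
schedule = stepped_wedge_schedule([f"Hospital {c}" for c in "ABCDEFGHIJK"], n_steps=4)
for hospital, step in sorted(schedule.items(), key=lambda kv: (kv[1], kv[0])):
    print(f"{hospital} adopts the default order at step {step}")
```

The contrast with a parallel design is visible here: rather than half the hospitals never crossing over, every hospital eventually adopts the default order, which is part of what made buy-in possible.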

Eric: So you’re not randomizing the patient, you’re randomizing when they adopt the intervention, which is the default order, over time?

Kate: The hospital.

Scott: When the hospital does, yeah.

Eric: When the hospital does, yeah.

Ashwin: And, Kate, was there any blinding to this for the teams? Did they know that these orders were going to be put into place eventually and did they work with you all in thinking about the timing, or was that completely random?

Kate: The hospital assignment as to when they transition on a calendar day is randomly assigned, so they didn’t get a choice about that. But for obvious reasons, clinicians were not blinded. Palliative care teams were not blinded because, one, we needed to engage them as stakeholders for this big change to their workflow and volume, as I’m sure we’ll get into.

And then clinicians who were receiving the default orders certainly were not blinded because they could see them appear. So yes, that is true.

Scott: Yeah.

Eric: Okay.

Ashwin: And that seems more pragmatic, too, that you’d want them to be aware that their workload is going to change, and it sounds like maybe they even made some changes in response or to prepare for that.

Scott: Yeah, we neither encouraged nor prohibited them from making changes, consistent with the pragmatic nature. Some did, some didn’t. The big changes that were made were not big and not sustained in a lot of cases.

Eric: What do you mean by not sustained?

Scott: You add a FTE for a year and then it’s gone the next year.

Eric: And this was done pre-COVID, right?

Scott: This was, yes. This was a lifetime’s worth of work. Going back to when I was shopping this idea around the health systems in 2012, we landed on the stepped-wedge design because no one was… I said before you couldn’t randomize people to get palliative care or not, specifically.

Eric: Yeah.

Scott: You can’t say you, Alex Smith, are prohibited from getting palliative care. You can’t do that. We couldn’t even get hospitals to say, “Oh, the default palliative care? Yeah, let’s do that. We don’t want to be in the placebo arm or usual care arm.” So we had to pick a design where everyone ultimately got something to get buy-in in the first place.

Eric: It also feels like it covers two questions. One is, does creating default orders (it could be any default order) change the targeted behavior? So in this case, consultations. And then the second thing is, does that result, for palliative care, in any important outcomes?

Two really interesting questions. One is about this nudge. Would you consider this a behavioral economics intervention? A nudge towards a particular… Pushing people towards something, in this case a palliative care consult?

Kate: Absolutely.

Scott: Yeah. We’re making it easier to do the thing we thought that they ought to be doing.

Eric: And a little harder not to do it. Because now it requires another step to cancel the consult.

Scott: Yes. I think it’s fair to say that it was no harder to cancel the consult than it would be to order a consult under normal conditions.

Eric: Yeah.

Scott: So we’re just flipping the switch from what happens absent that same level of effort.

Eric: Beautiful. Let’s talk about that. Let’s dive into what the intervention is. What was the nudge? Before we talk about the nudge, interesting you included people 65 and older with COPD, dementia or kidney disease. Why those three? I’m going to turn to you, Scott, for this one.

Scott: At the time that we did this study… And I think it’s still true today, maybe with one exception being dementia. But in general, at the time we did the study, nearly all if not all palliative care studies had been done in cancer and/or heart failure so we wanted to expand the evidence base.

So, we picked three diseases that consensus criteria at the time had recommended inpatient palliative care consultation for patients with dementia, COPD and end-stage renal disease who also meet these certain other criteria. They have to… It’s not all-comers with those diseases, it’s people who are particularly sick.

Eric: COPD was two hospitalizations in the last 12 months or home oxygen; kidney failure was dialysis; and dementia was coming in from a long-term care facility, or having a PEG tube, or two hospitalizations in the past 12 months. So those were the three.

Scott: Wow, it’s like you read the paper. [laughter]

Eric: I got it right in front of me, Scott. Right in front of me. I shouldn’t say that. That was purely from my memory, listeners. I am that good. [laughter]

Scott: Nailed it. Yeah, exactly. We wanted to assess, do these people benefit? The other thing though is we never would’ve gotten this past go with the health system if we included everyone, right?

Eric: Yeah.

Scott: Because then think about what the response of the palliative care teams would be. “Oh, now we’re getting defaulted to see everyone with cancer and everyone with heart failure and all these other groups? No way, not sustainable.”

And related to that, when we designed the study, we said it was going to be everyone 45 and older, but we did a pilot year and when we counted the numbers, that was… All the palliative care teams were like…

Eric: Whoa, whoa, whoa. [laughter]

Scott: And that’s how we wound up at 65.

Eric: All right. And then Kate, what was the nudge itself? So, we talked a little bit about it as a default. Can you explain what actually happened?

Kate: Sure. So, when patients were identified as eligible through this EHR algorithm, with some nurse EHR input if needed, they were automatically enrolled. If they were in the intervention phase at the hospital at 2:00 PM on hospital day two, the system was programmed to create a palliative care consult order that then triggered an alert to the attending clinician that said, “This patient has COPD with two hospitalizations,” how they met the criteria, “A palliative care consult has been ordered. If you do not wish to have that, go here to cancel it.”

We tried to make it so that we weren’t hiding that they could cancel it. That alert recurred for 24 hours, appearing whenever they logged into the chart, to give them sufficient opportunity to truly opt out and to know about it. And the consult remained inactive, or pended if you will, for that whole time.

Then it wasn’t until hospital day three at 2:00 PM when it would become active and then go through whatever process that hospital’s palliative care team took in consults. It’s now going through that routine process.
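The timing Kate walks through can be sketched as a small helper. This is a hypothetical illustration of the described workflow, not the trial's actual EHR build:

```python
from datetime import datetime, timedelta

def default_consult_timeline(admission: datetime) -> dict:
    """Sketch of the described timing: on hospital day 2 at 2:00 PM a pended
    palliative care consult order is created and the attending is alerted;
    the order stays cancelable for 24 hours, then activates on hospital
    day 3 at 2:00 PM."""
    # Hospital day 1 is the admission day, so day 2 is admission + 1 day.
    order_pended = (admission + timedelta(days=1)).replace(
        hour=14, minute=0, second=0, microsecond=0)
    order_activates = order_pended + timedelta(hours=24)
    return {"order_pended": order_pended, "order_activates": order_activates}

def consult_is_active(timeline: dict, now: datetime, canceled: bool) -> bool:
    """The consult enters the palliative care team's routine process only if
    the attending never canceled it and the 24-hour window has elapsed."""
    return (not canceled) and now >= timeline["order_activates"]

timeline = default_consult_timeline(datetime(2016, 3, 1, 10, 30))
print(timeline["order_pended"])     # 2016-03-02 14:00:00
print(timeline["order_activates"])  # 2016-03-03 14:00:00
print(consult_is_active(timeline, datetime(2016, 3, 3, 15, 0), canceled=False))  # True
```

The design choice worth noticing is that canceling requires an action while proceeding requires none, which is exactly what makes it a default rather than a reminder.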

Eric: A couple of questions on that. First, I did ask Twitter did they have any questions about this and Anil Makam asked one question about the trigger, “How much effort did it take for each hospital’s IT infrastructure for automated identification, randomization and alert?” Was this a huge heavy lift, or was it pretty easy to create this trigger?

Kate: This is where maybe that bottle of alcohol’s needed. [laughter]

Eric: Enough said.

Kate: No, it’s a tough answer in part because, as Scott mentioned, this was being implemented somewhere in the 2014/15 range after funding and building and that pilot year. And I think it’s fair to say that health systems, even IT and EHR infrastructures, are already different. And they’re better, they’re more streamlined, people have the same platform across all their hospitals or are trying to move to that.

And so, for this trial… And this was also a big lesson learned for us. What was challenging about the IT builds, I believe, was a little less about actually doing the work once we understood what the work needed to be, and more the fact that although the hospitals all used the same EHR in name, they each actually had a different instance of that EHR. So there was duplicative effort required because of that, which is a health system-specific issue that, frankly, as early pragmatic trialists, we didn’t even know to ask about.

And I think that that is a challenge. But having learned that and now knowing the language to speak better to IT and IS folks about what we’re looking for, what we need, not allowing assumptions, I think we understand better how to partner and make that a more efficient process. But I don’t want to pretend that these builds happen overnight, nor should they. They do require piloting and whatnot. So it looks easy once it’s implemented because it just happens, but a lot of work goes into it.

Alex: Yeah.

Eric: I’ve got another question. So, you get the nudge, it pushes this default order. In a non-pragmatic trial, the palliative care team is going to see everybody; you’ve built extra infrastructure for the added palliative care work. You didn’t do that here. It’s a pragmatic study, so a lot of these teams look the same. If the workload’s going to increase, their team size is going to stay the same.

Could the palliative care team opt out of seeing somebody or maybe nudge the primary team, “Do you think really that palliative care needs to be seen here?” Because I did hear from another colleague, I won’t mention the name, that as a fellow, they had these auto triggers and that’s what… They would call up the primary team. They’d talk about the case and they’d say, “You know what? Maybe you could just do this. Or maybe they could be seen by outpatient palliative care instead.”

Scott: Yeah, for sure. We planned on this. We powered the study with an estimate that we’d increase the proportion of patients seen by palliative care by around 30% and we estimated that the baseline would be somewhere in the range of 10%. It turned out the baseline was about 16% and, indeed, it went up by about 30%. That was the whole idea.

There was no way we ever imagined, or even wanted for that matter, everyone meeting these criteria to be seen by the palliative care team under the intervention. That’s not a sustainable thing. And the goal isn’t… It’s not just that the goal was not to have the best be the enemy of the good. It’s actually that we didn’t know, and we still don’t know, and I think it’s probably not true, that having all of those people seen by palliative care specialists would even be the best.

Eric: Yeah. Okay. I’m going to move on unless, Ashwin, you have any other questions as our guest host?

Ashwin: I have so many questions. One is related to… And this might be jumping ahead a little bit, but I…

Eric: Don’t go to results yet because I’ve got to ask about outcomes.

Ashwin: Okay. All right. Let me pause there.

Alex: Yeah.

Ashwin: Let’s talk about outcomes first, yeah.

Alex: We’ve got to get to outcomes; let’s not bury the lede.

Eric: Yeah, I was worried. Yeah. Okay. I got one big question on outcome: why length of stay? You know I’m going to ask this question. Why did you pick length of stay as the primary outcome?

Scott: All right. I’ll take it. Look, at the time that we wrote this grant, the whole business model of inpatient palliative care was predicated on reducing length of stay.

Eric: Yeah.

Scott: That’s what it was.

Eric: Utilization, utilization, utilization.

Scott: We may not think that it’s the ideal outcome measure and Kate and I have sworn that we will never design another trial where length of stay is the primary outcome for sure. But that’s what the business model of inpatient palliative care was predicated on and hence that was the outcome that is responsive to health system decisions about whether they’re going to up or down staff palliative care teams.

Eric: I completely agree. I think it is an important outcome for stakeholders. It is probably the primary reason why palliative care in hospitals just took off much more than outpatient palliative care where we have a lot more evidence.

Scott: Yep.

Kate: Yep.

Eric: But I’m going to push back a little bit. You did something a little funky with length of stay, right? You coded deaths as the 99th percentile of longest length of stay, which is interesting because that’s something that hospitals don’t care about. So say you have two similar patients who go to the ICU with COPD, and both die: one lives for 27 days and dies in the ICU; the other gets a palliative care consult and dies four days after hospital admission.

In reality, that is a huge cost saving for the hospital. But for this primary outcome, with this funky coding, it makes it look like they had the exact same length of stay.

Scott: Yeah. And I know you don’t want to get into the weeds of the stats here, but I’ll just say two things: first of all, we did analyze it the conventional way also, and the result is exactly the same.

Eric: You just blew my thesis out of the water.

Scott: Yeah. Sorry. This is a null overall finding on length of stay, analyzed in 17 different ways, with or without death being ranked. But the reason that our primary approach ranked it is that every other approach to analyzing length of stay data is wrong, statistically wrong, because it suffers from an informative censoring problem. You don’t know what the length of stay would have been had it not been for the death. Literally, there are dozens of weedy stats papers on this.

Eric: I am not a statistician, but I would push back. If fundamentally the question is I am making a business case to the hospital and I’m going to be using this paper, I am not going to be using a length of stay coded to the 99th percentile, I’m going to be using… I just want the plain length of stay data.

Scott: I know. And we’re researchers for everyone, that’s why we did it the other way too. So we can do the scientifically correct thing and we can do the thing that you, Eric, as the CEO…

Eric: As the non-scientifically correct person.

Scott: Yeah. Then we can make everyone happy.

Eric: Yeah. Ashwin, I think I’m done. You can jump to the results now.

Ashwin: Yeah. Related to this length of stay argument, you all did try to look at people who received treatment, the complier average treatment effect, and found a massive difference in length of stay. And when we were reading it, we were thinking, is this just a per protocol analysis on steroids? Is this just a signal? How should we be interpreting that result?

Eric: And for the listeners who may have not read the study, I’m just going to… Scott already threw out the result: no change in length of stay, the primary outcome, across multiple sensitivity analyses for default versus non-default palliative care. But Ashwin is bringing up this… Was it a CACE analysis?

Ashwin: A CACE analysis.

Eric: Looking specifically, Scott, is that out of folks who actually got the intervention? Or not the intervention, since everybody got the default intervention.

Scott: Yeah, this is an analytic technique that comes from economists and it’s based on an instrumental variable, which we don’t need to get into the weeds. But it’s very much not a per protocol analysis. A per protocol analysis would say, “These are the people who got palliative care and these are the people who didn’t, and we’re going to compare their outcomes.”

It’s like last time I was on with you guys, we were talking about the state of science of POLST. All POLST studies do that; these are the people who complete POLST, these are the people… Huge, confounding. These people just aren’t like these people. So it’s flawed on its face.

What a CACE analysis does is infer, using all of the data, no one excluded, for the people who would only have gotten the intervention had they by chance been randomized to the intervention group, what the effect on their length of stay was of having gotten it or not. So it’s not all the people who got it; it’s, of the people who would only have gotten it if they were assigned to the intervention, what did the intervention do to them? And as Ashwin mentioned, it was a pretty big reduction.
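The instrumental-variable logic Scott describes, a complier average effect, can be illustrated with a toy simulation. Everything here is invented (the complier proportions, baseline length of stay, and a 2-day "true" consult effect); the point is only the Wald-style arithmetic, where the intention-to-treat difference divided by the difference in consult uptake recovers the effect among compliers:

```python
import random

def simulate_encounter(rng, defaulted):
    """One hypothetical encounter in a made-up encouragement trial.
    Compliers get a consult only in the default-order phase; always-takers
    get one regardless; never-takers never do (all proportions invented)."""
    kind = rng.choices(["complier", "never-taker", "always-taker"],
                       weights=[0.30, 0.54, 0.16])[0]
    consult = kind == "always-taker" or (kind == "complier" and defaulted)
    # Assumed true effect: a consult shortens length of stay by 2 days.
    length_of_stay = rng.gauss(9.0, 2.0) - (2.0 if consult else 0.0)
    return consult, length_of_stay

rng = random.Random(42)
arms = {phase: [simulate_encounter(rng, phase) for _ in range(20000)]
        for phase in (True, False)}

def mean(xs):
    return sum(xs) / len(xs)

# Intention-to-treat effect: difference in mean LOS between the two phases.
itt = mean([los for _, los in arms[True]]) - mean([los for _, los in arms[False]])
# Uptake difference: how much the default order moved the consult rate.
uptake = mean([c for c, _ in arms[True]]) - mean([c for c, _ in arms[False]])
# Wald (instrumental variable) estimator: effect among compliers only.
cace = itt / uptake
print(f"ITT: {itt:+.2f} days, uptake: {uptake:+.1%}, complier effect: {cace:+.2f} days")
```

A naive per-protocol comparison of consulted versus non-consulted patients would be confounded by who gets consults; the ratio above sidesteps that by using only the randomized phase assignment.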

Eric: That is… I think we’d probably have a whole nother talk about that analysis. Ashwin, anything else about that you want to bring up?

Ashwin: I am curious what that means. Is that a result of palliative care teams being really good at knowing who to see? Is it a function of these default orders being in place for a certain period of time and people getting used to it? How do you all interpret that finding?

Alex: Should we say, for the listeners who may not have read the study, that of those who were assigned the default palliative care orders, about 45% had the consult, in about 10% of cases the primary team canceled the consult, and in the other 45% of cases the patients were not seen by palliative care.

Ashwin: Right.

Alex: Just want to make sure that’s clear.

Kate: Correct. I don’t know that I’m going to directly answer your question with this, Ashwin, but I just want to mention because we brought it up earlier that the palliative care team indeed, to some extent, did select who they were going to see through all different means. And that’s because, as Scott mentioned, we didn’t tell them how to handle that excess volume or what to do with it. We really worked with them to do what was most comfortable within their usual workflow.

If they had a busy day before this trial started, what did they do? Did they call the teams? Did they screen charts? Did they just do first come, first served? We encouraged them to continue whatever practice was working best for them. So we know that there probably was selection going on as to who was going to be seen or prioritized or triaged. We don’t necessarily systematically know how, and that’s okay. But it gets at that issue of, how much of this was driven by intentional selection when 10 came in and two were seen? Or there would be one canceled, four seen, and…

But then the other issue is, you want to believe a really simplified takeaway, one that Scott might get mad at me for saying. What we were trying to get at is the idea that if palliative care is indeed delivered, outcomes move. But when half of those accepted default orders aren’t seen, for all sorts of reasons, discharged, died, no time, or triaged not to, and half are seen, it comes out in the wash as null.

And so we really were trying to get at that in a more robust, statistical way than a per protocol analysis which is inherently flawed, as we’ve talked about.

Eric: And it also seems like, while the primary outcome was null, there were some other important outcomes, including the fact that your intervention, the default orders, increased consultations by a lot, like 40-something percent versus 16%. It decreased time to consultation by a day, which is pretty huge, especially… I don't know what the median length of stay is, but it's usually not too long in the hospital. And you had higher rates of hospice discharges, higher rates of DNR orders.

So a lot of results coming in when you think about, do these default orders for palliative care… What kind of impact should we be seeing from them? Now, can I turn to Ashwin? I read your editorial, Ashwin. Ashwin and Alex and…

Ashwin: Lauren Hunt.

Eric: Lauren Hunt did an editorial together, and one of the statements struck me… Ashwin, you said in the editorial that palliative care teams probably felt both relieved and… mixed emotions with this thing. Do you remember the line?

Ashwin: Yeah. I think when we were reading through the article, we put ourselves in the place of palliative care teams that might be about to have a default order that dramatically increases the number of people we see. And I’m on service right now, it can get really busy, we spend a lot of time with every consult that we see. I remember as a fellow some days struggling to even get out of the team room because you were getting paged so much and doing a lot of triage on who to see. And so this idea of increasing the number of consults with the same team, the same resources, I think it gave me a little bit of pause.

And so I’m curious also, Alex, what you think. I think, for me, I wondered if there might be some flexibility in the default order, that there’s a different threshold you use for the default order to make it sustainable. It invites so many other questions about how we increase access, but I also didn’t want to forget about some of the wellbeing of these teams where their workload is increasing.

Alex: I think it also raises a question that you said you were going to ask here. What did you hear from the teams? Did you hear anything from the teams? Did you have access to any information? What was their perception of this experience?

Kate: As Scott just pointed out… Okay. So, we did hear from the teams. We spoke to them beforehand as part of the lead-in to the trial, and targeted each hospital's team before they went live at their hospital to give them extra airtime. We talked about things like pragmatic ways to manage volume and anticipated volume, gave them projections of volume based on enrollment that was going on in usual care, and just really did a lot of palliative care. We did a lot of listening.

They were nervous, they were distressed. Ashwin, I was on the consult service last week at our busy hospital and I do admit that sometimes I think, “I can’t even imagine.” And yet I think there’s a tension in the field that you can read the same number of articles that say we are not seeing enough people and we’re not seeing them early enough. We need to see who is out there. How do we get to them?

And then when we do create these triggers that are imperfect and those are the future directions, I hope, we talk about, that then we’re like, “Whoa. No, no, no.” So there is a tension there and I totally agree with you that whether it’s triage, a better trigger, a different threshold, better identification of palliative care needs rather than diagnoses or prognosis and how we get at that I think we have a lot of work to do, and I’m excited about that work.

But I think conceptually we have to be okay that we do want to see the right people and we want to see them earlier, and we don't even know the universe of those people unless we start to systematically trigger and take out some of the biases that are inherent to the usual-care, opt-in clinician. Take Scott's example: his myopic version of himself isn't going to get Alex palliative care because he just doesn't think about it the same way, whereas Alex would have gotten it had he been fortunate enough to get Kate as his ICU doctor that day.

Scott: Yeah. You lost the lottery, Alex.

Eric: I got a question because I want to save time for next steps.

Alex: Yeah.

Eric: Scott, I’m going to turn to you in this one. Big picture, we started off with you wanted to run a trial in a world where you can’t randomize people to palliative care or not to see whether or not palliative care does anything. Based on the results you got, negative length of stay, some other positive secondary outcomes, what do we know? Can we answer the question? Does palliative care do anything, based on this study?

Scott: Yeah. I think, your eye roll at secondary outcomes aside, we can actually say a lot.

Eric: Dang the video. Just go back to audio days, Alex.

Scott: The complier average treatment effect analysis Ashwin brought up is an unbiased way of saying that when inpatient palliative care teams see patients who are 65 and older with one of those three diseases in this health system… which is the largest nonprofit health system in the country, by the way, so it's a pretty generalizable thing. But with those caveats, yeah, we can say that length of stay is reduced.

We can also say that among all-comers who are going to get a palliative care consult, those who get it by default get it sooner and more often. And we can say that getting palliative care does not change your odds of dying in the hospital, which is important to a lot of people, and yet it does increase the odds that you’re going to be discharged to hospice care. Those two things together, no increase in mortality but increase in hospice utilization, it doesn’t prove that palliative care is goal-concordant, but it’s certainly suggestive.

Eric: Okay. Kate, I got a lightning question for you because we’re getting close and I want to know future directions before we do that. So, you’re a pulmonary critical care doctor, right?

Kate: Yeah.

Alex: And palliative care doctor.

Eric: And palliative care, but pulmonary critical care.

Kate: I am.

Eric: Is there a 15,000 person pulmonary critical care study showing that there’s benefits of pulmonary consultations?

Kate: I don’t know if this is…

Eric: And why is palliative care any different? Or cardiology or nephrology?

Kate: Yes. I don’t know if this was meant to be a softball, but you’re needling me at something that really bothers me. [laughter]

Eric: I wasn’t. This is not meant to be softball. It goes to the heart of why we run these studies.

Kate: Yes. It totally bothers me that somehow we, as the field of palliative care, are held to the standard of having to prove that we should even exist and what our value is. Diane Meier and others are better positioned to get at the history of that and the why and what, but I think, just looking at it on its face, it's crap.

But the trial, as I think Scott articulated well, was intended not only to show that, yes, because it seems we need to for health systems to invest in [inaudible], but also to answer that question about what happens when palliative care is delivered. We had to create separation between the groups in order to answer that question in a large, generalizable, pragmatic way. And so I think we accomplished both goals, with their limitations in mind.

Eric: Yeah, listeners can have a drinking game with this podcast. Every time we say pragmatic, you have to take a…[laughter]

Okay. Lightning round, next steps. Unless, Ashwin, you have another question? Because I want to talk about next steps. Each of you, one thought, you can maybe have two. But one or two thoughts as far as next steps. Kate, I’m going to start with you.

Kate: My passion project is to do a better job at identifying patients who should be enrolled in these trials, those who are likely to benefit from palliative care, whether that's prognostic or based on palliative care needs. But we've got to have better predictive enrichment. And I think that'll get at a number of limitations and problems we've talked about.

Eric: Great. Scott?

Scott: My passion project, which Kate and other members of the PAIR Center are working actively on, is to figure out what outcome measures we can actually use in our next generation big pragmatic (drink) palliative care trial.

Eric: Because I heard you promise never to use length of stay again.

Scott: It will not be length of stay. Everyone and their mother wants to use goal-concordant care, but newsflash, we don’t have a good way of measuring that. But we’re in the world of large language models, we may be able to get there, so let’s get there. Let’s look at other innovative outcomes that we can all agree matter to all stakeholders and are measured without selection effects.

No offense to the other study that you discussed the week before, but there are big problems measuring patient-reported outcomes. The missingness is not at random, and you're never going to have 100% capture of any patient-reported outcome. So we all want patient-reported outcomes, Kate and I do too, but that's a big problem as a primary outcome measure.

Eric: Can I ask one last thing? Kate, when I think about the Jennifer Temel study, we talked about this in the last podcast, I loved the study, but I actually loved the follow-up studies of that study more.

Ashwin: Yeah.

Eric: Are you going to look at that? Because cost wasn't talked about in this. Is there a qualitative study of what the teams thought about this?

Ashwin: Right.

Alex: Right.

Eric: Are those things happening too? And if not, can we ask for it?

Kate: Yeah, you can ask. We have several secondary studies going on, many planned, many that came out of hypothesis generation from some of what we were seeing as it happened and then seeing as results. Some are doable with our data and some are not.

Eric: Cost savings?

Kate: Working on it.

Eric: Working on it. Great.

Alex: Great. We’ll look forward to those.

Eric: Thank you both. But before we end, I think we’re going to go back to Phish.

Alex: A little Phish.

(Singing).

Eric: Okay. Scott, the real reason you picked that song, I know why. Because you missed being on the GeriPal podcast and you’re excited to be on. Is that right? [laughter]

Scott: Got it.

Eric: Got it.

Alex: I loved that song. Thank you. Great choice.

Kate: Thank you, Alex.

Eric: Kate, Scott, thanks for being on GeriPal podcast. And thank you to all of our listeners for your continued support.
