
In a recent episode of the GeriPal podcast, we explored whether the field of palliative care is in need of saving—and, if so, how to save it—with guests Ira Byock, Kristi Newport, and Brynn Bowman. Today, we shift focus to one actionable way to improve palliative care: through quality improvement (QI) collaboratives, registries, and benchmarking. 

To guide this discussion, we’ve invited three leading experts in the field—Drs. Steve Pantilat, David Currow, and Arif Kamal—who bring invaluable experience as pioneers in developing QI collaboratives and registries. Together, they authored a recent paper in JPSM titled “The Case for Collaboration to Optimize Quality,” which underscores the importance of these efforts.

In this episode, Dr. David Currow shares lessons from Australia’s Palliative Care Outcomes Collaborative (PCOC), a national model for standardized data collection and benchmarking that has driven measurable improvements in palliative care. Meanwhile, Drs. Steve Pantilat and Arif Kamal reflect on the history of the Palliative Care Quality Collaborative (PCQC), a U.S.-based initiative formed in 2019 by merging the National Palliative Care Registry (NPCR), the Palliative Care Quality Network (PCQN), and the Global Palliative Care Quality Alliance (GPCQA). Although the PCQC had ambitious goals, it ultimately closed earlier this year. Together, the panelists unpack the reasons behind its closure and discuss the lessons future registries can take away from its challenges.

Throughout the conversation, we tackle some of the field’s biggest questions about registries and QI collaboratives: What data should be collected to create meaningful quality indicators? How can we minimize the administrative burden of data collection on clinicians? And how do we balance the risk of becoming narrowly focused “symptomatologists” with the need to maintain holistic, person-centered care? By addressing these questions, the panel highlights the immense potential of QI initiatives to enhance palliative care while remaining true to the field’s core mission: ensuring that patients and their families feel deeply cared for during life’s most vulnerable moments.


** This podcast is not CME eligible. To learn more about CME for other GeriPal episodes, click here.


Eric 00:00

Welcome to the GeriPal Podcast. This is Eric Widera.

Alex 00:03

This is Alex Smith.

Eric 00:04

And Alex, super excited. Today we’re going to be talking about QI collaboratives and benchmarking in palliative care. Who do we have on the podcast with us today?

Alex 00:11

We have three returning guests today. We have Steve Pantilat, who’s a palliative care doctor, chief of the Division of Palliative Medicine, and the Kates-Burnard and Hellman Distinguished Professor at UCSF. Steve, welcome back to the GeriPal Podcast.

Steve 00:24

Thanks so much. Great to be with you.

Alex 00:26

And we’re delighted to welcome back David Currow, who’s a physician researcher and the Matthew Flinders Distinguished Professor at Flinders University in Adelaide, Australia. David, thank you for joining us from all the way over there in Australia.

David 00:39

You make it sound like not many people come back to do a second round. [laughter]

Alex 00:47

I don’t know. We have a pretty good rate. We’re trying to decide if we want to have, like, a frequent guest’s coat like they do on Saturday Night Live, something like that.

Steve 00:55

I think you got to do that. Yeah.

Eric 00:57

The five-time guest. Steve was the one who mentioned that, that we should.

Alex 00:59

That was Steve’s idea.

Eric 01:00

Steve’s idea.

Alex 01:01

So we’re working towards that.

Arif 01:02

I like it.

David 01:03

He’s got to be the first guy to wear it then.

Alex 01:05

That’s right. And we’re delighted to welcome Arif Kamal, who’s a palliative care doc, the first-ever chief patient officer for the American Cancer Society, and the American Academy of Hospice and Palliative Medicine board president. Arif, welcome back to the GeriPal Podcast.

Arif 01:21

Thanks for having me back. And as a self proclaimed king of swag, I’m gonna put in a vote for some GeriPal swag for guests, particularly returning members.

Alex 01:29

Yeah.

Arif 01:30

I’m just saying, we have a lot of ideas.

Alex 01:33

We got to shut this down.

Eric 01:34

Yeah, we’re going to shut this down. Okay, so we just did a podcast with Ira Byock, Kristi Newport, and Brynn Bowman on, basically, does palliative care need saving? And if so, how do we do it? This is kind of like a part 2, not really talking about the problems, but talking about potential solutions, including the need, if there is a need, for registries and collaboratives to improve quality in palliative care. But before we dive into that topic, who has the song request? Is it you, Steve?

Steve 02:04

It is. Yeah, it’s me.

Alex 02:06

What?

Steve 02:06

On behalf of my colleagues, the song is Keep Me in Your Heart by Warren Zevon.

Eric 02:12

And why did you pick this song?

Steve 02:14

Well, you know, I like it because it reminds us that our patients are sick and their time is limited, and therefore there’s a moral imperative for us to provide the best possible care to our patients. Warren Zevon wrote this song after being diagnosed with mesothelioma, and the album came out just a couple of weeks before he died. And this is the last song on his last album.

Eric 02:38

Oh, wow.

Alex 02:39

Yeah. Great choice. And we heard this song previously. We had a podcast early, early on about songs that move you in palliative care, and I think Anne Kelly selected this song. Here’s a little bit.

Alex 03:00

(singing)

Eric 04:01

That was lovely. Good song, Steve.

Alex 04:02

Great choice. Terrific lyrics in that song, by the way.

Steve 04:05

Yeah, it’s really.

Alex 04:06

Check it out, listeners.

Steve 04:07

Really pretty.

Eric 04:09

Okay, let’s dive into the topic: QI collaboratives, registries, benchmarking. Before we even get into whether these things work, anybody want to explain what they are? What is a QI collaborative or a registry?

Steve 04:25

All right, I can start. How about that?

Eric 04:28

Yeah, yeah.

Steve 04:28

So QI, what is it? Well, you know, a QI collaborative is really just groups, in this case of palliative care teams, getting together and working together on quality improvement projects, trying to learn together, learn with and from each other. I think what PCOC in Australia, PCQN, GPCQA, and PCQC did differently is that we created a set of standardized data that the teams would collect, which really allowed for direct comparison of outcomes and processes of care. So we were comparing like to like, and we could really benchmark best performers and learn from the best of what really does work.

Eric 05:15

So I get the concept of a collaborative and a registry. Why is it important to do this benchmarking, these registries? Do we have data that this actually helps individual outcomes?

David 05:29

Absolutely. I mean, you know, at the end of the day, if we don’t measure it, we don’t know how we’re doing, and even measuring it within our own services, we have no idea what it’s like in the next service, let alone across the state or across the country. And, you know, how many of us have really spent time in another service in the last five years? I’m not talking about turning up and doing a talk. I mean actually rounding and understanding how they practice. And suddenly you see this enormous variety of clinical practice that we assumed didn’t occur.

We thought everyone practiced the way we did, and obviously that’s the best way. So, you know, we’ve got this fundamental challenge that our practice is incredibly broad. You compare it to cardiology and cardiologists must tear their hair out as they see what we do for the same problems in 15 different ways. Yeah, yeah.

Arif 06:29

You know. Well, you know, I think it’s nested in this concept of humility.

Alex 06:33

Right.

Arif 06:33

Which I think is not unique to palliative care practice, but certainly I think it’s a hallmark of who we are and what we do: a humility that says, you know, there is an art to what we do, and that’s important. There’s also a science to what we need to bring every day to specific types of patients. And, you know, there’s an inter-rater variability that exists, which means Arif may do something on a Tuesday and something different on a Thursday, not because the patient is different, but maybe I just ate something different for breakfast and I just felt a different way that day.

There’s also variability within a team, and then there’s variability amongst teams, just as David was talking about. And I think the real question is, the only way you can really decide whether that variability is warranted or not is to first understand what practice looks like and where it differs. And then it invites a conversation that says: does Arif practice the same on Tuesday versus Wednesday, in neurology clinic versus as an oncology consultant, versus his colleagues who practice right next to him, versus how it’s done in Australia versus the US, in California versus North Carolina? Is that variation warranted? Is there a good rationale for why it exists? Is it truly patient centricity that’s driving it, or is it something else?

Steve 07:47

Yeah, and I would say that it’s not only that, but it’s also the idea that if we don’t measure things the same, then it could just be a measurement problem: I measure it this way or that way, or I define things differently. So maybe my outcomes for pain are different entirely because I’m measuring it differently, or I’m measuring it on the third visit instead of the second visit, or I’m comparing the first to the last. And so you end up always being able to say, well, the performance is different because of the measurement; it’s apples and oranges. And when you have a registry and you measure it the same, it’s apples to apples.

David 08:26

And let’s face it, if it’s not a measurement problem, then we ascribe it to patients. But my patients are different. We look deep into someone’s eyes and go, but my patients, they’re different. And so we can control for all of those demographic factors, we can control for a whole lot of clinical factors, and then get down to Arif’s really important point: is this warranted or unwarranted variation? Warranted variation, we can sleep well. Unwarranted variation, we need to know why. And we owe it to our patients. As Steve’s already said, we’ve got to get it right the first time. They are the frailest patients in all of clinical practice.

We can and we do do harm, and yet we don’t measure it. We ascribe everything good to what we did and everything bad to disease progression. Let’s cut the crap, let’s be honest and say we cause problems and we need to know about it. And the only way we can assure our communities, our patients and their families, as well as the referring services, is to measure.

Eric 09:36

So I got a question, because I.

Steve 09:38

Think we got it. That’s a drop the mic moment right there.

Arif 09:42

I love it. Yeah.

Eric 09:44

You know, for anybody who’s done primary care, you probably have heart palpitations with some of this too, because you are inundated with quality metrics and quality measures, like hemoglobin A1C below 7, cancer screening, all of these measures, and you have no amount of time to do it all. What I’m hearing from you is that this is not necessarily about creating a quality measure, like everybody has to be above 90% on this particular measure, but more about defining what the metrics are that we are looking at, sharing that, and potentially benchmarking with others as a way to improve the care that we give, no matter where you fall on the benchmark, whether you’re in the top 10% or the bottom 10%. Is that right?

David 10:33

The top 10% can still learn. The bottom 10% can definitely learn. But it’s a community of practice where we’re learning from each other and actually having honest conversations about what is and is not working.

Alex 10:48

Go ahead, Steve.

Steve 10:49

I was going to say, when we first started measuring the palliative care service, way back when, we found that we were improving pain 70% of the time for people with moderate to severe pain. And we looked at that and just asked, is that good or bad? You know, I think if you’re up at 98%, you think, okay, well, I guess you can get to a hundred, but 98’s pretty good.

David 11:10

Yeah.

Steve 11:11

And if you’re at 10%, there’s no way you think that’s good. That can’t be where you go home and think, good day’s work. But 70%? The only way to know is to look at other services and ask the question: if you control for your patients and how sick they are, is that good or not? And it turned out it’s average. That’s what we learned across 130 palliative care teams all collecting data the same way. It turns out we were average, which, it’s not bad to be average, but it’s nothing to be proud about. And I think before we started, someone might have said, oh, wow, UCSF, they must be the best. And it’s like, well, not on that metric.

Alex 11:56

We read.

David 11:58

Well, they’ve got Steve. They’ve got to be the best.

Arif 12:00

Yeah, they’ve got to be the best.

Steve 12:01

Thank you.

Eric 12:02

Well, I guess that brings up the question: how do you decide what metric to include in there? Because I can also imagine, like, when I hear, oh, 98% would be pain free, I’m all, ooh, are they overmedicating?

Alex 12:13

Maybe, like, maybe that’s the harm.

Eric 12:15

There’s always this balance. I mean, the big thing with quality metrics is that a hundred percent on any quality metric may actually be a really bad thing, because there are harms associated with everything that we do. Thoughts on that?

Arif 12:29

Well, I think one is the selection of the quality measures. Right. For a field where we talk about people’s values, the measure is the expression of the values of the field. So, at the end of the day, yes, you could create 200, 300 measures, but what do you fundamentally value that should rise to the top? By which there are a mix of things that should never happen, things that always should happen, and a lot of things that fall in between.

Alex 12:52

Right.

Arif 12:53

That’s really important. And I think for us, you know, we have intentions. Intentions are the values. But you have behaviors. Behaviors are measurable.

Alex 13:01

Right.

Arif 13:01

And as we know, the fundamental attribution error is that we judge others by their behaviors and ourselves by our intentions. Yeah, but when you enter into a collaborative.

Alex 13:10

Right.

Arif 13:10

We do. That’s the same.

Eric 13:12

Yeah. What was our last podcast where we talked about that?

David 13:16

I want a T shirt with that on.

Steve 13:18

That’s the swag over here.

Arif 13:21

I gotta come up with a bumper sticker, David.

Alex 13:24

So.

Arif 13:24

So we judge others by their behaviors and ourselves by our intentions. So this is an example where we say, great, our intention is to, you know, do timely and efficacious pain assessments and timely management, blah, blah. But does our behavior back up our intention? And if there’s incongruence, that starts a conversation that’s really important. To your point, is 98% too high, too low, average, above average, et cetera?

And then like everything else, where do we want to be? What do our patients deserve? Are there any unintended harms that happen or are there balance measures? Right. We think about balance measures in a lot of other ways. If 98% of people are getting colonoscopies as a colon cancer screening, but they’re all 85 plus, you might say, well, the balance to that was you might have met the measure, but you might have, you know, led to some unnecessary procedures. So there are ways to sort of balance against numbers to put them into context.

Eric 14:14

A hemoglobin A1C measure with also a hypoglycemia measure.

Steve 14:18

Well, look at this. You could do a pain measure with a use-of-naloxone measure, right? That’s something we’re measuring on our inpatient service. We didn’t collect that in PCQN or GPCQA or PCQC, but you could imagine that would be a balancing measure. I’ll say, the other thing is that this was a patient-reported outcome. So if your patient could report it, that’s important. Presumably someone who is so deeply sedated to the point of harm wouldn’t be able to report. That’s a different set of issues, what happens to the people who can’t report, but at least among the people who can.

Alex 14:58

Can I ask which measures make it onto most of these? Like, can we just rapid fire go around? Certainly pain is on there. I assume dyspnea is on there, breathlessness.

Steve 15:10

Yep.

Alex 15:11

Um, what else? Everything on the ESAS, the Edmonton Symptom Assessment Scale? Yes, it’s on there.

David 15:16

Some variation or other. Some level of measure of function is really important.

Eric 15:22

Yep.

David 15:22

You know, we, we talk about symptom control, but when we actually ask patients, and there are great papers on this, they say, I want to be as independent for as long as possible.

Eric 15:32

Yeah.

David 15:33

And so we’ve got to measure that, and we’ve got to actually do something about it. How many of us have enough physical therapists helping people in that circumstance? You know, occupational therapists. We focus on symptoms, but they’re not an end in themselves, they’re a means to an end. And the end is: I want to do the things that are important to me as legacy issues. I want to be as independent as possible for as long as possible.

So function is actually really important here. And then some patient-derived and patient-defined goals, which may be unique to that person. Are we achieving whatever your goal is? I don’t know what it’s going to be, but are we actually facilitating that? And we’ve got to swap our rhetoric for real measures that are patient-centric.

Steve 16:26

Yeah, look, we also worked hard to try and represent the breadth of what we do in palliative care. So questions about quality of life, are you at peace? Were spiritual issues assessed, psychosocial issues assessed? You know, we didn’t have sort of complex measures for those, but we did want to make sure that we were looking at all the issues that we think are core to our practice, as well as, you know, discharge and where people were discharged and what kind of services they had and follow up, advance care planning conversations, but also documentation. So was there a POLST form, for example, an advance directive completed?

Alex 17:08

So now I’m asking on behalf of listeners who might be wondering at this point: you said discharge. Is this strictly an inpatient initiative, or is it outpatient too?

Steve 17:17

Both. Every setting, really. It was actually every setting of care, including telehealth and any other setting. So discharge, obviously a measure from inpatient.

Alex 17:28

And our listeners might be thinking there are some things that might be missing here, or that are so hard to measure. You know, like, the first thing we do in palliative care is we form strong relationships with patients. How do you measure that? I don’t know. And then second is this idea of goal-aligned care, which has been so elusive to try and measure. Do we actually have the ability to measure those two things, which I think many would argue, including me, are at the heart of palliative care practice?

David 17:56

Well, you don’t have to measure everything to talk about quality, and you’ve got to start somewhere. So what we’ve seen is that embedding this in services, so that it’s actually part of their workflow, means you can then have the discussion about what else should we add, what else could we measure, are there other things that we can approach, rather than saying we’ve got to have the full gamut of everything that palliative care hopes to be, all of our values, all of our intentions.

Let’s start with some simple things which we tell our colleagues particularly that we’re really good at, and let’s actually back that up with some data. I’d love to measure the strength of relationships, and I fear that we will be actually really surprised by the answers, Eric. So we may not want to start measuring that too soon. Let’s start with some things where we know we can actually make a difference. And you asked very early on, does this make a difference? Yes, it does.

Eric 19:00

Do we have data on that? This is a data podcast. Give me one or two examples of the data that we have.

Steve 19:10

Well, I think there’s great data from PCOC that shows that once a team joins, you can see that their outcomes improve over time. So when you look at when they first join and at the outcomes that they achieve, real outcomes that matter to patients, like symptom management, for example, you see that those improve as they participate and collect data. So it shows that just the collection of data and the participation matters to the quality of care that is delivered. And if you look at teams over time, certainly, and PCOC has done this really beautifully, you can see that outcomes performance actually improves over time.

Eric 19:53

And if I remember correctly, Steve, was it Kara Bischoff who did the, was it in JAMA IM, the study of the people who were DNR who were being discharged, right? How few of them were getting POLST forms on discharge?

Steve 20:07

Yeah. But we also found that if you looked at code status, there was a dramatic change in code status from most people being full code to most people being DNR by the time of discharge. And to the extent that that’s reflecting the patient’s actual wishes, you know, we talked about this goal-concordant care, that you’re trying to understand what patients really want. For those whose code status at discharge was do not resuscitate, figuring that out and documenting it is really important and meaningful for patients.

Arif 20:41

I mean, you know, the challenge in quality measurement, if you think about the Donabedian framework of structure, process, and outcomes, is we spend a lot of time as a field talking about who should be on the team, and we still have those conversations. That’s really a structural thing. Should we be available 24/7? Which patients do we see, et cetera?

You know, outcomes are always admittedly tough because of both case-mix adjustment and attribution. If you’re a palliative care consult service and you can’t write orders, let’s say, and someone doesn’t get their pain better, is that because you weren’t a good communicator to the team about what dose they should have written? What if you’re being ignored even though you gave the recommendation? How does one deal with attribution? Listen, everyone wants attribution when things are going well. It’s when things are not going well that everyone kind of, it’s like Homer Simpson backing into the hedges, like.

David 21:26

Right.

Arif 21:26

Like we all kind of pull back a little bit, and that’s tough. Right. The other thing is case-mix adjustment, because the patients I see are not going to be the same as the patients you see. So how do we adjust for that? Not to make another Homer Simpson reference, but that’s why you focus on process. Because at the end of the day, people want to know: what is it you did when you walked in the room, did you actually do it, how often did you do it, and did that make a difference?

That becomes a question we start asking. The other thing, remember, is that our field is still relatively young. We think about these issues all the time. Did I ask about pain? Should I ask about pain on the first visit? What if the reason for consultation was advance care planning? Do I still ask about pain or not? And what it does is it becomes fodder for conversations we can have within the field and within our own team to say, hey, is it good consultation etiquette to ask about a thing that the consult team didn’t ask you for? Yeah.

Steve 22:13

Right.

Arif 22:13

Is that akin to, if you ask a cardiologist to come in and, you know, check for peripheral vascular disease, and they listen to your heart, was that bad consultation etiquette? So if someone asked me to do pain management and I asked about their understanding of their prognosis, was that bad consultation etiquette? I don’t know. But what I do want to know is: is Steve doing that? And is David doing that? And is there, you know, is there solace in realizing we’re all actually struggling with some of the same things and trying to find a way forward? Like, that’s the community-building component of a collaborative, because it’s more than just a registry.

Alex 22:45

Right.

Arif 22:45

It gives you an opportunity to ask people questions that we know in our practices. Many people are in very small clinical teams. And so the question is, who do you ask when you’re trying to figure out is this normal or not?

Eric 22:57

I love that because it reminds me again of when we talked with Ira and Kristi and Brynn about, is palliative care losing its way, and the path forward. We’ll have a link to that in our show notes. One question that came up was, when we think about quality palliative care, you can turn to our evidence base, and our evidence base does not necessarily reflect what we say. Like, Jennifer Temel’s study was a physician and an advanced practice nurse. They weren’t doing advance care planning on everybody day number one. It was illness understanding, coping, relationship and rapport building. So it was not like a standardized template note where they covered everything on day one. The stem cell transplant paper, where palliative care improved outcomes, Areej’s article. Yeah, same thing.

Arif 23:44

Right.

Eric 23:44

They only looked at symptoms, psychological and physical symptoms. They didn’t do everything else. Was that high quality palliative care? It’s certainly our evidence base. And I love what you’re saying there, Arif, that we don’t know.

Alex 23:56

Right.

Eric 23:56

It’s not saying you have to do it this way, but at least we’re measuring it. Is that what I’m kind of hearing?

David 24:02

Absolutely, yes. It’s a huge natural experiment.

Arif 24:05

Yes.

David 24:06

Because we have wide variation in what we do for the same problem, as Arif says, even for the same clinician on a different day. That variation needs us to really sit down and listen to each other, and listen really carefully. You know, you say it’s about congruence between what we say we do and what we actually deliver.

David 24:34

What’s our evidence for improving quality of life?

Alex 24:38

It depends on the disease and the service provided. But certainly Jennifer Temel’s study.

Arif 24:43

Yeah.

Steve 24:44

For people with neurologic illness, Benzi Kluger’s study showed that.

David 24:47

Yeah, yeah, yeah. We’ve got two.

Alex 24:52

Okay.

David 24:53

We’ve got a very small number of studies, and yet we tell all of our colleagues in the community we improve quality of life.

Alex 25:02

Yep.

Eric 25:03

Project Enable.

Alex 25:04

Yeah.

David 25:04

Okay.

Arif 25:05

Okay.

Eric 25:09

Wait a minute, let me ask you this, because, Steve, you guys wrote a wonderful paper together, and one sentence that got me was: the wide variety in practice is both a concern and an advantage. How is that possible? Because last time, on the podcast with Ira, the wide variety of practice was a concern. Like, it’s a huge concern that we’re not practicing the same way.

Steve 25:32

Well, look, the advantage is that you can learn from what other people are doing. So that’s the advantage. I guess you could say it’s a concern because there are some services that are not performing well. Look, when we looked at the data in PCQN and PCQC, there’s wide variation in every single outcome, everything you look at, and there are teams that are on the very low end, very low performance, and there are teams at the very high end, very high performance. And you could look at the team at the low end and just ask, what’s going on?

There’s something you’re not doing right. And there’s some really outstanding practice at the other end. And the question is, what are you doing that’s really working? So, for example, one of the things we found when we were looking at pain in the PCQN is that 25% of people had moderate to severe pain where that was not the reason for consult. So, to what Arif was saying before, you get a consult for goals of care, you screen people for pain, and you find out 25% of them have moderate to severe pain. So the standardizing of that practice is a good thing. You don’t want a lot of variety there, but you can learn from the variety.

And then the other thing we found is that teams that had nurses on the team had better pain outcomes. That’s really interesting also, and it ties to what Arif was saying about structure, process, and outcomes. It begs the question: what are nurses doing uniquely or differently that could make a difference? But it gives you some ideas about that. And the fact that some teams have nurses and some don’t allows you to, like David said, have these natural experiments that really allow you to understand what’s working.

David 27:22

And you know, really important point from that, Steve, is also none of these programs, as far as I can see, has found any correlation between the resourcing of the team and the patient outcomes.

Steve 27:36

That’s true.

David 27:37

So you’ve got some really well resourced teams that are actually delivering pretty ordinary outcomes, patient outcomes, and some teams that are doing it with the ass out of their trousers and delivering extraordinary outcomes.

Steve 27:50

That’s Australian. That must be Australian, by the way.

Alex 27:53

Thank you, thank you for the translation.

Arif 27:56

Yeah, listen, we’re in this field and there are two somewhat competing interests, maybe. One is that our patients are unique; there’s not a one-size-fits-all approach. Right. So, okay. And at the same time, the adage holds true that consistency builds trust. Any person who knows kids: if you discipline one of them one way and you discipline the other one in a different way, your kids will call you out. That’s not fair. When she did X, you did blank.

When he did X, blah, blah, blah, right? So people look for consistency as part of their trust, right? So in front of a patient, you know, tailoring it and so on makes good sense. But let me tell you, nothing undermines a referral to palliative care more than when someone says, hey, Arif, when your colleague does the consult, they always do X, and when you are on service, you never do X. So do we wait for when your colleague’s on service because they do a better job? We’re always looking for X and you’re never doing it. Right. So that lack of consistency, it plays out.

People don’t get consults one week and they get consults another week, or when someone’s on service, something happens. And what we’re trying to say is, actually, there are some principles that when we show up, regardless of whose face it is or how long their white coat is, there are certain things you should be able to depend on. Right. And so the question is, what is that core set of quality measures that governs the things that we believe? We’ve got a framework of eight domains and other things too. But what’s good about it is we’re building that evidence base within this field to understand what are the core things that you always do or mostly do, and what are the things that really become the add-ons that are tailored.

And last I’ll say this, I need to add this because we talk about what we do in palliative care. We tell people all the time the core unit of care is the patient and the caregiver. So I ask everybody, how many caregiver-specific quality measures exist? I just told you, quality measures express our values. We’re really good at telling people about this issue of like they’re our core unit of care, blah, blah, fantastic.

But I’m not saying it’s easy to do per se, but we should as a field look at ourselves and say if that’s really our value, then we should have caregiver-specific assessments and measures, which some of us do. We should act on that, and then we should look at that data and say, who’s doing it well? Who’s not? How do we get better?

Alex 30:09

Yeah, like did you feel prepared for this event, the end of your person’s life? Did you feel like things went the way you expected? Did you feel like you had the information you needed? Were there psychological outcomes like depression, PTSD? Those are great caregiver measures.

Steve 30:26

Yeah.

David 30:27

I just want to follow on from Arif for a second. We’ve got to get our practice right within the team in which we work first before we can do any of this. I was called in to consult for a service where every time the attending changed, the pharmacist had to come in and pretty much strip out the entire drug cupboard because they couldn’t agree on first and second line therapy for opioid responsive pain.

They couldn’t agree on first and second line therapy for constipation, for nausea, for breathlessness. It didn’t matter. Everything was stripped out because as the new person came on, everything was going to change. Now, that’s ridiculous. It’s dangerous for the staff administering those medications. It’s bad for patients because they get changed week on week. We can’t get our own act together.

And so it’s really important that these conversations when we start measuring are actually a catalyst to say, okay, can we at least get some agreement on some basics here so that consistency can be delivered within our own hospital or community team? And then might actually catch on somewhere else when someone says, wow, that’s really working for you guys. And we’re going to evaluate it prospectively, we’re going to measure it and we’re going to compare notes and we will change the face of hospice palliative care just by getting those conversations right.

Alex 32:05

I’m convinced. This is so great. We’ve just loved hearing from our guests because we have such robust and thriving collaboratives in the United States and Australia. Right, Eric? So is there anything more to talk about? Because everything seems to be going so well in terms of these collaboratives in both countries. Is that right?

Eric 32:23

Maybe we could just briefly describe where we are with those collaboratives. I’m going to start off with Australia, Steve. I keep on hearing “Peacock.” When I read that, I didn’t realize he was referring to PCOC, or the Palliative Care Outcomes Collaborative.

Steve 32:39

That’s it.

Eric 32:40

Which is Peacock.

Alex 32:41

Indeed.

Eric 32:42

I learned something today. David, what is Peacock? In Australia, do you call it Peacock?

David 32:49

We call it Peacock. And no doubt we have a picture of a peacock as the emblem for it.

Eric 32:55

That makes much more sense now.

David 32:56

I did not.

Eric 32:58

When I was doing my homework for this, I did not pick that up.

David 33:02

Yeah, I think there are a couple of great Far Side cartoons about that, but we won’t go there. It’s just turned 20. And it started with eight services agreeing to collect the same data at the same time points and start to benchmark. And from day one, the patient aspects of this were standardized so that we were comparing like with like, so that people couldn’t say that my patients are different. And I went to the 20th birthday celebration just two weeks ago. It’s been funded by our national government all of that time and it’s now got more than 200 services contributing data from point of care.

Eric 33:47

And it’s voluntary or does everybody have to do it?

David 33:49

No, it’s voluntary. And that’s been really important, I think, in ensuring that people don’t game it. But having said that, now some of our insurers and, and some of our health departments are saying we’d like to see your data, to see how you’re doing. But it is voluntary, it’s always been voluntary and it now accounts for more than 90% of all people referred to palliative care services in Australia.

Alex 34:19

And why do you think the government funds it?

David 34:22

Because they wanted to see quality and they were worried that it was making the front page at the time. It was a time of great debate publicly about end of life care, about the role of voluntary assisted dying, the role of palliative care, the role of services, particularly not being available outside major metropolitan areas. And so it was a time that government was starting to invest very heavily and they wanted to see that this was quality as you would anywhere else.

I mean, if we just step outside hospice palliative care for a second. If you look at services that participate, for example, in clinical trials in cardiology or oncology, the services that are active trial participants provide better care. And we’re not talking subtly better care, we’re talking higher discharge-alive rates in cardiology. These are not subtle changes we’re talking about, these are real changes.

And much of this is about just getting used to being measured and that feedback loop. And — Steve’s written about this very eloquently — by about the fifth cycle of getting feedback, people start to own it, but it takes five cycles. And Steve, you may want to comment on the relationship between the late Dr. Kübler-Ross.

Arif 35:54

Yeah, and.

David 35:55

And data benchmarking. But it’s alive and well, Steve.

Steve 35:58

Yeah, well, yeah, thanks, David, for that. You know, we talk about sort of the Kübler-Ross stages of data, of looking at your own data. That starts with denial, you know, that’s not my data. And then there’s anger. You’ve collected it wrong. You’ve analyzed it wrong. You don’t know what you’re doing. Then there’s the bargaining. Our patients are sicker and you don’t understand that and our work is harder. So of course our outcomes are not as good.

And then there’s the depression. Oh my goodness. These really are our data and we’re not doing as well as we thought. And then finally you emerge at acceptance, which is, okay, this is our data, these are our patients and we’ve got work to do. And you know, and we care. Like we actually want to do better and we care and we can see that there are people we can learn from and with. And I, you know, I think this is the important part of like it’s not just about the data, although the data are really important, but it really is about this community of practice.

I think, you know, one of the things we saw consistently in PCQN is teams that never collected data but nonetheless participated. And it really speaks to this idea of community and how we learn together with and from each other. Now if no one collects data, there’s no data to look at. But there is a lot to be gained from being in this together and being able to share your data and to say, look, we’re not doing as well here. What are you doing? Like how are you doing this? You’re really doing well here. Tell us about what you do.

Eric 37:35

Arif, another Homer Simpson five stages of grief link we could put there.

Arif 37:41

That’s right.

Eric 37:42

I wonder, moving from Australia to the US, because we keep on mentioning kind of an alphabet soup of different things. NPCR, PCQN, GPCQA — all of that soup merged five years ago into one organization, right? PCQC, I think we’ve heard — the Palliative Care Quality Collaborative. Is that right?

Steve 38:06

That’s right, yes.

Eric 38:09

And what was the, what was the.

Steve 38:10

Hope for PCQC? Arif, you want to take it?

Arif 38:13

Well, yeah, so I mean, you know, one, it was to bring together multiple efforts that were happening in the field, and the CAPC National Registry is part of that as well. To bring what is really team and structural data in coordination with process data across multiple registries into one space. I think that what happens is that the healthcare ecosystem has also changed during that period of time. I mean Steve and I were doing this for a decade before then.

Over the five years of doing it together, we saw some changes happening in the U.S. — I mean consolidation, for one, of many palliative care programs, and the introduction of corporate-backed programs as well. And none of these are value statements; these are just the observations that exist. But I also think that seasons for quality measurement, at least in the US — and I’m a huge fan of what David’s pulled off — tend to ebb and flow and change with what’s happening and what’s needed.

As much as this was about registry, it was also about bringing a culture of continuous quality improvement and fundamentally a culture of humility. I think that you can get caught up in a bit of your own hype around how great you are because you have to tell other people how great you are. So they fund you and they support you and they help you grow your team.

But you also have to have that moment where you pivot, you close the door and you look at your team and you go, okay, we got the resources to do the right thing now let’s make sure we are doing the right thing. And as some of those things have changed, I think what that’s looked like in the US has also shifted a little bit too from being proponents of the culture around this, around data collection, around standardized data collection, these other things that really have become the legacy of the work that we put forward and that David continues to do very successfully in Australia.

Steve 40:05

Look, to your question, Eric. You know, PCQC does not exist. It folded back in May, May of 2025. And you know, GPCQA and PCQN started back in 2009. And to Arif’s point, it honestly was a little confusing in the field. And then there was the National Palliative Care Registry that was collecting, you know, team-based data and institution-based data. And there was a lot of confusion about which one to join and what do they do and how are they similar and different. And it really seemed like that confusion was not helping us get where we wanted to go.

And so the idea was, could we bring all these together? We could then look at the structure and process data, bring that together with patient-level outcome data and really do sophisticated quality improvement. That was the idea behind it. We had strong grant funding to move that forward, developed PCQC, the Palliative Care Quality Collaborative. All three of the organizations came together and then a couple things happened.

One, there was a pandemic that happened and that really kind of derailed a lot of the work. It just put things on hold for the teams that were participating. Institutions had very different priorities financially. Our tech partner that promised they could build the registry infrastructure, they had their own problems that happened because of the pandemic. The person working on PCQC was stuck outside the country and wasn’t able to work for a long period of time and it didn’t feel right.

And the plan was that this would be a membership organization and you would pay to belong, that the membership fee would support the organization long term along with grant funding. But fundamentally we were committed to it. And honestly, we ran out of runway between the tech partner not being able to deliver and feeling like we couldn’t charge people for a product that we didn’t have. And we worked really hard on getting it to happen.

You know, the sad irony, of course, is that just as the database was ready and functioning is kind of when we ran out of funding, and at the end of the day we didn’t have enough members. In part, we weren’t recruiting more members because we felt like the product wasn’t ready to offer. And once it was, we ran out of the initial funding that we had, and it was an environment in which it was hard to raise more funding. So I think the people who participated got a lot out of it, but we weren’t able to grow it to the size where it was sustainable.

Eric 42:40

I guess I’ll turn to you, Arif. If you had to put your QI hat on and think five years down the line from now — if this were to restart in the US like it is in Australia, continuing strong after 20 years — what lessons did you learn from this to help that next group that tries to restart it? Obviously avoid pandemics, which is probably a good one.

Arif 43:07

Yeah, we’ll do that next time. You know, I think what we cannot lose is this culture around self-reflection. And someone once taught me that the simple formula could look like this: compassion equals empathy plus action. Nested within empathy is having data — lived experience and quantitative data — that tells you what is happening. That, with the right action, leads to compassion. We cannot be compassionate as organizations if we don’t understand where we’re missing and where we’re doing well, because you can’t take the right action.

Right. You are compassionate by intent, but not by behavior. So I think moving forward, we will continue as a field to have that spirit. I think we’re gonna have to watch a few things that are happening. One is sometimes people can see data collection and storage, and the data itself, as proprietary — intellectual property, a business secret, a trade secret. And so that’s one of the things we have to contend with: in the spirit of sharing, you are also giving something away, either by design or against the will of your general counsel.

Alex 44:15

Right.

Arif 44:15

And so you have to figure out a way forward that allows for that. I believe that’s the spirit of our field. The second is that we have to at least commit to a standardized set of practices that work for us.

Alex 44:28

Yeah.

Arif 44:28

That become the foundation. And then if, if the role of the next organization is to create quality measures and test them and give them to folks and let them use them in their registries. Because really an electronic health record in the US is the largest health registry that exists.

Alex 44:43

Right.

Arif 44:43

The largest collection of data is actually owned privately by very large EHR companies out of Wisconsin.

Alex 44:50

Right.

Arif 44:50

That’s the largest collection of data in this country. Right. So the data exists, in many ways. So then it becomes, does the registry organization of the future really find a way to pull that out from structured data that’s already being collected? Do you use AI methods to create structure where it doesn’t exist, from unstructured data? Right. There are other ways too, where maybe in the US a standalone, self-reported, self-collected registry may not be where things go, but the principles of what’s happening there — which is humility, data for compassion, matching behavior to intent — have to stay.

Eric 45:25

Can I ask one last question too? The one thing in my experience, having been part of groups that try to institute standardized consult notes, is that sometimes it feels like the people who create the note don’t really know the amount of work that it’s going to take to do the note, and that maybe not every palliative care consultant needs to do everything on day number one. How do you manage everything that you want to include in all of these different registries — future registries in the US, current registries in Australia — against the overload that comes from somebody having to mark this stuff and assess this stuff? Thoughts on that?

Steve 46:15

Well, you know, David said this earlier: you gotta start somewhere. You know, in PCQN we started with 23 data elements, and that included demographics. So I think you can be comprehensive without being exhaustive. You don’t have to capture everything you do every minute of the day. Think about what are the most important things we want to promise our patients, how would we know that that’s happening, and how do we know who our patients are?

And you can do that with relatively few data elements, number one. Number two, if it’s in the electronic health record, you don’t have to collect it. There are now ways to extract it without having to write it down. So there are efficient ways to do this.

Eric 46:51

And I bring this up too, David, because I saw PCOC — I’m going to have a link to it — because you can actually pull it up. Right. Delirium seems to be a huge component in PCOC. Right. Am I reading that right?

David 47:03

So it’s just been introduced specifically. And again, because the rates of delirium and the rate of unrecognized delirium are a concern. So it is there. But the thing I’d say to complement what Steve just said is if you find an area that’s of concern, you can always do a time limited additional data collection to understand it. It doesn’t have to be all things at all times.

It’s the core that’s important. And Peacock’s a good example. We’ve run time limited additional data collections for specific issues that have emerged and that’s been really powerful for bringing the whole community together for that conversation.

Alex 47:47

Go ahead, Arif.

Arif 47:48

Well, and it’s separating measurement for improvement from measurement for accountability. I think what David is really getting to is, when you measure for improvement, that’s an internal conversation. Hey, you know, curious — how often are all of us doing spiritual assessments? Let’s add that in and go check. That’s measurement for improvement: is there a problem? Do improvement. Accountability is where people’s minds tend to go when you say measurement. And accountability, simply speaking, is an externally facing activity.

Accountability is, I’m going to tell a payer, I’m going to tell my leader, I’m going to tell my administrator, I’m going to tell blah, blah, blah about how we’re doing. That’s where everyone’s, like, tachycardia just gets a little bit higher, because you’re sharing sort of outside the safe space. The recommendation would be, do a bit of both. Quality measurement is not just for accountability, looking for bad apples. Right. There’s also a ton of curiosity on the measurement-for-improvement side. And I think a mix of both is a really nice place to do any of these.

Eric 48:42

And do you have public reporting?

David 48:46

Absolutely. And so the reports are made available publicly. It’s interesting to see again, you know, looking at the five stages, when people are happy to share their service’s performance by name. And so it takes time — it takes five years.

Steve 49:04

You know, a hell of a lot of trust. You were talking about, do we have a problem in our field, on a previous podcast — but you know, there was an insurer in California, Partnership Health Plan, that used PCQN data to monitor the home-based palliative care programs in the Medi-Cal population that they were representing, and they looked at particular outcomes and they had pay for performance on that. Simple participation was pay for performance, and then there was performance. So there are ways to kind of tie these together, to Arif’s point. Some of it is for accountability and some of it is for kind of internal quality improvement.

Alex 49:47

Yeah, I think accountability is the real urgent need right now, which we addressed in our last podcast — one of Ira Byock’s, like, two main points — where we need data, and we need data to drive quality improvement, particularly at this time. And so it’s kind of tough. There’s a tension there between wanting to join to do good for your field and also wanting to hold some of those practices that are palliative care in name only, feet to the fire — to say, hey, you need to do better if you want to call yourself palliative care, because we don’t necessarily believe that’s palliative care. We should get to the song.

Eric 50:19

Let’s do a little bit more of “Keep Me in Your Heart for a While.”

Alex 50:34

(singing)

Eric 51:35

Arif, David, Steve, thank you for joining us on this podcast.

Steve 51:39

Thank you.

Arif 51:39

Thanks for having us.

Steve 51:40

Great conversation.

Eric 51:42

And to all of our listeners, thank you for your continued support.

This episode is not CME eligible.
