
Artificial intelligence, or AI, has tremendous potential. On this podcast we talk about potential uses of AI in geriatrics and palliative care with natural language processing guru Charlotta Lindvall from DFCI, bioethicist and internist Matt DeCamp from the University of Colorado, and prognosis wizard Sei Lee from UCSF. Potential uses we discuss include:

  • Social companions to address the epidemic of loneliness among older adults
  • Augmenting clinicians' abilities by taking notes
  • Searching the electronic health record for data
  • Predicting mortality and other outcomes

We talk also about the pitfalls of AI, including:

  • Recapitulating bias by race, ethnicity, and other factors, exacerbating disparities
  • Confidentiality concerns: do those social companions also monitor older adults for falls, 24/7?
  • Hallucinations, or when the AI lies or bullshits, then denies it
  • When the AI approaches sentience, is it ethical to unplug it?

I’m sure this is a subject we will return to, given the rapid progress on AI.





Papers on AI and palliative care and concerns about bias:

Comparison of machine learning vs traditional prognostic methods based on regression:

Other links on the issue of AI and racial or ethnic bias:

Are Robots Racist? Rethinking Automation and Inequity in Healthcare (Greenwall Foundation Bill Stubbing lecture)

MDCalc approach to inclusion of race


Eric: Welcome to the GeriPal podcast. This is Eric Widera.

Audio: [electronic voice] Alex Smith has been taken over by an artificial intelligence.

Eric: I’m not sure if people could hear that, but that’s artificial intelligence – Alex Smith is here. Alex, who is in between us right now?

Alex: Okay. So Sei Lee is here. Sei Lee is Professor of Medicine at UCSF in the division of geriatrics. He’s a palliative care doc and a geriatrician, and he’s very interested in prognostic modeling and has done research using machine learning, which is a form of artificial intelligence. Maybe. We’ll find out; we’ll discuss that. Welcome back to the GeriPal podcast, Sei.

Sei: Thanks for inviting me.

Alex: And joining us via Zoom, we have Charlotta Lindvall, who is a palliative care physician researcher who studies the intersection of natural language processing and communication data at Dana-Farber Cancer Institute, where she also works in clinical informatics. Welcome to the GeriPal podcast, Charlotta.

Charlotta: Thank you.

Alex: And we have Matt DeCamp, who’s joining from the University of Colorado, where he is a primary care internist and bioethicist at the Center for Bioethics and Humanities. Welcome, Matt, to the GeriPal podcast.

Matt: Thanks so much for having me.

Eric: So we’ve got a lot to talk about today about artificial intelligence and palliative care, geriatrics, medicine in general, and defining these and where are we going with all this. But before we jump into this, I think somebody has a song request for Alex.

Matt: I have a song request for Alex. I’d like to request the song Alive by Pearl Jam. Since we’re talking about prognostication, I thought the chorus of this song was a good reminder that while we’re prognosticating, we’re still alive.

Alex: That’s good. All right. Here’s a little bit. Okay. Here we go.


Eric: Let’s see ChatGPT do that. [laughter]

Alex: That’s true. That was fun. Thank you for that.

Eric: We’ve got a ton of stuff to talk about. In preparation for this, I was trying to figure out what to ask in this podcast. And I’d like to reveal to the listeners what I do beforehand. So for this one, I went to Bing Chat, which I believe uses OpenAI’s ChatGPT. And I asked it, “What questions should I ask on a podcast about artificial intelligence in palliative care?” And Bing responded that I should ask how AI can help identify patients who need palliative care and reduce hospitalizations; what the benefits and challenges are of using AI in decision-making and communication in palliative care; and how AI could support the quality of life and dignity of patients receiving palliative care. Which I thought were great questions. Maybe we could jump into that.

But then it asked me, “Oh, that sounds really interesting. Who are you going to be talking to?” And I said whom I’d be interviewing, including Dr. Lindvall. And it started telling me about Dr. Lindvall, including that she’s published in JPSM and other journals. And then it asked me a couple other questions, but it basically ended with a really odd thing: the last question it asked, where I stopped it, was, “Do you want a cool nickname?” And I said, “Sure.”

Alex: You said yes.

Eric: It said, “How about Sparky?” So apparently, my new name per ChatGPT is Sparky. So as we jump into this… apparently it’s going to be taking over my role as podcast interviewer. What is artificial intelligence? Is it different from machine learning? And how is it different from these things like ChatGPT, or is it?

Matt: Yeah, I was going to say that’s such a great question. I think of artificial intelligence as just being something that uses machines to do tasks usually reserved for humans, but I think Charlotta probably has a better, more specific definition than that, especially around machine learning.

Charlotta: Yeah, so what I see with the generative AI models like ChatGPT, what’s so different about them compared to machine learning models is that they are creative. And that’s what you were noticing, that they are very creative, and you can actually have this conversation going back and forth. And I think that is the big difference. They’re all based on predictions, predicting what is most likely to happen next. So that is no different, but this creates-

Eric: But that’s kind of the interesting thing about ChatGPT: sometimes it’s a little bit too creative. For example, there was one article that we reviewed for JAGS where the authors gave great prompts, and ChatGPT answered great questions about antipsychotics and gave a list of references to back up its assertions. It got all the way through the review process until one of the citations sounded like Drs. Widera and Smith, and I don’t remember me and Alex ever doing an article about that. Our names were slightly off. And they were complete hallucinations: made-up, fabricated references. So I actually put in an editorial note that you’ve got to be careful with ChatGPT, because it’s a bullshitter. It doesn’t know fact from fiction.

Charlotta: You should have asked ChatGPT, “Please don’t make anything up.”

Eric: Oh!

Alex: Oh! I didn’t know you can do that.

Eric: I did ask ChatGPT why it lied to me and it tried to argue that it didn’t lie to me. And so we had this back and forth until I realized that it’s just trying to, well, it’s not trying to do anything. It is making up something that looks like something that I would believe.

Alex: Yeah.

Matt: Well yeah, I love that you said that, and the fact that you said it’s not trying to do anything, because that gets at a question I had as you were having the discussion about creativity. There is this question in the background: is that really creativity? The fact that it’s a probabilistic prediction of the next word that will follow, and so on? I don’t often think of that as creativity, and yet we often attribute those sorts of characteristics to that machine.

Sei: I mean, I think the thing that’s really fascinating is when people talk about emergent properties of these really advanced neural networks: that once you get to a certain amount of data, they start doing things that we didn’t necessarily program explicitly. And so that makes me feel like, is this the road to sentience? I feel like this is stuff that the science fiction writers have been talking about for at least 50 years. And this idea of emergent properties I think is really interesting, and that’s what everybody is excited about.

Although I think right now what we have feels like fundamentally a disconnect between human beings and… The problem is, when we see a string of text, we assume that there is an intelligence behind it. Whereas when I see an AI string of text, what I am thinking is, “Oh, okay, this has a corpus of however many terabytes of data and it’s just trying to predict what the most likely next word is.” And I don’t necessarily attribute any fundamental intelligence to it. So I don’t really feel like what we have currently is intelligence; maybe it’s a nine-month-old infant’s sort of intelligence. But it is very good at this one thing: figuring out what the most logical next word is to continue the conversation.
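Sei’s “most likely next word” framing can be made concrete with a toy bigram model: count which word follows which in a small corpus, then always emit the most frequent continuation. This is an illustrative sketch only; GPT-class models are neural networks trained on vastly larger corpora and contexts, not lookup tables.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on terabytes of text, not one sentence.
corpus = (
    "the patient reports pain . the patient reports nausea . "
    "the patient denies pain . the clinician reviews the note ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("patient"))  # "reports" (seen twice, vs "denies" once)
print(predict_next("the"))      # "patient" (the most common continuation)
```

Scaled up by many orders of magnitude, with contexts far longer than one word, this is the same basic objective: predict the most probable next token.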

Charlotta: Yeah. And it’s kind of almost suggesting that Eric and Alex should write that paper, because it’s predicting that you have already written that paper.

Sei: I think that means the podcast is over because Eric and Alex have a paper to write.

Charlotta: But it is true that these new models, the generative models, are like nine-month-old babies, and they are rapidly evolving, as quickly as that baby starting to crawl and walk and suddenly graduating from high school. So I think we will see huge development of these models.

Eric: I mean, just the difference between ChatGPT-4 and what you’re doing in Bing right now, where it’s actively looking stuff up versus using a dataset from a year or two ago. It’s pretty impressive.

Matt: Do we have control? If they’re evolving, do we have control over where they’re going?

Sei: Of course not, which is why I am getting ready to bow down to our silicon based overlords very soon.

Eric: Well, I guess the question is, let’s maybe take a step back. For our GeriPal audience, we have clinicians focused on both geriatrics and palliative care. When you think about the use of artificial intelligence, natural language processing, these ChatGPTs out there, machine learning: where do you think the big current use cases or near-future use cases are for this stuff?

Charlotta: Yeah, I think it will be in reducing the need to spend so many hours on documentation and looking through the EHR for information. Because I think in the near future they will be able to take audio recordings of conversations and make clinical notes that the clinician can then edit. And that I see as something very positive, because I think none of us likes to spend hours documenting. Right?

Alex: Yeah, that sounds exciting. And that’s a form of augmenting rather than replacing. So you’d still need the doc, or the clinician, to have the conversation.

Eric: For now. [laughter]

Alex: Yeah, for now. Okay. And then you still need the clinician to look over the note and edit it before approving it. That’s a good use case.

Charlotta: Exactly.

Eric: Like a scribe.

Charlotta: I think that’s coming pretty soon; at Dana-Farber we’re looking into testing it over the next year. My worry, though, is: are we then going to be expected to be more productive, to see more patients? I mean, I don’t know, it could be great, but it could also have other consequences.

Eric: Yeah, because the use case, when I’ve read about using artificial intelligence as scribes, is, “Oh, we get to spend more time talking to patients and less time documenting,” but that’s never been the case in medicine where-

Alex: They keep saying that. It just gets shorter and shorter and shorter.

Eric: I think everybody would be excited not to have to document so much in the EMR, but the question is will we just increase the churn through patients?

Alex: Yeah, that means you could see more patients, right. Five minute appointments.

Eric: But Matt, I’m guessing from an ethics standpoint there are plenty of concerns around that too, from privacy to this idea of what, as far as I can tell, the industry calls hallucinations, where they just kind of make stuff up. So somebody still has to make sure that the notes are correct?

Matt: Privacy is obviously a major ethics consideration when it comes to where the data are going and who has control over them. But I think you’re right. There are also interesting questions about… Charlotta, you mentioned that the clinician would have to review the note to finalize it. And there are these issues around, well, when is the machine just summarizing what happened versus actually doing some interpretation? Or actually suggesting, perhaps, that you missed a diagnosis you should have considered?

Eric: Yeah.

Matt: Then it brings up those questions about the role of technology in care and in the relationship between patients and clinicians, and so on. And I think that’s where it gets really interesting from the standpoint of ethics.

Alex: Mm-hmm. Yeah. And this case has been worked out best, in terms of artificial intelligence suggesting diagnoses the clinician may have missed, in radiology. They feed a bazillion radiographs to an artificial intelligence, along with what the outcome was, what happened in reality. And the artificial intelligence eventually is actually better at picking things up than the radiologist. Can we stretch our brains to think about how that might happen with these conversations that Charlotta’s taping? Is there a way that it could come up with a diagnosis, or, “I’m concerned that the symptoms the patient is experiencing may be due to polypharmacy, or”-

Eric: Or communication training. Dr. Smith, I believe you missed an emotional cue. The patient is crying in front of you.

Alex: Yes.

Eric: Please respond.

Charlotta: No, I think absolutely. And what we are seeing in our research (we’re working with GPT-4) is that it volunteers information that we didn’t ask for. So if I ask what symptoms this patient has, it’s great at saying this patient has these symptoms in a conversation that’s very long, but then it may actually volunteer, “I think this symptom is from this medication, or it could be from the…” and I didn’t even ask for that. So I think that’s concerning, because in how the model volunteers information, it could also introduce some judgment about the patient, or all kinds of stuff that maybe would impact care in a negative way.

Sei: I wonder if there’s an analogy with Internet 1.0 and 2.0, where initially, I felt like, people thought of the internet as, “Well, we’ll just digitize everything that’s in real life.” So instead of reading a book, we’ll just read it on an iPad. And people were like, “You know what? I actually like reading physical books.” And what Internet 2.0 was, was realizing, “Hey, because this is the internet, we can do things that we would never have been able to do.”

And so it feels like we’re at that stage with AI and clinical medicine of we’re trying to do everything that we’re doing now, but we’re going to try to do it with AI. And fundamentally it feels like AI is going to allow us to do things that are almost unimaginable at this point because of this new technology, but we have to go through that painful first step of okay, we’ll do the same exact documentation but AI will help us. And that’s going to be a marginal benefit, but that next stage of what is it really going to allow us to do, I think is exciting and scary and very difficult to foresee at this point.

Matt: I think you’re also… Go ahead.

Charlotta: No, I totally agree with Sei. I think that is so interesting. Because in serious illness care and care for older adults, care is often very complex. And allowing that complexity to exist, and having the AI support understanding what’s going on, I am very excited about that. Because right now we often try to simplify a patient’s conditions and everything. I think the AI can allow for that complexity but still provide some guidance.

Alex: I want to hear more use cases. I’m going to volunteer one that’s more in the geriatrics realm, and that is AI social companions. There’s this epidemic of loneliness and social isolation, and there are a lot of lonely old people out there, as we’ve talked about on this podcast. And we already have things like robot dogs and cats. But wow, wouldn’t it be next level if there was a social companion, an artificial intelligence that could interact and was smart, could pick up on things that are potentially…

Eric: My companion called me Sparky today. [laughter]

Alex: Yeah, there you go, right?

Eric: Yeah.

Charlotta: That’s nice, you get to pick your own nickname. [laughter]

Alex: Other use cases?

Matt: Well, it’s interesting you mentioned the companion, and one of the characteristics you started to talk about in your comments was moving from a bit of companionship to a little bit of surveillance. You said the companion could pick up on things, and that’s been another use for AI: in the home, being able to detect falls, mobility issues, and so on. And in your case there is the ethics question of [inaudible 00:20:11] companion, is it also surveilling? How do we feel about that surveillance? The bigger-picture ethics question I always ask is, why are we using AI to [inaudible 00:20:23] solve this problem? Why is this a problem we’re solving, and is AI really the proper solution? That’s sort of the most fundamental starting point of ethics questions, and the fact that we’re even thinking about robot companions to me evokes a lot of ethics questions.

Alex: Yeah. I believe we should be paying in-home caregivers more.

Charlotta: Yeah.

Alex: That would be the first step before we jump to an AI social companion. That said, it is a use case. So each of these cases has promise and pitfalls. Yeah.

Eric: I feel like the one that I’ve seen a lot of lately is the prognostication use case. It could help us with prognosis, often around mortality, but I’m guessing things are going to be changing. Like, who is at high risk or low risk, whether in the ICU or for sepsis. So there’s some stuff out there like that, but also using AI to determine who is at high risk and should get a palliative care consult.

Matt: I felt like that use case really started to accelerate around the time of COVID, when there were these big scares around resource allocation and people really wanted to know, “Well, who is most likely to live or die? And could we use a prognostic tool to help us triage and know where to spend resources?” But I agree, I think that’s a common use case. I think it’s probably coming to a lot of EHRs near you.

Charlotta: Yeah. My sense with these mortality prediction models is that we should shift the focus more to patient needs. What are the patient’s needs? In palliative care we care a lot about symptoms and caregiver burnout. So I’m not a big fan of using mortality prediction in isolation; but if it’s mortality prediction plus there hasn’t been any conversation about goals of care, or a caregiver hasn’t been identified, and the patient has a lot of symptoms, then I think it could be useful. But mortality: what do you do with that in isolation?

Eric: Yeah, I can imagine if the point is to focus extra resources on those with the greatest needs, that seems to me a somewhat reasonable argument. If it’s the opposite, focusing resources on those who will get the most benefit, that could introduce a lot of bias, because those who benefit most from our healthcare system are usually the people in the least-need category, not those who have often been marginalized. And I worry about the introduction of bias. Yeah.

Charlotta: I mean, we… You’re… Go ahead.

Alex: Go ahead Matt.

Matt: I was going to say, you said something that we’ve heard very clearly in our NIH-funded study around AI-based prognostication, talking to patients and family members and so on: there’s a comfort level when it’s being used to identify people who might benefit from palliative care, or benefit sooner. There’s a lot of discomfort when it’s being used to limit care or limit access. You could imagine it being used as a requirement for hospice, for example. Or, “Well, once you’re beyond this risk threshold, now you don’t get that service.”

Eric: Yeah.

Matt: That preventive care. And it’s interesting that we have that same sense: it’s okay if it’s used for benefit and for access, but if it’s to limit, oh, I don’t know, that makes me a little more uncomfortable. And then the question, of course, is can you actually limit its use in that way, practically? The information is there; can you really limit it? I think it’s an open question.

Sei: Yeah, and as somebody who develops prognostic models, I think it is very interesting. Because I certainly have use cases in mind when a model is being developed, but once it gets published, it’s kind of free-ranging out in the world, and I don’t necessarily get to control how everybody uses it. Which gets even more problematic when you think about how we used to develop models with a very finite set of predictors, so we really didn’t introduce biases on protected characteristics like race, ethnicity, and stuff like that. And now we don’t have that. The thing that I find most, I don’t know, alarming may be too strong a word, is some of the stuff that we see if you start doing NLP, even if you take out every-

Eric: NLP?

Sei: …mention of… Natural language processing. So even if we take out of these clinical text notes all mention of race and ethnicity, NLP is very good at back-predicting who is Black and who is white. We can do it now with our current NLP algorithms with very high accuracy, meaning you can’t ever force the algorithm to ignore race and ethnicity. It will be able to use them to make predictions, which leads to all sorts of… Because you’re telling it to be as accurate as possible, and using that information allows it to be more accurate. But it introduces this specter: are we going to reinforce or exacerbate disparities by using these predictive methods?
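Sei’s point, that redacting race and ethnicity does not stop a model from recovering them, can be shown with synthetic data. The “clinic” feature below is a made-up proxy; the sketch illustrates only the statistical mechanism, not any real clinical dataset or NLP pipeline.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic records: the protected attribute ("group") is redacted as a
# feature, but a proxy (which clinic the note comes from) remains and is
# strongly correlated with it.
def make_record():
    group = random.choice(["A", "B"])
    if random.random() < 0.9:  # 90% of the time, clinic tracks group
        clinic = "north" if group == "A" else "south"
    else:                      # 10% noise
        clinic = "south" if group == "A" else "north"
    return {"clinic": clinic, "group": group}

train = [make_record() for _ in range(1000)]
test = [make_record() for _ in range(1000)]

# "Redaction": the classifier never sees `group` as an input, only `clinic`.
# A simple per-clinic majority vote still recovers the redacted attribute.
votes = {}
for clinic in ("north", "south"):
    counts = Counter(r["group"] for r in train if r["clinic"] == clinic)
    votes[clinic] = counts.most_common(1)[0][0]

accuracy = sum(votes[r["clinic"]] == r["group"] for r in test) / len(test)
print(f"recovered the redacted attribute with accuracy {accuracy:.0%}")
```

Any feature correlated with the redacted attribute (word choice, medication lists, neighborhood) can play the role of the proxy, which is why redaction alone cannot remove the signal.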

Charlotta: But I would say to that: our healthcare system currently has a lot of biases. People introduce those biases every day, all the time. So I don’t feel like that’s a new danger. I think there are some opportunities to actually use AI to monitor for biases, because AI could pick out that there are biases in how things are done.

Sei: Yeah, I mean, I think the concern is that AI, because of its black-box nature, could tap into the worst parts of ourselves and then supercharge the disparities.

Eric: And Sei, just to clarify, when you say black box: you can go to ePrognosis and actually see all the things that go into one of those prognostic indices. When AI is doing this, is there transparency? Can you see what goes in, or is it just always thinking on the fly…

Sei: Yeah-

Charlotta: It depends on what model is used. There are models where there is transparency in terms of what goes into the model, but what is so powerful with these AI tools is that they can use interactions between different characteristics in ways that make it impossible, later on, to understand those interactions. So even though you know, “Oh, we used lab values and we used this and that” in the model, it may actually become extremely complex. And like Sei was saying, it totally predicts race even though race didn’t go into the model.

Eric: So I guess the other question is, when we’re using AI for prediction: let’s say you do it at Dana-Farber. You come up with this great prediction model for your patients, and then UCSF here decides to use that model. Are there issues with generalizability when it’s ported to another place, a different population?

Charlotta: Well, with the new generative AI models, that is actually less of an issue, because they are already trained on… I mean, they’re trained on the entire internet, so they are more generalizable than previous models. It’s easier to transfer those models.

Matt: I think that concern is an important one, though, because we have seen, or at least heard, people say things like, “Well, I would trust a model that was made at my own University of Colorado health system and not one made somewhere else.” And that’s, I think, because of generalizability, and also perhaps related to questions about the motivation and intent behind designing the AI in the first place: “Oh, I’ll trust it if it’s in my own backyard, but not so much if it’s by some tech giant that I don’t trust at all.” And that may matter for whether physicians and other clinicians are willing to use it.

Eric: Well, I guess the other question is, I mean, I get to hang out with Sei and Alex a lot, so I hear these words I may not understand. Like, overfitting. Sei, what’s overfitting again?

Sei: I would say overfitting… it’s the same reason stockbrokers were convinced that when skirts were going up, the stock market was going to do well, or something like that. It’s when you have so much data that you can find things that are related just due to random chance. If you have enough data, you may find that people with blue eyes are more likely to like mountain biking. Is there a true relationship there? If you actually look at the next person, there may not be. So overfitting just means that you found a relationship in your data that is not going to generalize to other people, because you’ve dug into the data too deeply and found spurious links.

Eric: Is that an issue with AI, since it’s looking at these huge datasets?

Charlotta: Yeah, actually that risk is going down.

Eric: Yeah.

Charlotta: Because the amount of data going into the large language models is so large that they are becoming way more generalizable. And when you think about the large EHR software vendors, like Epic building these models on the entire data they have access to, even though there have been some failures from Epic building machine learning models in the past, I think in the future it will be less of an issue. I would say, though, that any healthcare system needs to have frameworks for how to evaluate these models.

Eric: Yeah. I got another use question.

Charlotta: Yeah.

Eric: There’s a lot of talk about the aducanumabs of the world, these amyloid antibodies, and diagnosing people with dementia really early on, potentially for treatment; but a diagnosis of dementia may also limit people’s ability to do things like driving. What about artificial intelligence for identifying people who are at risk of dementia, or likely already have it?

Matt: Not just dementia. Some of the apps and such can predict Parkinson’s based on mobile phone use patterns, dexterity, and so on. It brings up one of the same ethics questions we face in the genomic world: when you have a prediction that’s out in the future, is it worth sharing that information with the person if they maybe can’t do anything about it? So in some ways, from an ethics standpoint, we have faced this or similar questions before. Maybe the first question on other people’s minds, though, and I don’t know, Charlotta, what you think: how accurate are these, really? Or how accurate do they have to be for us to believe in them and use them?

Charlotta: Yeah, I mean, I don’t know. I think the models will become better and better at detecting early dementia and Parkinson’s. And I would argue that there are some interventions, perhaps, if you find out early enough, at least for Parkinson’s, like exercising. Maybe there aren’t medications or procedures, but there could be some early interventions that perhaps could slow down the disease course.

Sei: So-

Charlotta: I don’t know what you think, the geriatricians on the podcast.

Sei: Yeah, a couple of things. In terms of how accurate predictions have to be to be actionable: I don’t think anything is going to become super accurate over a decades-long timeframe, because fundamentally you are always using past data to predict the future. If you are doing a 10-year prediction model, which we have done, that means you have to look at patients from 10 years ago, because you need to know what actually happened to them. So whenever you’re using a 10-year mortality model on today’s patients, there is that jump: are the care processes and the predictors that were accurate in a cohort 10 years ago still going to be accurate today? I think models could become very accurate for six-month mortality or two-year outcomes, but as we get out to decades, accuracy is going to be more of an issue.

To go back to the overfitting issue: Charlotta, I absolutely agree with that. As we look at more and more data, generalizability is going to get easier and easier. But the double-edged sword, of course, is that as we look at more and more data, the predictions become more and more black-boxy. So it is impossible for us to know, when you are looking at basically the entire internet as the data source for this AI, exactly why it’s making a given prediction. It could be using factors that, if we actually understood how it was coming up with the prediction, would seem ethically dubious. But we don’t actually know what the data inputs are.

Charlotta: But that’s where, with generative AI, one thing I’m really excited about is this new field called prompt engineering. That is, how you write your question. So Eric, when you were interacting, it really matters how you write those questions. And you can actually give a framework: you can say you don’t want citations that haven’t actually been published. You can write a pretty extensive prompt to guide it.
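Charlotta’s “extensive prompt” can be sketched as a structured set of instructions sent alongside the question. The message layout and rule wording below are hypothetical, not any particular vendor’s API or a validated clinical prompt.

```python
# A sketch of the kind of extensive prompt Charlotta describes: explicit
# instructions that constrain what the model may volunteer or invent.
# The role/content message structure here is illustrative only.

system_prompt = "\n".join([
    "You are assisting with a clinical literature summary.",
    "Rules:",
    "1. Cite only publications you can quote verbatim from the provided context.",
    "2. If no supporting source exists, say 'no source found' instead of guessing.",
    "3. Answer only the question asked; do not volunteer extra interpretation.",
])

user_prompt = "What symptoms does this patient report in the attached note?"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

for m in messages:
    print(f"[{m['role']}] {m['content'][:60]}")
```

Prompts like rule 2 are mitigations, not guarantees: a model can still fabricate citations despite being told not to, as the JAGS review story earlier in the episode shows.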

Eric: Which is really the interesting thing too: ChatGPT-4 felt like you needed a little bit of knowledge of how to write the question.

Alex: Yeah.

Eric: I’ve got to say, using Bing (and again, I get no funding from Microsoft), it gave me questions to ask it, so I could then pick from the three questions it thought were most important for me to ask next. It asked me who I was going to be interviewing to generate the next set of questions. And it feels like, if we’re talking about going from baby to toddler, we’re in middle school right now.

Alex: Yeah. How to talk to your middle schooler?

Sei: So this definitely feels like what I was talking about as AI 2.0, because the transition from the first level to the second level is really about how you train human beings to interact most efficiently with this new… I felt like Internet 2.0 was all about developing social media and interactions with each other. And now, how do we train human beings to work with this new technology to try to avoid some of the pitfalls and to maximize some of the benefits?

Matt: Do you think the concerns around transparency and explainability and interpretability and such are overblown? I mean, there’s a whole lot in medicine that we don’t understand, or that we think we understand… In five years we’re not going to understand it. And I don’t know why sometimes people get so wrapped up in the transparency.

Sei: I personally feel like it’s a huge generational issue. I remember coming of age when the most famous prediction model was Light’s criteria for transudative and exudative effusions. There were three factors, and that was understandable. Then, when we started coming up with mortality models or other prediction models with five or seven factors, people were like, “That’s too complicated. I can’t trust that; I’m the one who’s going to get sued if this goes wrong, so I don’t trust it.” And now we have a new generation of doctors who are used to walking around with a supercomputer in their pockets, and they don’t feel like they need to understand exactly why something is happening the way it is. As long as there is a valid scientific reference (this is where the model comes from, these are the inputs), they don’t demand the same level of transparency. So I do think there’s a generational issue: as us old farts retire, the younger doctors will be much more comfortable with more opaque models.

Eric: But I do think that brings up a great question. It goes back to what Charlotta was talking about before. Our system already has tons of biases built into it. Again, biases can go in either direction, but a lot of negative biases. It has transparency issues, it has conflict of interest issues. It has all of these issues already built into it. One question, and I guess… Actually, I'm not going to ask the question, because I'm going to turn to Charlotta. I get a sense that you're very optimistic about the future of this. What are you worried about?

Charlotta: I’m more worried about outside of medicine. In medicine, yes, we’re one step behind on frameworks and regulations, and we will continue to be behind, but medicine is still kind of slow, and it’s not going to move super fast.

Eric: We’re always slow.

Charlotta: We’re always slow. But where I’m worried is on social media: fakes, misinformation, people pretending to be someone they’re not. I’m really worried about that space, and the military. So my worry is not in medicine, but more in the common space of democracy.

Eric: Matt, what are you worried about?

Matt: Well, I think in medicine I still have the worry, and we’ve heard it from patients and family members in our study, that when information like an AI-based prediction about mortality becomes available, our focus gets directed there and we start to think of people through that label and through that lens. Patients and family members tell us they worry that their clinician is going to be overly focused on that mortality number. And that leads back to maybe the question we started with, around dehumanizing care versus using this to augment care and allow clinicians to do their caring jobs better. I still worry about that.

Charlotta: But I would say, why are we building only mortality prediction models? We should build models to identify symptoms, to do proactive symptom management, and AI can totally do that. So I think that’s our decision, and patient stakeholders need to advocate for other types of models that focus more on patient needs.

Matt: It’s the ethics question of are we doing it because we can or are we doing it because we should? And retaining that control over the purpose and rationale behind it, like you’re saying, is really important.

Eric: All right, I’ve got another question. Right now, when you think about the care of seriously ill, maybe older adults, what’s one thing you’re seeing that makes you go, wow, this is going to be really great for seriously ill patients or seriously ill older adults? Aside from the scribe issue. Charlotta, thoughts? Anything cool in the palliative care field with AI?

Charlotta: Well, I think one challenge in palliative care is that the EHR is not built for palliative care. It tends to focus on all these structured variables, lab values.

Eric: Yeah, not function, not-

Charlotta: And so I think AI can really capture the patient experience much better when it comes to symptoms and function and caregivers. If we can capture that directly from the conversations, I think that would be exciting.

Eric: Yeah. We just had a podcast on diabetes and the reams of data that you can be getting from continuous glucose monitoring. And we know symptom monitoring for cancer patients is really important and aligned with good outcomes, but I think doctors, and all healthcare providers, are worried about the amount of data that’s going to be coming at us, and having something that sifts through that data would help. I’ve got another question for you, Matt. AI ethicist: are we going to need ethicists five years from now, when ChatGPT can do it for us? I can go on Bing right now and ask my ethics question and it’ll give me an answer.

Matt: We have seen pilot level studies, interventions like that one to do clinical ethics consultation.

Eric: Really?

Matt: AI. And it’s a really interesting question, because if it is built on historical data, some people will say it’s going to be inherently conservative, based on how past decisions and recommendations were made. Versus, do we hold this AI moral being to a higher standard and say, actually, if we’re going to do this, we want AI that makes better decisions than we do and avoids biases and so on? I don’t know, I’d have to defer to the AI people here on the podcast, because I don’t know how you train that sort of an AI clinical ethics consultant in ways that would make it superhuman, because that’s probably what we want. We want it to be better than us.

Eric: Yeah.

Sei: My brain keeps going off on sci-fi tangents. But Matt, your comment about whether that is possible, that feels like almost a fairly high-bar test for sentience. If something that we create is generative and has a moral compass, is that a sentient being? And then are we not allowed to unplug it, because we’d be killing a sentient being? I mean, what is…

Eric: Well, this is the problem with ChatGPT, right? I can have a conversation with it, but I don’t actually know if it has a moral compass. It’s just putting out stuff that it’s been trained to put out. But honestly, how different is that from me?

Alex: There’s a great podcast with Ezra Klein interviewing Ted Chiang, a notable science fiction writer who wrote the short story that was the basis for the movie Arrival. Ted Chiang is very worried about developing AIs that are sentient, because if the way we treat animals is any indication of the way we’ll treat AIs, this is going to be highly, highly problematic. And it is in our future. History-

Eric: You mean how the AIs are going to treat us when they’re…

Alex: Yeah, yeah. And if we treat them that way-

Sei: I am not kidding about our silicon overlords. We need to be nice to them, but they’re going to take over.

Alex: We won’t be alive anymore.

Eric: Oh, I could have this conversation for another hour, but I think Alex just brought up the song title as a hint. We’re at the top of the hour.


Alex: Okay. [singing] For now.

Eric: All right. Alive for now.

Alex: Still alive for now.

Eric: For now. Well, thank you both for joining us on this podcast. It’s really great. I wanted to ask one last question for both of you: one project you’re working on in this area right now that you’re most excited about. Can you give me like 15 seconds on what that is? Matt, you first? Anything?

Matt: Yeah. Well, I already referenced one project. We have a multi-site study ongoing to understand what patients, caregivers, and clinicians, broadly construed, think about the use of AI-based prognostication. Super excited about that one. But we also have a different grant project looking at the use of chatbots in healthcare: patient-facing chatbots, and understanding what people share and how comfortable they are sharing. The chatbot one is definitely high on my mind too.

Eric: Charlotta?

Charlotta: So we are using generative AI to identify symptoms that patients bring up during conversations with their oncologists, and hoping that we can make sure that those symptoms are not missed and that can help improve patients’ experience living with cancer.

Eric: And is that listening to the conversations or is that looking at the EHR?

Charlotta: So it’s automated speech-to-text, and then a generative large language model.
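For readers curious about the shape of such a pipeline, the two stages Charlotta describes might be wired together roughly as below. This is a hypothetical sketch, not her group's implementation: the prompt wording is invented, and the ASR and LLM services are passed in as placeholder callables rather than any specific vendor's API.

```python
# Hypothetical two-stage pipeline: speech-to-text, then an LLM pass
# that flags symptoms mentioned in the visit transcript.

SYMPTOM_PROMPT = (
    "You are reviewing a transcript of an oncology visit. "
    "List every symptom the patient mentions, one per line. "
    "If no symptoms are mentioned, reply 'NONE'.\n\n"
    "Transcript:\n{transcript}"
)

def build_symptom_query(transcript: str) -> str:
    """Fill the extraction prompt with a visit transcript."""
    return SYMPTOM_PROMPT.format(transcript=transcript)

def extract_symptoms(audio_path: str, transcribe, complete) -> list[str]:
    """Run ASR on the audio, then ask the LLM to list symptoms.

    `transcribe` (audio path -> text) and `complete` (prompt -> text)
    are injected callables standing in for whatever speech-to-text and
    LLM services a real system would use.
    """
    transcript = transcribe(audio_path)
    reply = complete(build_symptom_query(transcript))
    if reply.strip() == "NONE":
        return []
    return [line.strip() for line in reply.splitlines() if line.strip()]
```

Injecting the two services as plain callables keeps the sketch runnable with stubs, and mirrors the clean separation between the ASR stage and the generative extraction stage that Charlotta describes.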

Eric: Wow, that’s amazing. Well, again. Thank you for both, for joining us on this podcast and to all our listeners, thank you for your continued support.

Back To Top