Today we interviewed Bob Wachter about his book, “A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future.” You may recall we interviewed Bob in April 2024 about AI, and at that time he was on the fence about AI – more promise or more peril for healthcare? As his book’s title suggests, he’s come down firmly on the promise side of the equation. On our podcast we discuss:
- Why Bob wrote this book, at this time, and concerns about writing a static book about AI and Healthcare, a field that is dynamic and shifting rapidly. He’s right though – we’ve not had a “ChatGPT”-launch type moment recently.
- Top 5 or so ways in which Bob uses AI for work, from clinical care to book writing
- Concerns about job losses in healthcare, and will we still need doctors?
- AI for diagnosis, and the recent NEJM Clinical Case in which recent GeriPal guest and superstar clinician-educator Gurpreet Dhaliwal beats an AI.
- UpToDate vs OpenEvidence
- Trust issues – should we trust AI after being let down before? Clinicians felt burned by their experience with the hype and promise of EHRs – but they’ve been much less a game changer and much more a soul sucking chore designed to maximize billing rather than improve patient care. Yet early returns on AI have largely been positive. Time saved from writing notes, prior authorizations, and summarizing charts…all to the good!
Sadly, we didn’t have Bob on piano singing the song for this one. He was in the office, not home. So I made do with ChatGPT’s choice, Handle With Care, which has some surprisingly pertinent lyrics about AI in healthcare, including:
Been beat up and battered around
Been sent up, and I’ve been shot down
You’re the best thing that I’ve ever found
Handle me with care
Enjoy!
-Alex Smith
** This podcast is not CME eligible. To learn more about CME for other GeriPal episodes, click here.
Eric 00:13
Welcome to the GeriPal Podcast. This is Eric Widera.
Alex 00:17
This is Alex Smith.
Eric 00:19
And Alex, we have a returning guest with us today.
Alex 00:22
We are delighted to welcome back Bob Wachter, who’s Chair of Medicine at UCSF. He’s a hospitalist, in fact coined the term hospitalist. He’s an author. His prior book was called The Digital Doctor. And his current book that we’re gonna discuss today is called A Giant Leap. It’s due out February 3rd. Early reviews have been fantastic. We last had Bob on in April 2024, and in fact, Bob incorporated that episode into the second chapter of his book, page 26, I think, somewhere around there.
Alex 00:56
So even more reason for you dear listeners to check it out. Bob, welcome back to the GeriPal Podcast.
Bob 01:02
It’s a great pleasure. I thought if I could do anything to push you up to number one on the podcast charts, that’d be my effort.
Eric 01:08
To do that, I shared it with my wife. I showed her the page that said GeriPal, mainly because of the way you wrote about it: it said GeriPal, a popular podcast. And she loved the fact that it said a popular podcast.
Alex 01:25
Well, it is because we’re popular with a very small segment of healthcare providers who care for older adults and people with serious illness.
Eric 01:34
Well, Bob, thanks for coming on. Before we dive into AI, we’ll talk about your book, some components of it, and mainly a lot of questions me and Alex have about it. We always start off with a song request. Now, this is interesting. Did you have a song request, Bob?
Bob 01:51
I forget what I asked for. I probably asked for Springsteen. But what did you end up with, Alex?
Eric 01:57
Well, I, I think you actually asked if maybe AI could write the song. Is that right, Alex?
Bob 02:03
That makes sense.
Eric 02:04
How did that go for you, Alex?
Alex 02:06
That did not go well. The problem is that if I ask AI to come up with a song, it will come up with a song, but I don’t know how to play it. Like, there’s no tune there, right? There probably is some AI that could compose the song with a tune, but I don’t have that AI. And then I said, okay, let’s try to compose a song about AI to the tune of Tomorrow, which Bob sang and played piano for on our last podcast. And it said, I can’t exactly do that, because the melody and the lyrics and the structure of the song are copyrighted. So it came up with a song that was similar, but too different to actually play the lyrics to. So we’re running up against these boundaries here.
Eric 02:46
So then I jumped in.
Alex 02:47
What did you do, Eric?
Eric 02:48
I jumped in and I just asked AI. You know, there’s all this talk about prompt engineering, which I learned from reading Bob’s book is kind of a dying art because AI is getting so good. All I asked AI was: Bob Wachter’s coming on the GeriPal podcast, what song request should he suggest? Like, nothing else. I think I said he’s coming on to talk about his book. I didn’t say what the title of the book was or anything like that. And it came up with what song, Alex?
Alex 03:17
Handle with Care by the Traveling Wilburys.
Eric 03:20
Yeah. And it actually came with an answer for why it chose the song, which I actually liked, too.
Bob 03:25
It’s damn smart. Yeah.
Alex 03:28
Yeah.
Eric 03:29
All right, Alex, let’s hear the song.
Alex 03:41
(singing)
Eric 04:32
So I just looked it up. AI said he could opt for Handle with Care by the Traveling Wilburys, symbolizing the care and caution needed when integrating AI into healthcare systems.
Bob 04:43
Wow.
Alex 04:44
Boom.
Bob 04:44
Considering the fact that the last time we did this, I sang and played, this has already been a greatly improved episode.
Alex 04:51
Right? Your rendition is terrific.
Eric 04:54
All right, Bob, I gotta ask the first question. Alex and I have a ton of them, but my first one: why this book, and why now?
Bob 05:02
Well, why now is easy. I mean, the AI is transforming everything that we do. It’s the hottest issue in the universe. It’s obviously, it’s propping up the economy, and I think, why now? Because actually, it’s a perfect time for it. We’re now three years into the AI revolution, which I really think began on November 30, 2022, with the public release of ChatGPT. And three years is a good time.
It’s time enough to see what’s real and what’s hype and what’s BS. It’s time enough to see sort of the players emerge, and who the leading companies are, and how the money is flowing, and how healthcare organizations and doctors and nurses and others and patients are reacting to using these tools. So I think the questions now, and the answers, to me are ripe enough that there’s some clarity that I would not have had two years ago, and I’m not sure about two years from now.
Probably things would be a little bit clearer than they are now, but not that much clearer. So it feels like a really good time. And this is the biggest experiment in the history of medicine. You think we do experiments for a living, but we are bringing in this technology, this kind of weird alien technology that in many ways is smarter than we are, and we have to figure out everything about it. So I think the timing is perfect.
Eric 06:22
It seems like in medicine, we’re very slow to adopt everything, especially new technology, new drugs, like, new whatever. But it seems like in the last year, year and a half, we are adopting AI very quickly. Is that your sense, too?
Bob 06:40
I think we’re doing it at about the right pace. When we spoke a year and a half ago, I think I wasn’t sure how optimistic I was. I probably tilted a little bit toward optimism, but I’m actually quite optimistic about it now, in part because there are a number of forces that are in tension, but I think are playing out in a way that’s net positive. The tension is, as you say, Eric, we tend to be pretty slow. We tend to be pretty conservative. There are regulatory guardrails like HIPAA, there’s the malpractice system.
There’s a lot of culture and a lot of history, a lot of legacy stuff. Like, you know, one of the things I’ve noted is if you take the 10 leading companies in the United States a hundred years ago, they’re pretty much all out of business now. If you take the list of the 10 leading healthcare systems, they’re still the same systems, so we don’t change very fast. On the other hand, the need for AI and the pressures to do something different than what we do to deliver access and scale, the expertise of specialists and make things better and cheaper, the pressures to try to do that are immense in healthcare.
And I think those two forces are balancing each other out. The sort of professional guardrails that we have, that would normally have us go very slowly, I think are being balanced by the money that’s in this and the remarkable capabilities that it has. And I think it’s leading to something that feels like about the right pace. We shouldn’t start by saying we don’t need geriatricians anymore because I’ve got Gemini. That would be stupid. On the other hand, we shouldn’t say, let’s not bring in scribes or chart summarization or help-me-draft-a-prior-auth, let’s wait 10 years, when the needs are so obvious and so immense. And I think we’re starting with tools that are relatively low risk and relatively high yield, and I think that’s kind of the right way to get started.
Eric 08:39
And can I ask: if you had to think about five tools that Bob Wachter uses in his professional life, as a doctor, as a healthcare leader, what would they be? Can you give me examples of five AI tools?
Bob 08:53
Yeah, I don’t know if I have five, but I’ll tell you some of the ones that I do use. So when I’m on the wards, which I was two weeks ago, I used OpenEvidence. And I have no relationship with the company, but I used OpenEvidence constantly. And I have an editorial coming out in the New York Times that I think will have just been out when we air this, where I basically started with that. You know, one of the things that’s fun for me when I’m on the wards, because I’m a generalist, is sort of trying to hunt down my favorite specialist and get a curbside consult.
You know, I got this patient with a weird liver thing or a weird kidney thing or a weird pulmonary thing, and I’m hoping I kind of run into somebody in the library. Now I get probably five times as many curbside consults as I used to, but I get them from OpenEvidence. I will use that in the way I used to use a human curbside consult. Sometimes if I have a more general question, I toggle between using GPT and Gemini. The new Gemini 3 is extraordinarily good. They’re both equally good. Those have replaced Google for me.
Eric 09:54
So UpToDate is out of your life. Gone.
Bob 09:58
Completely gone.
Eric 09:59
Completely gone.
Alex 10:00
Yeah.
Bob 10:00
If I want to read a chapter about a topic and go deeply in depth, then it’s fine. But typically when I’m taking care of patients, I’m not asking, you know, what’s the right dose of Eliquis? I’m asking: I’ve got an 80-year-old man with Waldenström’s macroglobulinemia who comes in with a fever, elevated LFTs, and a creatinine of 3.2. What do you think is going on? If you put that into UpToDate, UpToDate will choke on it. It has no idea how to answer a true clinical question. So yeah, I mean, I will go to UpToDate to read a chapter about something. And OpenEvidence gives you the list of references; I’ll go to a reference periodically. But it has become kind of the trusted curbside consultant.
And then Gemini and GPT for general questions about everything in medicine, or everything in the rest of my life. The rest of the tools I use: I use Claude a lot for writing and editing, and I think I’m a pretty good writer. And my wife is an amazing writer; she’s written nine books and writes for the New York Times. In the writing of my book, I would sometimes write a paragraph, and Katie would edit it, and I’d say, this is pretty good, but it’s not quite singing to me. And I’d put it into Claude and say, Claude, can you make this better? I’d say three out of four times it did. So it’s the writer’s editor, and I love Claude’s personality.
Eric 11:21
Did Claude help with this book?
Bob 11:22
Yeah, yeah. I mean, I wrote the book, and every word there is mine, but there certainly were paragraphs where it was like, oh, I’m looking for a metaphor, or looking for a different way of phrasing this. Claude, can you help? And Claude almost always did. The other tool I used that was sort of interesting: I did about 110 interviews for the book, so each one ended up as a PDF that had three to five thousand words.
I loaded all of them into a Google tool called NotebookLM, and that creates essentially a database of all of those 110 PDFs. I then could say to that database: show me everybody who said anything about regulation, or about HIPAA, or about nurse practitioners, and can you summarize what they said for me? So in some ways, where OpenAI’s GPT or Gemini are trawling through the entire Internet, it’s now trawling through the database that’s my 110 interviews. And I found that very useful in the writing of the book.
Eric 12:21
Do you have to worry about hallucinations with that or do you feel like right now they’ve pretty much figured out hallucinations?
Bob 12:30
There still are hallucinations, but they’re far less common than they were when we spoke a year and a half ago, and much, much less common than if you used GPT-3 when it first came out. I think that’s important, because your impression of this AI thing might be: it’s weird, and I can’t ever use this in patient care because it could hallucinate, and if it hallucinates, it could kill somebody. You know, if you use OpenEvidence, it hardly hallucinates at all. And NotebookLM, I never found it hallucinated. If you look at the sort of benchmark rates of hallucination, they’re not zero.
Eric 13:06
Yeah.
Bob 13:07
And this is in some ways one of those fundamental questions in this whole field, which is: for now, it’s trustworthy enough to be really, really useful, but not perfect. And therefore, you know, we get into all these discussions about the human in the loop. So I still think you need to look over its shoulder. I would not accept a diagnosis that was made by Gemini or GPT or OpenEvidence without looking over its shoulder. Because every now and then it’ll come up with something brilliant, and every now and then it comes up with something that’s wrong. But it doesn’t hallucinate nearly as often as it did two years ago.
Eric 13:40
So I can imagine there’s three buckets of jobs, right? Let’s do a three-to-five-year horizon from now. There’s jobs that humans are just going to do as humans, and probably a lot of that is the physical aspects of medicine and nursing; it doesn’t seem like we’ve quite caught up there with physical AI, like robots, yet. There’s jobs where the human has to be in the loop, or it’s a combination of human and AI. And there’s jobs that AI will just have taken over. Can you think of any in that latter group where, either now or in the next three to five years, you think that’s just not going to be a thing anymore?
Bob 14:21
Sure. I mean, let’s start with non-physician healthcare jobs. You know, at UCSF, up until about three years ago, we hired about a hundred human scribes for our busiest ambulatory docs. I don’t know if either of you had a scribe during any of your time. They were wonderful. They were usually pre-med students. We paid the company 30 bucks an hour, and they would sit there in the office. And you know what was weird about scribes? Every other industry computerizes and starts laying people off; only we could figure out a way, when we computerize, to need an extra person to come in and feed the computer.
But, you know, they were a massive satisfier for the docs, and the patients liked it because the docs were making eye contact. And I remember at one point we talked about maybe taking someone’s scribe away, and the doc said, no, no, I love this young person who I get to chat with. So there were a lot of advantages. They’re all gone. Yeah, they’re gone, because we now all use AI scribes. It’s the kind of job where you say: the task here is basically to take a conversation between a doctor and a patient and turn it into a note that has to be formatted in a very particular way, and maybe in a way that also understands the billing system and the coding system.
That was a job that we needed a human for, at 30 bucks an hour, three years ago. And now every doctor at UCSF has access to an AI scribe that does a job that’s probably as good as the human did, and does it at a cost that’s a fraction of what we used to pay. So I think about jobs like that. The thousand people that we have in our billing department: I think we need many fewer than that. The people whose job in the quality department is to go through charts and code them, to send off quality measures to U.S. News or to other regulators: those jobs, I don’t know that they go away, or they’re just markedly changed, because the AI can do that as well as a human can do it, and do it 24/7, and do it at a far lower cost.
Eric 16:23
Medical interpreters.
Bob 16:25
Probably. Probably. I mean, I think that, you know, now with your AirPods, you can interpret any language. So when I’m on the wards, it’s interesting, because I think the most useful piece of technology in my hospital today is not Epic, which we spent probably a billion dollars for. It’s the video interpreters that I can call up on my iPad. But does that need to be a human? Probably not. I think that probably gets replaced. Now, in terms of the physician workforce:
If you’d asked me 15 years ago which comes first, all the radiologists being out of work, or me getting in the back seat of a driverless car, sometimes falling asleep, and Waymo waking me up when I get home, I would have said the radiologists are toast. And yet in San Francisco today, I take a Waymo about once a week, and we can’t hire radiologists fast enough. So I have a chapter in the book about why that is, and it’s not just that the radiology lobby is really powerful. It turns out to be a harder task. But, you know, anytime anybody says, well, it can never replace a doctor because the stakes are really high and it’s a really hard thing: are you kidding me? Like, try making a left turn across Divisadero.
Eric 17:42
I think I remember from your book the timeline for that. That seems to be one of those jobs that’s constantly on the chopping block with technology but has been very resilient. But with AI, maybe like a 10-year timeline?
Bob 17:55
Yeah, I think in 10 years it’s not that there are no radiologists, but the radiologist’s work is massively facilitated by AI doing the first read and knowing when it’s a hundred percent confident that the read is right: radiologist, you don’t need to look at this. And for maybe 10% of the films, the AI says, I’m not sure, or this is a more complex film than usual; radiologist, look over my shoulder, see what you think. So I think that does happen for radiology and pathology. So the hierarchy here goes by the jobs. And, you know, does that mean there are no radiologists?
There are still interventional radiologists; there are still new techniques that get invented where there isn’t a database yet that can be fed into the AI. But I think the jobs that are at highest risk obviously are radiology and pathology. I’d say jobs like yours and like mine, where you’re mostly diagnosing and treating people, are at medium risk and will mostly be facilitated by AI rather than replaced. And then, you know, surgeons and colonoscopists I don’t think get replaced anytime soon, but absolutely enhanced. So right now, I mean, the da Vinci robots are taking in the data on 10 trillion surgeries a year.
Believe me, they are analyzing every bit of it, trying to figure out how to facilitate the work of a surgeon. And they will all say, that’s what we’re trying to do; we’re trying to be a copilot; we’re here to help. Your BS detector should go off, because their economic value would be greater if they could actually replace the doctor. I don’t think that happens anytime soon. But in colonoscopy, for example, do you need a gastroenterologist making half a million dollars a year to do your colonoscopy? Or could AI enhance the ability of a lesser-trained and lesser-paid person to do this thing just as well? I’m guessing it could.
Eric 19:46
Hey, wait. Can I ask one more question?
Alex 19:47
Yeah, go ahead.
Eric 19:48
Because of this idea of the human in the loop. From a technology standpoint, can you get rid of the human? That seems the easier lift. Humans also add trust. And for doctors, they also add, well, someone to sue. There is this, you know, oversight, which you write about in the book: a dystopian future where all we do is serve as the shield for liability. But where do you think the physician or the nurse or anyone falls into that human in the loop, from a trust, responsibility, or liability perspective?
Bob 20:27
Well, I think there are lots of different issues embedded in that. There’s the trust issue, which is that one of the things I write in the book is that people generally will trust other people more than they will trust pieces of software and companies. But this is really weird, because in essence we trust a who more than a what, but we’ve never in the history of the world had a what that seems like a who, that emulates a who, as well as this. And, you know, I trust Amazon, I trust Apple. I have put a lot of trust in technologies that empirically have demonstrated…
Eric 21:06
Do you really trust Amazon?
Bob 21:08
Well, to deliver my package safely, at the least cost, in the most convenient way, the next day? Absolutely, a hundred percent. Do I trust them as a company? Do I trust their motives? No. But, you know, it’s a capitalist system, and their motives have led them to deliver a product that does exactly what I want it to do, and does it well, and at a cost that seems reasonable compared to the alternative. So yes, I trust them to do this thing. I trust DoorDash to do its thing. I now trust Waymo to do its thing.
That is, I’m putting my life in the hands of a piece of rolling software. So we may be overrating the degree to which people need another human to do this thing that we call healthcare. And I’d be careful about assuming that people will always want the human to do that. Because they may want the human to do that, but then you tell them it’s going to cost them $300 an hour to get the human to do it, versus this other thing that’s going to cost $10 a month for your GPT subscription.
I’m not sure. I’m not sure. And the analogy I use in the book is to financial services and travel agents. There are still travel agents around; there are still financial people around to help you do some stuff. And you use them if you’ve got particularly complex needs, and you use them if you can afford it. But people care a lot about their money, and we’ve been willing to use technology instead of a human if we feel like it delivers what we need at a lower cost.
Eric 22:44
Yeah, I was really fascinated by the accountant piece in your book, because it does make a good analogy. If you don’t have a lot of complex financial needs, you don’t really need an accountant; you can just use TurboTax, like you mentioned in the book, and save a lot of money. And if you have a lot of complex needs, you can pay a lot more money for a lot more services, and that money keeps going up and up and up. Kind of like concierge medicine right now.
Bob 23:09
Exactly right. Exactly right.
Eric 23:10
But it’s a little bit perverse in medicine, because a lot of the time the people who have the most complex needs in medicine, very different from accounting, are people who have no money.
Bob 23:20
Exactly, exactly. So we will have to figure out whether there are technologies that can help with those kinds of patients and make your jobs more doable, you know? The healthcare system has all sorts of perversions built into it because of insurance and because of its cost. But before I forget, let me address a couple of the other things. I do think the malpractice system, the billing system: you need a human to put their name on a bill. You can’t have a bill signed by Gemini. For now, right now, those will keep the human in the loop, in some ways, longer than you might expect.
It’s like, if we don’t really know how to bill something if there isn’t a human doc signing off on it, if we don’t understand and haven’t figured out accountability, whether it’s regulatory accountability or malpractice accountability, that will keep the human in the loop longer than one might otherwise expect empirically. But the human in the loop is complicated. We will say to ourselves: all right, this is a safe system, the AI is right 97% of the time, and the human can weigh in at the end and catch those 3% of errors. First of all, we suck at that. We’re very bad at perpetual vigilance over a technology tool that we’ve learned to generally trust. Second, we will de-skill over time; we will actually get less good at that.
And then, you’ve probably seen these studies. There have been a couple of studies that have come out recently that looked at the human alone, and then human plus AI, and that was better, and then AI by itself, and that was better than human plus AI. So the human added mischief; it made things worse than they were. And there are going to be circumstances where the AI is so good that when the human weighs in, they serve only to screw things up. So trying to calibrate all of that and figure out how to get it right, I don’t know. But even on the liability issue: I’m on the board of a company called The Doctors Company, which is a big medical malpractice insurer.
And we spent a lot of time two days ago talking this through. Yes, right now, entities need a human to sue. But if you’re UCSF, you may say, you know, if I can get the AI to do the colonoscopy, or do whatever the thing is, and it’s ultimately safer than the human doing it, or as safe at a far, far lower cost, okay, they’ll sue me for buying the technology tool and using it, or maybe they’ll sue the technology company. The system is going to figure out who to sue, and healthcare organizations and insurance companies and others will make rational decisions about whether this thing is delivering the outcomes that we want, and whether it’s doing it at a significantly lower cost than the ambient system. And I think in many cases it will. And then, you know, malpractice will kind of sort itself out. Sure, there’s not a doctor to sue, but then they’ll sue the company.
Alex 26:28
I want to unpack a few things that you said. Maybe I’ll go back to the beginning. In your epilogue, you write about being approached to write this book by, I think, your publisher, and having some trepidation about that, in part because of the pace of change. And I wonder, I mean, we had dinner together with a visiting professor maybe six months ago or so, and at that time the book was essentially completed. And I wonder: was that trepidation substantiated?
Are there things that you wish you could have included in the book, or would add to the book, or would say to readers now? You know, at the time this is being published, well, this will be the beginning of 2026 when this podcast comes out, but we’re recording it at the end of 2025. Are there aspects of the way AI is integrated, or stories about AI and healthcare, that have shifted even in that short period of time?
Bob 27:28
Not really. That part actually worked out really well. And, yeah, when the publisher approached me, it was on February 20th or something, 2024, and I had just read a bunch of the first generation of AI books, which were mostly kind of, gee whiz, this is amazing and cool and we should be preparing for a world with this new kind of weird technology. And in part what I was thinking was: all right, what if another moment like November 30, 2022 happens, where a new version of it scales at some level, or has a step function at some level, so far beyond what we’ve seen that it makes everything I’ve said irrelevant? And I kind of made a bet that that wouldn’t happen.
And that bet has paid off. It took me a year and a half to write the book, so I got to watch the evolution over a year and a half, and there’s probably two or three pages, in an almost 300-page book, of new content. I’ve got to say something about DeepSeek, the Chinese AI company. I’ve got to say something about Epic getting sued by, you know, some of the startups. Some of the startups went out of business, and that says something about the ability of AI to sort of take over primary care. Stuff like the regulatory environment: I had to say something about the Trump administration and the change in the approach to AI regulation.
But those are minor; they were sort of incremental. You know, it hallucinates less than it did, those sorts of things. The general trajectory of what is happening, I think, is pretty well set. And I don’t see anything that’s happened in the last three years that is anything other than incremental, and to some extent predictable once you understand the grand narrative. And so I worried about it, and worried about getting scooped. I worried that, my God, some GPT-32 is going to come out and make all of this irrelevant. No. And I think in healthcare, well, people talk about it all the time: the biggest topic in the AI world, not the healthcare AI world but AI generally, is AGI. Is it going to achieve artificial general intelligence?
That is, it’s smarter than any human about everything. People ask me about it, and I say I don’t care. I mean, I guess I do, in terms of it building bioweapons or screwing up the planet in all sorts of other ways. But in healthcare, it’s already good enough that we, I think, understand the general contours: this thing’s really smart, smarter than any doctor in certain domains, not as smart as doctors or nurses in other kinds of domains. It can’t toilet a patient or do other physical tasks that we think are important. It can’t do surgery. And it’s going to get better and better and better. But whether it achieves AGI or not, that’s not going to be the issue. The issue at this point is implementing it in this incredibly complex ecosystem, with history and billing regulations and culture and zeitgeist and vibes and training programs and research and NIH and Trump and all that. Those are the questions. And I don’t think they have changed materially over two years.
Alex 30:40
Okay, I’m going to keep moving here through some of the things I want to unpack. I felt like in the book you were trying not to pick winners, and I don’t know if that was true or not. I’m interested in whether you felt that. There are probably people who will read this book and try to figure out which stocks they should pick, who’s going to win this race. But just a few moments ago I felt like you kind of picked a winner on our podcast, when you said you use OpenEvidence and you don’t use UpToDate, or use it very rarely, as you later explained. Yeah. And, you know, this is what I’ve heard as well: the residents are all using OpenEvidence.
Bob 31:20
And that’s how I figured this out. My daughter and son-in-law were residents in our program, our residency. A year and a half ago I had never heard of OpenEvidence, and they said, this is all we use. They are the canary in the coal mine; if you look up “canary in the coal mine in medicine” in the dictionary, you see a picture of our residents.
Eric 31:40
I actually reached out to the CMO of OpenEvidence, Travis Zack. And he reminded me he was one of my interns way back. Yeah.
Bob 31:48
No, he was one of our residents and faculty members, and he was helping to run the company. But that was the moment. And the reason that moment felt super familiar: about 20 years ago I wrote a textbook of hospital medicine and was really proud of it, and I gave copies to the resident bookshelf. I came in six months later, and it was pretty clear I was the first one to crack the spine of the book, which pissed me off to no end. And what had happened was this new tool came out, and it was called UpToDate. Yeah. And overnight it supplanted the textbook.
So, you know, UpToDate has recognized that it better do AI the way OpenEvidence has. They are doing that now. I think they’ll probably compete with them in an interesting way. But yeah, that won, at least in the short term, because it was so different and so useful.
Eric 32:37
It’s also interesting, because this all seems kind of a moot point. You have all of these great startups, these OpenEvidences. But then you talk about these larger behemoths, the winners and the losers of the past, like Cerner versus Epic. Cerner: loser. You have a great chapter talking about Oracle, which bought out Cerner. It’s Oracle Health now; I forget what Cerner is called these days.
Bob 33:04
Oh, Cerner’s now gone. But yes, they call it Oracle Health.
Eric 33:07
Oracle Health. And it seems like, I mean, this is the history of America right there: you have these very large corporations. I briefly looked at the Oracle Health website to see what it looks like, and it seems like their new system is everything. It’s decision analysis, you’ve got your scribes, you’ve got everything. You’ve got something like OpenEvidence, which isn’t OpenEvidence, but you have that built into their system. Is that the future, where you just have these large companies that do everything?
Bob 33:40
And I don’t know whether it’s the large companies that do. I mean, getting at Alex’s question, I didn’t pick winners or losers. I tried not to, because I have no idea. What I do is lay out sort of the megatrends, the politics, the money, and I try to give readers a sense of how this works. I think Epic is a huge winner here, because of what they own. If you have Epic embedded in your desktop today, Oracle would have to be 10,000 times better for a place like UCSF to pull out Epic and switch to a whole different underlying system.
And yes, Oracle has redone the whole Cerner chassis to try to do something that’s sort of AI native. It’ll be interesting to see how well it works. But to me, how much better is that than Epic building a bunch of AI tools, which it’s doing? Or maybe, at the end of the day, for something really complicated like what OpenEvidence does, which is sort of the decision support where if you screw up you can kill somebody, maybe Epic doesn’t build it. Maybe Epic just embeds OpenEvidence, or a tool, or a new version of UpToDate in it. That is clearly where this all goes.
I mean, when I was on the wards two weeks ago, I would pull out my phone and dictate into my phone, being careful about HIPAA, the key facts about a given patient. And it’s stupid, because all of those facts were actually in the EHR note. So in the future I shouldn’t have to say, this is a 92-year-old woman who comes in with a pulmonary embolism but has a recent GI bleed, what should I do? I should be able to say, Epic or OpenEvidence, what should I do with this patient? Because it’s read the note, it did the scribing, and it’s read the past history. It knows everything.
And not only should I not have to say, what do you think is going on, it should say to me periodically, doctor, it seems like you think this patient has pneumonia, but the patient still has a fever on day six. Are you sure? Because I’ve looked at the last 10 million patients like your patient, and in 97% of them the fever was gone. What do you think is going on? I can’t see how that’s not where this goes. And whether Epic has built the whole thing or Epic is embedding tools that are built by third parties, I think that’s where the big business tension is.
Alex 35:55
Yeah, our audience is really interested in care of older adults, people with serious illness. How good are these tools for the patients I care for? I remember Eric did us a service a few years ago, where he went through Wikipedia to see what the treatments for advanced lung cancer were, and palliative care was not mentioned. So he did a lot of work to get them to change that.
We recently used OpenEvidence and put in: you have a 79-year-old woman who’s been getting mammography, she has this, that, and the other condition and this, that, and the other physical impairments. Should she continue to get screening mammography? And we were delighted to see that it mentioned ePrognosis in its answer, and that it depends on the lag time to benefit and her prognosis.
Bob 36:41
But you thought the answer overall was a good answer?
Alex 36:43
We thought it was a good answer, yeah.
Eric 36:48
I think the thing it struggles with is how you piece it together for the individual patient in front of you, along with their goals. That’s really the next step, and that’s kind of what’s going on in the physician’s brain: not just what the evidence is, but individualizing it to the person in front of you. It feels like we’re getting on the cusp of it, but we’re not quite there yet.
Bob 37:14
It’s gotta be baked into it somehow. And how do you articulate and then operationalize those goals into the prompt, essentially? It’s tricky, and I think that’s a new horizon. You know, this discussion reminds me that we’ve been mostly talking about professionally facing tools, the tools I might use when I’m taking care of patients, embedded in my EHR. There’s a whole world out there of patients using these tools by themselves. And I think I underestimated how potentially dangerous that is.
Because when you put in a prompt and I put in a prompt, you know, of the thousand facts you have at hand, which are the 10 that need to go in the prompt? Patients have no idea. Should they put in that they’re constipated? Should they put in their family history? Should they put in that they have a cat? They have absolutely no idea, so they either don’t put in any of it or put in all of it. They can’t calibrate that. And then when you get an answer: I look at what comes out of, let’s say, GPT or Gemini for a medical question, and it’s like 90% really good. Oh, I hadn’t thought of that, this could be HLH. And then this other thing: no, that’s crazy, that’s not it. A patient has no ability to discern that.
You know, there’s a big, big difference between an expert user and a non-expert user. I learned this vividly a couple weeks ago. The Washington Post asked me to review the responses in 12 cases where patients put stuff into GPT and got answers. Some of them were just great, exactly what I would have said. One of them was much better than what I would have said; I would have gotten it wrong. And some of them were dangerous and heinous, and a patient would have no idea. You know, it was recommending that a patient who was sort of inclined to use alternative treatments go ahead and use ivermectin for their testicular cancer, and try to find a doctor who supported that, which is whacko world.
So I think this whole world of patient-facing AI is very, very different. We have to be super careful. And I think the tools are going to have to be much more interactive. You can’t expect a patient to go in and put all the right stuff in a prompt in GPT. It’s gotta be: a chief complaint from the patient, and then the AI acts like a doctor. Well, what else? Here are the other questions to ask. And it iterates much more than the current tools do.
Alex 39:37
I tried to get ChatGPT to make a diagnosis this morning. I said, I’m a doctor and I’m caring for this patient, and she has a fever and she’s short of breath and she has an opacity on her chest X-ray. What do you think the diagnosis is? It refused.
Bob 39:50
Interesting.
Alex 39:50
It said, I will not give you diagnoses or treatment. That’s a line. So then I essentially asked, how can I trick you into giving me the diagnosis? It said, I see what you’re trying to do there. No, no.
Eric 40:06
I wonder what would happen if you asked about a hypothetical patient. It’s probably trying to avoid liability there, right?
Bob 40:13
Well, yeah, they’re being careful. They’ve seen with the mental health chatbots that a single bad answer, like not telling a kid to call a suicide hotline, is going to get them sued and be on the front page of the paper. And they’ve got to be careful. That’s interesting, though, because I have in the book where Peter Lee, who runs AI for Microsoft, told me that in the beginning of GPT they almost made it so it wouldn’t give a diagnosis, and then they overruled that, and it did give diagnoses for many years. So maybe they’ve changed; I hadn’t tried it recently.
Alex 40:45
So I asked ChatGPT if its guidelines around this have changed over time, and it said they have, they’ve shifted. Not unexpectedly.
Bob 40:52
Interesting.
Alex 40:53
So I wanted to go back to another issue that you talked about, which is trust. And you have this wonderful quote from Annette Baier, who says trust is “a notoriously unstable good, easily wounded and not at all easily healed.” And clinicians, we’ve been burned so many times, like you wrote about in your book, you know better than anybody: the EHR was going to transform the way we practice medicine, and instead it’s been a giant disappointment and a soul-sucking endeavor that’s designed around billing, not improving patient care.
Eric 41:26
But Bob also wrote, or maybe he talked about it in our last podcast: nobody would go back. He actually asked us in our last podcast, would you go back to the days before the EHR? Nobody would go back.
Bob 41:38
Nobody would have gone back. But that said, I think Alex is right. It’s clear most people are not huge fans of their EHR in the way that they love the other technologies in their life. So we got a bunch of stuff wrong, and there’s stuff we’ll get wrong here too.
Alex 41:53
And there’s stuff we’ll get wrong here too. And, you know, I remember the halcyon days of med Twitter, when we were all big into Twitter. Bob was huge on Twitter. Huge. And now, I think in our last podcast you described it as a cesspool, with these oligarchs using it as their plaything and their microphone for whatever they want to say. So for clinicians who are feeling burned by these experiences, and maybe us included, you come down with this optimism, you call it a giant leap. Is there a world in which, five years from now, you feel like, I should have titled it A Giant Letdown?
Bob 42:30
I don’t think so. But here’s, I think, the difference, Alex. If you think about how Netflix transformed the way you get entertainment, or Amazon transformed the way you buy things, it’s not like the world of bookstores or the world of network television or movies was so broken that everybody said, this whole thing is failing, we need a technology. It’s just that these technologies came out and consumers voted with their feet: these are better or more convenient or less expensive or whatever it is, so we’re going to use this thing.
It makes my life better. Health care is not like that. Health care is so fundamentally broken; the existing status quo is awful. And part of my optimism is we can’t get much worse. That’s not a particularly uplifting thing to say. But part of my optimism is we start out in a world where you have wonderful doctors and nurses and social workers, you have patients in vast need, you have new medicines for almost everything and amazing surgeries and procedures that can help. And none of it works. None of it is convenient or satisfying for either patient or doctor. It’s wildly expensive, it’s wasteful, it’s soul-sucking and sapping in all sorts of ways.
So we start with a situation where the need, the pull, is so much greater than in other industries. And that’s why I’m optimistic. I’m also optimistic because I think we’re being smarter about it. You know, here at UCSF Health, we have a way of governing what tools we bring in that’s much more robust than what we had 10 years ago. The technology vendors now know they better have docs and nurses working side by side with the engineers. Somebody who just created this wonderful technology for financial tech or travel doesn’t know anything about healthcare; we better have healthcare people there helping to build these tools. And I think we’re also gonna be smarter.
You start with stuff like scribes and chart summarization and can you help me write my discharge summary, can you help me write my prior auth. And that is how you build trust. Where the rubber’s gonna meet the road is as the tools get more agentic and do stuff themselves, and as they get more prescriptive and begin saying, here is the right treatment for this patient in this circumstance. There’s gonna be some pushback, because, you know, should it prescribe an Alzheimer’s medication that may slow the progression down by three months but costs 50,000 bucks and has a 2% chance of causing a brain bleed? Is that a good thing or not? I would trust what you would tell me about that, but what a computer would tell me about that? That decision is so value-laden, and as Eric said, one of the most important values is the patient’s values, which it has no way of embedding.
Eric 45:22
You also talk in your book about corporate bias and other biases too. It’s fairly easy to adjust an AI algorithm to favor a particular interest.
Bob 45:33
Potentially. If it’s a health system bringing in an AI and it’s going to get paid more for treatment A than B, or if it’s being paid on a population basis, the less expensive thing will probably be what the tool recommends. So there’s a lot of mischief. And then OpenEvidence: right now it’s free to doctors. Why? Because there are ads. When a tool like OpenEvidence gets embedded in Epic, are there still going to be ads? That sounds creepy. But maybe sponsored results. Sponsored results, exactly. Maybe.
You know, here’s the recommendation generally, but here’s a sponsored result for this really wonderful, very expensive new med. So there’s a lot going on under the hood that could lead to some disappointment. But over the time horizon, at least the one I can see, I think the needs of the healthcare system are so immense and unmeetable with our current strategy, which is always just to hire more humans. We can’t afford them, we can’t find them. And I think the technology is good enough that I came out of it pretty jazzed. I think it’s going to be good.
Eric 46:33
Okay, lightning round. Lightning round, Alex.
Alex 46:35
Yeah, lightning round. You have a beautiful ending to this book that kind of gets back to the reason, well, it’s not a whodunit. So is there any chance you would read some of the end of your book, or do you feel like that’s giving it away? I don’t know if you have it in front of you.
Bob 46:53
Sometimes when I speak, I’ll read it. But I don’t have it in front
Alex 46:57
Of me. Because what this gets…
Eric 46:59
To is Alex missed the concept of lightning.
Bob 47:04
Alex, you want me to read 200 pages? I’m happy to do that, but I’m actually taping the audiobook next week, so we can just have that be one big screen.
Alex 47:13
Well, maybe we’ll summarize it then.
Eric 47:15
Summarize it, Alex.
Alex 47:16
And the idea that there will always be a doctor, that patients will need a doctor, maybe not always, but that our patients will need a doctor. And the reasons that, at least in the near term, in the interactions that we have, there needs to be a doctor. And I wonder, in a lightning-round way, as my last question: why do you say that?
Bob 47:38
Yeah. When I was on the wards about a year ago, for about 10 days I did this fantasy exercise where I just tried to use AI for everything and really kind of guessed what it would be like in five years. And I came to believe that it would be really, really good and probably save me 10, 15% of my time. But so much of what I was doing was coordinating and dealing with patient preferences and family meetings and dealing with complex shifting teams, and empathy and trust. It seemed to me that these tools are not going to replace that in the foreseeable future, and they shouldn’t. Fifty years from now, who the hell knows? But I won’t be around.
Eric 48:18
It’s interesting, because Bob Wachter, amazing doctor, reminds me of Gurpreet Dhaliwal’s New England Journal clinical case, where he faced off against AI. We’ll have a link to that in October. And Gurpreet did amazing. He diagnosed a toothpick that caused an ulceration, which caused sepsis, better than the AI. But the AI did really well on that too. That’s how I’d sum it up. But there are a lot of bad doctors out there too, maybe not great, not empathetic.
Bob 48:49
Well, I’d say one of the lines I love, and use a couple times in the book, is Joe Biden’s old line: don’t compare me to the Almighty, compare me to the alternative. And even when we talk about chatbots going off the rails and doing something terrible, I think in that same conversation, as we talk about how we need to regulate them and make them better, you gotta say: try to find a psychologist or a psychiatrist in San Francisco, and if you do, it’s three or four hundred bucks an hour. And try to find your geriatrician.
Eric 49:18
You know, the one thing a doctor has is a fiduciary duty, an oath: the patient is supposed to be primary. That is your person; you have this duty to the patient. AI bots don’t have that. Should they?
Bob 49:32
I think that’s why I’m more trusting of an AI that’s brought into an existing healthcare organization and working under the supervision of a credentialed professional. For now.
Eric 49:41
Yeah.
Bob 49:42
And where this really does get dicey is when you’re talking about direct-to-patient stuff. I think there they probably are going to need either some version of the Hippocratic oath or a regulatory environment that gives them accountability for doing things right.
Eric 49:56
Bob, I want to thank you for being on this podcast. I encourage all of our listeners to get his book, A Giant Leap. We’ll have a link to it in our show notes. But before we leave, a little bit more of… what was the song’s title again?
Alex 50:09
Handle with Care. Here’s a little bit more of Handle with Care.
Alex 50:16
(singing)
Eric 51:07
Again, I encourage all of our listeners to check out his book. You can learn why AI may be a sycophant, and other great things, from Bob’s book. Bob, thank you for being on this podcast.
Alex 51:17
Thanks.
Bob 51:17
Thanks. It was really fun.
Eric 51:18
And thank you to all of our listeners for your continued support.
This episode is not CME eligible.
For more info on the CME credit, go to https://geripal.org/cme/