
Consider the following scenario:

A previously independent 80-year-old man develops a catastrophic illness.  His mental status is poor due to strokes during his illness, and it is unlikely he will recover the ability to meaningfully interact with the world.  However, he also has severe lung disease, and he needs a tracheostomy with the ultimate plan of transfer to a long-term ventilator facility.  It is unlikely he would ever leave the vent facility.  He has no family members and no friends.

Should he get a trach?

In most facilities, my sense is that the answer is yes.  The logic is that unless we have clear and overwhelming evidence that a patient does not want a specific intervention, death is always the worst outcome and thus must be avoided at all costs.

However, I wonder if this logic is flawed.  If we have 100 patients in this scenario, it is very possible that some of them would want the trach.  But it also seems quite likely that most would not want the trach, feeling that it would prolong a life that they feel has no value.  Our current system implicitly suggests that keeping someone alive in a state they don’t ever want to be in is not nearly as bad as allowing death for someone who values life regardless of their ability to interact with the world.

For me, both errors feel equally egregious.  And the current system which implies that any death is worse than keeping someone alive against their will feels one-sided.

Consider an analogous situation from a very different field:

When a new employee elects to save for retirement, the default option is often a low-risk, low-return money market fund, which experts agree is the wrong answer for the vast majority of new (mostly young) employees.  The low-risk option became the default because it was safest for employers: it prevented lawsuits from employees if a default high-risk option lost money.  However, this overly “safe” default contributed to the very predictable problem of most people not having enough retirement savings.

There are increasing efforts to try to create defaults that are more likely to be better choices for most people.  So, a new employee of a certain age will be given a default that is tied to their most likely retirement date (“life cycle” fund).  For young people, these are likely to be more aggressive funds that have a high proportion of their money in stocks.  It is likely that some young people who are very risk averse will end up in a retirement fund that embraces more risk than they are comfortable with.  However, by using what is most likely to work for most people, I believe that this new approach is probably helping more people than it hurts.

Coming back to medicine, I wonder if a “reasonable person” standard would be better than our current default of doing something unless there is clear evidence that a patient doesn’t want the intervention.  Instead of saying, “If we don’t know whether he’d want this, we have to do it,” how about “What would most people in his situation want?”  That way, the burden of evidence is not on one side to show he would not want this, but on both sides to get a sense of what this patient would most likely want. In a situation where we don’t know what an individual patient would want, it seems reasonable to assume that he is an “average” person who would want what most people want.

Just as having an overly safe default retirement option leads to predictable problems, I think our current paradigm of doing everything unless we have clear evidence to the contrary is likely leading to the predictable error of sustaining lives that many patients don’t want. A more nuanced approach that asks what most people would want would decrease these errors.

by: Sei J. Lee

This Post Has 4 Comments

  1. Another potentially useful approach would be an intensive attempt to contact people who knew the unbefriended person to learn as much as possible about their life outlook and goals of care, and use that information in a structured way (e.g. ethics committee) to determine how to proceed in the patient's best interest. This would require substantial investment of resources, but then again it is resource intensive to indefinitely keep people institutionalized with advanced interventions in contradiction to what they would have wanted.

  2. On the surface, this approach makes sense, but my gut feeling is that making decisions according to what we think “most people” would want is not a very ethically sound decision-making model. I think these situations can be navigated in a more appropriate way if we consider whether the interventions that are being proposed/conducted are ethically appropriate (i.e., do they cause good and not harm for the patient, do they protect whatever autonomy the patient may have, are we treating the patient with justice/fairness – and by extension, is the patient’s use of healthcare resources just/fair to the greater community around them?)

    I agree with Dr Steinman in his above comment. These types of patients often can benefit from the structure of an ethics consult.

  3. Thanks Mike and Stephanie.

    First, an admission. The presented case is an adaptation of a case that did go to the ethics committee. We tried to find folks who knew him, but sadly we weren't able to find out much about his outlook on life.

    Second, I share Stephanie's concern about trying to figure out what we think most people would want. But, research could actually start to inform how folks at various ages value (or do not value) certain life states.

    Finally, I agree in theory with the framework that Stephanie proposes, but the problem has been that the framework (as it is often applied) has a strong default. The default has been that death is always the worst outcome and barring clear evidence to the contrary, we need to avoid death at all costs. As medical technology has advanced, we are able to keep people alive indefinitely in states that would've been unimaginable 50 years ago ("hooked up to machines"). My sense is that our ethical framework has not evolved to account for the possibilities raised by advancing medical technology.

  4. Thanks for posting this, as it is very helpful to read through your thoughts. In fact, utility theory research has shown that there likely are conditions (i.e., health states) that are worse than death. Unfortunately, these likely differ by demographics and are impossible to predict for any given individual scenario. It reinforces the need for PCPs (and other physicians) to have real discussions with patients about preferences, and not in a "do you want CPR or intubation" kind of way. If we, as physicians, had more meaningful conversations with patients, then this issue would be much less prevalent. Alas, the health care system is not set up to incentivize prioritization of those discussions, so they are generally avoided.
