
A MEASURE OF MEASUREMENT DISCONTENT

One Clinician's Journey Through a New World of Unintended Consequences

Lisa Rosenbaum, MD, is a cardiologist and Robert Wood Johnson Clinical Scholar in residence at the University of Pennsylvania, and a Senior Fellow at Penn's Leonard Davis Institute of Health Economics (LDI). Her articles on health care issues have appeared in The New Yorker and other mainstream media publications.

For those of us RWJF scholars who straddle the worlds of clinical medicine and health services research, it gets increasingly difficult, as our clinical time steadily wanes, to remember why studying the delivery system seemed imperative in the first place. But this year's AcademyHealth National Health Policy Conference panel, "What Are We Doing Wrong? Measurement's Unfulfilled Potential," made me think about how it all began.

Five years ago. I'm a first-year cardiology fellow in New York. A patient arrives from an outside hospital in the middle of the night. He is having a myocardial infarction, but is also in cardiogenic shock. We take him to the cath lab and find that he has three-vessel disease, and would thus best be treated with coronary artery bypass grafting (CABG). I call the surgeons. They don't want to touch him -- arguing that he is unlikely to survive surgery.

Conversations between the interventional cardiologists and the surgeons continue for a few days while he remains "stable" on an intra-aortic balloon pump, but in shock. At the family's continued urging to "do everything," the interventional cardiologists agree to attempt percutaneous coronary revascularization.

He gets three stents and leaves the hospital a week later. Others don't. Too sick, it is concluded, to be saved by our interventions.

Improvement through transparency
In 1989, New York State began publicly reporting the mortality rates of its cardiac surgeons, and in 1995 the effort was expanded to include percutaneous coronary intervention (PCI). As I began to sense the influence of this measure on our behavior, I noticed that the literature confirmed my sense that this attempt to improve outcomes through transparency had wrought unintended consequences. Physicians, it seemed, perceived racial and ethnic minorities to be at higher risk, and were thus turning them away.

Several other studies highlighted similar harms wrought by the policy, including decreased access to PCI and an outflow of high-risk patients to places like Cleveland, where no such reporting existed. While we all know that we should not expose patients to interventions unlikely to offer benefit, when it comes to coronary revascularization it is eminently clear that those who are most ill often benefit most. And so I wondered: Were we choosing wisely, or wisely choosing?

Five years later, I did not expect the panelists to focus specifically on report cards and their impact on cardiac care, but I was eager to better understand how other attempts to measure performance have played out, and how these experts proposed to do it better. But although Tom Lee, chief medical officer of Press Ganey, asked the panelists -- Christine Cassel, Patrick Conway, Suzanne Delbanco, and Ashish Jha -- to comment on the biggest disappointments of performance measurement, it was not all bad news.

Process measures that backfire
Though there was acknowledgement that some process measures -- like timely administration of antibiotics for pneumonia, which has led many patients who do not need antibiotics to receive them -- have backfired, the panelists noted that, in general, process measures have been a success. Though the link between process measures and outcomes remains somewhat tenuous, there are data to suggest that hospitals that excel on these measures also have better outcomes for common conditions such as pneumonia, congestive heart failure, and acute myocardial infarction.

But as the panelists acknowledged, outcome measures are a different beast, and we have much to learn about how to implement quality measures built around outcomes such as mortality rates. Some panelists suggested we need new types of outcome measures, including ones that are more patient-centered or that better capture "patient suffering." Jha also pointed out that we have focused almost exclusively on inpatient measures, whereas there is likely tremendous room for improvement in post-acute and long-term care facilities.

Finally, there was general consensus that our current incentive schemes are lacking, and that part of the inadequacy of pay-for-performance may be that incentives of 1-2% are simply not large enough.

Incentives gone awry
And just as I started to squirm with the not-too-distant memory of incentives gone awry, Jha said something that gave me pause. "Unintended consequences are OK. Let me say that again: Unintended consequences are OK." He went on to explain. "Any intervention that changes things will have unintended consequences, especially bold changes. The question is: Are the unintended consequences worth it?"

Are the unintended consequences worth it? How do we decide?

Of course, on some level, the decision involves a weighing of the data on benefits and harm. Jha makes the point, well described here, that, like drugs, measuring performance will have side effects. The analogy not only shifted my thinking about how we evaluate performance measurement, but also forced me to reckon with my own hypocrisy.

In my research on medication non-adherence following myocardial infarction, I have noticed that when patients with heart disease talk about medications, they often fail to weigh benefit against risk. Rather, for those who have some preexisting aversion to these medications, side effects loom large, and benefits are simply not perceived. When it comes to understanding medication non-adherence, I have been quick to dismiss this failure to recognize inherent tradeoffs as irrational. And yet, when it comes to weighing data about performance measurement, aren't I guilty of something similar -- quick to dismiss potential benefits as I leap to point out unintended consequences? And if so, can I overcome my visceral aversion to the pursuit of performance measurement so that I can rationally evaluate the data?

A lingering memory
Right now, I can't. Perhaps it's the lingering memory of a night on call, holding pressure on a woman's groin as she nearly exsanguinated from the balloon pump we had used to stabilize her while we waited, too long, to decide who would perform the high-risk intervention she needed to live. Or perhaps it is the fear I can still see in the eyes of doctors I admired deeply -- doctors unwilling to let these patients die with nothing, but whose careers were clearly at stake if they did something.

Of course, performance measurement is about so much more than public reporting of cardiac outcomes in New York State. But I have come to realize that my lingering distaste has less to do with any one quality improvement effort, and more to do with the act of measuring performance itself. Something fundamental to the act of doctoring is at stake, and I can't help but wonder whether each effort to improve performance, however well intentioned, just further shifts the calculus away from doctor and patient, toward doctor and documentation. If we define quality by mortality rates, readmissions, patient satisfaction scores, or lengths of stay, then we will pursue excellence in those measures. But is this what quality is really all about?

Falling through the cracks of our data
This is not a question I can answer in a short post, but I wanted to at least raise the issue. I am not suggesting that we, as health services researchers, stop doing all we can to study any given policy and its impact on metrics, like mortality rates, that of course matter. But I am suggesting that we push ourselves to consider what falls through the cracks of our data. Our feelings are not always rational, but sometimes they remind us of the importance of what we can't count.

~ ~ ~
