Monday Jun 24, 2019
Data in (Direct) Primary Care: Striking the Right Balance
We hate data. Taken on their own, numbers are benign, but when we hear the word "data," physicians are reminded of a litany of related issues that make our lives far more difficult:
- Unnecessarily complicated payment schemes.
- Electronic health records so swamped with noise, alerts, risk scores and notes copied forward that we legitimately can't tell what's going on with the patient.
By virtue of our training and the principles that guided the decade we spent obtaining it, we are empiricists by nature. It's sad to think that to many of us, data equals garbage rather than better care.
How in the world did we get here?
I'll go into that history in more detail during my presentation June 28 at the DPC Summit in Chicago,(www.dpcsummit.org) but it is likely that this love/hate relationship with data started a couple of decades ago when computing entered the realm of health care.
Briefly: Players in the health care system -- insurers, regulators, hospitals, and even physicians -- became enamored with the possibilities that data presented, so we started collecting it. At some point, everyone lost sight of the why behind data collection, and data started being weaponized against physicians. Our paychecks were tied to data entry in the form of a unique currency cryptically called "RVUs," and our compensation from public payers now revolves around self-selected metrics for quality. Our EHRs demand data and suck it past the event horizon into the EHR black hole, never to be meaningfully used again.
Data has become a four-letter word.
With all this data collection, we're also pummeled with messaging that implores us to provide "high-quality" and "value-based" care. But -- get ready for this -- we don't know how to meaningfully measure quality in primary care.
No joke. We really don't.
As a group and an industry, those of us working in primary care have not come up with a standardized, effective way to measure or prove quality.
I didn't think about it critically until well after I started my direct primary care practice in 2016. I was desperately trying to ensure I was playing by the rules and providing high-quality, high-value care -- and I wanted the data to show it. But when I went digging, I realized that I couldn't find any empirical guidance as to what I should be measuring.
The problem with measuring the quality of any given primary care physician or clinic lies in the vastness of the services provided and the nonlinear pathways patients follow. Primary care is, in essence, a nonlinear process. Compare this with a discrete, linear episode of care such as a knee replacement.
The March/April 2017 issue of Annals of Family Medicine contained a fantastic article detailing this problem.(www.annfammed.org) The authors outlined the following example: "... in primary care, even though many patients say they are willing to undergo colon cancer screening when asked by their physicians, uptake of recommended screening is low, often measured at less than 50% of eligible patients. Even in primary care centers of excellence, chronic disease targets are met and sustained less than 50% of the time, despite extra resources, such as health coaches.
"The impossibility of achieving 100% uptake makes it much more difficult to draw a summative conclusion about which primary care practices are providing high-quality care when contrasted against elective surgeries where nearly 100% compliance with preoperative antibiotic guidelines could reasonably be achieved."
Herein is the problem: Traditional quality paradigms -- the ones that have guided the past several decades of checkboxes and clicking and reporting -- assume that "there is a definite and measurable right answer in a given situation. In contrast," the Annals authors wrote, "primary care physicians often deliver high-value care by doing the best they can with the patient care card they are dealt, knowing that perfection will never be achieved."
This is known as the "paradox of primary care,"(www.annfammed.org) as outlined a decade ago by the editors of Annals of Family Medicine: "Compared with specialty care or with systems dominated by specialty care, primary care is associated with the following: (1) apparently poorer quality care for individual diseases; yet (2) similar functional health status at lower cost for people with chronic disease; and (3) better quality, better health, greater equity and lower cost for whole people and populations."
In plain English: Although primary care may miss targets on specific disease metrics and data points, we accomplish the triple aim of improving population health at a reduced cost through a better patient experience -- but only when you take a look with a wider lens and back off from the minute details.
Before you scoff and write me off as passé because I said "triple" and not "quadruple" aim -- that was intentional. Traditional, fee-for-service primary care is still struggling to take care of its physicians. And I don't care how many burnout prevention sessions a hospital hosts, I can't deep-breathe, yoga or meditate the burnout out of me. To me, that's rubbish and blames the victim of an abusive system. The system that turned me into nothing more than a cog in the wheel, churning through visits, is what burned me out and sparked my interest in DPC. The health care system has created robots out of the caring professionals who need agility, autonomy and agency to make the right decisions for the complex humans sitting in front of them. It turns us into data processors. I'm over it.
We need a better way to think about data in primary care if we're going to keep physicians healthy -- which keeps our patients healthy.
And this is where it gets exciting. (I hope.)
In direct primary care, there are no arbitrary data-input rules. This means that we have the opportunity to write the new rules on data.
We have a blank canvas. A clean slate.
So where should we start?
I propose three new rules for metrics and data reporting in primary care -- and maybe all of medicine.
Measure Data for Ourselves and Our Patients
Physicist Richard Feynman, Ph.D., once said, "You must not fool yourself, and you are the easiest person to fool." One reason we need to get smart about data and measuring what we're doing is because we have to protect ourselves from ourselves.
To start, it helps to put parameters around best practices in tracking metrics. A 2014 article in the Annual Review of Public Health(www.annualreviews.org) provided the following guideposts:
- Metrics cannot be punitive.
- Metrics should be used to "foster reflection, experimentation and assessment" to "advance knowledge, healing and health."
- Metrics are most useful in "environments that enable individual reflection … and have supportive systems for shared rapid-cycle learning, deep remembering and collective action."
The good news? As a DPC doctor in a clinic led by physicians engaged in the betterment of both the business and our patients' health -- these tenets are intrinsic. Conveniently, we have nobody to overtly punish (No. 1); rather, we're looking at our metrics as a means to foster reflection and experimentation and to test our assumptions about the care we provide (No. 2). And nationally, we're at the point that as individuals contributing to a larger group, we are developing the scale to share our experiences and quickly adapt and change what we're doing (No. 3).
Metrics Should Never Disrupt Flow
Because we are measuring our metrics and data purely for the betterment of the patient experience -- and we know that intrusions into the exam room diminish the patient experience -- all of our data collection must be within the flow of a physician's thinking and healing process. I'll talk more about this at the DPC Summit and will engage our technology partners on this front.
Metrics Can -- and Should -- Be Retired Over Time
If a metric flies in the face of rule No. 1 or No. 2, it must be reconsidered as a valid metric to pursue -- and it warrants a closer look. Does this improve the patient experience? Does it make the physician experience better or worse? Who is the data collector? For whom are we collecting data?
After all, as noted in the paradox article mentioned above, "When a metric becomes a target in itself, it ceases to be useful."
(I should note here that there is interesting data about pay-for-performance that I'm not going to get into in detail, but it shows that when you incentivize doctors to achieve certain targets or metrics or to report certain data or file in a certain way, doctors and health systems do it. But they perform the processes to prove the metric more than they actually move the needle on the metric/outcome that's supposed to be measured. In other words: Data reporting becomes a game, and we're quite good at playing games. Pay-for-performance turns into payment for those who know how to report better. There's a great article about the Quality and Outcomes Framework(www.annfammed.org) out of the United Kingdom that demonstrated this in a harrowing, expensive way.)
With these three rules as guideposts, what should we measure? How should we measure it? And how can our technology partners facilitate this process?
I don't have the answers -- yet -- but I've got a few ideas I'll be sharing at the DPC Summit.(www.dpcsummit.org) More than anything, I want to start the conversation. Join me this week at the summit or follow the event's livestream(www.dpcsummit.org) (the livestream link will go live when the summit begins). You can also keep the conversation going on Twitter @Dr_A_Edwards,(twitter.com) or share this article to start your own conversation.
We're all highly intelligent empiricists. Let's start acting that way.
Allison Edwards, M.D., founded and cares for patients at Kansas City Direct Primary Care;(www.kansascitydirectprimarycare.com) provides locums coverage at rural hospitals with Docs Who Care in Missouri, Kansas and Colorado; and is volunteer faculty at both the University of Colorado and the University of Kansas. You can follow her on Twitter @Dr_A_Edwards.(twitter.com)
Posted at 04:10PM Jun 24, 2019 by Allison Edwards, M.D.