It's High Time We Took the Tech Bull by the Horns

    September 13, 2021, 3:59 p.m. — It was my first New York State AFP Congress of Delegates as an attending. The NYSAFP had invited Timothy Hoff, Ph.D., professor of management, health care systems and health policy at Northeastern University in Boston, to speak about his new book Next in Line: Lowered Care Expectations in the Age of Retail- and Value-based Health. (I picked it up recently and man, if you want some dystopian bedtime reading, forget The Handmaid’s Tale and read this instead.)


    In the book, Hoff expounds on the systematic deterioration of the patient-physician relationship in this time of retail and so-called value-based care sold to us with fancy words like “disruptive,” “efficiency,” “holistic care” and “innovation.” He argues that no matter how much we want to go back to the time of the idealized Norman Rockwell version of family doctors, that era is over. He also contends that unless we, as physicians, prioritize and advocate for relational care above all else (lifestyle, salary, etc.), there is no chance that the patient-physician relationship will survive the consumerization and technologization of health care.

    You can imagine the effect his presentation had on me. I went from bright-eyed, bushy-tailed recent graduate to tailspinning into the abyss of an existential crisis. I watched as doctors with 30 years of experience angrily, defensively and fearfully demanded answers about what this meant for family medicine and the patients they served.

    I walked out deflated. But the dread I felt seemed to offer two palpable choices: Resist the coming tsunami of digitization, data sets, machine learning, direct-to-consumer products, and the commodification of health and medicine, or run toward it as fast as possible with all the passion and philosophy of a family physician to break the wave and shape it into something truly revolutionary.

    It’s been a slow process. Although I am what you might call a native technology user (I grew up with computers, the internet, cell phones), I am still a tech novice. It wasn’t until recently that I understood that “artificial intelligence” is nothing more than a programmed algorithm that requires vast data sets, statistical analysis, and positive and negative reinforcement of right and wrong answers. For those reinforcements to occur, someone (a human being) needs to monitor the software program and correct it as it “learns.” A well-designed algorithm can lead to automation of daily tasks that are otherwise rote. Having the equivalent of Star Wars’ R2-D2 accompany physicians throughout the patient’s care journey could substantially minimize redundancy in care.
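
    As a toy illustration of that feedback loop (not any real clinical AI product), the sketch below "learns" a cutoff for a made-up test score by nudging it each time a human-supplied label marks its answer as wrong: the positive and negative reinforcement described above. All names and numbers are invented.

```python
# Toy sketch of supervised "learning": a cutoff for a made-up test score
# is nudged each time a human-supplied label marks the answer as wrong.
# All data and names here are invented for illustration.

def train_threshold(examples, threshold=0.0, step=0.05, epochs=20):
    """Learn a cutoff separating 'abnormal' (1) from 'normal' (0) scores."""
    for _ in range(epochs):
        for score, label in examples:
            prediction = 1 if score >= threshold else 0
            if prediction != label:
                # Wrongly called abnormal: raise the bar.
                # Wrongly called normal: lower it.
                # This correction is the "reinforcement" step.
                threshold += step if label == 0 else -step
    return threshold

# Hypothetical labeled data: (test score, correct answer from a clinician)
data = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
cutoff = train_threshold(data)  # settles between the two groups
```

    The point is only that the "intelligence" is a parameter adjusted by human corrections; real systems do this over millions of parameters and examples, which is why the monitoring described above matters.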

    As a technology novice who would love to have a friendly, yet sassy, robot sidekick, I see four major challenges to widespread adoption of artificial intelligence/machine learning in primary care: lack of transparency, cost, lack of trust among clinicians, and lack of trust among patients.

    Lack of transparency is the biggest psychological barrier to clinical uptake of these technologies. Few physicians have an informatics or software design background, much less an understanding of what goes into developing AI/ML. For a profession with a tenet of “first, do no harm,” not knowing how a diagnostic tool or algorithm works leads to suspicion. It’s a philosophical outlook that protects patients by intentionally baking redundancies into workflows but, consequently, also impedes innovation.

    Cost is the material barrier to implementing new technologies. Even if a particular type of software is proven to accurately detect retinopathy, for example, independently owned and operated clinics have very little margin to purchase and implement such technologies. The manpower required to operate each of these tools is considerable. Earlier this year, I spent time speaking with a continuous glucose monitoring company; my takeaway was that having the machines themselves was inadequate unless clinical staff were dedicated to teaching patients how to use them and to following up on the massive increase in data created through continuous monitoring. Therefore, incorporating these technologies into clinical practice often requires an overhaul rather than an overlay.

    From the physician’s perspective, the trust deficit derives from the lack of transparency inherent in adopting many of these technologies. It also extends to trust of clinical diagnosis and output. Algorithms are only as good as the data they learn from, and if the data is incomplete or inappropriately collected, the algorithm can recommend inadequate treatment for a woman or person of color, for example, whose demographic wasn’t appropriately included in the original data set. Although these biases are no different from the biases that already exist in medicine, when they emerge from a “trusted” algorithm, their consequences can be disproportionately scaled.
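
    To make that scaling concern concrete, here is a hypothetical sketch (every value invented) in which a cutoff learned only from one demographic's lab values flags a healthy patient from a second group whose normal range simply runs higher.

```python
# Hypothetical sketch of data bias: a cutoff "learned" from only one
# demographic misclassifies another. All values are invented.

def learn_cutoff(healthy_values, sick_values):
    """Place the decision boundary midway between the group averages."""
    healthy_mean = sum(healthy_values) / len(healthy_values)
    sick_mean = sum(sick_values) / len(sick_values)
    return (healthy_mean + sick_mean) / 2

# Training data drawn only from Group A
cutoff = learn_cutoff(healthy_values=[10, 11, 12], sick_values=[20, 21, 22])

def flag(value):
    """Apply the learned cutoff to a new patient's lab value."""
    return value > cutoff

# Group B's healthy baseline happens to run higher (around 17),
# so a perfectly healthy Group B patient gets flagged as sick.
```

    The bias here is the same as a clinician generalizing from the wrong reference range; the difference is that the algorithm applies it automatically, and with borrowed authority, to every patient it sees.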

    From a patient perspective, the trust deficit encompasses not only the mushrooming of possible bias within the exam room but also the ability to find a solution for that bias. If patients are not able to trust a technology, why would they agree to having their data used? Patient advocacy organizations such as Ciitizen are leading the way in tackling these concerns, trying to work with underrepresented groups to build trust and also to protect existing data from undue profiteering. However, the Catch-22 is that without the data, it is difficult ― if not impossible ― to demonstrate positive outcomes.

    In February of 2020, which now feels like a lifetime ago, the AAFP Innovation Lab hosted an “AI and Primary Care Executive Roundtable” in San Jose, Calif. The exclusive event drew a diverse group of participants, including leadership from the Academy as well as representation from leaders in health technology, startups, patient advocacy groups and health tech funding. (For those who don’t know, the AAFP Innovation Lab “was created to partner with industry to drive innovation with the latest proven technologies: cloud, artificial intelligence [AI]/machine learning, and voice and mobile technologies, to optimize the family medicine experience.” If you would like to work with the Innovation Lab to pilot new technologies in your practice, email eHealth@aafp.org to learn about available opportunities.)

    As an attendee at the roundtable, I was pleased to find that the issues of transparency, cost and trust were robustly discussed by all participants. Through those conversations, it became clear that the AAFP had an opportunity to lead in setting standards for the industry from a family medicine perspective. By drawing on its vast and diverse member base ― both geographically and by type of clinical practice ― the AAFP could set principles of AI/ML in primary care to infuse this tech-based health care revolution with the soul of a small-town community family doc.

    So, in collaboration with the Hawaii AFP, our very new, very small EMR Optimization Member Interest Group — chaired by Marti Taba, M.D. — submitted a resolution to the 2020 AAFP Congress of Delegates to establish a work group that will build a set of principles specifically targeted to addressing data collection, anti-bias algorithms, equitable access and agility of use by smaller physician practices, technological transparency, financial costs, and patient privacy. The overall goal of the resolution was to use the weight of the AAFP to prevent the rampant commodification of primary care; humanize technology within the scope of family medicine; and build a mutually beneficial and sustainable relationship between tech, primary care and the AAFP.

    Delegates adopted the measure with some slight modifications, and ― while recognizing we’re in the midst of pandemic-induced delays ― we eagerly await the establishment of the work group. But that alone is not enough.

    Collectively, we can approach these new technologies with curiosity ― what they are and how they work ― so that we know what kind of input is helpful and imperative. To maintain our autonomy, family physicians can and should partner and collaborate with software engineers and designers to build the tools that our profession actually needs, instead of relying on tech leaders to approximate the solutions robust primary care practices require. In fact, only a handful of companies are led by family physicians with clinical backgrounds — a niche just waiting to be filled.

    If you’re interested in learning more, reach out to the Innovation Lab or to our EMR Optimization MIG. Although navigating a synergy between health and tech is daunting, we can wade through it together and maybe even avoid the dystopian deterioration of the patient-physician relationship.

    At our core, we family physicians value the power of storytelling, the philosophy that comes with caring for patients from birth to death, and the power of the patient-physician relationship. This philosophy is exactly what we need to build technologies that stand to make diagnostics easier and expand access to care in ways that could scale the joys of practicing primary care exponentially. It’s up to us to lead the technology there.

    Lalita Abhyankar, M.D., M.H.S., is a family physician practicing in New York City. You can follow her on Twitter @L_Abhyankar.




