Ethical Application of Artificial Intelligence in Family Medicine

    Family physicians provide and coordinate comprehensive health care for people of all genders, races, ethnicities, sexual orientations, sexual preferences, and ages. They provide preventive care, address mental health, and diagnose, treat, and manage acute and chronic conditions. Often, family physicians are the sole physicians providing care in their communities, especially in rural or underserved areas, and can adapt their care to fit the unique needs of these communities. Family physicians also mitigate health inequity, including systemic racism, by collaborating with community stakeholders to effect positive change for the populations they serve. Because they build long-term relationships with patients, family physicians are intimately familiar with their patients’ medical histories through regular checkups and provide a safe space for patients to talk about sensitive topics such as reproductive health, alcohol and drug use, and mental illness. The family medicine experience is based on a deeply personal physician-patient interaction that requires support from technology, including artificial intelligence (AI).

    AI and machine learning (AI/ML) have the potential to support family medicine by strengthening the four C’s of primary care (first contact, comprehensiveness, continuity, and coordination of care) and by enhancing capacity and extending capabilities. If applied appropriately, AI/ML can be leveraged to help family medicine achieve the quintuple aim. To that end, we believe AI/ML-based solutions should adhere to a set of principles that help ensure the appropriate application of AI/ML in family medicine.

    AI/ML should be evaluated with the same rigor as any other tool used in health care. Because AI/ML has the potential to profoundly disrupt the practice of medicine and the patient experience at scale, the governance, design, implementation, and use of AI/ML must adhere to a set of principles that help ensure it is applied appropriately and delivers maximum value for the health care system and family medicine. For this reason, the AAFP sets forth the following initial set of principles that we believe must be followed if AI/ML is to be applied in family medicine.

    1.     Preserve and Enhance Primary Care

    As the patient-physician dyad is expanded to a triad with AI, the patient-physician relationship must, at a minimum, be preserved and, ideally, enhanced. When AI/ML is applied to primary care, it must enhance the four C’s of primary care and expand primary care’s capacity and capability to provide longitudinal care that achieves the quintuple aim.

    2.     Maximize Transparency

    AI/ML solutions must provide transparency to the physician and other users so that the solution’s efficacy and safety can be evaluated. Companies must be transparent about the data used to train their models. Companies should also provide clear, understandable information describing how the AI/ML solution makes predictions. Ideally, this explanation would accompany each inference; at a minimum, companies should provide a conceptual model of the solution’s decision-making, including the relative importance of the data leveraged for an inference.

    3.     Address Implicit Bias

    Companies providing AI/ML solutions must address implicit bias in their design. We understand that implicit bias cannot always be completely eliminated. Still, companies should have standard processes in place to identify implicit bias and to prevent their AI/ML models from learning those same biases. In addition, when applicable, companies should have processes to monitor for differential outcomes, particularly those that affect vulnerable patient populations.

    4.     Maximize Training Data Diversity

    To maximize the generalizability of AI/ML solutions, training data must be diverse and representative of the populations cared for by family medicine. Companies must provide clear documentation on the diversity¹ of their training data. Companies should also work to increase the diversity of their training data so that they do not exacerbate existing health inequities or create new ones.¹

    5.     Respect the Privacy of Patients and Users

    AI/ML requires large volumes of data for training. It is critical that patients and physicians can trust companies to maintain the confidentiality of the data collected from them. Companies must provide clear policies describing how they collect, store, use, and share data from patients and end users. Companies must obtain consent before collecting any identifiable data, and the consent should clearly state how the data will be used or shared.

    6.     Take a Systems View in Design

    An AI/ML solution will be one component of a larger work system and must therefore be designed to integrate with that system. This means the company must understand how the AI/ML solution will be used within a workflow and must take a user-centered design approach. Because the vast majority of AI/ML solutions in health care will not be autonomous, the company must also understand and leverage the latest science on human-AI interaction as well as quality assurance.

    7.     Take Accountability

    If an AI/ML solution is going to take a prominent role in health care, the company must take accountability for ensuring the solution is safe. Solutions designed for use in direct patient care must undergo an evaluation as rigorous as that applied to any other medical intervention. We also believe that companies should take on liability where appropriate. Appropriateness should be determined by a risk-based model that accounts for the role played by the AI/ML and the situation in which it is being applied. A good starting point for such a risk model is the Food and Drug Administration (FDA) and Office of the National Coordinator for Health Information Technology (ONC) framework for software as a medical device.

    8.     Design for Trustworthiness

    Maintaining the trust of physicians and patients is critical to a successful future for AI/ML in health care. Companies must implement policies and procedures that ensure the above principles are appropriately addressed. Companies must strive for the highest levels of safety, reliability, and correctness in their AI/ML solutions. Companies should consider how they can maximize trust with physicians and patients throughout the entire product lifecycle. AI/ML will continue its rapid advancement, so companies must continually adopt state-of-the-art best practices.

     

    References

    1.     AAFP Definitions of Diversity, Equity and Inclusion. https://www.aafp.org/membership/initiatives/diversity-equity-and-inclusion.html. (October 2023 COD)