2021-04509. Request for Information on the Use of Clinical Algorithms That Have the Potential To Introduce Racial/Ethnic Bias Into Healthcare Delivery  


    AGENCY:

    Agency for Healthcare Research and Quality (AHRQ), HHS.

    ACTION:

    Notice of Request for Information.

    SUMMARY:

    The Agency for Healthcare Research and Quality (AHRQ) is seeking information from the public on clinical algorithms that are used or recommended in medical practice and any evidence on clinical algorithms that may introduce bias into clinical decision-making and/or influence access to care, quality of care, or health outcomes for racial and ethnic minorities and those who are socioeconomically disadvantaged.

    DATES:

    Comments must be submitted on or before May 4, 2021. The EPC Program will not respond individually to responders but will consider all comments submitted by the deadline.

    ADDRESSES:

    Submissions should follow the Submission Instructions below. We prefer that comments be submitted electronically on the submission website. Email submissions may also be sent to: epc@ahrq.gov


    FOR FURTHER INFORMATION CONTACT:

    Anjali Jain, Email: Anjali.Jain@ahrq.hhs.gov.


    SUPPLEMENTARY INFORMATION:

    The Agency for Healthcare Research and Quality (AHRQ) is seeking information from the public on clinical algorithms that are used or recommended in medical practice and any evidence on clinical algorithms that may introduce bias into clinical decision-making and/or influence access to care, quality of care, or health outcomes for racial and ethnic minorities and those who are socioeconomically disadvantaged.

    Information received in response to this request will be used to inform an AHRQ Evidence-Based Practice Center (EPC) Program evidence review and may inform other activities commissioned by or in collaboration with AHRQ. The EPC Program (https://effectivehealthcare.ahrq.gov/about/epc), established in 1997, has the mission of creating evidence reviews that improve healthcare by supporting evidence-based decision-making by patients, providers, and policymakers. Evidence reviews summarize and synthesize existing literature and evidence using rigorous methods. AHRQ is conducting this review pursuant to sections 902 and 901(c) of the Public Health Service Act, 42 U.S.C. 299a and 42 U.S.C. 299(c).

    AHRQ intends to commission an evidence review that will critically appraise the evidence on commonly used algorithms, including whether race/ethnicity is included as an explicit variable and how the algorithms have been developed and validated. The review will examine how race/ethnicity and related variables included in clinical algorithms affect healthcare use, patient outcomes, and healthcare disparities. In addition, the review will identify and assess other variables with the potential to introduce bias, such as prior utilization. The review will also identify and assess approaches to clinical algorithm development that avoid introducing racial and ethnic bias into clinical decision-making and resulting outcomes.

    For the purposes of this evidence review, a clinical algorithm is defined as a set of steps that clinicians use to guide decision-making in preventive services (such as screening), diagnosis, or clinical management, or in otherwise assessing or improving a patient's health. Algorithms are informed by data and research evidence and may include patient-specific factors or characteristics, such as sociodemographic factors (for example, race/ethnicity), physiologic factors (for example, blood sugar level), or other factors such as patterns of healthcare utilization.

    When used appropriately, algorithms can improve disease management and patient health by creating efficiencies so that individuals do not have to weigh multiple, complex factors on their own when making a clinical judgment. As a result, the use of clinical algorithms has become widespread in healthcare and encompasses a heterogeneous set of tools, including clinical pathways/guidelines, the establishment of norms and standards that may vary according to patient-specific factors, clinical decision support embedded in electronic health records (EHRs) or within medical devices, pattern recognition software used for diagnosis, and apps and calculators that predict patient risk and prognosis. Some clinical algorithms include information about a patient's race or ethnicity among their inputs and thus lead clinicians to decision-making that varies by race/ethnicity, including decisions about how best to diagnose and manage individual patients.
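
    As a minimal illustration of how race/ethnicity can enter a clinical algorithm as an explicit input, the sketch below implements the 2009 CKD-EPI creatinine equation for estimated glomerular filtration rate (eGFR), a widely used risk calculator that applied a fixed multiplier when a patient was recorded as Black; a race-free revision of the equation was published in 2021. The function name and structure are illustrative only and are not intended for clinical use.

    # Illustrative sketch (Python): the 2009 CKD-EPI creatinine equation,
    # an example of a clinical calculator with race as an explicit variable.
    # Not for clinical use; a race-free revision was published in 2021.

    def egfr_ckd_epi_2009(serum_creatinine_mg_dl: float, age_years: float,
                          female: bool, black: bool) -> float:
        """Return eGFR in mL/min/1.73 m^2 per the 2009 CKD-EPI creatinine equation."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        ratio = serum_creatinine_mg_dl / kappa
        egfr = (141.0
                * min(ratio, 1.0) ** alpha
                * max(ratio, 1.0) ** -1.209
                * 0.993 ** age_years)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159  # the explicit race coefficient at issue in this RFI
        return egfr

    # Identical labs and demographics, differing only in recorded race, yield
    # different eGFR estimates and can therefore lead to different care decisions.
    print(egfr_ckd_epi_2009(1.2, 55, female=False, black=False))  # ~68
    print(egfr_ckd_epi_2009(1.2, 55, female=False, black=True))   # ~78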

    The purpose of this Request for Information is to understand which algorithms are currently used in different clinical settings; the type and extent of their validation; their potential for bias, with impact on access, quality, and outcomes of care; awareness among clinicians of these issues; and strategies for developing and testing clinical algorithms to assure that they are free of bias, in order to inform the scope of a future evidence review. We are interested in understanding which algorithms are currently in use in clinical practice, including those related to the use of clinical preventive services. How many include race/ethnicity and other factors that could lead to bias within the algorithm? We are interested in all algorithms, including clinical pathways/guidelines, norms and standards (including laboratory values) that vary according to patient-specific factors such as race/ethnicity and related variables, clinical decision support embedded in EHRs, pattern recognition software, and apps and calculators for patient risk and prognosis. We are interested both in algorithms developed through traditional methods and in those developed through new and ongoing methods, including machine learning and artificial intelligence. AHRQ seeks information:

    • From healthcare providers who use clinical algorithms to screen, diagnose, triage, treat or otherwise care for patients
    • From laboratorians or technicians who use algorithms to interpret lab or radiology data
    • From researchers and clinical decision support developers who develop algorithms used in healthcare for patients
    • From clinical professional societies or other groups who develop clinical algorithms for healthcare
    • From payers who use clinical algorithms to guide payment decisions for care for patients
    • From healthcare delivery organizations who use clinical algorithms to determine healthcare practices and policies for patients
    • From device developers who incorporate algorithms into device software to interpret data and set standards
    • From patients whose healthcare and healthcare decisions may be informed by clinical algorithms

    Specific questions of interest to AHRQ include, but are not limited to, the following:

    1. What clinical algorithms are used in clinical practice, hospitals, health systems, payment systems, or other settings? What is the estimated impact of these algorithms, in terms of the size and characteristics of the populations affected, quality of care, clinical outcomes, quality of life, and health disparities?

    2. Do the algorithms in question 1 include race/ethnicity as a variable and, if so, how were race and ethnicity defined (including from whose perspective and whether there is a designation for mixed-race or multiracial individuals)?

    3. Do the algorithms in question 1 include measures of social determinants of health (SDOH) and, if so, how were these defined? Are these independently or collectively examined for their potential contribution to healthcare disparities and biases in care?

    4. For the algorithms in question 1, what evidence, data quality and types (such as claims/utilization data, clinical data, social determinants of health), and data sources were used in their development and validation? What is the sample size of the datasets used for development and validation? What is the representation of Black, Indigenous, and People of Color (BIPOC) and what is the power to detect between-group differences? What methods were used to validate the algorithms and measure health outcomes associated with the use of the algorithms?

    5. For the algorithms in question 1, what approaches are used in updating these algorithms?

    6. Which clinical algorithms have evidence that they contribute to healthcare disparities, including by decreasing access to care, reducing quality of care, or worsening health outcomes for BIPOC? What are the priority populations or conditions for assessing whether algorithms increase racial/ethnic disparities? What are the mechanisms by which the use of algorithms contributes to poor care for BIPOC?

    7. To what extent are users of algorithms, including clinicians, health systems, and health plans, aware of the inclusion of race/ethnicity or other variables that could introduce bias into these algorithms, and of the implications for clinical decision-making? What evidence is available about the degree to which the use of clinical algorithms contributes to bias in care delivery and resulting disparities in health outcomes? To what extent are patients aware of the inclusion of race/ethnicity or other variables that can result in bias in algorithms that influence their care? Do providers or health systems communicate this information to patients in ways that can be understood?

    8. What approaches exist for identifying sources of bias and for correcting existing algorithms or developing new algorithms that may be free of bias? What evidence, data quality and types (such as claims/utilization data, clinical data, information on social determinants of health), data sources, and sample sizes are used in their development and validation? What is the impact of these new approaches and algorithms on outcomes?

    9. What challenges have arisen or can arise when algorithms are developed using traditional biomedical or physiologic factors (such as blood glucose) yet also include race/ethnicity as a proxy for other factors such as specific biomarkers or genetic information? What strategies can be used to address these challenges?

    10. What are the existing and developing standards (national and international) for how clinical algorithms should be developed, validated, and updated in a way that avoids bias? Are you aware of guidance on the inclusion of race/ethnicity, related variables such as SDOH, prior utilization, or other variables in ways that minimize the risk of bias?

    11. To what extent are users of clinical algorithms educated about how algorithms are developed or may influence their decision-making? What educational curricula and training are available for clinicians that address bias in clinical algorithms?

    AHRQ is interested in all of the questions listed above, but respondents are welcome to address as many or as few as they choose and to address additional areas of interest not listed.

    This RFI is for planning purposes only and should not be construed as a policy, a solicitation for applications, or an obligation on the part of the Government to provide support for any ideas identified in response to it. AHRQ will use the information submitted in response to this RFI at its discretion and will not provide comments on any responder's submission. However, responses to the RFI may be reflected in future solicitation(s) or policies. The information provided will be analyzed and may appear in reports. Respondents will not be identified in any published reports. Respondents are advised that the Government is under no obligation to acknowledge receipt of the information received or to provide feedback to respondents with respect to any information submitted. No proprietary, classified, confidential, or sensitive information should be included in your response. The contents of all submissions will be made available to the public upon request. Materials submitted must be publicly available or able to be made public.


    Dated: March 1, 2021.

    Marquita Cullom,

    Associate Director.


    [FR Doc. 2021-04509 Filed 3-4-21; 8:45 am]

    BILLING CODE 4160-90-P