2024-22974. Safety Considerations for Chemical and/or Biological AI Models  

  • AGENCY:

    U.S. Artificial Intelligence Safety Institute (AISI), National Institute of Standards and Technology (NIST), U.S. Department of Commerce.

    ACTION:

    Notice; Request for Information (RFI).

    SUMMARY:

    The U.S. Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce, is seeking information and insights from stakeholders on current and future practices and methodologies for the responsible development and use of chemical and biological (chem-bio) AI models. Chem-bio AI models are AI models that can aid in the analysis, prediction, or generation of novel chemical or biological sequences, structures, or functions. We encourage respondents to provide concrete examples, best practices, case studies, and actionable recommendations where possible. Responses may inform AISI's overall approach to biosecurity evaluations and mitigations.

    DATES:

    Comments containing information in response to this notice must be received on or before December 3, 2024, at 11:59 p.m. Eastern time. Submissions received after that date may not be considered.

    ADDRESSES:

    Comments must be submitted electronically via the Federal e-Rulemaking Portal.

    1. Go to www.regulations.gov and enter 240920-0247 in the search field,

    2. Click the “Comment Now!” icon, complete the required fields, including the relevant document number and title in the subject field, and

    3. Enter or attach your comments.

    Additional information on the use of regulations.gov, including instructions for accessing agency documents, submitting comments, and viewing the docket, is available at: www.regulations.gov/faq. If you require an accommodation or cannot otherwise submit your comments via regulations.gov, please contact NIST using the information in the FOR FURTHER INFORMATION CONTACT section below.

    NIST will not accept comments for this notice by postal mail, fax, or email. To ensure that NIST does not receive duplicate copies, please submit your comments only once. Comments containing references, studies, research, and other empirical data that are not widely published should include copies of the referenced materials.

    All relevant comments received by the deadline will be posted at: https://www.regulations.gov under docket number 240920-0247 and at: https://www.nist.gov/aisi without change or redaction, so commenters should not include information they do not wish to be posted publicly (e.g., personal or confidential business information).

    FOR FURTHER INFORMATION CONTACT:

    For questions about this RFI, contact aisibio@nist.gov or Stephanie Guerra, U.S. Department of Commerce, 1401 Constitution Ave. NW, Washington, DC. Direct media inquiries to NIST's Office of Public Affairs at (301) 975-2762. Users of telecommunication devices for the deaf or a text telephone may call the Federal Relay Service toll free at 1-800-877-8339.

    Accessible Format: NIST will make the RFI available in alternate formats, such as Braille or large print, upon request by persons with disabilities.

    SUPPLEMENTARY INFORMATION:

    The rapid advancement of AI in the chemical and biological sciences has led to the development of increasingly powerful chemical and biological (chem-bio) AI models. By reducing the time and resources required for experimental testing and validation, chem-bio AI models can accelerate progress in areas such as drug discovery, medical countermeasure development, and precision medicine. However, as with other AI models, there is a need to understand and mitigate potential risks associated with misuse of chem-bio AI models. Examples of chem-bio AI models include but are not limited to foundation models trained using chemical and/or biological data, protein design tools, small biomolecule design tools, viral vector design tools, genome assembly tools, experimental simulation tools, and autonomous experimental platforms. The dual use nature of these tools presents unique challenges: while they can significantly advance beneficial research and development, they could also potentially be misused to cause harm, such as through the design of more virulent or toxic pathogens and toxins, or of biological agents that can evade existing biosecurity measures. The concept of dual use biological research is defined in the 2024 United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential (USG DURC/PEPP Policy, https://www.whitehouse.gov/wp-content/uploads/2024/05/USG-Policy-for-Oversight-of-DURC-and-PEPP.pdf).

    As chem-bio AI models become more capable and accessible, it is important to proactively address safety and security considerations. The scientific community has taken steps to address these issues, as demonstrated by a recent community statement outlining values and guiding principles for the responsible development of AI technologies for protein design. This statement articulated several voluntary commitments in support of such values and principles that were adopted by agreement by more than one hundred individual signatories (see https://responsiblebiodesign.ai/).

    The following questions are not intended to limit the topics that may be addressed. Responses may include any topic believed to have implications for the responsible development and use of chem-bio AI models. Respondents need not address all statements in this RFI. All relevant responses that comply with the requirements listed in the DATES and ADDRESSES sections of this RFI and set forth below will be considered.

    For your organization, or those you assist, represent, or are familiar with, please provide information on the topics below as specifically as possible. NIST has provided this non-exhaustive list of topics and accompanying questions to guide commenters; the submission of any relevant information germane to the responsible development and use of chem-bio AI models that is not covered by the topics below is also encouraged.

    1. Current and/or Possible Future Approaches for Assessing Dual-Use Capabilities and Risks of Chem-Bio AI Models

    a. What current and possible future evaluation methodologies, evaluation tools, and benchmarks exist for assessing the dual-use capabilities and risks of chem-bio AI models?

    b. How might existing AI safety evaluation methodologies (e.g., benchmarking, automated evaluations, and red teaming) be applied to chem-bio AI models? How can these approaches be adapted to the potentially specialized architectures of chem-bio AI models? What are the strengths and limitations of these approaches in this specific area?

    c. What new or emerging evaluation methodologies could be developed for evaluating chem-bio AI models that are intended for legitimate purposes but may output potentially harmful designs?

    d. To what extent is it possible to have generalizable evaluation methodologies that apply across different types of chem-bio AI models? To what extent do evaluations have to be tailored to specific types of chem-bio AI models?

    e. What are the most significant challenges in developing better evaluations for chem-bio AI models? How might these challenges be addressed?

    f. How would you include stakeholders or experts in the risk assessment process? What feedback mechanisms would you employ for stakeholders to contribute to the assessment and ensure transparency in the assessment process?

    2. Current and/or Possible Future Approaches To Mitigate Risk of Misuse of Chem-Bio AI Models

    a. What are current and possible future approaches to mitigating the risk of misuse of chem-bio AI models? How do these strategies address both intentional and unintentional misuse?

    b. What mitigations related to the risk of misuse of chem-bio AI models are currently used or could be applied throughout the AI lifecycle (e.g., managing training data, securing model weights, setting distribution channels such as APIs, applying context window and output filters, etc.)?

    c. How might safety mitigation approaches for other categories of AI models, or for other capabilities and risks, be applied to chem-bio AI models? What are the strengths and limitations of these approaches?

    d. What new or emerging safety mitigations are being developed that could be used to mitigate the risk of misuse of chem-bio AI models? To what extent do mitigations have to be tailored to specific types of chem-bio AI models?

    e. How might the research community approach the development and use of public and/or proprietary chem-bio datasets that could enhance the potential harms of chem-bio AI models through fine-tuning or other post-deployment adaptations? What types of datasets might pose the greatest dual use risks? What mechanisms exist to ensure the safe and responsible use of these kinds of datasets?

    3. Safety and Security Considerations When Chem-Bio AI Models Interact With One Another or Other AI Models

    a. What areas of research are needed to better understand the risks associated with integrating multiple chem-bio AI models, or a chem-bio AI model and other AI models, into an end-to-end workflow or automated laboratory environment for synthesizing chem-bio materials independent of human intervention (e.g., research involving a large language model's use of a specialized chem-bio AI model or tool, research into the use of multiple chem-bio AI models or tools acting in concert, etc.)?

    b. What benefits are associated with such interactions among AI models?

    c. What strategies exist to identify, assess, and mitigate risks associated with such interactions among AI models while maintaining the beneficial uses?

    4. Impact of Chem-Bio AI Models on Existing Biodefense and Biosecurity Measures

    a. How might chem-bio AI models strengthen and/or weaken existing biodefense and biosecurity measures, such as nucleic acid synthesis screening?

    b. What work has your organization done, or is it currently conducting, in this area to strengthen these existing measures? How can chem-bio AI models be used to strengthen these measures?

    c. What future research efforts toward enhancing, strengthening, refining, and/or developing new biodefense and biosecurity measures seem most important in the context of chem-bio AI models?

    5. Future Safety and Security of Chem-Bio AI Models

    a. What are the specific areas where further research to enhance the safety and security of chem-bio AI models is most urgent?

    b. How should academia, industry, civil society, and government cooperate on the topic of safety and security of chem-bio AI models?

    c. What are the primary ways in which the chem-bio AI model community currently cooperates on capabilities evaluation of chem-bio AI models and/or mitigation of safety and security risks of chem-bio AI models? How can these organizational structures play a role in ongoing efforts to further the responsible development and use of chem-bio AI models?

    d. What makes it challenging to develop and deploy chem-bio AI models safely, and what collaborative approaches could make it easier?

    e. What opportunities exist for national AI safety institutes to advance safety and security of chem-bio AI models?

    f. What opportunities exist for national AI safety institutes to create and diffuse best practices and “norms” related to AI safety in chemical and biological research and discovery?

    Alicia Chambers,

    NIST Executive Secretariat.

    [FR Doc. 2024-22974 Filed 10-3-24; 8:45 am]

    BILLING CODE 3510-13-P
