The U.S. Department of Energy (DOE) and the U.S. Department of Commerce (DOC), as represented by the National Institute of Standards and Technology (NIST), announced a memorandum of understanding (MOU) signed earlier this year to collaborate on safety research, testing, and evaluation of advanced artificial intelligence (AI) models and systems.
This partnership is a key example of the Biden-Harris Administration’s whole-of-government approach to ensuring the safe, secure, and trustworthy development and use of AI. The announcement follows the recent release of the first-ever National Security Memorandum on AI, which designated the U.S. AI Safety Institute (US AISI), housed within NIST, as a key hub of the U.S. government’s AI safety efforts and identified a substantial role for DOE in helping the U.S. government understand and mitigate AI safety risks and improve the performance and reliability of AI models and systems.
“There’s no question that AI is the next frontier for scientific and clean energy breakthroughs, which underscores the Biden-Harris Administration’s efforts to push forward scientific innovation in a safe and secure manner,” said U.S. Secretary of Energy Jennifer M. Granholm. “Across the federal government, we are committed to advancing AI safety, and today’s partnership ensures that Americans can confidently benefit from AI-powered innovation and prosperity for years to come.”
In addition to facilitating joint research efforts and information sharing, this agreement enables the Department of Energy and its National Laboratories to lend both their technical capacity and their subject matter expertise to the US AISI and NIST.
“By empowering our teams to work together, this partnership with the Department of Energy will undoubtedly help the U.S. AI Safety Institute and NIST advance the science of AI safety,” said U.S. Secretary of Commerce Gina Raimondo. “Safety is key to continued innovation in AI, and we have no time to waste in working together across government to develop robust research, testing, and evaluations to protect and advance essential national security priorities.”
Through this MOU, the DOE and DOC intend to evaluate the impact of AI models on public safety, including risks to critical infrastructure, energy security, and national security. Key focus areas include developing classified evaluations of advanced AI models’ chemical and biological risks, as well as developing and evaluating privacy-enhancing technologies that aim to protect personal and commercial proprietary data. These efforts, combined with DOE’s AI testbeds, will help lay the foundation for a safe and innovative future for AI.
Read the full MOU here.