The Biden Administration on Monday issued what it’s calling a “landmark” executive order designed to help channel the significant promise and manage the many risks of artificial intelligence and machine learning.
WHY IT MATTERS
The wide-ranging EO is meant to set new standards for AI safety and security, while offering guidance to help ensure algorithms and models are equitable, transparent and trustworthy.
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.
Among its many prescriptions for safer and more standardized AI innovation, the order contains some specific directives related to algorithms used in healthcare settings, designed to protect patients from harm.
The EO acknowledges the potential for “responsible use of AI” to help advance care delivery and power the development of new and more affordable drugs and therapeutics.
But, recognizing that AI “raises the risk of injuring, misleading, or otherwise harming Americans,” President Biden also instructs the U.S. Department of Health and Human Services to establish a safety program that will allow the agency to “receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI.”
Among its other provisions, the order calls for a new pilot of the National AI Research Resource to catalyze innovation nationwide, combined with promotion of policies to provide small developers and entrepreneurs access to more technical assistance and resources.
It also seeks to modernize and streamline visa criteria to help expand the ability of highly skilled immigrants with expertise in critical areas to study and work in the United States.
The EO also contains numerous provisions to promote standards for AI safety and security:
- A requirement that developers of powerful AI systems share safety test results and other critical information with the federal government. Invoking the Defense Production Act, it calls for any companies developing machine learning models that pose potential risk to “national security, national economic security or national public health and safety” to notify the government when training those models and share the results of all red-team safety tests.
- A directive that the National Institute of Standards and Technology set rigorous standards for testing to ensure safety before public release, with the Department of Homeland Security applying those standards to critical infrastructure sectors and establishing the AI Safety and Security Board.
- A requirement that agencies funding life-science projects establish strong new standards for biological synthesis screening as a condition of federal funding, protecting against the risks of using AI to engineer dangerous biological materials and creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
On the privacy front, President Biden is calling on Congress to pass bipartisan legislation that prioritizes federal support for “accelerating the development and use of privacy-preserving techniques – including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.”
The EO also focuses on the workforce impacts of AI. It seeks to develop “principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.” It also calls for federal officials to produce a report on AI’s potential labor-market impacts, and to study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
The White House order also aims to prevent algorithmic discrimination in part through training, technical assistance and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
THE LARGER TREND
Since first taking office, President Biden has been clear about the need to support healthcare information technology, while maintaining safety and security guardrails around IT innovation.
The AI executive order – which was developed after gathering feedback on AI R&D from a wide array of industry stakeholders – follows the White House’s privacy-focused Blueprint for an AI Bill of Rights, proposed a year ago.
It also comes on the heels of the White House’s similarly ambitious National Cybersecurity Strategy from earlier this year (as well as another plan for the U.S. cyber workforce).
ON THE RECORD
“The actions that President Biden directed today are vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI,” said the White House in announcing the executive order. “More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”
Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.