The US government will require developers of powerful AI to share key information and safety test results.
As worries have grown about the possible effects of artificial intelligence (AI) on everything from public health to national security, US President Joe Biden has issued a broad executive order to govern the field.
“We need to govern this technology in order to realize the promise of AI and minimize the risk,” Biden stated on Thursday. “When AI is misused, it can facilitate hackers’ attempts to take advantage of flaws in the software that powers our society.”
One provision of the executive order requires developers of the most powerful AI models to report on their work and share safety test results with the government.
It also calls on the Department of Commerce to create guidelines for identifying AI-generated content, the National Institute of Standards and Technology to establish “rigorous standards” for testing AI before it is released, and agencies funding “life science projects” to create “strong new standards of biological synthesis screening” to help ensure AI cannot be used to engineer biohazards.
In addition, Biden urged Congress to enact data privacy legislation and directed the Department of Justice to investigate “algorithmic discrimination” in government welfare programs and by landlords.
The actions are the “strongest set of actions any government in the world has ever taken on AI safety, security, and trust,” according to White House Deputy Chief of Staff Bruce Reed, who also referred to them as the “next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”
Concerns about the dangers of AI have escalated dramatically since OpenAI’s ChatGPT was released last year and its capabilities shocked governments and regulators worldwide.