The United States government intends to fully embrace the use of artificial intelligence (AI), but a government oversight organization has pointed out that it currently lacks a comprehensive strategy for doing so.

The US government intends to significantly expand its use of artificial intelligence (AI), but it lags considerably in establishing responsible policies for acquiring and deploying AI technology from the private sector, according to a recent federal oversight report.

According to the Government Accountability Office (GAO), the government's leading accountability watchdog, the absence of a standardized approach to AI procurement across government agencies poses a potential threat to American security. The assessment is part of a long-awaited review of 23 agencies, examining their current AI implementations and future plans.

The 96-page report, unveiled on Tuesday, represents the most comprehensive effort by the US government to date in documenting the extensive use of AI and machine learning by non-military agencies (over 200 applications) and the numerous upcoming AI initiatives (over 500 projects) within the government.

This situation arises as AI developers continue to release increasingly advanced AI models, while policymakers rush to establish regulations, particularly for the most sensitive applications of AI technology. Governments worldwide have highlighted the potential benefits of AI, such as its ability to discover disease cures and enhance productivity. However, they also express concerns about its associated risks, including potential job displacement, the spread of election-related misinformation, and the possibility of harming vulnerable groups due to algorithmic biases. Furthermore, experts have warned of new national security threats arising from AI, as it may offer malicious actors novel means to develop cyberattacks or biological weapons.

The GAO surveyed 23 agencies, ranging from the Departments of Justice and Homeland Security to the Social Security Administration and the Nuclear Regulatory Commission. The report reveals that the federal government is already employing AI in 228 different ways, and nearly half of these applications were initiated within the past year, highlighting the technology's rapid adoption across the US government.

The majority of the current and planned government uses of AI identified by the GAO in its report, approximately seven out of ten, are either science-related or aimed at enhancing internal agency management. For example, NASA uses artificial intelligence to monitor global volcano activity, while the Department of Commerce employs AI to track wildfires and automatically count seabirds, seals, or walruses in drone photos. Closer to home, the Department of Homeland Security uses AI to identify noteworthy border activities by applying machine learning techniques to camera and radar data, as outlined in the GAO report.

Government entities increasingly integrating AI

The report also underscores that federal agencies employ AI in numerous undisclosed ways. While about 70% of the total 1,241 active and planned AI use cases were publicly disclosed by federal agencies, the report noted that over 350 applications of the technology remained undisclosed due to their sensitive nature.

Some agencies were particularly reserved about their AI usage. The State Department, for instance, listed 71 different AI use cases but indicated that only 10 of them could be publicly identified by the GAO.

Although certain agencies reported relatively few AI applications, these select uses have garnered significant attention from government oversight bodies, civil liberties organizations, and AI experts who express concerns about potential adverse AI outcomes.

For instance, the Departments of Justice and Homeland Security, as revealed in the GAO’s report, mentioned a total of 25 current or planned AI use cases, which is a small fraction compared to NASA’s 390 or the Commerce Department’s 285. However, the limited number of use cases belies the sensitivity and potential implications of the AI applications employed by the DOJ and DHS.

As recently as September, the GAO cautioned that federal law enforcement agencies had conducted thousands of AI-powered facial recognition searches — accounting for 95% of such searches across six US agencies between 2019 and 2022 — without appropriate training requirements for the officials conducting them, raising concerns about potential misuse of the technology. Privacy and security experts have consistently warned that heavy reliance on AI in policing could result in mistaken identity, wrongful arrests, or discrimination against minority groups.

(The GAO’s September report on facial recognition coincided with a DHS inspector general report, which found that several agencies, including Customs and Border Protection, the US Secret Service, and Immigration and Customs Enforcement, likely violated the law when officials purchased Americans’ geolocation histories from commercial data brokers without conducting required privacy impact assessments.)

While officials are increasingly turning to AI and automated data analysis to address critical issues, the Office of Management and Budget (OMB), responsible for coordinating federal agencies’ approach to various issues, including AI procurement, has not yet finalized a draft memorandum outlining how agencies should appropriately acquire and use AI.

The GAO stated, “The lack of guidance has contributed to agencies not fully implementing fundamental practices in managing AI.” It further added, “Until OMB issues the required guidance, federal agencies will likely develop inconsistent policies on their use of AI, which will not align with key practices or be beneficial to the welfare and security of the American public.”

According to the report, under a 2020 federal law on AI in government, OMB was supposed to provide draft guidelines to agencies by September 2021 but missed the deadline. It issued its draft memorandum only two years later, in November 2023, in response to President Joe Biden’s October executive order on AI safety.

OMB stated that it agreed with the watchdog’s recommendation to issue AI guidance and explained that the draft guidance released in November was in line with President Biden’s executive order on AI safety.

Biden’s AI approach

President Biden’s recent executive order on AI includes several provisions, among them a requirement that developers of “the most advanced AI systems” share their model test results with the government, according to a White House summary of the directive. Earlier this year, numerous leading AI companies also made commitments to the Biden administration, pledging to subject their AI models to external testing before releasing them to the public.

President Biden’s executive order contributes to the increasing number of demands placed on federal agencies concerning AI policies. For instance, it assigns the Department of Energy the task of assessing the potential for AI to amplify threats related to chemical, biological, radiological, or nuclear weapons.

The GAO report released on Tuesday compiled a comprehensive list of AI-related requirements imposed by Congress or the White House on federal agencies since 2019 and evaluated their performance. Besides criticizing OMB for not developing a government-wide plan for AI acquisitions, the report highlighted deficiencies in the AI approaches of several other agencies.

For instance, as of September, the Office of Personnel Management had not yet prepared a mandated projection of the number of AI-related positions the federal government may need to fill over the next five years. The report also noted that ten federal agencies, ranging from the Treasury Department to the Department of Education, had not established required plans for updating their lists of AI use cases over time — a gap that could impede the public’s understanding of how the US government uses AI.
