According to an assessment released on Monday by the Office of the Director of National Intelligence (ODNI), artificial intelligence is enhancing, rather than revolutionizing, the influence operations by Russia and Iran targeting the upcoming U.S. elections in November. An ODNI official stated, “The U.S. intelligence community views AI as a malign influence accelerant, but not yet a transformative influence tool.”
This new U.S. assessment contrasts with the media and industry hype surrounding AI-related threats. Nonetheless, the technology remains a significant concern for U.S. intelligence agencies as they monitor potential threats to the presidential election. The extent of the risk posed by foreign, AI-generated content hinges on the capability of foreign operatives to navigate the limitations of many AI tools, develop their own advanced AI models, or strategically target and disseminate AI-generated content. The official noted that foreign actors currently lag in all three areas.
U.S. officials report that foreign operatives are utilizing AI to overcome language barriers while targeting American voters with disinformation. For instance, Iran has employed AI to create content in Spanish regarding immigration, a topic it views as divisive within U.S. politics. Tehran-linked operatives are also using AI to address voters on polarizing issues such as the Israel-Gaza conflict, with the intention of undermining former President Donald Trump’s candidacy.
Among foreign powers, Russia has produced the most AI-generated content related to the U.S. election. This AI-enhanced content—comprising videos, photos, text, and audio—aligns with Moscow’s strategy to support Trump’s campaign while disparaging Vice President Kamala Harris’s efforts.
Conversely, China is using AI to amplify divisive political issues in the U.S. but is not actively attempting to influence specific election outcomes, according to the latest intelligence assessment.
In addition to AI methods, foreign operatives have also relied on traditional influence tactics during this election cycle, such as staging videos rather than generating them through AI. U.S. intelligence agencies believe Russian operatives staged a video that circulated on X earlier this month, falsely alleging that Harris was responsible for paralyzing a young girl in a hit-and-run accident in 2011. This narrative was spread through a website that pretended to be a local San Francisco news outlet, as reported by Microsoft researchers.
Another video produced by Russian operatives, which garnered at least 1.5 million views on X, purported to show Harris supporters attacking a Trump rally attendee, according to Microsoft.
In July, U.S. intelligence agencies warned that Russia planned to covertly utilize social media to sway public opinion and diminish support for Ukraine in swing states. An ODNI official remarked, “Russia is a much more sophisticated actor in the influence space, with a better understanding of U.S. elections and where to target.”
This is not the first general election in which foreign entities have contemplated employing AI capabilities. During the final weeks of the 2020 election campaign, operatives associated with the Chinese and Iranian governments prepared fake, AI-generated content as part of a campaign to influence U.S. voters but ultimately decided not to disseminate it. Some U.S. officials reviewing that intelligence were skeptical of the threat, believing it demonstrated that China and Iran lacked the capability to use deepfakes in a way that would significantly impact the 2020 presidential election.