Bipartisan concern over AI-generated election interference has prompted a wave of new laws across the country, as state lawmakers seek to blunt misinformation and keep deepfakes from confusing voters.
This year, over a dozen states, led by both Republicans and Democrats, have enacted laws to regulate the use of deepfakes—realistic fake video, audio, and other AI-created content—in political campaigns. These laws follow warnings from the Department of Homeland Security about the potential of deepfakes to deceive voters and come amid doubts about whether Congress will take effective action before November.
Florida, Hawaii, New York, Idaho, Indiana, New Mexico, Oregon, Utah, Wisconsin, Alabama, Arizona, and Colorado have passed laws this year requiring disclosures in political ads containing deepfake content. Michigan, Washington, Minnesota, Texas, and California already had regulations in place, with Minnesota updating its law this year to include penalties such as requiring a candidate to forfeit their office or nomination if they violate the state’s deepfake laws.
In states like New York, New Mexico, and Alabama, victims can seek court orders to halt the distribution of such content. Violations of deepfake-related laws can result in prison time in Florida, Mississippi, New Mexico, and Alabama. For instance, in Mississippi, violating the law with the intent to deter voting or incite violence can lead to a maximum of five years in prison, while in Florida, it is classified as a first-degree misdemeanor, punishable by up to one year in jail.
Some states also impose significant fines for violations. In Utah and Wisconsin, violators can be fined up to $1,000 per offense, and in Oregon and Mississippi, fines can reach up to $10,000.
“A whole new world”
Although candidates already have ways to challenge misleading ads, it’s uncertain whether these laws will be effective against deepfakes, according to Amy Beth Cyphert, a law lecturer at West Virginia University’s College of Law. She noted that AI presents a unique challenge because of how quickly it evolves. “Anyone, even with minimal technical skills, could likely create a deepfake if they know where to look,” she said, underscoring how widespread the problem could become.
Arizona state Rep. Alexander Kolodin, a Republican who sponsored a new law regulating AI-generated content, was motivated by the ability of deepfakes to create realistic voice depictions. His legislation allows candidates to obtain court orders declaring manipulated content as deepfakes. Kolodin believes such an order is “a powerful tool” that can help candidates counteract rapidly spreading deepfakes.
Kolodin emphasized that while deception in politics is not new, the technology to create deepfakes is. He even used ChatGPT to draft part of the bill that defines “digital impersonation.” Arizona Democratic Gov. Katie Hobbs signed this bill into law in May, along with another requiring disclosures in campaign ads.
Big Tech companies have also taken steps to moderate deepfake content. TikTok and Meta (the parent company of Instagram, Threads, and Facebook) recently announced plans to label AI-generated content, while YouTube requires creators to disclose when videos are AI-created.
Swift federal regulation of AI and deepfakes is uncertain
Despite the progress at the state level, “the story is not optimistic on the federal side,” according to Robert Weissman, president of Public Citizen, a group advocating for state-level action and monitoring legislation on deepfakes in elections.
Bills requiring clear labeling of deepfakes have been introduced in Congress, but there’s little indication that lawmakers will act on them before November. While Senate Majority Leader Chuck Schumer supports such legislation, Minority Leader Sen. Mitch McConnell believes that the existing legal framework for removing deceptive campaign ads can be “easily” applied to deepfakes. In the House, bipartisan legislation has stalled in committee.
In the absence of congressional action, agencies like the Federal Election Commission (FEC) and the Federal Communications Commission (FCC) are responsible for regulating AI in campaign ads. Public Citizen petitioned the FEC to take action last year due to concerns that deepfakes could mislead voters, but the agency has yet to issue a rule. Weissman expressed skepticism about the FEC’s potential to act in a timely manner.
FEC Chairman Sean Cooksey stated that he expects the agency’s rulemaking to conclude later this year. Meanwhile, the FCC has unanimously voted to ban the use of AI-generated voices in robocalls and recently proposed requiring AI disclosures in political TV and radio ads. It remains uncertain if these rules will be finalized before the upcoming election. FCC Chair Jessica Rosenworcel is committed to following the regulatory process but emphasized the urgency of taking action, according to spokesperson Jonathan Uriarte.
Looking towards November
Not all state-level bills addressing deepfakes have made it to governors’ desks due to debates over their scope and impact. According to Public Citizen, over 40 states introduced deepfake-related bills in 2024.
In Georgia, a key battleground state in the 2020 presidential election, there is no law requiring disclosure for political ads featuring deepfakes. Despite bipartisan support, a bill addressing the issue passed the House but ultimately stalled in the Senate.
State Rep. Dar’shun Kendrick, a Democrat who was involved with the bill, expressed disappointment that it didn’t pass. She acknowledged the possibility of bad actors exploiting the lack of regulation but hoped any issues would be quickly addressed.
In the meantime, states are exploring other measures to combat harmful deepfakes. For instance, Arizona is training election workers to identify deepfakes, while New Mexico’s secretary of state has launched a voter education campaign to help people spot them.
According to Alex Curtas, a spokesperson for New Mexico’s secretary of state, the educational campaign complements the state’s disclosure law; for either to succeed, he said, the two approaches need to work together.