Meta, the parent company of Instagram and Facebook, has announced that it will require political advertisers worldwide to disclose any use of artificial intelligence in their ads. The move is part of a broader effort to curb the spread of “deepfakes” and other digitally altered, misleading content.
The new policy is slated to take effect next year, ahead of the 2024 US election and other upcoming elections around the world. It applies to any political or social-issue advertisement on Facebook and Instagram that uses digital tools to create images of people who do not exist, misrepresent real events, or depict individuals saying or doing things they did not, the company said in a blog post.
Uses of AI that are minor or immaterial to the claim, assertion, or issue raised in an ad, such as image cropping or color correction, will not require disclosure.
The announcement follows Meta’s recent decision to bar political advertisers from using its own AI advertising tools, which can generate backgrounds, suggest marketing text, or provide music for videos.
Microsoft recently took a comparable step, introducing a tool, to be offered free to political campaigns in the spring, that can add a “watermark” to campaign content to confirm its authenticity for viewers.
Microsoft President Brad Smith explained that these credentials become a permanent part of the content’s history, traveling with it wherever it is shared and establishing a lasting record and context. Users who encounter an image or video can click on an embedded pin containing Content Credentials to learn about its creator and origin.
The effort to curb politicians’ use of AI in advertisements reflects widespread concerns expressed by civil society groups and policymakers regarding the potential threats to democracy posed by the proliferation of AI-generated content in political discourse. Many have warned that the rise of disinformation, whether from foreign or domestic actors, could be greatly amplified by artificial intelligence. These concerns are exacerbated by recent reductions in content moderation teams across the industry.
The move also marks a rare instance of Meta imposing rules on political speech. The platform has long faced criticism for allowing politicians to make false claims in their campaign ads and for exempting politicians’ speech from third-party fact-checking. Meta CEO Mark Zuckerberg has previously argued that politicians should be free to make false statements, leaving viewers and voters to decide how to hold them accountable.
However, the decisions to require political advertisers to disclose their use of AI and to restrict their access to Meta’s own AI tools suggest there are limits to how far Zuckerberg is willing to let politicians leverage new technology.
Meta said in its Wednesday blog post that it will reject ads from advertisers who fail to disclose as required, and that repeated failures to disclose could result in penalties against the advertiser.