Google will require political advertisements to disclose their use of artificial intelligence (AI).

Google will soon require political advertisements on its platforms to tell viewers when images and audio have been created using artificial intelligence (AI).

The rules were introduced in response to the “increasing prevalence of tools that generate artificial content,” a Google spokesperson told the BBC.

These changes are set to take effect in November, roughly a year ahead of the next US presidential election. Concerns have arisen about the potential for AI to amplify disinformation during election campaigns.

Google’s current advertising policies already prohibit the manipulation of digital media with the intent to deceive or mislead the public regarding political matters, social issues, or topics of public importance. However, this update will require political ads related to elections to prominently disclose the presence of “synthetic content” that portrays real or realistic-looking individuals or events.

Google has proposed using labels like “this image does not represent actual events” or “this video content was artificially generated” as indicators.

Google’s ad policy also prohibits demonstrably false claims that could erode trust in the electoral process.

In accordance with its existing practices, Google mandates that political ads disclose their source of funding, with information about these messages available in an online ads database. Any disclosures about digitally altered content within election ads must be easily noticeable and conspicuous.

Examples of content that would require labeling include synthetic images or audio depicting individuals saying or doing things they never did or representing events that never occurred.

In March, an AI-generated fake image of former US President Donald Trump being arrested circulated on social media. That same month, a deepfake video surfaced showing Ukrainian President Volodymyr Zelensky purportedly discussing surrendering to Russia.

In June, a campaign video from Ron DeSantis criticizing former President Trump included images that appeared to bear the hallmarks of AI generation.

The video, shared in a tweet, featured photos that appeared to have been manipulated to show Mr. Trump embracing Anthony Fauci, a prominent member of the US coronavirus task force, and kissing him on the cheek.

AI experts have told the BBC that while manipulated imagery is not a new phenomenon, the rapid advances in generative AI and its potential for misuse are causes for concern.

Google has stated that it is committed to investing in technology aimed at detecting and removing such content from its platforms.
