Meta is taking steps to address the proliferation of AI-generated images that could distort the information landscape, particularly ahead of the 2024 election season. The company plans to label such images shared on its platforms when they are created with third-party AI tools from companies including Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. The initiative, announced by Meta Global Affairs President Nick Clegg, extends the existing “Imagined with AI” label already applied to photorealistic images generated with Meta’s own AI tool.
To standardize this process, Meta is collaborating with other major firms in the AI field to establish common technical standards, such as embedding invisible metadata or watermarks within images, enabling their systems to detect AI-generated content originating from various tools. These labels will be introduced across Meta’s platforms, including Facebook, Instagram, and Threads, and will be available in multiple languages.
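As a rough illustration of how such metadata-based detection can work, the sketch below scans a JPEG’s raw bytes for an embedded XMP packet and checks for the IPTC digital-source-type term “trainedAlgorithmicMedia,” which is the standard value used to mark fully AI-generated media. This is a simplified, hypothetical example (the function name and approach are ours, not Meta’s actual detection pipeline, which relies on more robust invisible watermarks as well):

```python
def find_ai_provenance_marker(jpeg_bytes: bytes) -> bool:
    """Return True if the image bytes carry an XMP hint of AI generation.

    Illustrative sketch only: real provenance systems (e.g. C2PA) use
    cryptographically signed manifests and invisible watermarks, not a
    plain substring scan.
    """
    # Locate the XMP metadata packet embedded in the JPEG, if any.
    start = jpeg_bytes.find(b"<x:xmpmeta")
    if start == -1:
        return False
    end = jpeg_bytes.find(b"</x:xmpmeta>", start)
    if end == -1:
        return False
    xmp = jpeg_bytes[start:end]
    # "trainedAlgorithmicMedia" is the IPTC DigitalSourceType term for
    # content produced entirely by a generative model.
    return b"trainedAlgorithmicMedia" in xmp
```

Because the marker lives in ordinary metadata, it can be stripped by re-encoding the file, which is why the companies are also pursuing invisible watermarks embedded in the pixels themselves.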
This move by Meta reflects growing concerns among experts, lawmakers, and tech leaders about the potential misuse of AI-generated images, especially when combined with social media’s rapid dissemination capabilities. There’s apprehension that these sophisticated AI tools could facilitate the spread of misinformation, potentially influencing voters in the upcoming elections, not only in the United States but also in numerous other countries.
The announcement also follows criticism from Meta’s Oversight Board of the company’s manipulated media policy, prompted by a case involving an altered video of US President Joe Biden. The Biden campaign denounced the policy as “nonsensical and dangerous.” Meta has pledged to review the Oversight Board’s recommendations and respond within 60 days.
Acknowledging the need for transparency, Clegg emphasized the importance of clearly labeling AI-generated imagery, especially as users encounter such content for the first time. Meta plans to maintain this approach over the next year, during which major elections are scheduled around the world, using the period to learn how users want transparency handled and how AI technologies evolve.
While Meta plans to adopt industry-standard markers for labeling AI-generated images, the labels will not initially extend to AI-generated video and audio. Instead, Meta will require users to disclose when they share digitally created or altered video or audio content, with penalties for failing to do so. Where such content poses a high risk of deceiving the public, Meta may apply more prominent labels.
Meta is also focusing on preventing the removal of invisible watermarks from AI-generated images to thwart potential misuse. Clegg highlighted the importance of vigilance, as malicious actors may attempt to bypass safeguards to deceive with AI-generated content. Users are advised to consider various factors, such as the credibility of the account sharing the content and the presence of unnatural details, to determine if content is AI-generated.
In addition, Meta announced the expansion of the “Take it Down” tool, developed in collaboration with the National Center for Missing & Exploited Children, aimed at combating sextortion. This tool enables teens and parents to create unique identifiers for intimate images, facilitating their removal from online platforms. Originally available in English and Spanish, “Take it Down” will now support 25 languages and expand to more countries.
This expansion follows Meta CEO Mark Zuckerberg’s recent appearance before the Senate, during which he faced scrutiny over the company’s protections for young users.