Meta aims to label all AI-generated images on Instagram and Facebook in a crackdown on misleading information

Global affairs president Nick Clegg says users ‘want to know where the boundary is’ amid a rise in AI-generated content.

Meta is aiming to detect and label AI-generated images on Facebook, Instagram, and Threads as part of its effort to expose “people and organizations that actively seek to deceive people.”

Photorealistic images created using Meta’s own AI imaging tool are already labeled as AI, but the company’s president of global affairs, Nick Clegg, said in a blog post on Tuesday that Meta would work to begin labeling images posted to its platforms that were generated with rival companies’ AI tools.

According to Clegg, Meta’s AI images already include metadata and invisible watermarks that can signal to other organisations that an image was created by AI, and the company is building tools to detect the equivalent markers applied by other companies’ AI image generators, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
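Clegg’s description points to a two-layer scheme: machine-readable metadata tags plus invisible watermarks. Below is a minimal sketch of how the metadata side of such a check might look, assuming the IPTC `DigitalSourceType` convention that image generators can embed in a file’s XMP packet. The byte-scan approach and the example bytes are illustrative assumptions, not Meta’s actual implementation, and this does not cover watermark detection at all.

```python
# Illustrative sketch only: scan an image file's raw bytes for the IPTC
# "DigitalSourceType" value that AI generators can embed in XMP metadata.
# A real detector would parse the XMP packet properly and also check for
# invisible watermarks, which a byte scan cannot see.

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for AI-generated media

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the IPTC AI-source tag."""
    return AI_SOURCE_MARKER in image_bytes

# Hypothetical example: a stub of JPEG bytes with an embedded XMP tag.
fake_ai_image = (
    b"\xff\xd8<x:xmpmeta>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)
print(looks_ai_generated(fake_ai_image))     # True
print(looks_ai_generated(b"\xff\xd8plain"))  # False
```

One consequence of relying on metadata alone, which Clegg acknowledges below, is that such tags are trivially stripped; that is why the invisible-watermark layer and automatic detection matter.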

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg told the BBC.

“People are frequently encountering AI-generated content for the first time, and our consumers have told us that they value clarity surrounding this new technology. So it’s critical that we inform consumers that the photorealistic content they’re viewing has been made with AI.”

Clegg said the capability was still being built, and that the labels would roll out in all languages over the next few months.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” Clegg said in a statement.

Clegg said the markers were currently confined to images: AI systems that generate audio and video do not yet carry them, but the company will let users disclose and label such content when they post it.

He also said that digitally created or altered images, video, or audio that “creates a particularly high risk of materially deceiving the public on a matter of importance” would receive a more prominent label.

The company was also working to develop technology that could automatically detect AI-generated content even when the invisible markers are missing or have been stripped.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” Clegg said in a statement.

“People and organisations who actively aim to deceive people with AI-generated content will seek ways to circumvent the measures put in place to detect it. We will need to continue to look for methods to stay one step ahead in our sector and society as a whole.”

AI deepfakes have already made their way into the US presidential election season, with robocalls featuring what appeared to be an AI-generated deepfake of President Joe Biden’s voice discouraging voters from voting in the Democratic primary in New Hampshire.

Last week, Nine News in Australia drew criticism for manipulating an image of Victorian Animal Justice Party MP Georgie Purcell, exposing her midriff and altering her chest, in an evening news bulletin. The network blamed “automation” in Adobe’s Photoshop, which includes AI image tools.
