Georgie Purcell’s Photoshop incident demonstrates why transparency is critical when it comes to AI

Nine News’ editing blunder on an MP’s photograph foreshadows a media world increasingly reliant on artificial intelligence

It’s been three years since Australia’s last Photoshop incident, involving then-Prime Minister Scott Morrison’s white sneakers, but it feels like a lifetime ago.

Georgie Purcell, an Animal Justice Party MP, had her photo altered this week to enlarge her breasts and turn her dress into a crop top that exposed her midriff. Purcell, who has previously been a victim of image-based abuse, described the experience as violating, and said Nine News’ explanation failed to address the issue.

Nine, for its part, blamed an “automation” feature in Photoshop – the recently announced “generative fill”, which, as the name implies, uses artificial intelligence to fill in the gaps when an image is extended beyond its original borders. According to Nine, staff began with a cropped version of the original photograph and used the feature to extend it beyond its existing bounds. Whoever altered the image then most likely exported the updated version without considering the implications of the changes.
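Generative fill is essentially AI “outpainting”: a model invents plausible pixels for the parts of the frame the original photo never covered. Adobe’s implementation is proprietary, but the general idea can be sketched with the open-source diffusers library and a Stable Diffusion inpainting model – the model name, file names and prompt below are illustrative assumptions, not Nine’s or Adobe’s actual workflow.

```python
# Sketch of generative "outpainting": pad a cropped photo, then ask a
# diffusion inpainting model to invent content for the new border.
# Assumes the diffusers, torch and Pillow packages; model, paths and
# prompt are illustrative stand-ins, not Adobe's pipeline.
import torch
from PIL import Image, ImageOps
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

cropped = Image.open("cropped_photo.png").convert("RGB")

# Extend the canvas; the black border is the area the model must fill in.
pad = 128
extended = ImageOps.expand(cropped, border=pad, fill=0)

# Mask: white where new pixels should be generated, black where the
# original photo must be kept untouched.
mask = Image.new("L", extended.size, 255)
mask.paste(0, (pad, pad, pad + cropped.width, pad + cropped.height))

# The model hallucinates a plausible continuation of the scene for the
# masked border - this is where unintended "enhancements" can creep in.
result = pipe(
    prompt="natural continuation of the photo",
    image=extended.resize((512, 512)),
    mask_image=mask.resize((512, 512)),
).images[0]

result.save("extended_photo.png")
```

The point of the sketch is that the new pixels are statistical guesses rather than a record of reality – which is exactly why an editor needs to review, and disclose, what the tool has invented.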

The Photoshop mishap seems to foreshadow a media world that increasingly relies on artificial intelligence, where telling whether something was made by a human or a machine becomes ever more difficult, and AI becomes a handy scapegoat for explaining away mistakes.

The incident also reveals that Nine is using AI to manipulate the images it broadcasts without disclosing it.

In August, Nine’s CEO, Mike Sneesby, stated that he saw “potential for Nine to use AI to drive meaningful, long-term benefits in content production, operational efficiency, and commercialisation throughout the business.”

Adobe’s generative fill technology undoubtedly improves “operational efficiency”, but should Nine have announced its use of the feature and noted it on broadcast images?

Although Nine has apologised and accepted responsibility, the incident appears to breach the (voluntary) Australian AI ethics principles, which state that people responsible for AI systems should be identifiable and accountable for their outcomes, and that there should be human oversight.

The Media, Entertainment and Arts Alliance’s journalist code of ethics agrees, noting that pictures and sound must be true and accurate, and that “manipulation likely to mislead should be disclosed”.

On the technical side, the incident raises questions about Adobe’s AI training data. Guardian Australia’s tests this week found that applying generative fill to photographs of women tended to produce shorter shorts, a result Crikey was also able to reproduce.

Adobe said in a statement that it trained its model on “diverse image datasets” and continually tests it to avoid “perpetuating harmful stereotypes”. The company also said it relies on users reporting potentially biased outputs to improve its processes.

“This two-way dialogue with the public is critical so that we can work together to continue to make generative AI better for everyone,” the statement said.

Not only do AI tools generate bogus images, video, and audio, but they also sow doubt about everything else.

Australia has yet to see a politician claim that an unflattering audio recording or video is an AI deepfake, but it won’t be long.

In the United States, right-wing political operative Roger Stone said last month that leaked audio of him threatening to kill Democrats was generated by artificial intelligence. At the same time, an AI-faked version of US President Joe Biden’s voice was being used in robocalls spreading false information about the New Hampshire primary.

When you can’t tell what’s real and what’s generated by artificial intelligence, everything becomes dubious. That makes disclosure critical for media companies as well as technology companies.

Globally, policymakers are still working out how to build guardrails, and progress has been slow. After sexually explicit deepfakes of Taylor Swift circulated online last week, legislation was introduced in the United States to criminalise the sharing of non-consensual, sexualised images made with artificial intelligence.

Australia is likely to follow with a similar prohibition through codes enforced by the eSafety Commissioner, but it has largely watched from the sidelines. Last month the government announced that an “expert panel” would advise on the best next steps for high-risk AI.

And some of these challenges will be addressed by existing legislation. Dr Rita Matulionyte, a senior lecturer in law at Macquarie University who has written a paper on AI and moral rights, told Guardian Australia that copyright law, for example, prohibits “derogatory treatment” of copyright works, such as alteration or mutilation by AI, although there have been few cases where this has been successfully argued.

Matulionyte said it was uncertain whether such a provision would help Purcell, because she was not the photographer and the alteration might not be significant enough.

“If the person in the image was stripped of most/all of the clothes or a background were added that would mutilate the idea behind the picture, then the infringement of the right of integrity would be more likely to succeed,” the lawyer noted.

Finally, everything comes down to transparency.

The government has said it will work with industry to create a “voluntary code” for labelling or watermarking AI-generated content. Relying on the goodwill of this technology’s major players to do the right thing is simply not an option.
