Australian Experts Caution that Google and Meta May Be Exposed to Defamation Risks Due to AI-Generated Responses

Meta acknowledges that its AI is still developing and may occasionally produce unintended responses. Meanwhile, Google states that its Gemini AI is designed to offer a balanced perspective in its outputs.

A lawyer suggests that tech platforms could be held accountable for content generated by their AI systems, a warning that comes as Google Maps introduces new features powered by Gemini.

Experts warn that Google and Meta face increased defamation risks by using AI to generate responses based on user comments or reviews, especially in contexts like restaurant queries or summarizing user sentiment.

In Australia, defamation actions typically target the individual user posting defamatory content on platforms like Google or Facebook. However, a 2021 High Court ruling in Dylan Voller’s case—where news outlets were held liable for defamatory comments on their Facebook pages—set a precedent that the host of defamatory content could also be liable. This ruling has potential implications for AI-driven content generated by tech companies.

Historically, Australian courts have occasionally held tech giants accountable. For example, in 2022, Google was ordered to pay over $700,000 to former NSW Deputy Premier John Barilaro for hosting defamatory content. Additionally, the company faced a $40,000 penalty in 2020 over search results linking to a defamatory news article about a lawyer, though this ruling was later overturned by the High Court.

Recently, Google began deploying its Gemini AI in the U.S. for Maps, enabling users to ask about places and activities, with AI summarizing user reviews for various locations. In Australia, Google also started rolling out search summaries. Similarly, Meta introduced AI-generated summaries of comments on Facebook posts, including those by news outlets.

Defamation expert Michael Douglas suggests that as AI becomes more integrated, legal challenges could follow. If AI on platforms like Meta generates defamatory responses, the platform may be considered a publisher and thus liable. While companies may argue for “innocent dissemination” defenses, Douglas questions the defense’s effectiveness if platforms are reasonably aware of potentially defamatory content.

Prof. David Rolph from the University of Sydney notes that AI's replication of potentially defamatory remarks could be problematic. However, recent Australian defamation reforms, which introduced a "serious harm" threshold, might limit liability. Rolph also highlights a gap in defamation law regarding AI, as the legislation predates large language models. He advocates for more frequent reform of defamation law to keep pace with the rapid evolution of technology.

According to Rolph, AI's variability, producing different responses to different inputs, might actually limit exposure by reducing the number of users who encounter any particular defamatory output.

Addressing these concerns, Google Maps Vice President Miriam Daniel stated that their team actively removes fake reviews and aims for balanced AI-generated summaries by capturing themes across positive and negative feedback. Meta’s spokesperson acknowledged that AI may not always produce intended outputs and that the company continuously improves its models.
