Opinion: What's at stake if major technology companies fail to confront misleading AI-generated content.

Technology companies including Google, Microsoft, Meta, and X have committed to tackling AI-generated disinformation.

On Friday, a coalition of 20 technology companies pledged to improve the detection and mitigation of AI-generated deceptive content, particularly content aimed at misleading voters during this year's major election cycles. The pledge includes promises of swift and proportionate responses to deceptive AI election content, along with educational resources to help citizens protect themselves from manipulation. Notably, the agreement stops short of banning political “deepfakes”: fabricated audio or video depictions of candidates and public figures. Nor have the platforms committed to reinstating the extensive election integrity teams they once employed, despite the lessons of the 2020 election, when the spread of misinformation contributed to the violence at the US Capitol.

Under the accord, the platforms have committed to putting robust measures in place in 2024 to manage the risks of deceptive AI election content, guided by principles of prevention, detection, evaluation, and public awareness. But avoiding a repeat of 2020 will demand far more proactive effort, since advances in the technology make it easier than ever to produce highly convincing deceptive content. These commitments must also be backed by tangible enforcement and debunking, both of which have been inconsistent in the past.

Recent research by Free Press documents a rollback of platform policies and deep layoffs in content moderation and trust and safety roles, a clear retreat from accountability. That retreat undermines efforts to combat misinformation at the very moment sophisticated AI tools for creating deepfakes are becoming widely accessible. Without rigorous enforcement of rules against voter disinformation, the risk of high-tech election manipulation only grows.

Instances of AI abuse are already surfacing, such as a fake audio recording that circulated during the Chicago mayoral race. Free Press has urged major tech companies to adopt comprehensive safeguards against the abuse of AI tools, including reinvesting in human content moderators and committing to transparency through the regular release of core metrics data. Legislative action is also essential to set clear rules for AI technology, particularly deepfakes; the Federal Trade Commission has already proposed regulations to address AI impersonation.

The evolving online landscape underscores how essential reliable information is to democracy's survival. The tech companies' voluntary commitments must translate into substantive action, including the permanent restoration of election integrity teams and stringent enforcement against the abuse of AI tools. Failure to act could jeopardize democracies worldwide.
