A group of media executives has called on lawmakers to pass legislation that would compel artificial intelligence (AI) developers to compensate publishers for the use of their content in training AI models. The request was made during a US Senate hearing prompted by concerns about the impact of AI chatbots, particularly OpenAI’s ChatGPT, on a media industry that has already shed a significant number of jobs in recent years.
Roger Lynch, the CEO of Condé Nast, told senators that current AI models are effectively built on “stolen” content, as chatbots scrape and display news articles without obtaining permission from or providing compensation to publishers. Lynch emphasized that news organizations typically have no control over whether their content is used in training or appears in a model’s output.
He pointed out that opt-out options offered by AI companies do not address the core issue, since the models have already been trained. The only practical effect of such opt-outs is to prevent new entrants from building models that could compete with existing ones.
While a lawsuit filed by The New York Times in December highlighted news publishers’ concerns about AI models scraping their articles without compensation, the problem extends beyond the news media. In 2023, two significant lawsuits were filed against AI companies: one by comedian Sarah Silverman and two other authors, and a class-action suit involving prominent authors including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, and George R. R. Martin.
To prevent the unauthorized use of news publishers’ content and to protect their financial stability, Lynch proposed that AI companies use licensed content and compensate publishers for material used in both training and output. This approach, he argued, would create a sustainable, competitive ecosystem in which high-quality content is produced and reputable brands can thrive, supplying the information that society and democracy need.
Danielle Coffey, the President and CEO of the News Media Alliance, emphasized that a robust licensing ecosystem already exists in the news media industry, with many publishers having digitized archives spanning hundreds of years. She also noted that AI models can introduce inaccuracies and “hallucinations” when they scrape content from less reputable sources, risking the spread of misinformation or damage to a publication’s reputation.
Curtis LeGeyt, the President and CEO of the National Association of Broadcasters, highlighted the trust that local broadcast personalities have built with their audiences, which could be undermined by AI-generated deepfakes and misinformation.
In the end, legal safeguards that protect news publishers from content misuse may also benefit AI developers in the long run: Coffey argued that generative AI models and products can coexist with quality content, ensuring a sustainable future for both.