OpenAI argues that the New York Times ‘hacked’ ChatGPT to develop a copyright case

In a Monday filing, OpenAI says a ‘hired gun’ took ‘tens of thousands of attempts to generate the highly anomalous results’.

OpenAI has asked a federal judge to dismiss parts of the New York Times’ copyright lawsuit against it, arguing that the newspaper “hacked” its chatbot ChatGPT and other artificial intelligence systems to generate misleading evidence for the case.

In a filing in Manhattan federal court on Monday, OpenAI said the Times used “deceptive prompts that blatantly violate OpenAI’s terms of use” to get the technology to reproduce its content.

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards,” OpenAI said in a statement. “The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products.”

OpenAI did not identify the “hired gun” it said the Times used to manipulate its systems, nor did it accuse the newspaper of violating any anti-hacking laws. Representatives of the New York Times and OpenAI did not immediately respond to requests for comment on the filing.

In December, the Times sued OpenAI and its largest financial backer, Microsoft, alleging that they used millions of its articles without permission to train chatbots that deliver information to users.

The Times is one of several copyright owners who have sued tech companies for allegedly misusing their work in AI training, along with authors, graphic artists, and music publishers.

Tech companies have argued that their AI systems make fair use of copyrighted material, and that the lawsuits threaten a potential multitrillion-dollar industry.

Courts have yet to decide whether AI training counts as fair use under copyright law. So far, judges have dismissed some infringement claims against generative AI systems for lack of evidence that AI-generated content resembled the copyrighted works.

The New York Times’ complaint cited multiple instances in which OpenAI and Microsoft chatbots served users near-verbatim excerpts of its articles when prompted. It accused OpenAI and Microsoft of trying to “free-ride on the Times’s massive investment in its journalism” by creating a substitute for the newspaper.

According to OpenAI, it took “tens of thousands of attempts to generate the highly anomalous results” for the Times.

“In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” a spokesperson for OpenAI said.

OpenAI’s filing also argued that it and other AI companies would ultimately prevail in these cases on fair-use grounds.

“The Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organization can prevent the Times itself from re-reporting stories it had no role in investigating,” OpenAI said in the filing.
