The absence of Microsoft, OpenAI, and Google from the group points to a deepening rift over responsible AI development and regulation.
On Tuesday, IBM and Facebook’s parent company, Meta, launched the AI Alliance, a new organization that positions itself against Google, Microsoft, and ChatGPT creator OpenAI by promoting an “open-science” approach to AI development.
The question of whether to develop AI in a way that makes the underlying technology broadly available divides the two camps, the open and the closed. The debate centers on safety, but it also revolves around who stands to profit from AI’s advances.
Open proponents support a strategy that is “not proprietary and closed,” according to IBM’s senior vice-president for research Darío Gil. “Therefore, it is not an object that is sealed away and remains unknown to all.”
In an interview with the Associated Press before the announcement, Gil said the AI Alliance is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies.” Led by IBM and Meta, the alliance includes Dell, Sony, the chipmakers AMD and Intel, several universities, and AI startups. The coalition is also likely to lobby lawmakers to ensure new legislation works in its favor.
Yann LeCun, the chief AI scientist at Meta, attacked OpenAI, Google, and the startup Anthropic on social media this fall, accusing them of engaging in “massive corporate lobbying” to shape regulations in favor of their powerful AI models and to concentrate their control over the field’s advancement. The three companies, together with OpenAI’s principal partner Microsoft, have established an industry association called the Frontier Model Forum.
On X, formerly known as Twitter, LeCun expressed concern that scientists’ scare tactics about AI “doomsday scenarios” were arming opponents of open-source AI research and development.
“We need the platforms to be open source and freely available so that everyone can contribute to them in a future where AI systems are poised to constitute the repository of all human knowledge and culture,” LeCun wrote. “The only way to ensure that AI platforms fully represent human knowledge and culture is through openness.”
The disagreement feeds into a much longer-running fight that predates the AI boom, according to IBM, one of the early proponents of the open-source Linux operating system in the 1990s.
Chris Padilla, head of IBM’s worldwide government affairs team, described it as “an attempt to create fears about open-source innovation in a classic regulatory capture approach.” Padilla said Microsoft has followed this playbook for decades, consistently opposing open-source software that could rival Office or Windows, and is adopting a similar strategy here.
The phrase “open-source” refers to a long-standing practice in software development in which the source code is freely or publicly available for anyone to review, alter, and build upon.
There is more to open-source AI than just code, and computer scientists disagree on how to define it, depending on which components of the technology are made publicly available and whether there are restrictions on its use. Some prefer the broader term “open science” for the philosophy.
Part of the confusion around open-source AI stems from the fact that OpenAI, the company behind ChatGPT and the image generator Dall-E, builds AI systems that are, despite its name, decidedly closed.
“To put it bluntly, there are short-term and financial incentives working against open source,” Ilya Sutskever, OpenAI’s co-founder and chief scientist, said in an April video interview hosted by Stanford University. But he also raised a longer-term concern: an AI system with “mind-bendingly powerful” capabilities could be too dangerous to release to the general public.
Sutskever used the example of an AI system that had discovered how to launch its own biological laboratory to argue against open-source risks.
Even today’s AI models carry hazards, said David Evan Harris of the University of California, Berkeley: they could be used, for instance, to ramp up disinformation campaigns intended to disrupt democratic elections.
Open source is fantastic in many areas of technology, Harris noted, but artificial intelligence is different.
“Anyone who saw the Oppenheimer movie knows this: there are a lot of reasons to consider carefully how widely to share the details of all that information in ways that could end up in the wrong hands when big scientific discoveries are being made,” he said.
One of the organizations calling attention to the dangers of open-source or leaked AI models is the Center for Humane Technology, a longtime opponent of Meta’s social media policies.
The group’s Camille Carlton stated, “It’s just completely irresponsible to be deploying these models to the public as long as there aren’t guardrails in place right now.”
There is a growing public discussion on the advantages and risks of developing AI via an open-source methodology.
In the conversation around Joe Biden’s expansive executive order on AI, it was easy to overlook the “open-source” dispute.
The US president’s order referred to open models by the technical term “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical parameters that shape how an AI model performs.
According to Biden’s order, when those weights are posted publicly online, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model.” He gave Gina Raimondo, the commerce secretary, until July to consult with experts and report back with recommendations on how to balance the potential benefits and risks.
The European Union has less time to work things out. Officials trying to finalize approval of the world’s most significant AI policy are still wrangling over several issues, including a provision that would exempt certain “free and open-source AI components” from rules governing commercial models. Those negotiations are set to conclude on Wednesday.