The physicist Max Tegmark says competition is too fierce for tech leaders to pause development and study the perils of AI.
The scientist behind a landmark letter calling for a halt to the creation of powerful AI systems says tech executives did not stop their work because they are locked in a “race to the bottom.”
The Future of Life Institute co-founder Max Tegmark organized an open letter in March asking for a six-month moratorium on the creation of massive AI systems.
More than 30,000 people signed the document, including Elon Musk and Steve Wozniak, the co-founder of Apple, yet it was unable to stop the development of the most ambitious systems.
Six months later, Tegmark told the Guardian that he had not expected the letter to stop tech companies from working towards AI models more potent than GPT-4, the large language model that powers ChatGPT, given the fierce rivalry in the market.
“I had the impression that many of the corporate leaders I spoke with wanted [a pause] in private, but they were caught in this downward spiral of competition. So no business can pause by itself,” he said.
The letter urged governments to step in if a moratorium on developing systems more powerful than GPT-4 could not be agreed upon by leading AI companies like Google, ChatGPT owner OpenAI, and Microsoft. It warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control.”
It asked whether we should create artificial intelligence that could one day outnumber, outsmart, and even replace humans, and whether we should risk losing control of our society.
Tegmark, a professor of physics at the Massachusetts Institute of Technology, nonetheless described the letter as a triumph.
“The letter has had more impact than I thought it would,” he added, citing a political awakening on AI that has included US Senate hearings with tech leaders and the UK government organizing an international summit on AI safety in November.
Since the letter’s release, Tegmark said, expressing fear about AI has gone from being taboo to a mainstream viewpoint. The letter from his think tank was followed in May by a statement from the Center for AI Safety, signed by hundreds of tech executives and academics, declaring that AI should be treated as a societal risk on a par with pandemics and nuclear weapons.
“I thought there was a lot of unspoken nervousness about moving forward with AI, which people all around the world were hesitant to express out of concern that they would be perceived as scare-mongering luddites. The letter made it socially acceptable to talk about it and gave it legitimacy.
“So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose think tank investigates existential threats and potential advantages from cutting-edge technology.
Concerns about AI development range from the immediate, such as the capacity to create deepfake videos and spread misinformation at scale, to the existential risk posed by super-intelligent AIs that evade human control or make irreversible decisions with far-reaching consequences.
Tegmark cautioned against dismissing the emergence of digital “god-like general intelligence” as a distant threat, noting that some AI experts believe it could arrive within a few years.
The Swedish-American scientist welcomed the upcoming UK AI safety summit, which will take place at Bletchley Park in November. His think tank has suggested that the summit should focus on three outcomes: developing a shared understanding of the gravity of the challenges posed by AI; acknowledging the need for a coordinated global response; and accepting the necessity of immediate government intervention.
He went on to say that development would still need to be paused until models more powerful than those in use today could meet universally accepted safety standards. “Agreeing on what the safety standards are will naturally cause the pause,” he said.
Tegmark also urged governments to act on open-source AI models, which the general public can access and modify. One UK expert warned that the recent release by Mark Zuckerberg’s Meta of its open-source Llama 2 large language model was akin to “giving people a template to build a nuclear bomb.”
Dangerous technology, whether bioweapons or software, should not be open source, Tegmark said.