AI could trigger a “catastrophic” financial crisis, says Yuval Noah Harari

The Sapiens author and historian says the technology is so sophisticated that its risks are difficult to predict.

Author and historian Yuval Noah Harari says artificial intelligence could trigger a financial crisis with “catastrophic” consequences, and that the technology’s sophistication makes its risks difficult to predict.

Speaking to the Guardian, Harari said one problem with safety testing AI models is anticipating every issue that a powerful system might cause. Unlike nuclear weapons, he said, there is no single “big, dangerous scenario” that everyone understands.

“When it comes to AI, you’re dealing with a plethora of potentially hazardous scenarios, each with a very low likelihood, which taken together pose an existential threat to the survival of human civilization.”

The author of Sapiens, who has been a prominent voice of concern about AI development, called the multilateral declaration from last week’s global AI safety summit at Bletchley Park a “very important step forward,” because it represented major governments coming together to voice concern about the technology and to act on it.

Perhaps the most positive or hopeful development, he added, was that the Chinese government was persuaded to sign the declaration alongside the US, the UK, and the EU. He called that an encouraging sign, saying it would be very difficult, if not impossible, to rein in the most dangerous possibilities of AI without international cooperation.

Following the summit, ten governments (including the US and UK, but not China), the EU, and leading AI firms, among them Google and ChatGPT developer OpenAI, agreed to work together on testing cutting-edge AI models before and after their public release.

Returning to the difficulty of anticipating every hazard a powerful system might pose, Harari said: “AI is unlike all other technologies in human history, since it is the first able to make decisions on its own, come up with new ideas on its own, and grow and learn on its own. It is almost impossible for humans, even the ones who developed the technology, to anticipate every possible risk and issue.”

Governments have pointed to the risk that AI systems could aid the development of bioweapons when warning the public about the technology’s dangers, but other scenarios deserve attention, according to Harari. He identified finance as both a field ideally suited to artificial intelligence systems and a potential source of a serious AI-caused disaster, arguing that “this is the ideal field for AI because it’s only data.”

“What would happen if AI not only gained more authority over the global financial system but also began creating new financial instruments that only AI can understand and no human can?” According to Harari, the financial crisis of 2007–2008 was brought on by debt instruments such as collateralized debt obligations (CDOs), which few people understood and which were therefore poorly regulated.

“Financial devices that are orders of magnitude more complex than CDOs could be created with AI. And simply consider the scenario in which we have a financial system that is incomprehensible to humans and, as a result, impossible for them to control,” he said. “And after that, there is a financial crisis, and no one knows why.”

The UK government raised fears last month that an advanced AI model could pose an existential threat by controlling and manipulating financial systems. But Harari said a financial catastrophe brought on by AI would not end human civilization, “at least not directly.” “It might, indirectly, if it triggers certain kinds of wars or conflicts,” he continued. “Although there is a significant political, social, and economic risk, I wouldn’t characterize it as existential on its own.”

The Israeli author, who has backed calls for a six-month pause in the development of advanced artificial intelligence and believes AI companies should be held liable for any harm their products cause, said the focus should be on regulatory bodies with technological expertise and the ability to act swiftly as developments arise, rather than on specific laws and regulations.

“Given that we cannot foresee every risk and issue in advance and enact laws to prevent them, we must establish strong regulatory institutions as soon as possible that can recognize and respond to threats as they materialize,” he said. “This should be the main effort, not the effort to write some very long and complicated regulation that might be outdated by the time it passes parliament or congress.”

As part of that arrangement, Harari said, AI safety institutes should employ specialists who understand AI’s potential impact on the financial system.

Rishi Sunak announced the creation of a UK AI safety institute last month, and a few days later the White House unveiled plans for a similar body in the US. Both organizations are expected to play significant roles in testing advanced AI models. Speaking at the summit, Sunak said the UK needed to understand the capabilities of advanced models before passing legislation to address them.

A spokesperson for the Department for Science, Innovation and Technology said a recent white paper on AI had identified the UK’s Financial Conduct Authority and Prudential Regulation Authority as the appropriate watchdogs for AI in finance.

“They are best positioned to take a proportionate approach to regulating AI because they understand the risks in their sectors,” the spokesperson said.
