Artificial intelligence, like the climate crisis, is a worldwide issue. If we all work together, we can overcome the threats it poses.
The European Union’s artificial intelligence legislation, which leaders finally unveiled close to midnight on Saturday, is on track to completely overshadow Britain’s six-week-old Bletchley declaration on artificial intelligence. The language of the agreement on this suite of comprehensive rules to regulate AI is not yet finalized, and there will be many devils in the details, but its impending arrival signals a sea change in how democracy can steer AI toward the public interest.
The Bletchley declaration was a tremendous accomplishment, particularly in bringing countries like China, Saudi Arabia, and the United Arab Emirates to agree on a formal statement about AI legislation. The trouble is that it was only a statement, with no legal force or enforcement mechanism. Now that the EU has moved to place strict legal constraints on the development of artificial intelligence, it is up to the rest of the world to step up and complete the puzzle.
The final hurdle that negotiators overcame over the weekend was determining which uses of AI would be explicitly prohibited. Prohibited techniques include “cognitive behavioral manipulation,” which refers to technologies that interpret our behaviors and preferences in order to influence our decisions.
They also include “untargeted scraping of facial images from the internet or CCTV footage,” a practice already used by some companies that sell surveillance databases; “emotion recognition in the workplace and educational institutions,” which could be used by companies to discipline, rank, or micromanage employees; and “social scoring,” a dystopian surveillance tool used in China to rate individuals on their daily activities and allocate (or withhold) “social points.”
But don’t breathe a sigh of relief just yet. In the same way that the climate crisis is a worldwide problem that can only be solved if all countries cut their emissions, AI is a global problem that can only be brought under control if many nations work together. Powerful “general purpose AI” (GPAI) systems, such as the one powering ChatGPT, can generate customized misinformation and manipulation campaigns, non-consensual intimate imagery (NCII, often called deepfake pornography), and even designs for biological weapons.
If one region of the world imposes regulations while another distributes unprotected, “open-source” versions of these technologies that bad actors can weaponize at will, the entire world may suffer the consequences. Those bad actors could include Russia’s military intelligence agency, the GRU, or digital mercenaries (troll farms for hire), who may lack the funds or expertise to build their own world-class models but could obtain powerful AI tools built without these safeguards and use them to try to manipulate elections around the world.
Unfortunately, the proposed EU AI act is not flawless. While it establishes laudably robust rules on GPAI, including “open-source” systems, there are still gaps. According to one European Commission official I spoke with, if AI tools such as “undressing” apps are used to make NCII, culpability appears to fall exclusively on the individual user creating that content, not on the developer of the AI system that produced it. I would prefer that developers refrain from disseminating technologies capable of causing such irreparable harm, especially when children may be both perpetrators and victims.
Another concern is that the EU AI act will not be fully implemented until at least 2026. Some of its provisions will be phased in sooner, and it is intended to be “future proof,” but AI technology is advancing so quickly that there is a real risk it will outpace the legislation. That risk is far greater if the EU goes it alone on AI regulation.
The Bletchley declaration, which emerged from the inaugural AI safety summit, was one significant component of a variety of parallel efforts underway within the G7, G20, UN, and the Organisation for Economic Co-operation and Development (OECD). In 2024, South Korea and France will host follow-up AI safety summits.
The following are the most essential binding norms that these summits and parallel governance processes must put in place:

1) Confirm the prohibitions on the uses listed above.

2) Strictly regulate high-risk AI systems, including GPAI, requiring comprehensive risk assessments, testing, and mitigations.

3) Require firms to safeguard their high-risk GPAI systems and not release them under “open-source” licenses unless independent experts agree they are safe.

4) Make clear that both the creators and the deployers of GPAI systems are liable for any harm their systems cause.

5) Require that AI-generated content be “watermarked” in a way that both lay consumers and professionals can easily detect.

6) Respect the copyright of creators such as authors and artists when training AI systems.

7) Tax AI businesses and use the proceeds to protect society from any harms caused by AI, from misinformation to job losses.
Ensuring that AI is developed in ways that benefit the public good is a monumental task that will require the engagement of citizens and governments all across the world. Now is the moment for everyone, everywhere, to educate themselves on the risks and benefits of AI, and to insist that their political representatives take these concerns seriously. The EU has made a good start; now the rest of the world must pass binding regulations to ensure that AI serves you and your community.