Party intends to replace voluntary agreement with statutory regime ‘so we can see exactly where this technology is taking us’.
Labour is to require artificial intelligence companies to share the results of road tests of their technology after warning that regulators and politicians had failed to rein in social media platforms.
The party proposes replacing a voluntary testing arrangement between tech companies and the government with a statutory regime that would require AI companies to share test results with regulators.
Peter Kyle, the shadow technology minister, said that legislators and regulators were “behind the curve” on social media and that Labour would ensure the same mistake was not made with artificial intelligence.
Speaking after Brianna Ghey’s death, he called for greater transparency from tech businesses, saying companies working on AI technology – the term for computer systems that perform tasks typically associated with human levels of intelligence – would be compelled to be more open under a Labour government.
“We will move from a voluntary code to a statutory code,” said Kyle, who was interviewed on BBC One’s Sunday with Laura Kuenssberg on Sunday, “so that those companies engaging in that kind of research and development have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us.”
At the first global AI safety summit in November, Rishi Sunak reached a voluntary agreement with leading AI companies, including Google and ChatGPT maker OpenAI, to collaborate on testing sophisticated AI models before and after deployment. Under Labour’s proposals, AI companies would be required by statute to notify the government if they planned to build AI systems above a certain level of capability, and to undergo safety tests with “independent oversight”.
At the summit, the EU and 10 countries – including the US, the UK, Japan, France and Germany – agreed to collaborate on testing AI models. Google, OpenAI, Amazon, Microsoft and Mark Zuckerberg’s Meta have all agreed to have their models tested.
Kyle, who is visiting lawmakers and tech executives in the US, said the test results would help the newly established UK AI Safety Institute “reassure the public that we are independently scrutinizing what is happening in some of the real cutting-edge parts of artificial intelligence”.
He continued: “Some of this technology will have a major impact on our workplace, society and culture. And we must ensure that the development is done safely.”