Prime Minister Rishi Sunak opened London Tech Week by revealing that OpenAI, Google DeepMind, and Anthropic have committed to give the UK “early or priority access” to their AI models to support research into evaluation and safety. The announcement follows the UK government’s confirmation last week that it intends to host a “global” AI safety summit this fall.
Sunak’s interest in AI safety has deepened in recent weeks, following a series of interventions from AI industry leaders warning of the existential, even extinction-level, risks the technology could pose if it is not properly regulated.
Today, Sunak pledged: “We’re going to conduct leading-edge [AI] safety research here in the UK. With £100 million for our expert taskforce, we’re providing more funding for AI safety than any other government.”
The government has said this AI safety taskforce will focus on AI foundation models.
“We’re working with the frontier labs — Google DeepMind, OpenAI, and Anthropic,” Sunak continued. “And I’m pleased to announce they have agreed to give early or priority access to models for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems.”
The PM also reiterated his earlier announcement of the forthcoming AI safety summit, likening the initiative to the COP climate conferences, which aim to secure global buy-in for tackling climate change.
He added: “I want to make the UK not just the intellectual home but the geographical home of global AI safety regulation. Just as we unite through COP to tackle climate change, so the UK will host the first ever Summit on global AI Safety later this year.”
Promoting AI safety is a significant shift in strategy for Sunak’s government.
It was in full AI cheerleader mode as recently as March, when it endorsed “a pro-innovation approach to AI regulation” in a white paper. The strategy set out in that paper played down safety concerns, forgoing any AI-specific rules (or a dedicated AI watchdog) in favor of a handful of “flexible principles,” with oversight of AI apps left to existing regulators such as the data protection authority and the competition watchdog.
A few months on, Sunak is now expressing a desire for the UK to host a global AI safety body, or at the very least for the UK to own the conversation on AI safety by leading research into how to evaluate the outputs of learning algorithms.
Rapid advances in generative AI, along with public statements from a string of tech titans and AI industry figures warning the technology could spiral out of control, appear to have prompted a swift policy rethink in Downing Street.
It’s also noteworthy that the CEOs of AI powerhouses OpenAI, DeepMind, and Anthropic met with the PM in recent weeks, vying for Sunak’s attention shortly before the government’s stance on AI changed.
If this trio of AI giants follows through on its commitment to give the UK advanced access to its models, the country has a chance to take the lead in research into developing effective evaluation and audit techniques, before any legislative oversight regimes mandating algorithmic transparency have spun up elsewhere (the European Union’s draft AI Act isn’t expected to be in legal force until 2026, for example, although the EU’s Digital Services Act is already in effect).
However, the UK also risks leaving its fledgling AI safety efforts open to industry capture. If AI powerhouses get to shape the conversation around AI safety research by granting selective access to their systems, they could be well placed to influence any future UK AI rules that would apply to their businesses.
AI giants closely involved in publicly funded research into the safety of their own commercial technologies, before any legally binding AI safety framework applies to them, will have at least some influence over how AI safety is framed and which elements, topics, and themes get prioritized (and which, therefore, get downplayed). They can also shape what research takes place at all, since it may depend on how much access they choose to grant.
Meanwhile, AI ethicists have long expressed concern that the focus on the potential dangers “superintelligent” AIs might pose to people is crowding out discussion of the real harms these technologies are already causing, such as bias and discrimination, invasion of privacy, copyright violations, and resource exploitation.
Academic research is already frequently dependent on private-sector funding, so if the AI summit and the UK’s broader AI safety efforts are to produce robust and credible results, the government must ensure the participation of independent researchers, civil society organizations, and groups disproportionately at risk of harm from automation, rather than simply trumpeting a planned partnership between “brilliant AI companies” and local academics.