The UK and the US have opted not to sign an international agreement on artificial intelligence (AI) at a global summit held in Paris. The declaration—endorsed by multiple countries including France, China, and India—commits to an “open,” “inclusive,” and “ethical” approach to AI development.
The UK government issued a brief statement explaining that it refrained from signing due to concerns over national security and “global governance.”
Earlier, US Vice President JD Vance warned summit delegates that excessive regulation of AI could “kill a transformative industry just as it’s taking off.”
Open, Transparent, Ethical
The signed declaration stresses the importance of AI being “open, inclusive, transparent, ethical, safe, secure, and trustworthy.” It also calls for global collaboration to improve AI governance and drive ongoing international dialogue.
However, despite the summit’s emphasis on cooperation, the absence of the UK and the US from the agreement raises questions about the future of global AI regulation.
The decision by both countries also signals a growing divide over how AI development should be approached. The UK has previously been a strong advocate for AI safety, hosting the world’s first AI Safety Summit in November 2023.
A Growing Atlantic AI Rift
Andrew Bolster, Senior R&D Manager at Black Duck, warned that the lack of alignment between the UK, US, and other countries creates a fragmented regulatory landscape, complicating the deployment of global AI solutions.
“This growing Atlantic AI Rift is a wake-up call for any organization looking to deploy or operate global AI solutions; the regulatory landscape is not as settled as it may seem, and while alignment to existing principles such as GDPR, the California Consumer Privacy Act (CCPA) (and its amendment, the California Privacy Rights Act (CPRA)) or Australia’s Privacy Act may stand you in good stead, that is no guarantee of continued operations.
“For instance, when US President Donald Trump rescinded former President Biden’s 2023 Executive Order on AI, he functionally removed any Federal-level guidelines for US cross-state operators managing the risks introduced by AI systems.
“We’re now in the position where this fractured regulatory landscape is tempering private investment appetites just as public investment is ramping up, such as the UK’s earmarking of £14bn under the AI Opportunities Action Plan, France’s coordination of €109bn in public/private AI partnerships over the coming years, and the US’s $500bn partnerships around the ‘Stargate’ program.
“In this kind of high-risk, high-value environment, the mergers and acquisitions markets are going to be particularly pressurized, with the mix of public and private requirements and a heightened threat model driving the need for AI-aware security and quality attestation,” he concluded.
Information Security Buzz News Editor
Kirsten Doyle has been in the technology journalism and editing space for nearly 24 years, during which time she has developed a great love for all aspects of technology, as well as words themselves. Her experience spans B2B tech, with a lot of focus on cybersecurity, cloud, enterprise, digital transformation, and data centre. Her specialties are in news, thought leadership, features, white papers, and PR writing, and she is an experienced editor for both print and online publications.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.