As artificial intelligence (AI) continues to transform industries, governments worldwide are racing to implement regulations that ensure its safe and ethical use.
From the OECD AI Principles to the EU AI Act, these frameworks set new expectations for transparency, accountability, and risk management. However, when it comes to businesses integrating AI into their cybersecurity strategies, compliance is anything but straightforward.
We spoke to industry experts to explore how organisations can align their AI-driven cybersecurity practices with evolving global regulations. We also asked what challenges businesses face when navigating compliance across multiple jurisdictions and how AI regulations can help mitigate the growing risks posed by AI-powered cyber threats.
Taking a Step Back
When asked what key steps organisations should take to align their AI-driven cybersecurity practices with emerging global regulatory standards, Ross Moore, Information Security Researcher, says: “We need to take a step back, or up – somewhere to get a broader perspective before diving into actionable steps. AI risks are new, but the concept is much the same as all other technological advancements – the idea is the new software has created a much wider attack surface. If AI is outsourced, then there’s a much greater third-party risk. If the AI is brought in-house to avoid third-party risk, there’s an intensified need for secure development, resource protection, following an SDL process, and watching for package dependencies.”
Moore says that with AI, even bringing the engine in-house may still carry third-party risk, because the model could be trained on irrelevant, bad, or other people’s data. This calls for more questions in the vendor vetting process: What data is the model trained on? How often is the model updated? Where precisely does the data reside? What information is transferred, and how long does it remain after transfer?
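To make Moore’s vetting questions actionable, here is a minimal sketch of how they might be captured as a structured assessment record that flags unanswered items; the class and field names are illustrative, not part of any standard or of Moore’s own process.

```python
from dataclasses import dataclass, field

# Illustrative vendor-vetting record for an AI supplier. Every field name
# here is hypothetical and should be adapted to your own procurement process.
@dataclass
class AIVendorAssessment:
    vendor: str
    training_data_sources: list[str]   # What data is the model trained on?
    model_update_cadence: str          # How often is the model updated?
    data_residency: str                # Where precisely does the data reside?
    data_transferred: list[str]        # What information is transferred?
    retention_after_transfer: str      # How long does it remain after transfer?
    open_questions: list[str] = field(default_factory=list)

    def unanswered(self) -> list[str]:
        """Flag vetting questions the vendor has not yet answered."""
        gaps = []
        if not self.training_data_sources:
            gaps.append("training data sources")
        if not self.data_residency:
            gaps.append("data residency")
        return gaps + self.open_questions

assessment = AIVendorAssessment(
    vendor="ExampleAI Ltd",           # hypothetical vendor
    training_data_sources=[],
    model_update_cadence="quarterly",
    data_residency="",
    data_transferred=["prompts", "telemetry"],
    retention_after_transfer="90 days",
)
print(assessment.unanswered())  # ['training data sources', 'data residency']
```

A record like this keeps the vetting questions from being asked once and forgotten: any empty field is a standing action item for the next vendor review.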
There are several aspects to consider in the bigger picture, adds Moore: growing regulations like GDPR and anticipated stricter cloud security compliance requirements; an emphasis on accountability and transparency in AI systems; compliance checks for relevant regulations; an emphasis on data protection in AI regulations; anticipating and preparing for upcoming AI-specific certifications and standards (e.g., potential updates to ISO 27001 and IEC 62443); and efficiently managing the growing complexity of cybersecurity regulations.
Follow a Cautious Approach
The use of AI to tackle cybersecurity challenges presents great potential for threat detection and response, says Anastasios Arampatzis, Content Creation Strategy and Account Management at Bora Design.
“To reap the benefits of AI, organisations must follow a cautious approach to ensure that their practices also comply with the emerging safe and responsible AI regulations.”
Anastasios Arampatzis
This approach, says Arampatzis, must include the following steps:
- Regularly assess and test AI systems to ensure accuracy and performance and to identify potential vulnerabilities and threats. AI system assessments should be performed regularly, or when significant changes occur in business operations or technology, to prevent model drift and the inclusion of biases (a minimal drift-check sketch follows this list).
- Integrate security protocols throughout the AI development lifecycle. AI systems are software applications; hence, practices such as secure coding, regular security audits, and adopting the principle of least privilege for data access are essential to ensure system quality and security.
- Develop and maintain governance policies and procedures to address AI-related cybersecurity risks. This step is needed to keep pace with the quickly evolving regulatory landscape and to oversee compliance and trustworthiness considerations.
- Always keep humans in the loop. Although these systems enhance organisations’ capabilities to address emerging cybersecurity threats, they shouldn’t operate with full autonomy. Human oversight is required to quickly identify risks and performance drifts and ensure accountability.
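As a concrete illustration of the first and last points above, here is a minimal drift-check sketch: it compares a training-time feature distribution against recent live traffic with a two-sample Kolmogorov–Smirnov test and routes suspected drift to a human reviewer rather than acting autonomously. The significance threshold and single-feature scope are simplifying assumptions; real monitoring would cover many features and model outputs.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_sample: np.ndarray, live_sample: np.ndarray,
                  alpha: float = 0.01) -> bool:
    """Return True if the two samples differ significantly (possible drift)."""
    statistic, p_value = ks_2samp(train_sample, live_sample)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
shifted = rng.normal(loc=0.6, scale=1.0, size=5_000)   # simulated drifted data

if feature_drift(baseline, shifted):
    # Human in the loop: flag the model for review so an analyst can confirm
    # the drift and decide on retraining, instead of retraining automatically.
    print("Possible drift detected: route to human review before retraining.")
```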
Understand Your Current Use of AI
The first step organisations need to take is to understand their current use of AI, so an audit of internal AI tools is the place to start, says Gary Hibberd, Co-Founder of Consultants Like Us. “Whether to hold a workshop in the business or conduct individual audits will depend on the size of the business.”
No matter what approach is taken, Hibberd says this audit should highlight what is being used and if it is in line with their current business strategy or if it has an impact on that strategy. “This will allow them to identify potential benefits or risks associated with the use of the AI.”
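A simple way to begin the audit Hibberd describes is a plain inventory that records each tool, its owner, and what data it touches, and then flags unsanctioned (“shadow”) AI use for review. The entries below are purely illustrative.

```python
# Minimal sketch of an internal AI-usage inventory. Tool names, owners,
# and data categories are hypothetical examples only.
inventory = [
    {"tool": "Copilot-style code assistant", "owner": "Engineering",
     "data_shared": "source code", "sanctioned": True},
    {"tool": "Free online summariser", "owner": "Marketing",
     "data_shared": "draft press releases", "sanctioned": False},
]

# Surface shadow-AI usage: tools in use but never approved, so their
# benefits and risks can be weighed against the business strategy.
for entry in (e for e in inventory if not e["sanctioned"]):
    print(f"Review needed: {entry['tool']} ({entry['owner']}) "
          f"shares {entry['data_shared']}")
```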
A Process for Sub-optimal Outcomes
“I think the largest impact or challenge has been AI working within global regulatory standards as they apply to data privacy and the storage, processing, and transmission of personal information,” adds Ian Thornton-Trump, CISO at Inversion 6.
Disclosure of the use of AI/ML (especially if a third party is providing those services) must be transparent, and a process must exist for “sub-optimal” outcomes arising from AI/ML advice given on behalf of the company to customers, partners, or even just visitors to a website, Thornton-Trump adds. “Data privacy laws, of course, vary from one jurisdiction to another; some jurisdictions are more permissive, others more restrictive. No matter the jurisdiction, it’s important to have mechanisms for anyone to raise a concern about an interaction with AI/ML acting on behalf of a company. It may become mandatory to be able to “speak to a human.”
This is where traditional cybersecurity ownership comes off the rails, and “exception” handling or “incidents” raised by a negative AI/ML interaction fall within a grey area. Depending on the nature and scope of the AI/ML solution, an event could involve cybersecurity, GRC, the Counsel’s Office, corporate comms, marketing, customer service, IT, information management, and operations; just about every part of the business, Thornton-Trump explains. “Because AI/ML is providing advice, I think a discussion with the E&O insurance providers to the business is warranted, and tabletop exercises on “What if AI… goes sideways” need to occur.”
A Clear Return on Investment
First and foremost, you start by deciding if there is a supported business case to be ‘AI-driven’ for your security practices; it’s pointless chasing this latest global trend if your return on investment isn’t clear or can’t be measured for success, adds Christian Toon, Founder & Chief Security Strategist at Alvearium Associates.
The question for Toon is: “Is the juice worth the squeeze?” Secondly, he says, organisations need to connect their legal and cyber teams, which often operate in silos. It’s not the role of the CISO to determine legal compliance, nor is it the role of the General Counsel to decide on the security strategy, but both are vital when it comes to defining the ‘appropriateness’ of an entity’s technical and organisational measures. Once this is in the bag, a clear governance framework should be established to define roles and responsibilities across the organisation. Additionally, regular risk assessments, with the results reported through the chain of command, can review potential biases, vulnerabilities, or impacts on data privacy.
Align Practices with Global Regulatory Standards
Chloé Messdaghi, Founder of SustainCyber, says there are several key steps organisations must take to align their practices with emerging global regulatory standards. “These include implementing robust governance frameworks, conducting regular AI audits, and adopting explainable AI (XAI) models to ensure transparency, accountability, and fairness. Collaboration with regulators and industry peers is critical to stay informed about evolving standards, and investing in workforce training ensures ethical and compliant AI practices.”
Entities must also leverage reliable resources to guide AI safety and security, she says. “Tools such as the NIST AI Risk Management Framework, MITRE ATLAS, OWASP, and Databricks AI Security Framework (DASF) offer critical frameworks for managing AI risks. DASF, in particular, is highly recommended for its comprehensive approach to safeguarding AI systems. Staying informed and proactive is key to navigating this rapidly evolving landscape and mitigating risks effectively.”
Messdaghi adds that it’s also important to note that in the US, it’s imperative for organisations to monitor policy and regulatory changes related to AI safety and security. With the change in presidential administration, the Biden-era Executive Order 14110 on AI is no longer in effect, and a comprehensive federal law on AI regulation remains unlikely in the near term. This opens the door for individual states to shape their own AI legislation. For example, Colorado’s AI Act has set a precedent, while California’s vetoed SB1047 sought to impose liability on frontier AI model developers and introduce auditing requirements. Though vetoed, this bill may resurface in a revised form later this year, and New York legislators are exploring similar measures. Additionally, a bill similar to the Colorado AI Act has been introduced in Texas as the Texas Responsible AI Governance Act (TRAIGA). Across the US, over 700 AI-related bills were introduced in 2024, and more than 40 proposals have already emerged in 2025.
The Diverse Compliance Requirements of AI Regulations Across Jurisdictions
When it comes to the challenges businesses face in meeting the diverse compliance requirements of AI regulations across jurisdictions, navigating fragmented regulation is one of the biggest, as jurisdictions like the EU, US, and China impose different requirements, explains Arampatzis. “The global AI regulatory landscape is intricate and often inconsistent, with varying requirements and interpretations across countries. For example, the UK is following a more pro-innovation approach, while the EU has a stricter approach to regulating AI. The recent repeal of Executive Order 14110 in the US also underscores that political changes can affect the regulatory landscape. This complexity makes it challenging for businesses to navigate and ensure compliance with confidence.”
Businesses also struggle with cross-border data transfers and operations, Arampatzis continues, as AI often processes vast amounts of sensitive data. The lack of extensive legal case history in AI regulation leaves companies without clear guidance on compliance, increasing the risk of unintentional violations, Arampatzis adds. “Finally, smaller organisations face financial and operational constraints due to the lack of specialised legal or technical teams. These constraints may hinder their efforts to comply with the emerging legislative environment. Implementing AI systems without comprehensive governance can expose businesses to legal challenges and fairness and trustworthiness dilemmas, especially concerning data privacy and algorithmic bias.”
Adapting to Rapid Changes
Companies have to address a mix of requirements across different global regions, and this fragmented environment complicates compliance efforts, especially for multinational corporations, adds Moore. “Different regions emphasise different aspects of AI regulation, such as data privacy concerns and AI security. Diverse standards, conflicting rules, and a lack of harmonisation can greatly complicate one’s ability to remain compliant with global AI standards. Complying with diverse AI regulations demands significant resources, including legal, compliance, and personnel training.”
Mitigating AI Exploitation – Ross Moore
In addition to reasonable, professional, and responsible activities in protecting people, systems, and data, here are some ways that regulations mitigate AI exploitation.
Addressing Ethical and Responsible AI Use
- Deter Weaponisation: Regulations often explicitly prohibit developing certain types of high-risk AI systems that could be weaponised, such as autonomous weapons or AI for mass surveillance.
- Ethical Boundaries: Encouraging adherence to ethical standards dissuades the creation of AI systems designed for harm or misuse.
Enhancing Accountability
- Clear Ownership: Defining roles and responsibilities for AI systems ensures accountability for security breaches.
- Regulatory Oversight: Compliance with regulatory standards involves periodic checks, reducing the chances of unchecked vulnerabilities.
Encouraging Transparency
- Disclosure Requirements: Regulations may mandate that organisations disclose the use of AI systems and their potential risks, raising awareness and accountability.
- Public Reporting: Transparency requirements help expose vulnerabilities and misuse, enabling organisations to take proactive measures.
Moore also says the AI regulatory landscape is growing and changing quickly, with new proposals and laws expected to arrive rapidly across jurisdictions. This requires companies to update their compliance strategies constantly. “Organisations need to establish or expand governance mechanisms that involve multiple stakeholders, including compliance teams, government liaisons, technologists, data privacy experts, legal professionals, and responsible AI specialists. Companies must also develop or adopt AI models that can provide clear explanations for their decisions, avoiding “black box” (AI systems that make predictions without revealing how they reach those conclusions) models to comply with regulations emphasising accountability and transparency. In addition, the complex regulatory environment is likely to lead to higher compliance costs for businesses, including expenses related to new expertise, advanced processes, and regular updates from AI legal specialists. Finally, regulatory agencies are enforcing AI rules across different policy areas, with monetary consequences for non-compliance. Companies must be prepared to face investigations and potential penalties.”
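As one hedged illustration of the explainability Moore calls for, the sketch below uses permutation importance from scikit-learn to report how much each input feature drives a model’s predictions, giving reviewers something concrete to audit instead of an opaque score. The synthetic dataset and model choice are assumptions for demonstration only; this is one explainability technique among many, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a model on synthetic data (illustrative stand-in for a real system).
X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# model performance degrades, exposing which inputs the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```

Output of this kind can be attached to audit records, so that when a regulator or reviewer asks why a model behaves as it does, there is evidence beyond a raw prediction.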
A Data-Driven Decision
Differing acts of law require common, similar, or additional measures in the spirit of AI, technology, or digital resilience, says Toon. Keeping on top of these requirements, and how they map across an organisation’s control framework, can be resource-heavy, and it raises the question of whether this should sit with the legal team or with the technology team implementing the technology. “Additionally, with multi-jurisdictional laws, you can capture all those requirements, but what happens when you have conflicting views? Having access to the right legal advice at this point is vital; the decisions your business takes today need to be ‘defensible’ in the future. Should the regulators, customers, or investors come calling, it’s far better for that decision to be data-driven and supported by legal than not.”
“The world develops at differing paces, so actually, now you have varying degrees of maturity on AI regulations; how do you choose which one to align with?”
Christian Toon
“Some clients have opted to take a similar approach as they did with GDPR – in some views, the ultimate privacy regulation. In Europe, this was seen as the gold standard, so companies operating across the world, not only in the EU, used ‘privacy by design’ and the GDPR as their benchmark for privacy worldwide. When operating globally, it’s not a bad thing to align to the common high standard,” Toon adds.
Develop a GRC Framework to Oversee AI Adoption
It’s Hibberd’s opinion that this review needs to include a broad range of stakeholders and should not be run by the IT department. “Organisations should develop a Governance, Risk and Compliance (GRC) framework or function that will oversee the adoption and use of AI technologies. The key challenge is that many organisations don’t have these skills or frameworks in place. The AI landscape is changing; it will take some time before legislation and regulations catch up with innovation. Therefore, the challenge, and risk, organisations face is that their structures and strategy will need to be more responsive, or resilient, to the shifting sands of the AI landscape.”
Privacy Policies Aligned to Regulatory Jurisdictions
For companies with a global presence, it becomes necessary in many cases to have bespoke privacy policies aligned to regulatory jurisdictions and, in some cases, “guardrails” on certain interactive features of a public-facing website, comments Thornton-Trump.
“Managing global data protection requirements in an ever-changing regulatory environment can be an immense challenge. This is no longer about determining the preferred native language of a guest based on IP, which is potentially a privacy violation if not consented to when the service is provided by a third party and requires a transfer of the visiting IP address. What used to be simple interactions now must be “covered” by cookie policies and privacy statements. What this comes down to is due diligence in GRC, but also a lot of testing and documenting of how interactions and API hand-offs work. AI/ML introduces a layer of uncertainty which, to put it mildly, is not welcome, but the potential business value of AI/ML for some tasks, especially the automation of routine interactions, may be significant,” Thornton-Trump explains.
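One way to approach the testing and documenting Thornton-Trump mentions is to log every hand-off to a third-party AI service in a structured, privacy-conscious form, so GRC teams can later evidence how an interaction actually behaved. The sketch below is illustrative: the service and function names are hypothetical, and only metadata (sizes, latency) is recorded to avoid storing personal data in the log itself.

```python
import json
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_handoff")

def logged_handoff(service_name: str, call: Callable[[str], str],
                   user_input: str) -> str:
    """Invoke an AI service and record structured evidence of the hand-off."""
    started = time.time()
    response = call(user_input)
    log.info(json.dumps({
        "service": service_name,
        "input_chars": len(user_input),   # log sizes, not raw content,
        "output_chars": len(response),    # to avoid storing personal data
        "latency_s": round(time.time() - started, 3),
    }))
    return response

def fake_ai_service(prompt: str) -> str:  # stand-in for a real API call
    return "canned answer"

logged_handoff("example-ai-api", fake_ai_service, "What are your hours?")
```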
The Role AI Regulations Play in Mitigating Risks Associated With AI Exploitation
AI regulations also play a crucial role in mitigating the risks posed by malicious actors who seek to exploit AI for cyberattacks, fraud, and other threats. When asked about this, Moore says a positive role of AI regulation is that, properly crafted, it provides a better global view of concepts, vulnerabilities, threats, and possibilities that readers may not have considered.
“Regulations are important, and it’s a fine line they have to draw to create guardrails to protect people while also promoting technological advancements. As we’ve seen with other regulations, it’s a mixed bag of protections. Some regulations are specific in their requirements for doing business (2FA, firewall, secure code development). Others provide general, vague, or even overly broad requirements, such as encrypting data when possible and protecting information equal to or better than standards created in the 1980s, but are limited in scope. For instance, international data transfer must have protections when crossing into countries X, Y, and Z. Many regulations are fairly easy to attain and implement, and even those protections aren’t always sufficient to deter threat actors,” adds Moore.
Each nation has its own standards for conducting business, and if regulations are too stringent, Moore says, they can negatively impact international business and stifle innovation and progress, while less stringent standards can create ineffective protections while also dismissing the security standards of other nations.
AI Regulations Are Not Designed Around Attack Surface Reduction
“I think AI regulations are not being designed around attack surface reduction or mitigation of threats; they are more concerned with data privacy controls and with ensuring an optimal outcome for the human when a process is handed off to an AI/ML service.”
Ian Thornton-Trump
“Improvement of data privacy regulations seems to be driven by both successful threat actor activity and human mistakes when it comes to securing personal and/or sensitive information. Globally, some privacy regulations, such as the GDPR, are sparse on prescriptive solutions, whereas other regulators, such as the NYDFS, are highly prescriptive regarding security control requirements.”
Ultimately, Thornton-Trump believes it is no surprise that mandatory “incident response plans” are being incorporated into the regulatory landscape, including a requirement to test their effectiveness. AI/ML events with significant business impact become another part of this incident response requirement, with potential notification to regulatory authorities if the event is “financially material”. “We certainly can’t predict all the potential outcomes of AI/ML, but some thought should be given to potentially negative AI/ML outcomes, so developing some AI/ML incident-handling playbooks that anticipate likely scenarios and how to remediate them is advisable.”
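A minimal sketch of what such a playbook might look like in code follows, mapping anticipated failure scenarios to ordered remediation steps so responders are not improvising under pressure. The scenario names and actions are illustrative assumptions, not a definitive incident-response procedure.

```python
# Hypothetical AI/ML incident playbooks: scenario -> ordered remediation steps.
PLAYBOOKS = {
    "harmful_advice_to_customer": [
        "Disable the affected AI feature or fall back to human agents",
        "Preserve conversation logs for investigation",
        "Notify legal/GRC to assess materiality and notification duties",
    ],
    "model_output_leaks_personal_data": [
        "Revoke the model's access to the affected data source",
        "Assess scope of exposure against data protection obligations",
        "Engage corporate comms and, if required, regulators",
    ],
}

def run_playbook(scenario: str) -> None:
    """Print the remediation steps for a scenario, or escalate if unknown."""
    steps = PLAYBOOKS.get(scenario)
    if steps is None:
        print(f"No playbook for '{scenario}': escalate to incident commander.")
        return
    for number, step in enumerate(steps, start=1):
        print(f"{number}. {step}")

run_playbook("harmful_advice_to_customer")
```

Playbooks like these are also natural inputs to the tabletop exercises Thornton-Trump recommends: each scenario can be rehearsed and its steps tested for effectiveness.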
Aimed at the Ethical Use of AI
It’s challenging to see how regulations can mitigate most risks, but if the AI Regulations can speak specifically to AI use and the data they consume, we may go some way to managing the risks associated with AI and threat actors, comments Hibberd.
“For AI regulations to be effective, they must be aimed at the ethical use of AI and mandate that organisations take due care with the kind of AI tools they use and the transparency in their use,” Hibberd continues. “This would fall into the world of GDPR in the UK and Europe, so there is little need for further regulations. However, if there were to be additional regulations, then it could focus on mandatory risk management and data protection for ALL organisations. This is because everyone now uses AI, even when they don’t specifically purchase it (for instance, AI is embedded into many Microsoft and Apple products).”
Regulated and Detailed in Law
As we’ve seen with other regulations such as cyber resilience or data privacy, businesses adopt controls more easily when they are regulated and detailed in law, says Toon. “These regulations are important in mitigating risks associated with AI because they will detail the requirements for robust and resilient AI instances. They will not move as quickly as the threat actors, so businesses will still need to develop controls that support their business and manage their threats. Still, the AI regulations will be a good baseline for foundation controls.
“I expect other standards and frameworks to mature and support the management of AI being exploited by threat actors, which organisations will need to take heed of. Take MITRE’s Adversarial Threat Landscape for Artificial Intelligence Systems, akin to its ATT&CK framework, which provides a global framework of adversary tactics and techniques against AI-enabled systems based on real-world attack observations and realistic demonstrations from AI red teams and security groups. It is open-source, industry-accepted standards like this that will mitigate the risks in the first instance; we can only hope AI regulations can keep pace,” ends Toon.
Establishing Clear Boundaries
AI regulations help establish clear boundaries for responsible and safe development and deployment, says Arampatzis. “For example, EU, US, and China regulations strongly emphasise the responsible, trustworthy, and safe use of AI, consistent with constitutional, societal, and legal standards and principles.
“Frameworks like the EU AI Act prioritise risk management and fair usage, prohibiting the procurement and use of systems with unacceptable risk, thereby setting deterrents for threat actors and enhancing the overall resilience of AI systems in the cybersecurity landscape. Finally, AI governance regulations help uncover vulnerabilities and mitigate risks posed by malicious actors by mandating transparency, auditing, and accountability. They also encourage businesses to adopt advanced monitoring tools to detect AI misuse.”
For Arampatzis, the overall goal is to ensure that these AI systems act as amplifiers of human capabilities to mitigate cybersecurity risks, not as creators of new risks and vulnerabilities. “However, in a nascent technological landscape such as GenAI and agents, regulations alone do not suffice. It requires a more holistic view based on lessons learned from the past, international collaboration, and information exchange.”
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.