As AI regulations expand almost daily, governing this new technology is becoming an era-defining challenge. Striking the balance between innovation and risk is always difficult, but even more so with AI, as businesses grapple with leveraging the technology for governance, risk, and compliance.
In Diligent’s webinar, “Responding to AI’s Changing Regulatory Landscape”, industry-leading experts Antony Cook (Corporate Vice President and Deputy General Counsel, Microsoft) and Nasser Ali Khasawneh (Global Head of AI, Eversheds Sutherland) shared their perspectives and experience helping organisations navigate emerging AI regulations. Some key insights are below, and you can watch the entire webinar on-demand here.
“One thing that’s changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organisation…”
— Antony Cook, Corporate Vice President and Deputy General Counsel, Microsoft.
Setting the scene: regulatory scale and complexity
The EU AI Act is the most high-profile example of AI regulation at present, but a raft of regulations is in development worldwide. The Organisation for Economic Co-operation and Development (OECD) is currently tracking 61 countries in the process of developing AI policies. Alongside these sit a wealth of sector-specific initiatives (approximately 393 in progress) and governance authority programmes; the OECD is aware of 760 governance initiatives currently in the pipeline.
The scale, breadth, and depth of regulatory attention confirm, if confirmation were needed, the importance of ensuring that the path to AI adoption has sufficient guardrails. Nevertheless, jurisdictions differ in how they approach the task.
Cook summarised the three main approaches:
- Safety first: The recognition that AI can be used to achieve both malicious and beneficial outcomes is front and centre in some territories. The White House’s commitment to AI safety and the U.K.’s Bletchley Park AI safety summit are examples of this approach, bringing stakeholders together to discuss the threats of AI and how to contain them.
- Broad legislation: The EU AI Act is the most prominent example of an attempt to create broad legislation covering the many issues AI creates. It seeks to establish guardrails without compromising the progress that AI can deliver.
- Sector- or issue-specific approach: Countries without the resources for broad legislative development, or those wishing to assess the likely impact of AI in their territory before legislating, are addressing specific issues instead. For example, Japan has repeatedly amended its intellectual property law to address copyright issues arising from AI training data.
Some territories employ a mix of approaches, or shift between them as political leadership changes, as in the U.K.
Harmonisation and international cooperation: a critical challenge
AI regulation is a jurisdictional, nation-state-focused challenge (or a supranational one in the case of the EU). Ultimately, though, a degree of harmonisation will be essential to help multinational organisations operate a compliant approach, as Khasawneh explained: “We always have to work within jurisdictions, within national laws, but it’s fair to say that AI knows no boundaries. It is a technology that flies across boundaries, so the need for harmonisation could not be greater as we consider various aspects of law that are affected by AI.”
He welcomes the U.K.’s Bletchley Park initiative, which brought together several countries and organisations to work towards standardisation and harmonisation, and wonders: “Will we move towards a global body that is the AI equivalent of the World Intellectual Property Organisation, for example?”
However, Khasawneh acknowledges that geopolitical issues are likely to be a barrier to international cooperation and may prevent any kind of global treaty on AI.

Key AI themes emerging in legal departments
As Global Head of AI at Eversheds Sutherland, Khasawneh is well placed to give an overview of the common themes and issues on which clients seek external counsel. These include:
- Operational and policy guidance: Clients want help devising governance policies that guide employees on the dos and don’ts of AI use and on how they are expected to minimise harmful consequences when using or developing AI.
- Contracting support: How to structure contract terms with partners and suppliers, given the nuances of artificial intelligence and GenAI.
- Interacting with IP law: Organisations using or developing their own AI want to understand the legal risks around intellectual property rights and copyright, and whether the platforms they use or build might infringe them.
- Interacting with data law: Similarly, businesses seek to understand the data privacy risks introduced by AI systems and providers, both to avoid infringement and to protect any proprietary data exposed to AI.
- Understanding bias and other risks in employment law: Companies want to unlock the benefits of AI to support employees while managing risks around worker rights and mitigating bias in applications such as employee screening.
These topics demand a wide range of expertise, and because few organisations have deep experience in this new and expanding area, they underline the necessity of seeking external advice. Alongside that advice on the practical elements of AI adoption, businesses need to focus on developing their own framework for responsible AI governance that keeps pace with AI regulations.
Responsible AI governance: Microsoft’s approach
Cook shared how Microsoft has responded to the challenge of responsible AI amid changing regulations. The company’s approach was rooted in the realisation that while the engineers and developers who create AI systems and applications think about the technology through a particular lens, it is vital to go beyond these specific perspectives to establish globally applicable parameters for its ethical application and use.
Microsoft convened a multi-disciplinary and diverse set of stakeholders to explore responsible AI development and use. The group included lawyers, humanists, sociologists and computer engineers, tasked with establishing how to ground technology development appropriately.
The result was a set of principles focused on reliability, safety, privacy, security, accountability and transparency. Together they amount to an AI standard that is applied across the business and operationalised through, for example, engineering practices that ensure each principle is put into effect.
Once principles and frameworks have been developed, the next challenge is implementation, and leadership is critical.
Leading on AI: Board accountability
The EU AI Act already includes obligations for AI literacy among boards and leadership teams. Khasawneh believes: “AI accountability is going to become an absolute requirement for boards to comply with, and for CEOs to lead with.” Cook has likewise witnessed growing focus from boards: “One thing that’s changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organisation, because this is a technology which is changing go-to-market, it’s changing research and development, it’s changing supply chain management, it’s changing employee productivity and workforce development. So it has a broad implication across organisations, which I think means that boards are much more focused.”
“We need to instill a self-learning culture. You can’t expect to arrive at a board meeting once a month and then learn about AI at the board meeting. We all need a commitment to double down on AI literacy because that puts you in a position to make informed decisions if you’re a boardroom, for example, or a CEO.”
— Dale Waterman, Principal Solution Designer, Diligent.
Cook acknowledges the scale of the information boards need to assimilate to move forward on AI, but cautions that trying to figure everything out before acting is not a competitive approach: “The technology is so important to competitive differentiation and opportunity, so companies need to be involved in AI. The question is, how do they do that appropriately?”
He advises boards to draw on the expertise of large companies spearheading AI, such as Microsoft, and of trade associations: “There’s a lot of the trade associations, which are creating the sets of materials you can leverage to get yourself across the issues. Making sure you’re aware of what the technology is doing and how it’s being used in your organisation is a big way to manage the risks you may be exposed to.”
AI regulations: the ultimate risks
Perhaps the greatest business risk around AI regulations right now is the risk of doing nothing. Every company can decide how it would like to approach AI, but each needs a considered approach: decide your ambitions and then start the journey, because sitting, watching, and waiting is not an option. AI is not a fad; it’s not going away.