Search
Close this search box.

Navigating the Regulatory Landscape of AI Governance with Anil Sood

Reputation is primarily what motivates an organization to adhere to responsible AI practices.

In today’s rapidly evolving technological landscape, artificial intelligence (AI) holds immense potential to revolutionize industries and drive innovation. However, with this transformative power comes a pressing need for effective regulation and governance. Welcome to our exploration of the regulatory landscape of AI governance, where we navigate the complex web of policies, laws, and ethical considerations that shape the development and use of AI technologies.

Join us as we delve into the intricacies of AI governance, examining key issues such as data privacy, transparency, accountability, and ethical use. Through insightful discussions and expert insights, we aim to shed light on the challenges and opportunities facing policymakers, businesses, and society as they navigate the ever-evolving world of AI regulation.

In this week’s episode, we have the pleasure of welcoming Anil Sood, AI Governance and Model Data Management Leader at EY who brings his extensive expertise to the discussion. Drawing from over 18 years of cross-functional experience across technology, banking, and consulting industries, Anil shares his unique perspective on the driving forces behind his interest in regulatory frameworks and consultancy. His insights illuminate the convergence of technical expertise and industry knowledge that fuels his passion for navigating the intricate maze of AI governance.

Disclaimer: The views expressed by Anil Sood in this interview are solely his own and do not necessarily reflect the views of any company, organization, or institution with which he may be affiliated.

AIM Research: What drives your interest in regulatory governance frameworks and consultancy? What aspects of these subjects ignite your passion?

“I find AI governance particularly fascinating because it capitalizes on my experience in banking while also drawing upon my technical background.”

Anil Sood: I believe it aligns well with my background. Reflecting on my previous experience, I spent several years as a software engineer, roughly six to seven years, before transitioning into the banking industry. I find AI governance particularly fascinating because it capitalizes on my experience in banking while also drawing upon my technical background. This unique combination enables me to contribute effectively, which is a significant reason why I find it so appealing.

AIM Research: In light of AI’s exponential growth and its regulatory challenges, how do you assess the effectiveness of current AI governance? Are policymakers doing enough, and are we moving in the right direction? What are your initial thoughts?

“While there are numerous regulatory guidelines, none are as actionable as the EU AI Act.”

Anil Sood: Discussing the current state of AI governance, it’s evident that the EU is leading the initiative. While there are numerous regulatory guidelines, none are as actionable as the EU AI Act. Additionally, there are standards like the OECD guidelines and NIST, but these tend to offer more general guidance rather than enforceable regulations. These entities are more about setting standards and offering guidance to sovereign states. In my view, the EU AI Act is the closest we have to something enforceable.

AI governance remains in a transformative stage; we haven’t reached a steady state. The guidelines are evolving, and we haven’t conducted adequate testing to solidify these frameworks.

Addressing the issue of the ‘black box’ problem, the EU AI Act has set minimum requirements for explainability, introducing a tiered system that includes prohibited use cases, high-risk cases, and low-risk cases, among others. This segmentation establishes a baseline for explainability. However, explainability was less of a concern before the prominence of technologies like GPT, which have brought new attention to the need for regulations focused on this aspect.

Policymakers face several key challenges. Technology’s rapid evolution consistently outpaces legislative efforts, making it difficult to keep laws updated with the latest advancements. The EU AI Act, for instance, has seen significant changes since its initial drafting six years ago, reflecting the fast pace of technological innovation.

Another challenge is the bureaucratic process involved in amending laws, which is often slow and requires parliamentary approval. This lag means that policies frequently fall behind technological developments. Moreover, regulatory bodies and governments often lack sufficient time to thoroughly test these regulations. While they possess expertise, the scope of testing is not always adequate, which raises questions about the effectiveness and implementation of these laws.

AIM Research: How has the EU AI Act, with its distinctive risk-based regulatory framework, set the groundwork for future AI governance, demonstrating Europe’s leadership in developing regulatory frameworks for technology?

“The EU AI Act distinguishes itself by being punitive in nature, imposing stringent obligations around high-risk AI systems, and assumes big means are dangerous.”

Anil Sood: The EU is definitely at the forefront; it is the closest to what we have in the form of law right now. I think in a few weeks, or at most a couple of months, it will take the shape of a law. In terms of distinctive aspects, the very first thing that comes to my mind is that it is punitive in nature. Earlier, the other regulations that we have in the US, Canada, or any other place. There’s no direct action as such. Specifically talking about a law, it really prohibits certain use cases which are clearly called out, and then there are actually sorts of penalties, right? Or in terms of a certain percentage of the revenue. 

Secondly, there are again stringent obligations around AI systems classified as high risk. This too is different from global approaches where the analysis and classification of AI systems and imposition of associated obligations is not this detailed. So, I would say it is fairly comprehensive when it comes to the category of high-risk systems. 

And another thing I would say is fairly distinctive is, maybe even if it is in critique. It is that in the case of the EU, it sort of assumes that big means dangerous. For example, any model that uses more than a certain size of high-impact systems, so it sort of thinks that it is dangerous, or maybe it could be adverse, right, to the public at large. So, this may not necessarily be true. For example, smarter and more capable doesn’t necessarily mean that it is going to cause more harm. I would say this is also one key aspect that I could recollect from the EU Act. 

And one more important aspect is that the EU Act doesn’t really explicitly mandate red team testing. If we recall the executive order from Biden it clearly calls out for it, even though it only applies to federal agencies, but then it does sort of emphasize the use of testing in some of these AI applications, but then the EU doesn’t really explicitly call out. It doesn’t really mandate red team testing. 

So, I would say, going forward, it is certainly going to set the tone because the Act is gonna become the de facto standard going forward because of the scale at which it is going to operate. It’s certainly going to set the stone, and I would say other regulations as well globally can follow what is the best approach and leverage the key pieces from the EU Act.

AIM Research: What red team testing entails in the context of cybersecurity, and what implications it has for an enterprise’s security posture?

“Red team testing is essentially a form of testing but with the specific intent to identify issues.”

Anil Sood: Red team testing, in general, is a structured testing effort aimed at uncovering flaws and vulnerabilities in an AI system. It is typically conducted in a controlled environment and in collaboration with the developers of the AI. Once an AI system is in place, the goal is to identify use cases that could exploit the system’s weaknesses. Red team testing is essentially a form of testing but with the specific intent to identify issues. These could be security issues, privacy concerns, or any other types of problems, including transparency. The underlying intent here is to proactively identify and address potential issues in AI systems.

AIM Research: When should enterprises undertake red team testing, particularly before deploying solutions into production? Is this testing relevant for both B2B and other sectors, and is it specifically critical for high-risk applications?

“Red team testing is crucial before deployment to identify ethical, privacy, or security issues and should extend to post-deployment to ensure ongoing integrity and security.”

Anil Sood: Red team testing should generally be conducted once the model is ready for deployment or even during the development lifecycle. Developers typically engage in some form of unit testing throughout the development process of the model. However, it’s crucial to conduct red team testing before deployment to ensure that there are no ethical, privacy, or security issues within the model. This practice should also extend to the post-deployment phase.

Organizations can benefit from leveraging red team testing to continuously monitor the model after deployment. Although the initial version may be well-developed, red team testing enables the identification of potential issues during the monitoring stage of AI systems, ensuring ongoing integrity and security.

AIM Research: How do you envision the role of AI testing tools in influencing regulatory guidance, and how can these tools effectively mitigate risks associated with AI technologies while promoting Innovation and Industry collaboration?

“Reactive testing is always useful because when we do testing, a lot gets uncovered when we do thorough testing, regardless of the nature of testing. So, the intent is all good and makes sense.” 

Anil Sood: Reactive testing is always useful because when we do testing, a lot gets uncovered when we do thorough testing, regardless of the nature of testing. So, the intent is all good and makes sense. However, I think as for the implementation, this can become a bit of a challenge, especially around reporting expectations or enforcement more broadly.

As of now, the Biden order, I think applies only to federal agencies and not to a large extent to private institutions. But then, if it is mandated that it should be applied everywhere globally within the US, then it can lead to a bit of overburdening. So, for example, in the financial services sector as well, banks have a lot of regulatory reporting to take care of, and when it comes to AI, the number of players are a lot more. And if you mandate such red team testing, I don’t really think reporting it’s good to me.

So what you want to see is that once we do the testing, also share the results with the government of the US so that could lead to a bit of good governance, I would say. But, having that being said, I would say certainly these kinds of efforts would uncover issues that could be there in the model. And it’s certainly going to help the regulatory guidance in coming up with better and appropriate governance and guidance.

AIM Research: Why do institutions like the OECD and the EU prioritize the establishment of regulatory frameworks for technology, exemplified by initiatives such as the 1980 guidelines on privacy and trans-border data flows and the GDPR, especially in contrast to the perceived less proactive approach in the US? What are the primary incentives driving these organizations to take such proactive measures in technology policy?

“Anything that could adversely impact the economic and social well-being of people will definitely be under their watch. They would want to see what’s going on with these applications and how they could potentially impact people adversely.” 

Anil Sood:
OECD, as the name itself suggests, stands for the Organization for Economic Cooperation and Development. It’s an international organization comprising 38 member countries. Its mission is to promote policies that would improve the economic and social well-being of people around the world. That’s the mandate, and it doesn’t just focus on economic aid but also regulates various other aspects within its purview. It so happens that data privacy is one of the key issues we’re seeing these days. That’s why the OECD is scrutinizing economic aid with a magnifying glass. So now, to your question regarding what motivates them. Of course, anything that could adversely impact the economic and social well-being of people will definitely be under their watch. They would want to see what’s going on with these applications and how they could potentially impact people adversely.

AIM Research: In light of OECD standards and associated regulatory frameworks, how are these guidelines implemented at the ground level? What implications does this have for enterprises? Lastly, what is your perspective on the broader implications of this issue?

“OECD captures the ‘what’ and ‘why’ of things but not the ‘how.’ While it doesn’t set guidelines, its influence is significant.”

Anil Sood: OECD job isn’t to set guidelines and if I think about it at a high level, what it basically does is it captures the what and the why of things, but it won’t get into the how. It does have a significant impact because it’s fairly famous. OECD is pretty much the standard, I would say. For example, if you look at model definitions the EU’s action has borrowed the definition from OECD. And then OECD also has a number of working groups that function actively and very frequently.

So, these working groups do come up with sort of these guidelines on what’s next. So, they are always at the forefront of development. So, OECD is all the government bodies and all the standard organizations are definitely keeping a close watch and always citing it. So, it does have a significant impact, but then OECD will not really define what should be done. It will only be recommended, and then I think it has a big role to play. A lot of countries are looking to the OECD for guidance.

AIM Research: How do you envision the potential effects of the NIST Cybersecurity Framework, given its voluntary approach and focus on key areas like identification, protection, detection, response, and recovery, in strengthening cybersecurity practices within the context of AI development?

“While regulatory guidelines set the standards, frameworks like those from NIST provide the practical tools for compliance, bridging the gap between regulation and practical implementation.”

Anil Sood: The NIST Cybersecurity Framework is voluntary in nature. However, it has become a standard guide for many organizations managing cybersecurity risk. Even though it was initially targeted at the US, it has been adopted globally, essentially becoming a standard. This is similar to the role of the OECD, which we discussed as providing the “what” and “why” in cybersecurity practices. In contrast, NIST focuses on the “how” these practices should be implemented.

In addition to cybersecurity, NIST is developing an AI Management Framework, which, like its cybersecurity counterpart, is gaining global adoption. These risk management frameworks, whether they pertain to cybersecurity or AI, are crucial. They typically support regulatory mandates but do not form the basis for these mandates. Instead, they are developed after regulations are established, playing a critical role in the implementation of regulatory guidelines.

While regulatory guidelines set the standards, frameworks like those from NIST provide the practical tools for compliance, bridging the gap between regulation and practical implementation. In terms of informing the guidelines, the role often falls to the OECD, but for implementation, organizations turn to NIST.

AIM Research: What incentives might drive companies to embrace ethical and responsible AI use, especially in light of mixed opinions on whether ethics contribute to business profitability? How can businesses be encouraged to eschew AI solutions that pose high risks or serious consequences?

“Reputation is primarily what motivates an organization to adhere to responsible AI practices.

Anil Sood: Incentives are numerous for organizations to maintain responsible AI practices. For instance, Goodwill, a critical component on the balance sheet, is directly linked to an organization’s reputation. No company wants to suffer a reputational hit, as it can negatively impact market capitalization and overall market value. Therefore, I would say reputation is primarily what motivates an organization to adhere to responsible AI practices.

Secondly, there are regulations to consider. For example, in healthcare, there are significant regulations around the use of AI. Similarly, in hiring practices, New York City has implemented a law regarding the use of AI models that mandates a bias audit to ensure there is no demographic disparity. It’s not just about reputation; it’s also about complying with local laws that require organizations to follow responsible AI practices.

Beyond these points, companies face challenges in identifying the appropriate controls. Fairness is a commonly discussed topic, but it encompasses various metrics. The question arises: what is the right metric of fairness? It could be demographic parity, as imposed by New York City, equal opportunity, or equalized odds. The lack of prescriptive regulations means companies may be unsure about the correct approach. However, there’s significant merit in adhering to responsible AI practices, as it increases trust in an organization. This trust is directly linked to the market value of the firm, ultimately benefiting shareholders. So, there are ample reasons for organizations to ensure they follow responsible AI practices.

Picture of Anshika Mathews
Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co
Subscribe to our Latest Insights
By clicking the “Continue” button, you are agreeing to the AIM Media Terms of Use and Privacy Policy.
Recognitions & Lists
Discover, Apply, and Contribute on Noteworthy Awards and Surveys from AIM
AIM Leaders Council
An invitation-only forum of senior executives in the Data Science and AI industry.
Stay Current with our In-Depth Insights
The Most Powerful Generative AI Conference for Enterprise Leaders and Startup Founders

Cypher 2024
21-22 Nov 2024, Santa Clara Convention Center, CA

25 July 2025 | 583 Park Avenue, New York
The Biggest Exclusive Gathering of CDOs & AI Leaders In United States
Our Latest Reports on AI Industry
Supercharge your top goals and objectives to reach new heights of success!
Join AIM Research's Annual Subscription Today

Unlock Unlimited AI Insights for Just $9999!

50+ AI and data science reports
All new reports for the next 12 months
Full access to GCC Explorer and VendorAI
Stay ahead with cutting-edge insights