
AI for the CRO: Transforming AI governance, compliance and security

Successful AI risk management strategies for risk leaders

July 01, 2025

Key takeaways

  • Artificial intelligence (AI) technologies are critical to the success of the risk function.
  • CROs can deploy AI to address compliance and navigate data governance and enterprise-level risks.
  • Leaders may devise a strong AI plan, but support may be necessary to select tools and frameworks.


With a long list of significant benefits and potential use cases, artificial intelligence (AI) technologies are playing a pivotal role in the success of middle market businesses, especially within the risk function. As AI capabilities continue to evolve, chief risk officers (CROs) are key contributors in implementing an effective AI strategy to proactively manage AI risk, establish a strong AI governance foundation, and drive responsible, transformative AI adoption.

To demonstrate how quickly AI has advanced in the middle market, the 2025 RSM Middle Market AI Survey: U.S. and Canada found that 91 per cent of middle market executives are either formally or informally using AI in business practices. But 53 per cent of organizations that have adopted and implemented generative AI believe they were only somewhat prepared to do so, and 70 per cent of those using generative AI report they need outside help to get the most out of the technology. Given the growth and potential of AI solutions, CROs have a significant opportunity to navigate an increasingly complex environment of evolving regulations, data privacy concerns and security risks while driving innovation and value across processes.

During RSM’s webinar AI for the chief risk officer, Steve Biskie and Jason Proto, principals with RSM US LLP, along with RSM’s AI governance leader, Joseph Fontanazza, discussed how risk leaders can break AI governance down into manageable steps built upon existing corporate, IT and data governance principles.

Below, we take a look at some critical details for CROs to consider when developing an AI strategy, as well as key issues, opportunities and potential use cases for AI tools and applications.

The role of CROs in AI adoption

With AI reshaping the business landscape, CROs can leverage emerging AI tools and models to address compliance and regulations, navigate data governance, and mitigate enterprise-level risks. Key areas for CROs to focus on when shaping a risk-aware AI strategy include:

  • Alignment: AI governance and data strategies must support the firm’s goals, strategy, values and regulatory requirements
  • Consistency: Data practices and governance standards should be uniform across the enterprise
  • Definition: Roles and responsibilities should be established up front for model evaluation criteria, oversight structures and the inclusion of all AI systems, including third-party tools
  • Informed decision making: Incident response, stakeholder feedback and ongoing regulatory compliance, including user training and awareness, must be clearly defined

In many cases, AI tools and applications that were never approved or evaluated by the organization are already in use. Risk leaders must establish a clear view of this use and how it may increase potential risk exposure.

“Multiple free or open-source AI models are being used within organizations,” says Fontanazza. “CROs must understand their business purpose, suitability and potential risks. They must also understand the evaluation parameters. Often, Grammarly is the most commonly used AI model within an organization, yet we rarely see organizations consider the implications of such free-to-use AI tools. Such practices often result in loss of control over data [where sensitive data is leaving the secure organizational walls] and other data-related risks.”
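One practical starting point is a living register of the AI tools in use across the organization. Below is a minimal sketch in Python; the record fields, example entry and risk flags are illustrative assumptions, not a prescribed RSM approach:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory; fields are illustrative."""
    name: str
    business_purpose: str
    approved: bool
    data_classification: str  # e.g., "public", "internal", "confidential"
    risks: list[str] = field(default_factory=list)

# Hypothetical entry; a real inventory would be built from user surveys,
# software asset management data or single sign-on logs
inventory = [
    AIToolRecord("Grammarly", "writing assistance", approved=False,
                 data_classification="confidential",
                 risks=["data leaves the secure organizational boundary"]),
]

# Flag unapproved tools that touch non-public data for review
for tool in inventory:
    if not tool.approved and tool.data_classification != "public":
        print(f"Review required: {tool.name} ({'; '.join(tool.risks)})")
```

Even a register this simple gives the CRO a defensible answer to "what AI is in use, and what data does it see?"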

Key issues

Given AI’s vast potential, companies can manage compliance and drive innovation in many ways by effectively integrating third-party AI vendors into their business processes. Critical factors to consider with third parties include:

  • Shared responsibility models: Establishes clear accountability with risk distribution between the organization and vendor
  • Data considerations: Addresses data privacy, security and compliance aspects
  • Model development process: Provides insight into the vendor's AI development methodologies and quality assurance
  • Alignment with values: Ensures ethical and regulatory compliance that aligns with organizational values

AI model evaluation for transparency and security protocols, along with end-user training and education resources, also plays a critical role in successful AI deployment.
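One way to operationalize the vendor factors above is a simple weighted scorecard. The sketch below is illustrative only: the criteria weights, the 0-5 scale and the example scores are all assumptions to be tuned to the organization’s risk appetite.

```python
# Hypothetical weighted scorecard for evaluating a third-party AI vendor.
# Weights mirror the factors above and are illustrative, not prescriptive.
criteria_weights = {
    "shared_responsibility": 0.25,
    "data_privacy_security": 0.30,
    "model_development_process": 0.25,
    "alignment_with_values": 0.20,
}

def vendor_score(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores on a 0-5 scale."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

example_vendor = {"shared_responsibility": 4, "data_privacy_security": 3,
                  "model_development_process": 4, "alignment_with_values": 5}
print(f"Vendor score: {vendor_score(example_vendor):.2f} / 5")  # -> 3.90 / 5
```

A scorecard like this does not replace due diligence, but it forces the evaluation criteria to be explicit and comparable across vendors.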

“Think of an open AI model, like ChatGPT. When you use such tools to analyze confidential financial statements or strategy, there will always be a risk of data loss,” says Biskie. “It’s not only about how to use the application, but also about using it properly and ethically to mitigate risks. CROs must stay extra vigilant while establishing AI deployment, considering employees have access to both open AI like ChatGPT and Microsoft Copilot, which operates within the firm’s secure environment.”

Ultimately, one of the most significant obstacles to successful AI use within the risk function is the company’s data foundation. Data challenges are a common theme during many projects, but they must be addressed to get the intended value from AI investments. 

“One of the things that is slowing down the adoption of AI in some organizations is the poor data governance and poor-quality data in their environments,” says Biskie. “It’s like if you feed bad data into an AI model, you can't expect it to come up with great solutions. The output is directly proportional to the input.”
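Biskie’s point can be made concrete with a data-quality gate that runs before data reaches an AI model. A minimal sketch follows; the null and duplicate thresholds are illustrative assumptions, not regulatory values:

```python
import pandas as pd

def data_quality_gate(df: pd.DataFrame,
                      max_null_rate: float = 0.05,
                      max_dup_rate: float = 0.01) -> list[str]:
    """Return a list of data-quality issues; an empty list means the gate passes."""
    issues = []
    # Per-column null rates
    for col, rate in df.isna().mean().items():
        if rate > max_null_rate:
            issues.append(f"{col}: {rate:.1%} nulls exceeds {max_null_rate:.0%}")
    # Fraction of fully duplicated rows
    dup_rate = df.duplicated().mean()
    if dup_rate > max_dup_rate:
        issues.append(f"duplicate rows: {dup_rate:.1%} exceeds {max_dup_rate:.0%}")
    return issues

# Usage: block the AI pipeline if the gate reports issues
df = pd.DataFrame({"amount": [100, None, 100], "vendor": ["A", "B", "A"]})
for issue in data_quality_gate(df):
    print("FAIL:", issue)
```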

Challenges and opportunities

It’s clear that risk management is an area with significant potential for optimization and process improvement through AI solutions. However, CROs must consider the potential for bias, which can emerge in multiple forms, including:

  • Data bias: This occurs when irrelevant or inappropriate data is used to train models and, in turn, improperly influences model-generated outcomes. It can also arise when demographic information is overly individualized or drawn from small subsets.
  • Human bias: Human bias occurs when users unconsciously influence data entry and model interactions, often leading to implicit bias.
  • Ethical bias: This sort of bias stems from organizational limitations, like collecting data from only one demographic subset. Prejudice and stereotyping are two driving factors for this bias.

“When you create an AI model with an expectation of a specific outcome, and the software generates that particular result, you must go back and analyze the testing data to ensure you have the right data in your AI model,” says Proto. “Most likely, what you're doing is you're bringing in the data to give you the result you are expecting. Instead, you should engage with these tools with a blank-slate mindset. You should never predict the outcome or use specific data with a preconceived notion.”
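A lightweight test for the data bias described above is to compare outcome rates across groups in the training data before trusting a result. A minimal sketch using the common "four-fifths rule" follows; the column names and data are hypothetical:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's.
    Values well below ~0.8 (the four-fifths rule) suggest possible data bias."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical loan-approval training data
df = pd.DataFrame({"region": ["east"] * 4 + ["west"] * 4,
                   "approved": [1, 1, 1, 0, 1, 0, 0, 0]})
ratio = disparate_impact(df, "region", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33, flag for review
```

A low ratio does not prove the model is biased, but it is exactly the kind of signal that should send a team back to examine the training data, as Proto suggests.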

Multiple factors inform AI monitoring and audit principles used to assess model effectiveness, performance and risk management. The U.S. Government Accountability Office has developed a framework to standardize these approaches. The framework encompasses:

  • Proactive planning to identify bias, privacy risks and regulatory concerns with specific key performance indicator (KPI)-related impacts
  • Drift monitoring to detect shifts in the statistical properties of model inputs, outputs and prediction patterns (see the sketch after this list)
  • Ensuring traceability and documenting actions to manage regulatory compliance, corrective actions and service level agreements
  • Instituting ongoing maintenance to ensure model viability against set goals, objectives and environment changes
  • Scaling and adapting AI models to new use cases or domains and evaluating both expansion success and risks  
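For the drift-monitoring step in the framework above, one common statistical approach is to compare a feature's distribution at training time against recent production data, for example with a two-sample Kolmogorov-Smirnov test. A minimal sketch; the synthetic data and p-value threshold are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training = rng.normal(loc=0.0, scale=1.0, size=5000)    # feature at training time
production = rng.normal(loc=0.4, scale=1.0, size=5000)  # same feature in production

# Two-sample KS test: a small p-value means the distributions have diverged
stat, p_value = ks_2samp(training, production)
if p_value < 0.01:  # illustrative threshold; tune per model and risk appetite
    print(f"Drift detected (KS statistic {stat:.3f}); trigger model review")
```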
“One of the most common queries we get is how to optimize and scale AI tools effectively. The answer here is to know that there’s never a single exact way you can expand AI models for two or multiple similar processes,” says Proto. “If a model works great for one process, it may or may not work for another. In order to scale the current AI capabilities within your organization, you must first identify the effort, followed by limitations, data quality and risks involved with that expansion.”

AI solutions and AI risk management use cases

When considering potential AI use cases, CROs must ensure that IT and data scientists properly validate models so outputs do not create regulatory or compliance exposure. Model output validation is a critical component for boosting efficiency, managing risks and enhancing productivity. Potential challenges include:

  • Data privacy and security: AI models such as ChatGPT may be exposed to sensitive and confidential information when employees include it in prompts. These models should be tested at the application level against all connected systems to identify potential avenues for data loss.
  • Drift, bias and data cleanliness: Poorly executed data testing and assessment, including k-fold cross-validation, can leave data drift and bias undetected, resulting in unreliable outputs and associated risks (see the sketch after this list).
  • Regulatory requirements: Models must comply with both local and global regulations, such as the General Data Protection Regulation (GDPR), and validation should demonstrate to regulators that bias has been assessed and mitigated. Validation and monitoring should align with ISO/IEC 42001 and other applicable standards.
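As a concrete example of the cross-validation mentioned above, here is a minimal scikit-learn sketch; the synthetic data and model choice are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a risk model's training data
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 5-fold cross-validation: large variance across folds can signal
# unstable, biased or unrepresentative training data
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"Fold accuracies: {scores.round(3)}, mean {scores.mean():.3f}")
```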

In addition, master data management (MDM) is a crucial process for AI performance and optimization. An effective MDM system can serve as the center of your data applications, covering areas such as:

  • Data privacy: Privacy and security for sensitive data like personally identifiable information or other risk-related data
  • Regulatory compliance: GDPR and data usage frequency rules, such as DK rules
  • Data hygiene: KPIs to build visibility into master data domains to avoid bias or other risks to AI models
  • Data lineage: Tracking of data usage within internal or external AI models for better reliability and quality (see the sketch after this list)
  • Data synchronization: Consistency across systems for better governance across lines of business and business units
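The data lineage item above can start as simply as an append-only log of which dataset versions feed which model runs. A minimal sketch follows; the dataset, model and run names are hypothetical:

```python
from datetime import datetime, timezone

# Append-only lineage log: which dataset version fed which model run
lineage_log: list[dict] = []

def record_lineage(dataset: str, version: str, model: str, run_id: str) -> None:
    """Record that a dataset version was consumed by a model run."""
    lineage_log.append({
        "dataset": dataset, "version": version,
        "model": model, "run_id": run_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_lineage("customer_master", "v12", "credit_risk_scorer", "run-0451")

# Impact analysis: which model runs consumed a given dataset?
affected = [e for e in lineage_log if e["dataset"] == "customer_master"]
print(affected)
```

In practice this lives in a data catalog or pipeline metadata store, but the underlying record is the same: dataset, version, consumer and time.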

“To ensure effective AI data governance, an organization must engage in cross-functional collaboration as seen with the MDM system. From business units and risk teams to IT, each team plays a significant role,” says Proto. “Regulations like the Colorado Privacy Act insist all major contributors align with each other, such as business units guiding customers in managing their data, risk teams monitoring data-related risks and IT ensuring compliance with privacy regulations.”

Fontanazza also emphasizes the importance of a system impact assessment to create a foundation for effective AI use cases within the risk organization.

“To deploy an AI model, the system impact assessment process helps identify its purpose, intended uses, potential benefits or risks, and data requirements,” says Fontanazza. “This step becomes significant for CROs to formulate evaluation criteria and monitoring frequency, along with regulatory compliance.”
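The assessment Fontanazza describes can be captured as a structured record that feeds directly into evaluation criteria and monitoring frequency. A minimal sketch, where the field names follow his description and all values are hypothetical:

```python
# Illustrative AI system impact assessment record; all values are hypothetical
impact_assessment = {
    "system": "invoice-anomaly-detector",
    "purpose": "flag unusual vendor invoices for review",
    "intended_uses": ["accounts payable triage"],
    "potential_benefits": ["faster review cycles"],
    "potential_risks": ["false positives delaying payments",
                        "vendor data exposure"],
    "data_requirements": {"sources": ["ERP invoices"], "contains_pii": True},
    "evaluation_criteria": {"precision_min": 0.90, "recall_min": 0.75},
    "monitoring_frequency": "monthly",
    "regulatory_considerations": ["GDPR"],
}

# A CRO-style gate: no deployment without defined criteria and monitoring
assert impact_assessment["evaluation_criteria"], "evaluation criteria required"
assert impact_assessment["monitoring_frequency"], "monitoring cadence required"
```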


The takeaway

AI is rapidly changing the way we work and the way risks are managed. Deploying AI tools and models has moved beyond a trend to become a necessity for firms looking to transform their businesses and processes. However, due to the complexities and uncertainties associated with these technological advancements, AI governance has become a critical concern for risk leaders.

While risk leaders may understand how to devise a strong AI deployment strategy, additional support may be necessary to determine the best AI solutions and most beneficial framework. In addition, an external perspective can increase visibility into AI adoption and governance strategies, reducing the potential for reputational and financial risks.

Ready to get started? RSM’s experienced AI advisory team understands enterprise AI strategies and the foundational elements necessary to generate increased value and reduce risk. Contact our team to learn more about how AI can transform your key business operations.

RSM contributors

  • Steve Biskie
    Principal
  • Jason Proto
    Principal
  • Joseph Fontanazza
    Manager
