Artificial intelligence (AI) technologies are critical to the success of the risk function.
CROs can deploy AI to address compliance requirements, navigate data governance, and mitigate enterprise-level risks.
Leaders may devise a strong AI plan, but support may be necessary to select tools and frameworks.
With a long list of significant benefits and potential use cases, artificial intelligence (AI) technologies are playing a pivotal role in the success of middle market businesses, especially within the risk function. As AI advancements continue to evolve, chief risk officers (CROs) are key contributors in implementing an effective AI strategy to proactively manage AI risk, establish a strong AI governance foundation, and drive responsible, transformative AI adoption.
To demonstrate how quickly AI has advanced in the middle market, the 2025 RSM Middle Market AI Survey: U.S. and Canada found that 91 per cent of middle market executives are either formally or informally using AI in business practices. But 53 per cent of organizations that have adopted and implemented generative AI believe they were only somewhat prepared to do so, and 70 per cent of those using generative AI report they need outside help to get the most out of the tool. With the growth and potential of AI solutions, CROs have a significant opportunity to navigate an increasingly complex environment of evolving regulations, data privacy concerns and security risks to drive innovation and value across processes.
During RSM’s webinar AI for the chief risk officer, Steve Biskie and Jason Proto, principals with RSM US LLP, along with RSM’s AI governance leader, Joseph Fontanazza, discussed how risk leaders can break down AI governance into manageable steps built upon existing corporate, IT and data governance principles.
Below, we take a look at some critical details for CROs to consider when developing an AI strategy, as well as issues, opportunities and potential use cases for AI tools and applications.
With AI reshaping the business landscape, CROs can leverage emerging AI tools and models to address compliance and regulations, navigate data governance, and mitigate enterprise-level risks. Key areas CROs must focus on when developing an AI strategy that properly accounts for risk include:
Alignment: AI governance and data strategies must support the firm’s goals, strategy, values and regulatory requirements
Consistency: Data practices and governance standards should be uniform across the enterprise
Definition: Roles and responsibilities should be defined in advance, covering model evaluation criteria, oversight structures and an inventory of all AI systems, including third-party tools
Informed decision making: Incident response, stakeholder feedback and ongoing regulatory compliance, including user training and awareness, must be clearly defined
In many cases, AI tools and applications that were never approved or evaluated by the organization are already in use. Risk leaders must establish visibility into this use and understand how it may increase risk exposure.
“Multiple free or open-source AI models are being used within organizations,” says Fontanazza. “CROs must understand their business purpose, suitability and potential risks. They must also understand the evaluation parameters. Often, Grammarly is the most commonly used AI model within an organization, yet we rarely see organizations consider the implications of such free-to-use AI tools. Such practices often result in loss of control over data [where sensitive data is leaving the secure organizational walls] and other data-related risks.”
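Establishing that visibility typically starts with a simple inventory. The sketch below shows, in Python, what one entry in such an AI system register might capture; the field names, risk tiers and the Grammarly-like "grammar-assistant" example are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Illustrative tiers; real tiers would come from your governance policy
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory."""
    name: str                   # tool or model identifier
    owner: str                  # accountable role, not an individual
    business_purpose: str       # ties the use back to firm strategy
    is_third_party: bool        # third-party tools belong in the same register
    data_classification: str    # e.g., "public", "internal", "confidential"
    risk_tier: RiskTier
    evaluation_criteria: list[str] = field(default_factory=list)
    review_frequency_days: int = 90   # drives recurring oversight


# Example: cataloging a free writing assistant surfaced during discovery
record = AISystemRecord(
    name="grammar-assistant",           # hypothetical name for a free writing tool
    owner="IT risk",
    business_purpose="employee writing support",
    is_third_party=True,
    data_classification="internal",
    risk_tier=RiskTier.HIGH,            # free tools can send data outside the firm
    evaluation_criteria=["does data leave the organization?",
                         "vendor data retention policy"],
)
print(record.name, record.risk_tier.value)
```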
Given AI’s vast potential, companies can leverage the technology in many ways to manage compliance and drive innovation by effectively integrating third-party AI vendors into their business processes. Some critical factors to consider with third parties include:
Shared responsibility models: Establishes clear accountability with risk distribution between the organization and vendor
Data considerations: Addresses data privacy, security and compliance aspects
Model development process: Provides insight into the vendor's AI development methodologies and quality assurance
Alignment with values: Ensures ethical and regulatory compliance that aligns with organizational values
In addition, AI model evaluation for transparency and security protocols, along with end-user training and education resources, also play a critical role in successful AI deployment.
“Think of an open AI model, like ChatGPT. When you use such tools to analyze confidential financial statements or strategy, there will always be a risk of data loss,” says Biskie. “It’s not only about how to use the application, but also about using it properly and ethically to mitigate risks. CROs must stay extra vigilant while establishing AI deployment, considering employees have access to both open AI like ChatGPT and Microsoft Copilot, which operates within the firm’s secure environment.”
Ultimately, one of the most significant obstacles to successful AI use within the risk function is the company’s data foundation. Data challenges are a common theme during many projects, but they must be addressed to get the intended value from AI investments.
“One of the things that is slowing down the adoption of AI in some organizations is the poor data governance and poor-quality data in their environments,” says Biskie. “It’s like if you feed bad data into an AI model, you can't expect it to come up with great solutions. The output is directly proportional to the input.”
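To make that garbage-in, garbage-out point concrete, here is a minimal sketch of a data-quality gate that could run before data reaches an AI model. The column names, thresholds and checks are illustrative assumptions rather than any standard control.

```python
import pandas as pd


def data_quality_gate(df: pd.DataFrame, required_cols: list[str],
                      max_null_rate: float = 0.05) -> list[str]:
    """Return a list of data-quality issues; an empty list means the data passes."""
    issues = []
    # Missing columns make downstream features silently wrong
    for col in required_cols:
        if col not in df.columns:
            issues.append(f"missing column: {col}")
    # Excessive nulls are a classic "bad data in" failure mode
    for col in df.columns:
        null_rate = df[col].isna().mean()
        if null_rate > max_null_rate:
            issues.append(f"{col}: {null_rate:.0%} null exceeds {max_null_rate:.0%} limit")
    # Duplicate records skew whatever the model learns
    if df.duplicated().any():
        issues.append(f"{df.duplicated().mean():.0%} duplicate rows")
    return issues


# Example: refuse to feed data to a model until the gate passes
df = pd.DataFrame({"amount": [100, None, 100], "vendor": ["a", "b", "a"]})
for problem in data_quality_gate(df, required_cols=["amount", "vendor", "region"]):
    print("blocked:", problem)
```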
It’s clear that risk management is an area with massive potential for significant optimization and process improvement with AI solutions. However, CROs must consider the potential for bias, which can emerge in multiple forms.
“When you create an AI model with an expectation of a specific outcome, and the software generates that particular result, you must go back and analyze the testing data to ensure you have the right data in your AI model,” says Proto. “Most likely, what you're doing is you're bringing in the data to give you the result you are expecting. Instead, you should engage with these tools with a blank-slate mindset. You should never predict the outcome or use specific data with a preconceived notion.”
Multiple factors contribute to the AI monitoring and audit practices that determine model effectiveness, performance and risk management. The U.S. Government Accountability Office has developed a framework to standardize these approaches, built around four complementary principles: governance, data, performance and monitoring.
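The monitoring principle in particular lends itself to automation. One commonly used drift metric is the population stability index (PSI), which compares a model input’s live distribution against its training baseline. The sketch below is a minimal illustration; the thresholds noted in the docstring are common rules of thumb, not GAO requirements.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a live distribution ("actual") against a baseline ("expected").
    Common rule of thumb (an assumption, not a regulatory threshold):
    < 0.1 stable, 0.1-0.25 worth watching, > 0.25 likely drift."""
    # Bin edges come from the baseline so both samples are measured the same way
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Widen the outer edges so live values beyond the baseline range still count
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    # A small floor avoids division by zero in sparsely populated bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


# Example: a recurring monitoring job comparing live inputs to the training sample
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # distribution the model was trained on
live = rng.normal(0.4, 1.2, 5_000)       # shifted distribution seen in production
print(f"PSI = {population_stability_index(baseline, live):.3f}")  # > 0.25 flags drift
```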
“One of the most common queries we get is how to optimize and scale AI tools effectively. The answer here is to know that there’s never a single exact way you can expand AI models for two or multiple similar processes,” says Proto. “If a model works great for one process, it may or may not work for another. In order to scale the current AI capabilities within your organization, you must first identify the effort, followed by limitations, data quality and risks involved with that expansion.”
When considering potential AI use cases, CROs must ensure that IT and data scientists are properly validating models so outputs do not create opportunities for regulatory or compliance issues. Model output validation is a critical component to boost efficiency, manage risks and enhance productivity. Potential challenges include:
Data privacy and security: AI models such as ChatGPT may be exposed to sensitive and confidential information when employees include it in prompts. These models should be tested at the application level against all connected systems to identify potential avenues for data leakage.
Drift, bias and data cleanliness: Inadequate testing and assessment, such as skipping k-fold cross-validation, can allow data drift and bias to go undetected, producing unreliable results and associated risks (see the sketch after this list)
Regulatory requirements: Models must comply with both local and global regulations, such as the General Data Protection Regulation (GDPR), and validation should demonstrate to regulators that bias has been identified and mitigated. Validation and monitoring should also align with standards such as ISO/IEC 42001 and other applicable requirements.
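As a concrete illustration of the k-fold cross-validation mentioned in the list above, the following sketch uses scikit-learn on synthetic data (an assumption for portability; real validation would run on governed production data) to check whether model performance holds up across folds.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for a risk dataset; a real program would use governed data
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Stratified folds keep class balance consistent across splits,
# so one unlucky split does not mask instability in the model
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1_000), X, y, cv=cv)

print("fold accuracies:", scores.round(3))
# Wide variance across folds is an early signal of drift or bias risk
print(f"mean={scores.mean():.3f}, std={scores.std():.3f}")
```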
In addition, master data management (MDM) is a crucial process for AI performance and optimization. An effective MDM system can serve as the central hub for an organization’s data applications, providing a single, consistent version of key data entities.
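As a simplified illustration of that central role, the sketch below consolidates customer records from two hypothetical source systems into a single golden record using a most-recently-updated survivorship rule. The system names, fields and rule are assumptions chosen for brevity.

```python
import pandas as pd

# Illustrative customer records from two hypothetical systems
crm = pd.DataFrame({"customer_id": ["C1", "C2"],
                    "email": ["a@x.com", "b@x.com"],
                    "updated": pd.to_datetime(["2024-01-05", "2024-03-01"])})
billing = pd.DataFrame({"customer_id": ["C1", "C3"],
                        "email": ["a@corp.com", "c@x.com"],
                        "updated": pd.to_datetime(["2024-06-10", "2024-02-11"])})

# Survivorship rule: keep the most recently updated version of each customer,
# producing a single "golden record" that downstream AI models consume
golden = (pd.concat([crm, billing])
            .sort_values("updated")
            .drop_duplicates("customer_id", keep="last")
            .reset_index(drop=True))
print(golden)
```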
“To ensure effective AI data governance, an organization must engage in cross-functional collaboration as seen with the MDM system. From business units and risk teams to IT, each team plays a significant role,” says Proto. “Regulations like the Colorado Privacy Act insist all major contributors align with each other, such as business units guiding customers in managing their data, risk teams monitoring data-related risks and IT ensuring compliance with privacy regulations.”
Fontanazza also emphasizes the importance of a system impact assessment to create a foundation for effective AI use cases within the risk organization.
“To deploy an AI model, the system impact assessment process helps identify its purpose, intended uses, potential benefits or risks, and data requirements,” says Fontanazza. “This step becomes significant for CROs to formulate evaluation criteria and monitoring frequency, along with regulatory compliance.”
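One lightweight way to operationalize such an assessment is as a structured record with a completeness gate before deployment. The sketch below uses only the fields Fontanazza mentions, plus an illustrative sign-off check; the names and example values are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class SystemImpactAssessment:
    """Structured record of the assessment fields described above."""
    system_name: str
    purpose: str
    intended_uses: list[str]
    potential_benefits: list[str]
    potential_risks: list[str]
    data_requirements: list[str]
    evaluation_criteria: list[str]     # feeds the CRO's evaluation criteria
    monitoring_frequency_days: int     # feeds the monitoring cadence

    def is_complete(self) -> bool:
        # Deployment gate: every section must be filled in before sign-off
        return all([self.purpose, self.intended_uses, self.potential_benefits,
                    self.potential_risks, self.data_requirements,
                    self.evaluation_criteria])


# Hypothetical example of an assessment awaiting CRO review
assessment = SystemImpactAssessment(
    system_name="contract-review-assistant",
    purpose="flag risky clauses in vendor contracts",
    intended_uses=["pre-signature legal review"],
    potential_benefits=["faster review cycles"],
    potential_risks=["inaccurate clause summaries", "confidential data exposure"],
    data_requirements=["access-controlled contracts repository"],
    evaluation_criteria=["precision on a labeled clause set"],
    monitoring_frequency_days=30,
)
print("ready for deployment review:", assessment.is_complete())
```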
Emerging AI tools and models can significantly enhance processes, address compliance and regulatory demands, support data governance, and mitigate risks, driving value and efficiency.
Risk officers will continue to leverage AI tools to proactively manage risk, establish a strong governance foundation and drive responsible, transformative risk management processes. In addition, effective AI strategies can mitigate potential bias and increase efficiency by automating processes.
Risk management is an area with massive potential for process improvement with AI solutions, such as managing key compliance tasks and strengthening security and privacy measures. However, for successful deployment, executives must address potential challenges related to data quality, bias and data loss through unapproved AI models.
Risk officers can leverage various frameworks to address compliance and regulations, creating a responsible AI strategy that enables proper alignment, consistency, role and responsibility definitions, and informed decision making.
AI is rapidly changing the way we work and the way risks are managed. Deploying AI tools and models has moved beyond a trend to become a necessity for firms looking to transform their businesses and processes. However, due to the complexities and uncertainties associated with these technological advancements, AI governance has become a critical concern for risk leaders.
While risk leaders may understand how to devise a strong AI deployment strategy, additional support may be necessary to determine the best AI solutions and most beneficial framework. In addition, an external perspective can increase visibility into AI adoption and governance strategies, reducing the potential for reputational and financial risks.
Ready to get started? RSM’s experienced AI advisory team understands enterprise AI strategies and the foundational elements necessary to generate increased value and reduce risk. Contact our team to learn more about how AI can transform your key business operations.