Many organizations are still working to establish effective AI governance.
51% of survey respondents conduct staff training on responsible AI usage and development.
46% utilize data governance policies to support data quality and privacy for use in AI models.
AI has rapidly advanced to permeate every corner of the operating environment for middle market companies. Leveraging AI tools and strategies is a must to generate the efficiency, insight and productivity necessary to succeed, but companies must integrate effective governance to mitigate persistent risks.
As new use cases emerge and solutions evolve, middle market companies are continually adjusting AI strategies to transform key processes. Users are understandably excited to take advantage of new technology, learn new skills and contribute to business growth. However, as is often the case in security, humans are the weakest link, and many companies are encountering significant shadow AI risks from the use of unauthorized tools.
All too often, users test or pilot AI tools outside the company's established guardrails, without proper tracking and governance. This scenario creates a host of issues, from data leakage to zombie accounts, in which a user sets up a test account and uploads sensitive information that the company doesn't know about and isn't monitoring.
“AI introduces two reinforcing risks,” says RSM US LLP Principal Alden Hutchison. “Internal users expose data through shadow AI, and attackers exploit AI once identity is compromised. In both cases, lack of governance turns AI into an accelerant, not the root cause.”
In addition, AI solutions are being integrated within many existing business technology tools companies already leverage, introducing additional threat vectors.
“You can assume that these AI solutions are well tested and protected,” says RSM US Principal Daniel Gabriel. “But still, what information do you want to provide to third-party companies? You must rethink how you are engaging with these organizations.”
As a nonhuman decision-making engine within the business, AI has been described as a ghost in the machine—one requiring protection through the following measures:
From an AI governance perspective, 51% of respondents in the Q1 2026 RSM US Middle Market Business Index survey reported conducting staff training on responsible AI usage and development, making it the most widely implemented control, up from 36% last year. Close behind were:
46% have data governance policies to support data quality and privacy for use in AI models
monitor and audit AI system performance and outcomes
have defined roles and responsibilities for enterprise AI decision making
have established principles/guidelines for AI development and use
The Canadian perspective: The leading AI governance practice among Canadian survey respondents was data governance policies to support data quality and privacy for use in AI models (60%), followed by training for staff on responsible AI usage and development (54%).
The survey results were a direct reflection of the growing importance of AI in the middle market, with every answer showing an increase from last year and the top five options posting double-digit growth. However, the use of external frameworks to guide AI governance may be a missed opportunity. While more than a third of respondents reported using AI governance frameworks (35%), the option ranked seventh among respondents and tied for the smallest growth from last year.
“Many organizations are still trying to figure out what effective AI governance is,” says RSM US Principal John Huyette. “Therefore, I am surprised that more are not mapping their AI strategies to some of the existing frameworks, such as NIST RMF, ISO or responsible AI guidelines from Microsoft and other leading solution providers. To protect the ghost in the machine, adopting a governance framework is certainly a step in the right direction.”
Gabriel stresses the value of AI governance and the potential for companies to make positive change in their AI strategies. “Organizations are now at a pivotal moment where they have the opportunity to do things right and make the secure adoption of AI a priority,” he says. “Companies can ignore it and keep doing what they are doing and play catch-up to address issues in a reactive manner, or do it correctly to put themselves in an advantageous position going forward.”
Are you addressing emerging AI risks?
AI solutions are rapidly evolving, with significant potential for increased insight, productivity and collaboration. But as you develop an AI strategy with features and benefits in mind, AI risks must also be a key part of the equation.