Article

Risk to resilience: How engineering firms can govern AI with confidence

August 22, 2025

Key takeaways


Engineering firms must create AI governance frameworks that align with ethical and industry standards.


A strong culture of responsibility is crucial to ensure AI tools are used ethically and effectively.


Ongoing evaluation, cross-functional coordination and flexibility are critical for responsible AI usage.


Artificial intelligence is transforming entire industries. Engineering firms in particular find themselves at the crossroads of innovation and accountability as AI solutions reshape how they work.

With public safety, ethical responsibility and financial risk all on the line, the urgency for a structured approach to responsible AI governance is undeniable. But what key pillars can help engineering firms navigate this complex landscape?

Establishing a framework

The foundation of responsible AI begins with governance. Engineering firms must assign ownership of AI oversight, which could reside with the board of directors, a dedicated subcommittee or a chief innovation officer. This governing person or body will define and monitor policies that align with ethical principles and industry standards. Firms may also want to establish an ethics committee to oversee the full lifecycle of AI implementation.

With this framework in place, firms can address policy creation, stakeholder engagement and the integration of best practices. In essence, proper governance sets the tone for a firm’s approach to AI.

Standards of care

Responsible AI use extends to the core of engineering practices. To be blunt, no responsible firm would just trust AI to design a bridge.

The engineering standard of care is traditionally focused on public safety. In the context of AI, this standard must include the following:

  • Developmental responsibility: Ensuring ethical considerations during AI system development
  • Usage responsibility: Training engineers to engage with AI in their day-to-day tasks
  • Monitoring processes: Establishing robust controls to validate AI outputs and mitigate risks such as algorithmic bias or hallucinations

Failure to adhere to these standards can jeopardize public safety, lead to legal repercussions and undermine professional credibility.

Clear obligations

The engineering industry lacks widely accepted standards for the use of AI in contracts. Firms must therefore take it upon themselves to define clear indemnification clauses and contractual safeguards that manage liability. This step helps ensure that AI implementations align with professional codes of conduct and legal standards, providing a safety net against unforeseen risks.

Taking responsibility

Culture plays a pivotal role in any successful AI adoption. Training and reskilling employees—not just to use AI but to use it responsibly—is crucial. This includes understanding the ethical implications of AI and adhering to firm policies designed to mitigate risks. Firms must invest in upskilling programs while fostering a culture that values transparency and accountability.

One could argue that using AI is similar to operating heavy machinery. In both cases, mastery of the tool must be accompanied by a deep understanding of its power and impact.

Possible challenges

Even if engineering firms embrace best practices when implementing AI, potential issues may arise. For example, measuring the effectiveness of AI remains a nascent and resource-intensive effort, so firms will need to develop their own key performance indicators and gauges of success.

Furthermore, implementing robust governance and training programs often requires a significant investment. Firms will have to commit to the process and stay focused on AI’s potential to transform their business.

In addition, aligning AI initiatives across various business units can be a complex process. Interdepartmental coordination is vital, and firm leaders have to ensure that miscommunication doesn’t derail the project.

Continual monitoring and adaptation

AI governance is not a "set it and forget it" process. Ongoing monitoring and constant improvements are essential. Firms will need to conduct regular audits of AI systems, validate outputs and adhere to evolving ethical and legal standards. As AI technology improves, firms must adapt their governance structures to remain resilient and responsible.

The takeaway

As AI becomes an integral part of the engineering industry, firms must adopt a holistic approach to governance. By establishing a robust framework, upholding professional standards, fostering a culture of responsibility and remaining agile in monitoring and adaptation, engineering firms can strike the right balance between innovation and accountability.

RSM contributors

  • Drew Faries
    Principal
