
Visionary Voices

The Risk-Proofing Imperative for AI

Bart Kimmel
Vice President & National IT Risk Assurance Practice Leader

In our latest Visionary Voices conversation, we sat down with Bart to explore the critical gap between AI adoption and governance reality. The takeaway? While organizations rush to deploy AI solutions for efficiency gains, many have yet to establish the AI governance needed to ensure those solutions are trustworthy, secure, and compliant—a gap that will become exponentially more dangerous as quantum computing converges with AI.

Here’s what stood out most:

AI Governance Must Be a Strategic Priority

As Bart explained, “Many organizations are understandably enthusiastic about the potential of AI—reducing costs, increasing efficiency, and automating routine tasks so teams can focus on more strategic, value-adding work. But to unlock these benefits responsibly, strong AI governance is essential.”

Without proper oversight, AI can quickly introduce significant risks—cybersecurity vulnerabilities, misuse of training data, and flawed outputs that lead to poor decisions. These issues aren’t just technical—they’re business-critical.

Organizations must proactively embed trust, transparency, and risk management into the foundation of their AI strategies. Governance shouldn’t be an afterthought or a post-implementation fix. It must be part of the business case from day one, ensuring AI systems are not only powerful and efficient, but also safe, secure, and aligned to enterprise goals.

AI Strategy is Missing a Critical Voice: Risk and Audit Leaders

While AI continues to dominate C-suite discussions, a crucial perspective is often left out of the conversation. As Bart notes, “I’m not seeing many organizations proactively integrating Internal Audit or Risk Management into the dialogue around AI initiatives.” And that’s a costly oversight.

Too often, AI solutions are being developed and deployed without early involvement from Chief Risk Officers (CROs) or Chief Audit Executives (CAEs)—the very leaders responsible for identifying emerging risks, ensuring compliance, and protecting enterprise integrity. Without their input, organizations risk building powerful systems without adequate controls, governance frameworks, or alignment to regulatory expectations.

To build AI responsibly, risk and audit must shift from back-end reviewers to front-end collaborators. Their proactive involvement ensures that risk mitigation, compliance, and ethical considerations are baked into AI strategies from the start—not bolted on after problems arise.

Trustworthy AI Starts with Governance—Not After Deployment

For AI to be truly trusted, it must rest on five critical pillars: explainability, fairness, transparency, robustness, and governance. These are not optional features; they are essential safeguards.

“Without governance, it’s more than likely you may not be implementing a trustworthy AI model,” Bart emphasized.

AI must be explainable and understandable—not a black box. It must be fair and free from bias, built on transparent training data, and resilient enough to withstand security threats. These guardrails align AI systems with an organization’s risk appetite and ensure responsible innovation.

Quantum Will Redefine What Governance Means

Looking ahead, Bart sees quantum computing as “the most significant advancement in our history”—a breakthrough that will “touch everything” and “change the world as we know it.” The convergence of AI and quantum technologies will usher in a new era of seemingly limitless computing power capable of solving problems at speeds and in ways we may not fully understand.

“Quantum has already delivered results we can’t explain—this signals intelligence beyond our current comprehension,” Bart noted.

That level of complexity will require governance frameworks that are not only rigorous, but adaptable to technologies we haven’t yet mastered.

Third-Party AI Risks are Outpacing Traditional Controls

Today, most organizations rely on third-party or open-source AI tools rather than building their own models. This introduces a growing web of external risks—many of which go unchecked.

“How do you know the algorithms are sound? That the training data aligns with your values and won’t lead to hallucinations or inaccuracies?” Bart asked.

As third-party vendors embed their own AI into your tech stack, the risk compounds—and many companies are unprepared to navigate this fragmented ecosystem.

Three Priorities for Building AI Governance That Lasts

1. Proactive Governance Integration: Governance must be built into the AI strategy from the outset—not retrofitted post-deployment. Business cases should include requirements for trust, transparency, risk tolerance, and regulatory compliance upfront.

2. Leadership and Literacy: Chief Risk Officers and Chief Audit Executives need to move from reactive observers to strategic leaders in AI governance. This includes building technical fluency and leveraging frameworks like NIST's AI RMF to guide safe deployment.

3. Future-Ready Governance Frameworks: With technologies like quantum on the horizon, organizations must shift from rigid, rule-based controls to flexible, principle-based governance structures—ones that can scale with the unknown.

Governance Is Not Optional—It’s Foundational

AI governance isn’t a luxury for advanced adopters—it’s a non-negotiable for any organization deploying AI at any scale. The companies that build strong, proactive governance foundations today will be the ones best positioned to harness the full promise of AI and quantum tomorrow.

If you’re ready to put governance at the core of your AI strategy—we’d love to talk.

Visionary Voices is a segment of RGP’s LinkedIn newsletter, Mindshift. Each month we highlight a unique futurist who challenges us to think differently and to drive innovation. Mindshift also contains valuable research and curated content.
