Businesses are eager to embrace AI, but they’re struggling to implement governance strategies that match the speed and nature of its real-world integration.
Chief Information Officers across various sectors are rapidly deploying generative AI via SaaS solutions, integrated copilots, and external applications. This rapid adoption outpaces conventional governance structures, which were never designed for such velocity. AI now touches crucial areas like customer engagement, recruitment, financial assessments, software development, and knowledge work—often without passing through a formal deployment process.
Consequently, there’s a growing disparity between the quick rollout of AI and the necessary safeguards for its responsible application. Companies are adopting AI more swiftly than they can manage its use, often rushing to implement controls only after an issue arises.
Insights from five industry experts, each addressing distinct challenges in enterprise AI, shed light on why this disparity continues and what leadership must do to bridge it proactively, before external pressures from regulators, auditors, or customers demand action.
When AI enters operational workflows, traditional governance systems often falter.
The core issue is foundational. Governance frameworks were built for deliberate, centralized decision-making, a stark contrast to the dynamic nature of AI adoption. Ericka Watson, CEO of Data Strategy Advisors and former chief privacy officer at Regeneron Pharmaceuticals, observes this consistent trend across diverse industries.
“Organizations continue to structure governance as though decisions are made methodically and from a central authority,” she noted. “However, this doesn’t align with how AI is actually being integrated. Businesses are making rapid, day-to-day choices involving vendors, copilots, and embedded AI functionalities, whereas governance models still expect a pause for formal procedures and approvals.”
This inherent disconnect inevitably leads to circumvention. Even well-meaning teams often sidestep governance because it isn’t integrated into their actual workflows. AI capabilities are launched before crucial aspects like training data rights, subsequent data sharing, or accountability are properly evaluated.
The initial failure point, Watson explained, is typically in data control and transparency. Staff inadvertently input sensitive data into public generative AI tools, and the traceability of this data is lost as its outputs transition between various systems. “By the time executives become aware of the situation,” she warned, “the data could already be compromised in an irreversible manner.”
What to do: CIOs need to shift from governing AI models themselves to governing their practical usage. While direct control over the model might be limited, control can be exerted over its application, the data it interacts with, and the destination of its outputs. Governance should function as integrated checkpoints within workflows, rather than being relegated to after-the-fact policy reviews.
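What an in-workflow checkpoint might look like in practice: the sketch below is a minimal, hypothetical Python example. The classification labels, allowed destinations, and the injected `call_model` function are assumptions for illustration, not a reference to any particular product or Watson's specific recommendation.

```python
# Minimal sketch of a usage-level governance checkpoint (all names are illustrative).
# The idea: the check runs inside the workflow, before any request leaves the organization.

from dataclasses import dataclass

ALLOWED_CLASSIFICATIONS = {"public", "internal"}        # assumption: the org's data labels
ALLOWED_DESTINATIONS = {"crm_draft", "internal_wiki"}   # assumption: where outputs may land

@dataclass
class UsageRequest:
    prompt: str
    data_classification: str   # e.g. "public", "internal", "confidential", "regulated"
    output_destination: str    # where the generated text will be stored or sent

def governed_completion(request: UsageRequest, call_model) -> str:
    """Gate an AI call on how it is used, not on the model itself."""
    if request.data_classification not in ALLOWED_CLASSIFICATIONS:
        raise PermissionError(
            f"Data classified '{request.data_classification}' may not be sent to this model."
        )
    if request.output_destination not in ALLOWED_DESTINATIONS:
        raise PermissionError(
            f"Outputs may not flow to '{request.output_destination}' without review."
        )
    # The model call is injected, so the same checkpoint can wrap any vendor or copilot.
    return call_model(request.prompt)
```

Because the check sits in the calling code rather than in a policy document, it applies whether the model behind it is a SaaS feature, an embedded copilot, or an external API.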
Outdated data governance strategies struggle to cope with generative AI.
Existing governance frameworks, even when present, frequently rely on outdated premises. Fawad Butt, CEO of Penguin Ai (a creator of agentic healthcare platforms) and former chief data officer at UnitedHealth Group and Kaiser Permanente, contends that conventional data governance models are fundamentally unsuited for the complexities of generative AI.
“Traditional governance was designed for established record systems and predictable analytical flows,” he stated. “That paradigm no longer applies. We now have systems generating other systems – producing novel data and outputs, often instantaneously.” In this fluid landscape, infrequent audits can provide a misleading sense of security, and controls focused solely on outputs overlook the true sources of risk.
“Damage can occur even without a security breach; well-protected systems can still produce fabrications, exhibit bias, or deviate from expected behavior,” Butt highlighted, stressing that inputs, not just outputs, represent the most overlooked area of risk. This encompasses prompts, data retrieval origins, contextual information, and any dynamic tools accessible to AI agents.
What to do: Prioritize establishing guardrails over drafting policies. Clearly delineate prohibited use cases. Restrict high-risk data inputs and limit the tools accessible to AI agents. Crucially, observe how these systems function in real-world scenarios. Policies should be developed after practical experimentation, not before, to avoid embedding incorrect assumptions.
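One way to express those guardrails is as an explicit allowlist plus a prohibited-use list, checked and logged on every agent action. The sketch below is a hypothetical illustration; the use-case names and tool names are assumptions, not part of any cited framework.

```python
# A minimal sketch of input-side guardrails for an AI agent (illustrative names throughout).
# It limits which tools an agent may invoke and which use cases are out of bounds,
# and records every decision so real-world behavior can be reviewed afterward.

import logging

logging.basicConfig(level=logging.INFO)

PROHIBITED_USE_CASES = {"employment_screening", "credit_decision"}   # assumption: org policy
AGENT_TOOL_ALLOWLIST = {"search_kb", "summarize_document"}           # assumption: approved tools

def authorize_agent_action(use_case: str, tool_name: str, data_source: str) -> bool:
    """Return True only if the use case, tool, and data source pass the guardrails."""
    if use_case in PROHIBITED_USE_CASES:
        logging.warning("Blocked: use case '%s' is prohibited by policy", use_case)
        return False
    if tool_name not in AGENT_TOOL_ALLOWLIST:
        logging.warning("Blocked: tool '%s' is not on the agent allowlist", tool_name)
        return False
    logging.info("Allowed: %s via %s on source %s", use_case, tool_name, data_source)
    return True
```

The log trail is the point: it gives teams the observational evidence Butt describes, so policies can be written around how the systems actually behave.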
Governance challenges become acute with third-party AI solutions.
While internal AI governance may be inadequate, managing third-party AI poses an even greater challenge. Richa Kaul, CEO of Complyance, advises global businesses on risk and compliance. She observes a clear distinction: organizations typically have more robust governance for AI they develop in-house, but are significantly less equipped when AI is integrated into commercial vendor offerings.
“We frequently encounter a scenario where AI is utilized before any governance is in place,” she stated. “Moreover, governance often operates as a committee function, with numerous individuals independently assessing vendors without a standardized set of inquiries.” Businesses, she notes, frequently pose broad questions about AI privacy and are content with comforting, yet vague, responses—a phenomenon Kaul terms “happy ears.”
Effective governance is characterized by precise inquiries. For example, is client data employed for model training? Is this data subsequently shared among different clients? And is the Large Language Model accessed through a dedicated enterprise solution or a public consumer interface?
“A vendor leveraging Azure OpenAI inherently carries a considerably lower risk profile compared to one directly integrating with ChatGPT,” Kaul explained.
What to do: CIOs should begin with a fundamental, yet often neglected, action: thoroughly examining vendor subprocessor lists. While cloud providers are generally familiar, Large Language Model providers are not. AI introduces a secondary, inadequately documented subprocessor tier, which is precisely where governance systems falter.
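Kaul's precise questions can be captured as a structured vendor review record rather than free-form notes. The sketch below is one possible schema, with field names chosen for illustration; it is not a standard or a vendor questionnaire template.

```python
# A minimal sketch of a vendor AI review record (field names are assumptions, not a standard).
# It forces the precise questions above to be answered per vendor, including the
# second-tier LLM subprocessor that is often missing from standard subprocessor lists.

from dataclasses import dataclass, field

@dataclass
class VendorAIReview:
    vendor: str
    llm_subprocessor: str           # e.g. "Azure OpenAI" vs. a direct consumer API
    enterprise_endpoint: bool       # dedicated enterprise offering, not a public consumer interface
    trains_on_customer_data: bool   # is client data used for model training?
    shares_data_across_tenants: bool
    notes: list[str] = field(default_factory=list)

    def risk_flags(self) -> list[str]:
        """Summarize the answers that should block or escalate the deal."""
        flags = []
        if self.trains_on_customer_data:
            flags.append("customer data used for training")
        if self.shares_data_across_tenants:
            flags.append("data shared across tenants")
        if not self.enterprise_endpoint:
            flags.append("public consumer endpoint in use")
        return flags
```

A record like this makes vague vendor assurances harder to accept: each of the "happy ears" questions has to be answered yes or no, and the answers produce concrete flags for procurement and security to act on.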
Why outright prohibitions fail and incidents recur.
Technological controls alone are insufficient to bridge the gap in responsible AI adoption; human behavior plays a more significant role. Asha Palmer, SVP of Compliance Solutions at Skillsoft and a former US federal prosecutor, frequently intervenes after AI-related incidents. She notes that the initial, uncomfortable realization for leaders is often the predictability of the outcome.
“We were aware this possibility existed,” she stated. “The critical inquiry is: why did we not prepare our teams to manage this before it materialized?” Performance demands are the underlying driver. Employees leverage AI to accelerate work and achieve objectives, mirroring patterns seen in various compliance breaches, from corruption to data mishandling.
Comprehensive prohibitions on generative AI prove ineffective. “When you remove options for responsible engagement,” Palmer explained, “individuals will resort to irresponsible use, often clandestinely and beyond the reach of governance.”
What to do: Transition from mere awareness training to practical behavioral learning. Palmer describes this as “moral muscle memory” – a method employing scenario-based exercises to train individuals to pause, evaluate risks, and make appropriate decisions when under duress.
Regulatory bodies and auditors seek confirmation that relevant personnel have received targeted training commensurate with their specific risks. Generic, universal AI literacy programs are often viewed as insufficient.
Why mere assurance isn’t enough when auditors scrutinize AI governance.
The ultimate challenge emerges when organizations are required to demonstrate the efficacy of their governance. Danny Manimbo is ISO & AI Practice Leader at Schellman, an attestation and compliance services firm. He consistently observes a recurring pattern of failure.
“Enterprises frequently mistake the existence of policies for actual governance,” he noted. “Responsible AI principles hold no value unless they demonstrably shape concrete business decisions.”
Auditors might begin by asking for documented evidence of an AI risk-based decision that directly altered an outcome. Robust governance leaves tangible evidence, such as postponed deployments, declined vendor partnerships, or restricted features. Conversely, underdeveloped governance typically yields only general statements.
“The costliest governance efforts are those undertaken post-deployment,” Manimbo cautioned. Attempting to reverse-engineer data lineage, assign accountability, and redefine initial intent becomes exceptionally challenging once systems are operational.
What to do: Approach AI governance as an integrated management system, rather than a mere compliance checklist. Frameworks such as ISO/IEC 42001 are effective only when they seamlessly link risk management, change control, ongoing monitoring, and internal auditing into a cyclical process.
The true measure of effective governance is its ability to influence business decisions, not simply the production of extensive documentation.
Bridging the responsible AI gap.
A consistent point emerged from all five interviews: the deficit in responsible AI is fundamentally not a technological shortfall, but a misaligned governance timeline. Controls are being developed for past systems, even as AI is actively influencing current decision-making.
Multiple experts underscored that CIOs should move beyond perceiving responsible AI as a future initiative and instead treat it as a critical operational hygiene factor, akin to identity management or financial oversight, rather than solely an ethics committee concern.
Watson of Data Strategy Advisors highlighted visibility as the paramount initial step. Organizations unable to identify precisely where AI impacts decisions—particularly via SaaS applications—are inherently vulnerable. “Effective governance is impossible without clear visibility,” she stated, cautioning that many firms still lack even a foundational inventory of workflows influenced by AI.
Butt, from Penguin Ai, reiterated this, emphasizing a shift in inventory focus from mere platforms to systems within their specific contexts. An AI feature integrated into HR software carries a different risk profile than the identical feature within marketing automation; treating them as equivalent is a misperception in governance.
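A simple inventory schema can capture both points: where AI touches a decision, and the business context that determines its risk. The sketch below is a hypothetical illustration; the fields, example workflows, and risk labels are assumptions, not a prescribed taxonomy.

```python
# A minimal sketch of an AI usage inventory entry (schema is illustrative, not a standard).
# Each entry names the workflow, the business context, and the decision AI influences,
# so the same embedded feature can carry different risk ratings in different places.

from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    workflow: str             # e.g. "resume triage", "campaign copy drafting"
    system: str               # the SaaS product or copilot where the AI feature lives
    business_context: str     # e.g. "HR", "marketing" -- this drives the risk rating
    decision_influenced: str  # what the output actually affects
    data_touched: list[str]   # categories of data the feature can see
    risk_rating: str          # e.g. "low", "elevated", "high"

inventory = [
    AIUsageRecord("resume triage", "HR suite copilot", "HR",
                  "which candidates reach a recruiter", ["applicant PII"], "high"),
    AIUsageRecord("campaign copy drafting", "marketing automation", "marketing",
                  "outbound email wording", ["public product info"], "low"),
]
```

Even a lightweight inventory like this gives leadership the visibility Watson calls a precondition for governance, and the context field keeps identical features from being treated as identical risks.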
Kaul of Complyance extended this principle to external engagements. Vendor AI governance fails when companies settle for vague assurances rather than tracking the actual flow of their data. She finds that simply requiring teams to map AI subprocessors often uncovers hidden risks that leadership had unknowingly accepted.
Palmer of Skillsoft drew attention to the human element underpinning these issues. Governance structures, she contended, fail when they presuppose that individuals will reduce their pace when under duress. “Pressure remains constant,” she affirmed. “One must train to operate within it.” Companies neglecting this preparation should anticipate employees improvising with AI in hazardous manners.
Lastly, Manimbo of Schellman presented a direct indicator: if governance has never resulted in a postponed deployment, a rejected vendor, or a restricted feature, it likely lacks practical implementation. “Governance must leave discernible evidence,” he asserted, otherwise, it merely represents an ambition.
Collectively, these discussions indicate that bridging the responsible AI gap doesn’t demand impeccable foresight or exhaustive policies. Instead, it necessitates proactive intervention and defined accountability. Organizations that move promptly—while AI adoption remains disparate and unofficial—can influence usage patterns. Those that delay will inherit systems beyond their control and risks they cannot articulate.
By then, governance ceases to be an option; it transforms into an imperative for damage mitigation.
Related reading:
- Generative AI in productivity tools: Potential pitfalls to consider.
- Navigating governance and trust challenges to advance agentic AI.
- Striking a balance: Governance and innovation in the AI era.
- Deloitte’s AI governance shortcomings reveal significant gaps in corporate quality oversight.
