Forget hackers: AI could just turn off our critical systems.

Evan Schuman
9 Min Read


A recent Gartner report projects that by 2028, incorrectly configured AI systems will cause the shutdown of critical national infrastructure in a G20 nation. However, consultants believe this scenario could unfold even sooner.

AI-driven technology powers automation and big data workflows, enabling neural-network analysis and data analytics for business intelligence, predictive insights, and process optimization. Credit: NicoElNino / Shutterstock

Given Gartner’s recent report, which forecasts that AI issues could lead to the “shutdown of national critical infrastructure” in a significant country by 2028, CIOs must reconsider the industrial control systems rapidly being automated by AI agents.

Gartner refers to these advanced technologies as Cyber Physical Systems (CPS), defining them as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans). CPS is the umbrella term to encompass operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), Industrial Internet of Things (IIoT), robots, drones, or Industry 4.0.”

The report's chief concern is not that AI systems will generate hallucination-style errors, though that remains a worry. Instead, it points to the risk that AI systems may miss subtle operational shifts that experienced human operators would readily spot. When critical infrastructure is managed directly, even minor mistakes can escalate into major catastrophes.

“The next major infrastructure failure might not originate from cyberattacks or natural calamities, but rather from a well-meaning engineer, an erroneous update script, or a misplaced decimal point,” explained Wam Voster, VP Analyst at Gartner. “To protect national infrastructure from unintentional shutdowns caused by AI misconfiguration, a secure ‘kill-switch’ or an override mechanism accessible only by authorized operators is crucial.”
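To make Voster's "kill-switch" idea concrete, here is a minimal sketch of an operator-controlled override gate that an AI control layer must check before actuating anything physical. The class and operator IDs are hypothetical illustrations, not taken from Gartner's report or any specific product:

```python
import threading


class OperatorKillSwitch:
    """Hypothetical override gate: AI-issued commands are forwarded only
    while an authorized human operator has not engaged the kill switch."""

    def __init__(self, authorized_operators: set[str]):
        self._authorized = authorized_operators
        self._engaged = threading.Event()

    def engage(self, operator_id: str) -> None:
        # Only a named, authorized operator may halt AI control.
        if operator_id not in self._authorized:
            raise PermissionError(f"{operator_id} may not engage the kill switch")
        self._engaged.set()

    def release(self, operator_id: str) -> None:
        if operator_id not in self._authorized:
            raise PermissionError(f"{operator_id} may not release the kill switch")
        self._engaged.clear()

    def allow(self) -> bool:
        # The AI control loop calls this before every physical actuation.
        return not self._engaged.is_set()


# Usage sketch: check the switch before forwarding an AI-generated command.
switch = OperatorKillSwitch(authorized_operators={"op-042"})
if switch.allow():
    pass  # forward the command to the actuator
```

The design point is that the override lives outside the AI system itself and is gated on operator identity, so a misbehaving model cannot talk its way past it.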

Voster further stated, “Contemporary AI models are often so intricate that they operate like black boxes. Even their creators struggle to predict how minor configuration alterations will affect the model’s emergent behavior. As these systems grow more opaque, the danger of misconfiguration increases. Consequently, human intervention becomes even more vital when necessary.”

For several years, enterprise CIOs and other IT leaders have acknowledged the dangers posed by industrial AI and have had guidelines for mitigating risks to critical infrastructure. But as autonomous AI takes control of more systems, the associated hazards have grown with it.

Matt Morris, founder of Ghostline Strategies, highlighted that a challenge with industrial AI controls is their potential weakness in detecting model drift.

Morris illustrated this with an example: “Suppose I instruct it to ‘monitor this pressure valve.’ Then, over time, the normal readings begin to subtly shift.” He questioned whether the system would dismiss this change as mere background noise, assuming slight variations are normal during operation, or if it would recognize it as a signal of a potentially grave issue, as an experienced human would.
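One way to picture the gap Morris describes: a static alarm band will happily accept a slow upward creep in readings, while a drift test flags the same pattern. The sketch below is purely illustrative (the window sizes, thresholds, and readings are invented) and is not Morris's or any vendor's implementation:

```python
from statistics import mean, stdev


def naive_check(reading: float, low: float = 90.0, high: float = 110.0) -> bool:
    """Static alarm band: anything inside the band is treated as normal."""
    return low <= reading <= high


def drift_check(history: list[float], recent_window: int = 20, z_limit: float = 3.0) -> bool:
    """Flag a series whose recent mean has drifted far from the long-run
    baseline, even though every individual reading still looks 'normal'."""
    baseline, recent = history[:-recent_window], history[-recent_window:]
    if len(baseline) < recent_window:
        return False  # not enough history to judge drift
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > z_limit * (sigma / recent_window ** 0.5)


# Valve readings creep from ~100 to ~108: each one passes the static band,
# but the drift check notices the shift in the recent mean.
readings = [100 + i * 0.08 for i in range(100)]
print(all(naive_check(r) for r in readings))  # True: no single reading alarms
print(drift_check(readings))                  # True: the slow drift is flagged
```

An experienced operator does something similar intuitively, comparing today's "normal" against last month's; the question Morris raises is whether the deployed AI system is doing it at all.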

Despite these and other unresolved questions, Morris observed that “companies are deploying AI extremely rapidly, often faster than they realize.”

Industrial AI Accelerating Too Rapidly

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, has also observed signs that AI might be gaining control too quickly.

He explained, “When AI manages environmental systems or power generation, the interplay of complexity and unpredictable behaviors can lead to extremely severe outcomes.” Boards and CEOs, he noted, often think, “‘AI will provide this productivity boost and cut my costs.’ Yet, the risks they are undertaking can far outweigh the potential benefits.”

Villanustre expressed concern that boards and CEOs might only halt the rapid deployment of autonomous industrial AI after their organization experiences a disaster. He added, “[But] I don’t believe that [board members] are malicious, merely incredibly imprudent.”

Cybersecurity consultant Brian Levine, executive director of FormerGov, concurred that the dangers are extreme: both highly perilous and highly probable.

Levine asserted, “Critical infrastructure relies on fragile, decades-old layers of automation. Integrating autonomous AI agents on top of this creates a Jenga tower in a hurricane.” He suggested, “It is beneficial for organizations, especially those managing critical infrastructure, to adopt and assess their maturity using reputable frameworks for AI safety and security.”

Bob Wilson, a cybersecurity advisor at the Info-Tech Research Group, also voiced apprehension about the near certainty of a significant industrial AI incident.

Wilson stated, “The likelihood of a disaster stemming from a poor AI decision is quite high. As AI becomes integrated into enterprise strategies faster than governance frameworks can keep pace, AI systems are advancing more rapidly than risk controls.” He added, “We can observe the early warning signs of quick AI deployment and insufficient governance increasing potential exposure, and these indicators warrant investment in governance and operational controls.”

Wilson emphasized that companies must adopt novel perspectives on industrial AI controls.

He explained, “AI can almost be regarded as an insider, and governance ought to be established to manage that AI entity as a potential accidental insider threat.” He continued, “In this scenario, prevention begins with rigorous governance over who can modify AI settings and configurations, how such changes are tested, how their deployment is managed, and how swiftly they can be reversed. We are indeed seeing this type of risk exacerbated by a growing disparity between AI adoption and governance maturity, where organizations roll out AI faster than they implement the necessary controls to manage its operational and safety impact.”
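As a hedged sketch of the change governance Wilson describes, every AI configuration change can be treated as an attributed, reviewed, staged, and reversible record. The class and field names below are illustrative only and are not drawn from Info-Tech's or any published framework:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIConfigChange:
    """Illustrative change record: who asked for it, who approved it,
    whether it was tested, and the old value needed to roll it back."""
    parameter: str
    old_value: float
    new_value: float
    requested_by: str
    approved_by: str | None = None
    tested_in_staging: bool = False
    applied_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        if reviewer == self.requested_by:
            raise ValueError("requester cannot approve their own change")
        self.approved_by = reviewer

    def apply(self) -> None:
        if not (self.approved_by and self.tested_in_staging):
            raise RuntimeError("change must be approved and staged before deployment")
        self.applied_at = datetime.now(timezone.utc)

    def rollback(self) -> float:
        # Reverting is a first-class operation, not an afterthought.
        self.applied_at = None
        return self.old_value
```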

Therefore, he advised, companies should institute a business risk program with a governing body responsible for defining and managing these risks, as well as monitoring AI for behavioral changes.

Rethinking AI Management

Sanchit Vir Gogia, chief analyst at Greyhound Research, suggested that tackling this issue first requires executives to reframe fundamental structural questions.

He observed, “Most businesses still discuss AI within operational settings as if it were merely an analytical layer, a clever addition atop existing infrastructure. This perspective is already outdated.” He elaborated, “The instant an AI system influences a physical process, even indirectly, it ceases to be just an analytics tool; it becomes an integral part of the control system. And once it’s part of the control system, it assumes the responsibilities inherent in safety engineering.”

He pointed out that the repercussions of misconfiguration in cyber-physical environments differ significantly from those in traditional IT settings, where the outcome might be outages or instability.

He explained, “In cyber-physical environments, misconfiguration directly interacts with physical realities. An improperly tuned threshold in a predictive model, a configuration adjustment that alters anomaly detection sensitivity, a smoothing algorithm that inadvertently filters out weak signals, or a quiet shift in telemetry scaling — all these can subtly change how the system operates.” He added, “It doesn’t happen catastrophically at first. It’s subtle. And in closely integrated infrastructure, subtlety is frequently the precursor to a cascade of failures.”
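To make the smoothing example concrete: an aggressive moving average can erase exactly the short, weak excursion an operator would want to see. This is a toy illustration only; the window lengths, values, and alert threshold are invented:

```python
def moving_average(signal: list[float], window: int) -> list[float]:
    """Simple trailing moving average of the kind used to smooth telemetry."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


# A flat signal with one brief, weak excursion (the early-warning signal).
raw = [50.0] * 30 + [53.0, 53.5, 53.0] + [50.0] * 30

lightly_smoothed = moving_average(raw, window=3)
heavily_smoothed = moving_average(raw, window=25)

# With a 25-sample window, the 3-sample excursion is averaged away and
# never crosses a 52.0 alert threshold that lighter smoothing still trips.
print(max(lightly_smoothed) > 52.0)  # True: the excursion is still visible
print(max(heavily_smoothed) > 52.0)  # False: the weak signal is filtered out
```

Nothing here is misconfigured in an obvious way; the smoothing window is simply too wide for the signal it is supposed to protect, which is the kind of quiet failure Gogia is pointing at.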

Gogia further advised, “Organizations must mandate explicit articulation of worst-case behavioral scenarios for every AI-enabled operational component. What occurs if demand signals are misinterpreted? How does sensitivity change if telemetry gradually shifts? If thresholds are misaligned, what boundary condition prevents uncontrolled behavior? When teams are unable to answer these questions clearly, their governance maturity is incomplete.”

This content was originally published on CIO.com.

Artificial Intelligence · Critical Infrastructure · Security