When a machine learning model flags a control command on an energy grid as malicious and blocks it automatically, the consequences are not a slow-loading webpage. They are physical, operational, and potentially dangerous.
That tension — between the transformative potential of AI in critical infrastructure and the very real risks of getting it wrong — defines one of the most important capability challenges facing CNI organisations today.
Artificial intelligence is reshaping network infrastructure across every sector. Juniper Research projects that network operator spending on AI for orchestration and automation will reach $20 billion by 2028, a 240% increase from $6 billion in 2024.
In CNI environments, that investment is being driven by three converging forces: the need for real-time threat detection, the complexity of converging IT and OT networks, and the sheer scale of data that human analysts can no longer process manually.
For energy grids, transport networks, telecommunications systems, and water treatment facilities, the implications are both transformative and high-stakes.
Traditional security monitoring relies on known signatures and predefined rules. In CNI environments, where a single anomalous packet on an OT network could indicate a sophisticated nation-state intrusion, this approach is no longer sufficient.
AI-driven behavioural analytics can identify deviations from normal network patterns that rule-based systems miss. A subtle change in data flow between a SCADA controller and a historian server. An unusual authentication pattern on an industrial control system. A gradual exfiltration disguised as routine telemetry. These are the signals that AI can surface for human analysts to investigate.
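The idea behind behavioural baselining can be sketched in a few lines: learn what "normal" looks like for each communication channel, then flag statistically significant deviations for an analyst rather than acting on them automatically. The flow records, device names, and z-score threshold below are illustrative assumptions, not a reference to any specific product.

```python
# Minimal sketch of behavioural baselining for OT network flows.
# Device names and the z-score threshold are illustrative assumptions.
import statistics

def build_baseline(history):
    """Learn mean/stdev of bytes transferred per (src, dst) channel."""
    per_channel = {}
    for src, dst, nbytes in history:
        per_channel.setdefault((src, dst), []).append(nbytes)
    return {channel: (statistics.mean(values), statistics.pstdev(values) or 1.0)
            for channel, values in per_channel.items()}

def flag_anomalies(baseline, flows, z_threshold=3.0):
    """Surface flows that deviate from the baseline for human review."""
    alerts = []
    for src, dst, nbytes in flows:
        mean, stdev = baseline.get((src, dst), (0.0, 1.0))
        z = abs(nbytes - mean) / stdev
        if (src, dst) not in baseline or z > z_threshold:
            alerts.append((src, dst, nbytes, round(z, 1)))
    return alerts

# Normal telemetry: a controller reports ~1 KB per interval to a historian.
history = [("plc-7", "historian", b) for b in (980, 1010, 1005, 995, 1000)]
baseline = build_baseline(history)

# A transfer an order of magnitude above baseline is surfaced as an alert,
# even though no signature or predefined rule describes it.
alerts = flag_anomalies(baseline, [("plc-7", "historian", 5200)])
```

A production system would model many more features (timing, protocol semantics, authentication context), but the principle is the same: the alert is raised because the behaviour deviates from a learned norm, not because it matches a known signature.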
The threat landscape reinforces the urgency. The Dragos 2026 OT Cybersecurity Report documented a 49% year-on-year increase in ransomware targeting manufacturing and industrial environments. Three new ICS-focused threat groups were identified in the past twelve months alone.
Against this backdrop, AI is not a luxury for CNI network defenders. It is becoming a necessity.
Beyond security, AI is enabling CNI operators to optimise network performance in ways that directly support resilience. Predictive algorithms can identify potential equipment failures before they cause outages, dynamically reroute traffic during incidents, and allocate bandwidth to critical services during periods of high demand.
For energy networks managing the transition to distributed generation, or transport systems coordinating real-time signalling across thousands of nodes, this capability is operationally significant. The networks that underpin national infrastructure are becoming too complex and too interdependent for static, manual management.
Deploying AI in critical infrastructure is not without risk.
Machine learning models can be poisoned by adversaries who understand the training data. Automated response systems can be manipulated into taking actions that degrade rather than protect network integrity. And the opacity of some AI decision-making creates governance challenges for organisations subject to regulatory scrutiny.
These risks are amplified in OT environments where the consequences of a wrong decision are physical, not just digital. An AI system that misclassifies a legitimate control command as malicious and blocks it could disrupt industrial processes with real-world safety implications. This is not a hypothetical risk; it is a design challenge that requires human oversight built into every automated response workflow.
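One way to express that design principle in code is a response gate that never auto-blocks in OT, regardless of model confidence. The event fields, segment names, and threshold below are hypothetical, a sketch of the pattern rather than any particular vendor's workflow.

```python
# Minimal sketch of a human-in-the-loop response gate: automated blocking
# is only permitted outside OT; OT verdicts always go to an analyst.
# Segment names, event fields, and the threshold are illustrative assumptions.
OT_SEGMENTS = {"scada", "ics", "safety"}

def decide_response(event, model_score, auto_block_threshold=0.99):
    """Return an action for a flagged command; never auto-block in OT."""
    if event["segment"] in OT_SEGMENTS:
        # Physical consequences: a human must confirm before any block.
        return "queue_for_analyst"
    if model_score >= auto_block_threshold:
        return "auto_block"
    return "alert_only"

# A flagged SCADA command is routed to a human even at high confidence.
action = decide_response({"segment": "scada"}, model_score=0.999)
# -> "queue_for_analyst"
```

The gate trades response speed for safety exactly where the cost of a false positive is physical, which is the point: the automation decides where a human must decide.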
This is where the conversation shifts from technology to people.
Implementing AI in CNI networks requires professionals who understand both the technology and the operational context. Data scientists who can build models for industrial network traffic. Security engineers who can evaluate adversarial risks to machine learning systems. OT specialists who can validate that automated responses are safe in physical environments.
These hybrid skill sets are rare. The ISC2 2025 workforce study found that 95% of organisations report unmet cybersecurity skills needs, with skills gaps now overtaking headcount shortages as the primary workforce concern. In CNI, where clearance requirements further constrain the talent pool, the challenge is even more acute.
The professionals who can sit at the intersection of AI, security, and operational technology are not browsing job boards. They are embedded in defence programmes, intelligence agencies, and a handful of forward-thinking infrastructure operators. Reaching them requires specialist networks and a credible understanding of the work they do.
Organisations that want to deploy AI effectively in their critical networks need to invest in the people who can make it work safely, not just the technology itself.
That means building recruitment strategies around roles that do not yet have established talent pipelines. It means being realistic about the rarity of these skill sets and competing on mission, technical challenge, and career development rather than salary alone. And it means working with search partners who understand both the technology landscape and the security-cleared talent market.
The organisations that move first will secure the talent. The ones that treat AI deployment as a procurement exercise, without a corresponding people strategy, will find themselves with powerful tools and no one qualified to operate them safely.