
When the air gap stopped existing

Sam Keogh

On 7 April 2026 the FBI, CISA, NSA, EPA, the US Department of Energy and US Cyber Command jointly issued advisory AA26-097A. Iranian-affiliated actors operating under the IRGC-CEC umbrella — the group also known as CyberAv3ngers, Hydro Kitten, Storm-0784 and UNC5691 — were actively disrupting US water, wastewater, energy and government services by reaching into internet-exposed Rockwell Automation/Allen-Bradley programmable logic controllers (PLCs). They were not exploiting an unpatched vulnerability. They were logging in with Rockwell’s own engineering software, Studio 5000 Logix Designer, against PLCs whose owners did not know they were on the public internet, then editing project files and tampering with HMI and SCADA displays.

Days later, Censys counted 5,219 internet-exposed Rockwell/Allen-Bradley devices worldwide — devices that fingerprint themselves to the public internet over EtherNet/IP, ready to accept that “engineering” connection. Spain, Taiwan and Italy carry the largest non-US populations. The number for Australia was not zero.

- 5,219 exposed PLCs: Rockwell/Allen-Bradley devices reachable from the public internet (Censys, April 2026)
- 6 agencies, one joint advisory: FBI, CISA, NSA, EPA, DOE and US Cyber Command on a single PLC threat
- $3.3M proposed SOCI penalty: civil penalty for corporate non-compliance with a Ministerial direction (Australia, 2026)

Pause for a moment and ask the question your board will eventually ask:

What is the dollar cost — to us, to our customers, to our regulator — if a single one of those PLCs goes the wrong way on our network?

If you cannot answer that with a number, a confidence interval and a defensible model, you have a tooling problem. You are also not alone. Most cyber risk quantification (CRQ) platforms cannot answer it either, because they assume the asset can be scanned. PLCs cannot be scanned. That is the entire problem.


Why your scanner is not allowed near the most expensive assets you own

Active vulnerability scanning is not safe to run against operational technology. It is not “we should be careful with it” — it is “do not do it.” Vendors prohibit it in their hardening guides. OT engineers will physically unplug the scanner if they catch it. The reasons are not theoretical:

Turbines: a scan packet sent to a PLC running a turbine governor can crash the governor.

Water: a credential-stuffing scan against a treatment HMI can lock out the operator at exactly the moment a chlorine pump must be re-engaged.

Legacy: an agent installer running on an end-of-life Windows XP HMI can blue-screen the supervisory layer that controls the safety interlock.

So OT teams do the only sensible thing: they keep scanners out. The result is a paradox that defines the modern critical-infrastructure attack surface — the most consequential systems on the network are the ones with the worst risk data.

The same paradox shows up everywhere unscannable assets cluster:

| Asset class | Why scanners can't reach | Who runs them |
| --- | --- | --- |
| OT / SCADA / PLCs | Active probing risks operational disruption and safety | Power, water, gas, manufacturing, ports, transport |
| Air-gapped networks | No network path for an agent or scanner | Defence, intelligence, classified research |
| Legacy / end-of-life systems | Cannot run a modern agent; unsupported OS | Healthcare, government, utilities |
| Vendor-managed third parties | No admin access to install or scan | Outsourced IT, supply chain, cloud-hosted vendors |
| Embedded IoT and medical devices | Resource-constrained; no agent support | Hospitals, smart buildings, smart grids |

Across every one of those categories, the playbook of “scan the asset, import the CVE list, run Monte Carlo on a CVSS-derived likelihood” is unavailable. And these are the assets — water, energy, hospitals, defence — that regulators, insurers and boards now require you to quantify.

[Figure: Where the scanner reaches — and where it doesn't. Scannable (standard FAIR-style CRQ works here): cloud workloads, corporate IT endpoints, SaaS platforms, public-facing web. Unscannable: OT/SCADA/PLCs, air-gapped and classified networks, legacy end-of-life systems, vendor-managed equipment, embedded and medical devices. The unscannable side is where consequence is highest and risk data is worst.]

The CRQ industry is quietly splitting in two

Cyber risk quantification grew up serving the IT estate. Its base assumption is that a vulnerability scan, an agent, or a NetFlow feed can deliver enough telemetry to estimate threat-event frequency, vulnerability and loss magnitude. Plug those into FAIR, run the simulation, get an Annual Loss Expectancy (ALE) in dollars. That works beautifully for a cloud workload. It works poorly for a 1990s Modbus PLC running a pump.

Two responses are emerging:

Response one — scan harder

Lightweight passive collectors, NetFlow analysis, anomaly-based discovery. Helpful, but it cannot tell you the probability that a specific Iranian-aligned actor exploits Studio 5000’s legitimate engineering protocol against your specific deployment in the next twelve months. There is no scan output that contains that number.

Response two — elicit the data instead

Get the OT engineer, the threat-intel analyst, the incident-responder and the asset owner into a structured model. Have each of them place a calibrated estimate on each step in the attack path. Track who is right over time. Weight future contributions accordingly. Run the Monte Carlo over their distributions, not over a CVSS lookup table.

That second response is what we mean when we say expert consensus — and it is how you get a defensible dollar figure on something a scanner has never touched.
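In code, that elicit-and-weight loop can be sketched as a calibration-weighted mixture of expert distributions. The sketch below is a minimal illustration under stated assumptions: the expert names, their (low, most-likely, high) probability ranges for one attack-path step, and the calibration weights are all hypothetical, standing in for the track-record scoring a real platform would maintain.

```python
import random

# Hypothetical inputs for one attack-path step: "the actor reaches the
# engineering workstation via the vendor remote-access path". Each expert
# gives a calibrated (low, most-likely, high) probability range; the
# weight reflects their historical forecasting track record. All names,
# ranges and weights are illustrative assumptions, not real data.
EXPERTS = [
    # (name, low, mode, high, calibration_weight)
    ("ot_engineer",   0.05, 0.15, 0.40, 0.50),  # best track record so far
    ("threat_intel",  0.10, 0.30, 0.60, 0.30),
    ("incident_resp", 0.02, 0.10, 0.25, 0.20),
]

def sample_step_probability(rng):
    """Pick an expert in proportion to calibration weight, then sample
    their triangular distribution: a weighted mixture of opinions."""
    _, low, mode, high, _ = rng.choices(
        EXPERTS, weights=[e[4] for e in EXPERTS], k=1)[0]
    return rng.triangular(low, high, mode)

def consensus(n=100_000, seed=1):
    """Monte Carlo over the mixture; return median, p90 and p99."""
    rng = random.Random(seed)
    draws = sorted(sample_step_probability(rng) for _ in range(n))
    return draws[n // 2], draws[int(n * 0.90)], draws[int(n * 0.99)]

median, p90, p99 = consensus()
print(f"step probability: median={median:.2f} p90={p90:.2f} p99={p99:.2f}")
```

As experts' estimates are scored against outcomes over time, the weights shift, and the mixture tightens around the people who have earned the right to move the number.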


Translating the PLC advisory into a board-grade number

Let’s run the AA26-097A scenario the way a board would want to see it. Imagine an Australian water utility — a SOCI-regulated responsible entity — whose treatment plant is supervised by a Studio 5000 deployment that is, unbeknownst to the network team, reachable from a poorly segmented vendor remote-access path. We are not picking on anyone real; the numbers below are illustrative and intended to show the shape of a defensible model rather than to forecast a specific entity.

| FAIR factor | Where the input comes from |
| --- | --- |
| Threat Event Frequency (TEF) | Public CTI plus the CISA advisory establishes that an active campaign exists. Expert estimates range from "twice a year" to "fortnightly" probing — calibrated against historical CyberAv3ngers tempo, including the November 2023 Unitronics campaign that compromised at least 75 PLCs. |
| Probability of Action / Vulnerability | Function of how exposed the engineering protocol is, whether default credentials are in play, and how segmented the vendor path is. OT engineers and pen-testers calibrate this directly. |
| Loss Event Frequency (LEF) | TEF × Probability of Action. |
| Primary Loss | Plant outage hours × cost per outage hour. Cost-per-hour comes from the entity's own production figures and customer SLAs. |
| Secondary Loss | Regulatory penalty (under the proposed SOCI penalty regime, civil penalties for failure to comply with a Ministerial direction rise to 2,000 penalty units / $660,000 for individuals and 10,000 penalty units / $3.3 million for corporations). Plus customer notification, legal, board response and reputational impact. |
| Annual Loss Expectancy (ALE) | Monte Carlo over the above, expressed as a distribution: median, 90th-percentile and 99th-percentile dollar values. |
[Figure: Illustrative ALE distribution for the PLC-compromise scenario — a right-skewed curve with three values marked: median $2.1M (risk register), 90th percentile $8.4M (control investment), 99th percentile $23.6M (insurance limit). Illustrative only, not a forecast for any specific entity.]

A board does not want one number. A board wants three: expected, bad case, worst-credible case. The 90th percentile is the number that drives control investment. The 99th percentile is the number that drives the cyber insurance limit. The median is the number that goes on the risk register. A heatmap cell that reads “high / red” survives none of those conversations.
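The mechanics behind those three numbers fit in a few dozen lines of Monte Carlo. The sketch below mirrors the FAIR table above using only the Python standard library; every distribution range is an illustrative assumption chosen to match the scenario's shape, not data from any real entity.

```python
import random

def simulate_ale(n=200_000, seed=7):
    """Monte Carlo over the FAIR factors for the PLC-compromise
    scenario. All ranges below are illustrative assumptions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        # Threat Event Frequency: "twice a year" to "fortnightly" probing
        tef = rng.triangular(2, 26, 8)               # events per year
        # Probability of Action: exposure, credentials, segmentation
        p_action = rng.triangular(0.01, 0.20, 0.05)
        # Loss Event Frequency = TEF x Probability of Action
        lef = tef * p_action                         # loss events per year
        # Primary loss: plant outage hours x cost per outage hour
        outage_hours = rng.triangular(4, 96, 24)
        cost_per_hour = rng.triangular(20_000, 150_000, 50_000)
        primary = outage_hours * cost_per_hour
        # Secondary loss: regulatory penalty (up to the proposed $3.3M
        # corporate maximum) plus notification, legal and board response
        secondary = (rng.triangular(0, 3_300_000, 500_000)
                     + rng.triangular(100_000, 2_000_000, 400_000))
        losses.append(lef * (primary + secondary))
    losses.sort()
    return {
        "median": losses[n // 2],
        "p90": losses[int(n * 0.90)],
        "p99": losses[int(n * 0.99)],
    }

ale = simulate_ale()
print({k: f"${v / 1e6:.1f}M" for k, v in ale.items()})
```

Swapping any input for a better-calibrated expert range changes the output distribution immediately, which is exactly the property a board-grade model needs: the argument moves from "what colour is this risk?" to "which input do you dispute?".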


Why this matters in Australia, right now

Three Australian timelines collided in April 2026.

SOCI penalty escalation. Following the s.60G independent review (Nov 2025–Jan 2026), the civil penalty for failing to comply with a Ministerial direction is rising to 2,000 penalty units ($660,000) for individuals and 10,000 penalty units ($3.3 million) for corporations.

CIRMP rule enhancements. The Department of Home Affairs’ consultation on the Exposure Draft of the enhanced Critical Infrastructure Risk Management Program (CIRMP) Rules closes on 1 May 2026. The Department received over 60 submissions in the December 2025–February 2026 round. The proposed direction is unmistakable: CIRMP is moving from “described risk” to “demonstrated risk”.

Board attestation. A CIRMP is, by design, a board-approved framework, with an annual report on its effectiveness due to the board within ninety days of the entity’s financial year end. The first time a board-attested CIRMP is challenged in a Federal Court enforcement action — and that day is coming — the entity will need to produce evidence that “effective” was a quantitative claim, not a colour.

If the directors of a SOCI-regulated entity sign an attestation in the July–September 2028 window saying their cyber risk management is “effective”, they will eventually be asked what effective meant in dollars. “We had a heatmap” is not a defence.


Five things to do this month

1. Inventory your unscannable assets. Most organisations underestimate the size of this list by a factor of three to five. Include OT and SCADA, air-gapped lab networks, legacy systems your scanners skip, vendor-managed equipment in your DMZ, and embedded devices in your buildings. Even an incomplete list is a starting point for prioritisation.

2. Pick one canonical scenario and model it end-to-end. Choose the asset whose compromise would cost you the most, not the asset that is loudest in the SIEM. Walk a small group of experts — OT, security, risk, finance — through the threat event, the kill chain, the operational impact and the financial impact. Document the ranges, not point estimates.

3. Convert one risk to a defensible dollar number. Take one item off the top of your existing risk register and force it through a quantitative model. ALE, with 90th and 99th percentile bands. The first one will feel uncomfortable. That discomfort is the point — it is the difference between assertion and evidence.

4. Stress-test your board attestation language. Read your most recent CIRMP attestation, or its equivalent if you are not SOCI-regulated. If the regulator asked "what does effective mean here, in dollars?", could you answer? If not, that is the work for FY27.

5. Pressure-test your CRQ tooling against the unscannable case. If you already use a CRQ platform, ask it to model AA26-097A against your environment. If the answer is "we do not have telemetry for those assets", the platform is doing exactly half the job. The other half — quantifying what the scanners cannot reach — is solvable, but only with structured expert elicitation, not with more telemetry.


The bottom line

The April 2026 PLC advisory is not an outlier. The CyberAv3ngers tempo from 2023 to 2026 makes it clear: state-aligned actors are now industrialising attacks on the most poorly instrumented part of the modern enterprise. Aerospace has a proverb for this: where the data is worst, the consequences are worst. Critical infrastructure is now living it.

Cyber risk in 2026 is increasingly carried by assets your tools cannot see. The CRQ market is splitting accordingly. One half is automating FAIR over what can be scanned. The other half — the half that matters for SOCI, for cyber insurance pricing, for board attestation, and for the conversation that follows the next AA26-097A — is quantifying what cannot be scanned.

Boards, regulators and insurers are converging on the same demand: a defensible dollar figure, with a confidence interval, that holds up under cross-examination. Not a colour. The maths is not the hard part. The hard part is admitting that for the most consequential third of the asset base, the inputs to the maths come from people, not telemetry — and that the quality of those inputs is now your single biggest source of model risk.

If your CRQ programme cannot put a number on a Rockwell PLC, the board you report to is unprotected against the very threat the FBI, CISA, NSA, EPA, DOE and CYBERCOM jointly named six weeks ago. That gap will close one way or another. The only question is whether it closes with you, or with your regulator.
