If you have a cyber risk quantification (CRQ) programme at all in 2026, you are ahead of about seventy per cent of Australian organisations. The FAIR Institute's most recent State of Cyber Risk Management report puts FAIR awareness at 99% but actual adoption at 24%, with another 22% planning to start. The category is finally crossing the chasm.
But underneath that progress sits a quieter problem that very few CRQ vendors will say out loud.
The Monte Carlo simulations, the loss-event-frequency curves, the Annual Loss Expectancy figures going onto the board pack — almost all of it is calculated on the half of the environment that the scanners can see. Laptops. Servers. Cloud workloads. Identity. SaaS.
The other half — the operational technology, the air-gapped enclaves, the legacy systems running on operating systems that haven't shipped a patch since 2017, the third-party vendors you can't put an agent on, the medical devices, the embedded controllers — that half gets a heatmap. Maybe a workshop once a year. Maybe a column in a spreadsheet.
That's the problem this post is about: the unscannable asset blind spot. And it is the reason a lot of FY27 CRQ programmes are going to deliver a mathematically rigorous answer to half the question.
Critical infrastructure operators face a paradox — the assets that carry the highest consequence of compromise are the ones with the worst evidence base.
"Unscannable" isn't a vendor category. It's a description of any asset where active scanning, agent deployment, or routine telemetry collection is unsafe, prohibited, or impractical. In our experience consulting on critical infrastructure environments, six categories show up over and over.
| Asset category | Why scanners stop | Where you find them |
|---|---|---|
| OT / SCADA | Active scanning can disrupt safety-critical control loops. Vendor warranty often voids on non-approved tooling. | Power, water, gas, ports, manufacturing |
| Air-gapped networks | No network path to the scanner. By design. | Defence, intelligence, classified research |
| Legacy / end-of-life | Cannot run a modern agent. Patch chain ended years ago. | Healthcare, government, utilities, banking core |
| Embedded / IoT / medical | Resource-constrained hardware. No agent footprint available. | Hospitals, smart buildings, transport |
| Third-party / vendor | No admin access. No agent rights. Their estate, your risk. | Supply chain, MSPs, outsourced services |
| Pre-production / new build | The asset doesn't exist yet. Risk needs to be modelled at design time. | Greenfield infrastructure, M&A, new product lines |
If your organisation is in critical infrastructure, healthcare, defence, manufacturing or anywhere else with a serious operational technology footprint, somewhere between thirty and sixty per cent of your consequence-weighted attack surface lives in this list. The percentage varies. The shape of the problem doesn't.
It's tempting to read this as the familiar "IT and OT teams don't talk to each other" story. It is bigger than that. It is a structural feature of how the modern CRQ market grew up.
The fastest-growing CRQ platforms — on both sides of the Pacific — are built around an automation thesis. Plug in your scanners, your CMDB, your cloud security posture tool, your endpoint telemetry, and the platform produces an Annual Loss Expectancy figure with confidence intervals. The pitch is "real-time, data-driven, low-touch." And for the assets those tools can see, the pitch is real.
The trouble is that the same architecture, by design, has nothing to say about the other half of the environment. No telemetry in, no model out. The unscannable estate is invisible to the very platforms that are now winning Series-C rounds on the promise of measuring it.
If your CRQ tool requires telemetry and your highest-consequence assets can't safely produce telemetry, you don't have a CRQ programme for those assets. You have a CRQ programme that pretends those assets aren't there.
Boards and regulators are increasingly able to spot the gap. Under SOCI's enhanced CIRMP rules — in consultation through May 2026, with penalties already lifted to $3.3 million per breach for corporations — board attestation is now the legal artefact. A board that signs off on an ALE figure derived purely from the IT estate is signing off on a partial picture. APRA's CPS 234 reviewers, ASIC's cyber disclosure work, and the DORA examiners in the EU are all heading in the same direction.
To make the gap concrete, here is a stylised — but realistic — portfolio ALE breakdown for an Australian water utility we modelled this year. Numbers are illustrative and rounded. The shape is the point.
The OT estate, the third-party portal access and the legacy billing system together carry roughly $10.6 million of ALE, almost three times the figure for the scannable IT estate. Those numbers are noisier and their confidence intervals are wider, but they are the numbers that drive the actual loss distribution. They are also the numbers most likely to be replaced with a red, amber or green dot in the board pack.
The first instinct, when a CRQ vendor is shown this gap, is to extend the telemetry. Bigger agents. Passive sensors on the OT network. Bring in a specialist OT scanning tool. Negotiate access with the third party. All worth doing, none of it sufficient.
Active scanning of an in-service control loop is one of the small set of things that can put people in physical danger. The trade-off is not a budget question; it is a duty-of-care question.
Many OT vendors void support if non-approved tooling is deployed on their kit. Until that changes industry-wide, telemetry coverage is bounded by what the OEM permits.
For some classified or safety-critical systems, the absence of a network path is the security control. Putting a scanner in there is the failure mode, not the goal.
If you are quantifying risk on a system that doesn't exist yet — greenfield infrastructure, an acquisition target, a new product line — there is no telemetry, by definition. Risk needs to be modelled, not measured.
The conclusion most experienced OT and risk leaders eventually reach is uncomfortable but accurate. For a meaningful slice of the most consequential assets in the country, telemetry will never be the answer. Something else has to fill the gap, and it has to produce numbers in the same currency as the IT-side analysis — an Annual Loss Expectancy in dollars, with a confidence interval, that a board can sign and a regulator can interrogate.
The good news is that the methodology for quantifying risk on assets you can't scan is not new. Defence agencies have been doing it for decades, using structured expert elicitation and Monte Carlo simulation. The Australian Defence community uses a version of this for classified estates today. Insurance underwriters have been doing a more commercial version of it since well before the cyber line of business existed. The maths is mature.
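To make that concrete, here is a minimal sketch in Python of the first step in a FAIR-style elicitation: take an expert's 90% confidence interval for single-event loss and fit a lognormal distribution to it, so the estimate can feed a simulation later. The dollar range and the asset are invented for illustration, not taken from any case above.

```python
import math
from scipy import stats

def lognormal_from_90ci(low, high):
    """Fit a lognormal to an expert's 90% confidence interval.

    The 5th and 95th percentiles of a lognormal sit at mu -/+ 1.645 * sigma
    on the underlying normal (log) scale, which pins down both parameters.
    """
    z = stats.norm.ppf(0.95)                     # ~1.645
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return stats.lognorm(s=sigma, scale=math.exp(mu))

# Illustrative elicited range for single-event loss on a legacy billing system
loss_magnitude = lognormal_from_90ci(200_000, 4_000_000)
print(f"Median single-event loss: ${loss_magnitude.median():,.0f}")
print(f"Mean single-event loss:   ${loss_magnitude.mean():,.0f}")
```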
What is new is the operational scaffolding around it — the parts that make it usable inside a working CISO or CRO function rather than as a one-off six-figure consulting engagement.
What does it take to make this second track, expert elicitation running alongside the telemetry-driven analysis, work in practice? Five things, in our experience.
Generic attack trees are too abstract. The model needs to be specific enough that an OT engineer, a control systems vendor, and a red-team lead can disagree productively. MITRE ATT&CK for ICS, supplier-specific attack patterns, and named threat actors give the discussion something to bite.
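For illustration only, a scenario record specific enough to argue about might look like the sketch below; the asset name, technique description, actor profile and question wording are hypothetical, not drawn from a real engagement.

```python
from dataclasses import dataclass

@dataclass
class ScenarioQuestion:
    """One elicitation question, pinned to a specific asset and technique."""
    asset: str            # the named system, not "the OT estate"
    technique: str        # e.g. a MITRE ATT&CK for ICS technique, by name
    threat_actor: str     # the named actor or profile the experts are asked about
    question: str         # the exact probability being elicited
    horizon_months: int   # forecast window, so answers are comparable

# Hypothetical example: concrete enough for an OT engineer, a control systems
# vendor and a red-team lead to disagree productively.
q = ScenarioQuestion(
    asset="Pump station RTU network, Site 14",
    technique="Unauthorised command message to the RTU",
    threat_actor="Financially motivated ransomware affiliate",
    question=("Probability this technique succeeds at least once, "
              "given the actor already has a foothold on the corporate IT network"),
    horizon_months=12,
)
```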
An expert's seniority is not the same as their forecasting accuracy. Tracking how often a contributor's probability estimates have matched real-world outcomes — over time, over many questions — lets you weight their input rather than averaging everyone equally. This is the single biggest lift in input quality.
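A minimal sketch of one way to do that weighting, assuming you have each contributor's past probability forecasts and the binary outcomes that followed: score each track record with a Brier score, then normalise the inverted scores into weights. All of the numbers below are made up.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; forecasting 0.5 forever scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative track records: (past probability estimates, what actually happened)
history = {
    "ot_engineer":   ([0.10, 0.70, 0.20, 0.05], [0, 1, 0, 0]),
    "vendor_sme":    ([0.50, 0.50, 0.50, 0.50], [0, 1, 0, 0]),
    "red_team_lead": ([0.30, 0.90, 0.10, 0.20], [0, 1, 0, 1]),
}

scores = {name: brier_score(f, o) for name, (f, o) in history.items()}

# Convert scores to weights: a better (lower) Brier score earns a higher weight.
inverted = {name: 1.0 / (s + 1e-6) for name, s in scores.items()}
total = sum(inverted.values())
weights = {name: v / total for name, v in inverted.items()}

for name in history:
    print(f"{name:14s} Brier={scores[name]:.3f} weight={weights[name]:.2f}")
```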
Three experts giving you 0.05, 0.30 and 0.65 for the same probability isn't a problem; it's the data. The aggregation method needs to honour the disagreement instead of averaging it away, and it needs to be auditable. The Defence community calls this structured expert elicitation; the forecasting community reaches for pooling methods and prediction markets. Different labels, same problem.
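One aggregation that keeps the spread is a linear opinion pool over the experts' full distributions: each simulation trial draws from one expert's estimate, chosen in proportion to their weight, so between-expert disagreement carries through as extra variance rather than being averaged away. The sketch below uses invented frequency intervals and weights, and it is one defensible pooling choice among several, not the method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative: three experts' 90% intervals for loss events per year on the
# same asset. The disagreement is the data, so all three are kept.
experts = {
    "ot_engineer":   (0.05, 0.5),
    "vendor_sme":    (0.2,  1.0),
    "red_team_lead": (0.5,  3.0),
}
weights = np.array([0.5, 0.2, 0.3])   # e.g. calibration-derived weights

def lognormal_params(low, high, z=1.6449):
    """Parameters of the underlying normal for a lognormal with this 90% interval."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return mu, sigma

params = [lognormal_params(lo, hi) for lo, hi in experts.values()]

# Linear opinion pool: each trial draws from one expert's distribution, chosen
# in proportion to their weight, so the pooled sample keeps the spread.
n = 100_000
choice = rng.choice(len(params), size=n, p=weights)
mus = np.array([p[0] for p in params])[choice]
sigmas = np.array([p[1] for p in params])[choice]
pooled = rng.lognormal(mean=mus, sigma=sigmas)

print("Pooled 5th-95th percentile, events/year:",
      np.round(np.percentile(pooled, [5, 95]), 2))
```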
The point of running expert elicitation through Monte Carlo is to produce ALE in dollars — not a different scoring system, not a parallel heatmap. If the unscannable estate ends up on a separate page in a different unit, the gap reasserts itself.
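Downstream, the pooled inputs feed the same Monte Carlo machinery as the IT-side analysis. A minimal sketch, assuming a Poisson event frequency and a lognormal single-event loss, both with invented parameters: the output is an ALE figure and a percentile band in dollars, so the unscannable asset lands in the same currency as everything else.

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 50_000

# Illustrative inputs for one unscannable asset (e.g. a legacy billing system):
# expected loss events per year, and lognormal parameters for loss per event.
events_per_year = 0.4              # pooled frequency estimate
loss_mu, loss_sigma = 13.0, 1.2    # log-scale params (~$440k median per event)

# Simulate one year many times: draw an event count, then a loss for each event.
event_counts = rng.poisson(events_per_year, size=trials)
annual_loss = np.array([
    rng.lognormal(loss_mu, loss_sigma, size=k).sum() if k else 0.0
    for k in event_counts
])

ale = annual_loss.mean()
p5, p95 = np.percentile(annual_loss, [5, 95])
print(f"ALE: ${ale:,.0f}  (90% band: ${p5:,.0f} to ${p95:,.0f})")
```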
The single biggest lesson from defence-grade risk-quantification frameworks is that the methodology was usually sound and the tools were rejected for being manual and cumbersome. If a control systems engineer cannot give you their probability estimates in under five minutes a fortnight, the programme will quietly die. User friction is the dominant product risk, not feature gaps.
Whether you build, buy or extend, putting those same five points to your vendor, your integrator or your own team as questions will surface the unscannable-asset gap quickly.
The Australian CRQ market in 2026 is at an inflection point. Awareness of FAIR is universal, adoption is rising fast, and boards are signing quantified attestations under penalty for the first time. That is real progress and worth defending.
But the next maturity step is not about better Monte Carlo simulation. The maths is already good enough. It is about closing the evidence gap on the half of the environment that the scanners cannot touch — the OT estate, the air-gapped enclaves, the legacy systems, the third parties, the pre-production builds. The category that has the highest consequence of compromise and the worst evidence base.
That is not a tooling category that exists yet at scale in Australia. It is the category that the next generation of CRQ programmes will be judged on. The boards already know it.
The CISOs who build it into their FY27 programme go into the SOCI attestation, the APRA review, the cyber insurance renewal and the budget conversation with a number that survives scrutiny. The ones who don't will be defending a heatmap on the most consequential assets in the country.
Math over vibes — on the half of the environment your scanners can see and the half they can't.
Book a working session and walk away with a defensible ALE for the assets your scanners can't see.