
Most enterprise security teams are not running blind. They have scanners deployed, a vulnerability management programme with defined owners, and some form of regular testing cadence. By conventional measures, the programme is active. And yet the question that matters most — whether exploitable exposure is actually declining — often goes unanswered. Remediation queues grow. Findings age in backlogs. When something goes wrong, the exposure that enabled it had usually been sitting in a report for months.
The problem is not activity. It is structure: the absence of a programme architecture that connects discovery to confirmed exploitability to remediation at the speed the threat environment demands. Scanners produce findings. Findings produce tickets. Tickets close, or they do not. But none of that answers the question that security leadership is ultimately accountable for.
Today, Hadrian is publishing the Hadrian External Exposure Maturity Model — a framework built to give security leaders a precise language for where their programme is today, what is structurally holding it there, and what has to change to advance.
What the model is — and what it is not
The Hadrian External Exposure Maturity Model is not a product pitch and it is not a compliance checklist. It is a diagnostic framework built from real programme patterns across enterprise environments at every maturity level. It maps four distinct operating postures, each defined not by which tools are present but by how an organisation discovers exposure, confirms it is real, and closes it before an attacker can act.
The four stages are Stage 1: Running Blind, Stage 2: Better Scope, More Hope, Stage 3: Connecting the Dots, and Stage 4: Clear Picture. Each is a recognisable portrait of how security programmes actually operate, not how they are documented in policy. The model scores programmes across seven operational dimensions independently, which matters because overall stage averages conceal the specific bottleneck that is suppressing performance across everything else.
A programme that averages Stage 3 but has Stage 1 validation is not a Stage 3 programme. It is a programme with a Stage 1 bottleneck that sets the ceiling for remediation speed, board reporting, and every other dimension above it. Most internal assessments miss this because they score overall posture rather than isolating the weakest operational link.
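The weakest-link point above can be sketched in a few lines of code. This is an illustrative sketch, not the model's actual scoring logic; the dimension names and scores are hypothetical, chosen to mirror the Stage 3 average / Stage 1 validation example.

```python
def effective_stage(dimension_scores: dict) -> dict:
    """Score a programme by its weakest dimension, not its average."""
    bottleneck = min(dimension_scores, key=dimension_scores.get)
    return {
        "average_stage": round(sum(dimension_scores.values()) / len(dimension_scores), 1),
        "effective_stage": dimension_scores[bottleneck],
        "bottleneck": bottleneck,
    }

# Hypothetical dimension profile: strong everywhere except validation.
profile = {
    "discovery": 3,
    "validation": 1,
    "remediation": 3,
    "reporting": 3,
}
print(effective_stage(profile))
# The average reads as Stage 2.5; the effective stage is 1, gated by validation.
```

An overall average of 2.5 would suggest a mid-maturity programme, which is exactly the misreading the model is designed to prevent: the validation score of 1 is what governs everything downstream.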
The model is also explicitly aligned with the Gartner Continuous Threat Exposure Management (CTEM) framework. The five CTEM phases — Scope, Discover, Prioritise, Validate, and Mobilise — map directly to the maturity progression. Stage 1 and Stage 2 organisations are typically executing the early phases. Stage 3 and Stage 4 organisations have closed the loop through Validation and Mobilisation. The phase where most CTEM programmes stall is Validation: confirming that findings represent genuinely exploitable exposure rather than theoretical severity. That is the structural inflection point the model is built around.
The four stages at a glance
Stage 1 organisations are operationally reactive. Exposure surfaces after incidents, not before them. The security team knows it is missing things — it just does not know what, where, or how significant. Scanning happens, but the output is a list of severities that no one fully trusts and that grows faster than anyone can work through. The structural absence is not tooling; it is ownership. Nobody is accountable for the completeness of the external asset inventory, so nothing gets treated as urgent until something breaks.
Stage 2 represents genuine progress. There is a formal programme, regular testing, and structured ownership. The External Attack Surface Management (EASM) deployment has expanded coverage. The CISO can produce a posture report without scrambling. The name captures both the progress and the limitation: exploitability is still assumed from severity scores rather than confirmed through testing. Quarterly penetration tests were designed for environments that changed quarterly. Most enterprise environments today change daily. The programme is structured; it is not yet validated.
Stage 3 is the operational inflection point. Validation is no longer periodic — it is continuous. Blind windows between test cycles have largely closed. Exploitability is confirmed before findings are escalated, not inferred from CVSS scores. Remediation workflows route validated findings automatically. The programme is no longer cataloguing weaknesses in isolation; it is assembling them into a picture of how an attacker might actually move. The ceiling at this stage is orchestration throughput: connecting validated findings into a full attack picture at speed still requires meaningful human effort.
Stage 4, Clear Picture, is what exposure management looks like when all the structural constraints of earlier stages have been resolved. Discovery is continuous, validation is autonomous, and leadership reporting is derived directly from live programme data. When a new exploit is disclosed, the programme confirms within hours whether the surface is affected. Attack surface coverage exceeds 95%. Critical exposures close within 48 hours. The security team's attention shifts from triaging noise to the strategic questions that require human judgment.
{{cta-maturity-model}}
Why most programmes plateau — and where
The most common plateau is not at Stage 1. It is at Stage 2, and the reason is almost always the same: the validation bottleneck.
When confirming that a finding is genuinely exploitable requires a human investigation step, that step does not scale. Remediation queues fill with unvalidated findings. Teams default to informal heuristics — anything above a CVSS score of 8 gets treated as exploitable — because it is faster. It is also far less accurate, and it neither reduces the backlog nor improves confidence in what remains. The programme looks mature by activity metrics. The exploitable exposure is not going down.
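The gap between the two triage policies is easy to see in miniature. The findings, CVSS scores, and exploitability flags below are invented for illustration; the point is the shape of the error, not the numbers.

```python
# Hypothetical findings queue. "exploitable" stands in for the outcome
# of actual validation, which the heuristic never consults.
findings = [
    {"id": "F1", "cvss": 9.1, "exploitable": False},  # high score, dead end in practice
    {"id": "F2", "cvss": 6.5, "exploitable": True},   # medium score, confirmed exploitable
    {"id": "F3", "cvss": 8.2, "exploitable": True},
]

# Stage 2 heuristic: treat anything above CVSS 8 as exploitable.
heuristic_queue = [f["id"] for f in findings if f["cvss"] > 8]

# Stage 3 policy: escalate only findings confirmed exploitable by validation.
validated_queue = [f["id"] for f in findings if f["exploitable"]]

print(heuristic_queue)  # ['F1', 'F3'] -- wastes effort on F1 and misses F2 entirely
print(validated_queue)  # ['F2', 'F3']
```

The heuristic fails in both directions at once: it escalates a finding nobody can exploit and silently drops one an attacker can, which is why activity metrics can look healthy while exploitable exposure does not move.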
The data supports this pattern. 93% of organisations use vulnerability scanners. Only 40% have adopted automated penetration testing. High tool adoption has not translated into faster remediation because the limiting factor is not which tools are present. It is whether those tools operate within a structure that validates exploitability, integrates with remediation workflows, and measures outcomes rather than activity.
The threat environment has not waited for programmes to catch up. The average time between vulnerability disclosure and active exploitation has dropped from 32 days to 5. Nearly one in three exposures is now exploited on or before the day it is disclosed. Most enterprise security programmes are built around weekly triage cycles and quarterly testing. That gap between attacker velocity and programme maturity is not a future risk. It is happening now.
What advancement actually delivers
Progression through the stages is not abstract. Each transition produces measurable operational improvement. Moving from Stage 1 to Stage 2, attack surface coverage rises from below 20% to 40-60% of the actual external surface. True-positive rates improve from under 10% to 15-25%, meaning a far smaller share of investigated alerts turn out to be false positives. Mean time to remediation (MTTR) falls from 90-plus days toward 45-90 days. The security team stops spending most of its capacity on findings that turn out to be irrelevant.
The Stage 2 to Stage 3 transition delivers the most operationally significant change. Coverage reaches 75-90%. True-positive rates rise to 40-60%, meaning close to half of all findings are confirmed exploitable before anyone investigates them. MTTR compresses to 15-45 days. SLA compliance reaches 60-80%. The remediation queue gets smaller and more accurate: fewer total findings, higher confidence in each one, and a measurable reduction in exploitable exposure rather than just tickets closed.
By Stage 4, coverage exceeds 95%, MTTR falls below 7 days for all findings and below 48 hours for critical exposures, and when a new exploit is disclosed the programme answers within hours rather than days whether the surface is affected.
No organisation needs to reach Stage 4 for every asset. The right question is not how advanced the programme is in aggregate. It is whether the programme is operating at the level the organisation's risk profile requires.
How to assess your current stage
The maturity model is accompanied by an interactive self-assessment that scores your programme across all seven operational dimensions independently. The output is not an overall number — it is a dimension-level breakdown, a callout of the specific bottleneck suppressing performance across the rest of the programme, and a stage-specific action plan your team can act on directly. It takes five minutes.
If you already have a sense of your overall stage, the assessment is still worth completing. The dimension-level profile is what makes it useful in a leadership conversation: not "we are Stage 2" but "we are Stage 3 on discovery and Stage 1 on validation, and here is what closing that gap is worth in remediation time."
{{cta-demo}}