
The Control Fixation Reflex

Why the cybersecurity industry can't stop counting controls — and what it has stopped asking.

Bernhard Kreinz

Release day for any major framework — NIST CSF 2.0, ISO 27001:2022, CIS v8.1, PCI DSS 4.0 — follows a predictable script. Within hours, vendors publish "what's new" briefings. Within days, CISOs circulate diff tables to their teams. Within weeks, GRC platforms ship updated control libraries. Mappings get refreshed. Audit programs get amended. Training decks get rebuilt.

What almost no one does, on release day or any other day, is ask: which threats are these controls supposed to address, and how well do they address them?

This is the Control Fixation Reflex.

Reflex, Not Bias

A reflex, not a bias. Biases are reasoning errors you can in principle catch. Reflexes are autonomic — they fire before reasoning starts. The cybersecurity industry has trained itself, over thirty years of compliance evolution, to operate at the control layer as if it were the foundational layer. Threats appear in appendices. Controls appear in the main body. Practitioners spend their careers in the main body.

The reflex shows up on two surfaces — and then propagates outward into a cascade that makes it structurally inevitable.

Surface One: The Standards Bodies

Open any modern control framework and you will find a cross-mapping appendix: NIST ↔ ISO ↔ COBIT ↔ CIS ↔ PCI. The implicit message is rigor — we are at least as complete as the others, and here is the proof. But complete toward what?

Every framework in the mapping table is a control vocabulary. None of them sits on a threat taxonomy. Mapping between them is lateral translation between equivalent dictionaries — none of which contains the word for what the controls are supposed to be controlling against. More cross-references increase the appearance of rigor while propagating the same foundational gap. It is lateral motion mistaken for depth.

When a standards body updates a control catalog, the update is justified on internal grounds: alignment with peer frameworks, regulatory developments, "lessons learned" from incidents. It is rarely justified by reference to a threat model that exists outside the catalog itself. The catalogs are self-referential.

Surface Two: The Practitioners

Now watch what happens at the receiving end. A new release drops. The CISO's first question is "what changed?" The team produces a delta: new controls in green, modified controls in yellow, deprecated controls in red. The reflex isn't reading the changes — the reflex is treating the changes as the unit of analysis.

The question that should be asked — has the threat landscape shifted in a way that makes our existing control selection less effective? — cannot even be formed without a threat taxonomy that exists independently of the catalog. Without that anchor, "what's new" becomes the only available signal. Practitioners read deltas at the control layer because there is no other layer to read them at.

A new control is meaningful only if it addresses one of three things: a newly recognized threat, a previously underweighted threat, or a velocity-and-fitness gap against an existing one. Almost no release notes are written this way. Almost no CISO reads them this way.

How the Reflex Reproduces Itself

If the reflex stopped at standards bodies and CISOs, it would be merely irritating. It does not stop there. The reflex is reproduced and reinforced at every layer of the cybersecurity ecosystem.

Vendors mirror the buyers' taxonomy. Product datasheets say "covers NIST CSF DE.CM, PR.AC." They do not say "reduces residual risk against threats #1, #4, #7." Marketing inherits the language of the customer; the customer's language is control-shaped; vendor claims are therefore control-shaped. The market itself has no threat-efficacy signal — only control-coverage signals.

Auditors verify presence, not fitness. An audit report documents that a control exists and is operating. It does not document whether the control actually works against the threat it was supposedly selected to address. This is the distinction between existence and fitness, and audit methodologies are blind to it. Auditors could not measure fitness if they wanted to — there is no threat anchor to measure against.

GRC tooling enforces the reflex architecturally. The data models of Archer, ServiceNow GRC, OneTrust, and their peers are control-centric. Threats are at best a free-text field hanging off a control record. The software cannot represent "this control has a maximum cause-side detection effectiveness of 0.6 against threat #9 at velocity Δt<1m." The schema does not permit that thought, so practitioners using the schema do not have that thought.
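
To make the unrepresentable thought concrete, here is a minimal sketch of the kind of record a threat-anchored schema would have to hold. The field names, the 0.6 figure, and the cluster numbering are illustrative, lifted from the example above rather than from any real GRC product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatAnchoredControl:
    """A control record that carries its threat anchor with it (illustrative schema)."""
    control_id: str           # internal or framework control identifier
    threat_cluster: int       # TLCTC cluster number, 1..10
    side: str                 # bow-tie side: "cause" or "consequence"
    max_effectiveness: float  # upper bound on risk reduction against that cluster, 0.0..1.0
    velocity_fitness: str     # how fast the control acts, e.g. "<1m", "<1h", "days"

# The sentence the schema "does not permit" becomes an ordinary record:
mfa_vs_phishing = ThreatAnchoredControl(
    control_id="IAM-007-MFA",  # hypothetical identifier
    threat_cluster=9,          # Phishing, in the numbering used later in this post
    side="cause",
    max_effectiveness=0.6,
    velocity_fitness="<1m",
)
```

Nothing here is sophisticated; the point is that until a field like threat_cluster exists in the data model, fitness cannot even be stored, let alone queried.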

Maturity models multiply the error. CMMI, C2M2, NCSC CAF — all measure how well a control is implemented and operated, not what it actually controls against. Level 4 maturity of a control that addresses the wrong threat still leaves the threat fully exposed. Maturity is a control-internal metric. Without threat coverage, maturity scores are decorative.

Where the Reflex Causes the Most Damage

Board reporting completes the loop. Dashboards show "control coverage %" or "compliance %." The board hears "we are 87% compliant" and infers "we are 87% safe." There is no chain of reasoning from that number to residual risk against any specific threat. This is where the reflex causes its most consequential damage: strategic capital is allocated based on a metric that does not measure what it implies.
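
As a back-of-the-envelope illustration: the figures below are invented so that the aggregate lands on 87%, and the point is only that the same dataset gives very different answers depending on whether you query it at the catalog level or at the threat level.

```python
from collections import defaultdict

# Toy catalog: (control_id, implemented?, threat clusters it addresses).
# Every value here is invented for illustration.
catalog = [
    ("AC-01", True, [4]), ("AC-02", True, [4]), ("AC-03", True, [4]), ("AC-04", True, [4]),
    ("SC-01", True, [1]), ("SC-02", True, [1]), ("SC-03", True, [1]), ("SC-04", True, [1]),
    ("IR-01", True, [7]), ("IR-02", True, [7]), ("IR-03", True, [7]),
    ("BC-01", True, [2]), ("BC-02", True, [2]),
    ("PH-01", False, [9]), ("PH-02", False, [9]),  # the only anti-phishing controls, unimplemented
]

compliance = sum(ok for _, ok, _ in catalog) / len(catalog)
print(f"compliance: {compliance:.0%}")  # 87% -- the number the board sees

coverage = defaultdict(lambda: [0, 0])  # cluster -> [implemented, total]
for _, ok, clusters in catalog:
    for c in clusters:
        coverage[c][0] += ok
        coverage[c][1] += 1

for c, (done, total) in sorted(coverage.items()):
    print(f"cluster #{c}: {done}/{total} controls implemented")
# cluster #9 comes out 0/2 -- the aggregate percentage hides exactly this
```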

When New Threats Arrive

There is one more symptom worth naming, because it shows the reflex working in real time. When a new threat category emerges — AI/ML risk, supply chain compromise, post-quantum cryptography — the reflex does not ask which existing threat clusters the new threat falls into, or how it shifts their fitness profile. It invents new control families: "AI governance controls," "supply chain controls," "PQ readiness controls." Catalog inflation substitutes for taxonomic clarity. Every new threat becomes a new shelf of controls, indexed against no taxonomy at all.

The Cognitive Layer

The deepest layer of the reflex is linguistic. The vocabulary of cybersecurity has become control-shaped. Practitioners say "we need MFA" rather than "we need to reduce exposure to credential-acquisition threats — Phishing (#9), Identity Theft (#4), and the keylogger paths inside Malware (#7)." Controls have become the unit of cognition, not just the unit of action. The reflex is not only what people do on release day; it is what they can think the rest of the year.

What the Reflex Is Filling In For

The point of naming the reflex is not to embarrass the standards bodies, the CISOs, the vendors, the auditors, or the GRC tooling. They are all responding rationally to the absence of a coherent threat taxonomy. There isn't one — at least not in the foundational, cause-based, operationally usable form the field needs.

Heuristic threat lists exist. NIST SP 800-30. ISO 27005 Annex A. ENISA Threat Landscape. VERIS. They are not taxonomies, and they are not the operational anchor of the control catalogs. The asymmetry — threats in appendices, controls in main bodies — is the signature of the reflex. It is the shape of a discipline that has never had a diagnostic foundation and has spent decades getting very good at writing prescriptions anyway.

The Top Level Cyber Threat Clusters (TLCTC) framework exists to provide that foundation. Ten clusters. Cause-based. Mutually exclusive and exhaustive. Sitting underneath the controls, not next to them. With TLCTC in place, the questions the reflex bypasses become askable; a sketch of the catalog-update check follows the list:

  • Which threats does this control actually address?
  • What is its maximum effectiveness against those threats? Its fitness against their velocity? Its operational performance in our environment?
  • When the catalog updates, which threats are now better covered, and which are not?
  • When the board sees "87% compliant," what does that mean for residual risk against #4 versus #9?
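
For the catalog-update question in particular, the check becomes almost mechanical once each control carries a cluster anchor. A rough sketch, with invented control IDs and mappings:

```python
# Hypothetical control-to-cluster mappings before and after a catalog update.
old_catalog = {"AC-01": {4}, "SC-01": {1}, "IR-01": {7}}
new_catalog = {"AC-01": {4}, "SC-01": {1}, "IR-01": {7}, "AI-01": {7, 9}}

def clusters_covered(catalog: dict[str, set[int]]) -> set[int]:
    """All threat clusters addressed by at least one control in the catalog."""
    return set().union(*catalog.values()) if catalog else set()

better_covered = clusters_covered(new_catalog) - clusters_covered(old_catalog)
still_uncovered = set(range(1, 11)) - clusters_covered(new_catalog)

print("better covered:", sorted(better_covered))    # [9]
print("still uncovered:", sorted(still_uncovered))  # the clusters the update never touched
```

Release notes written against that kind of anchor would answer the question the delta tables cannot.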

These are not radical questions. They are the questions a coherent risk discipline would have asked from the beginning. The Control Fixation Reflex is what fills the silence where those questions should be.

About the Framework

The TLCTC (Top Level Cyber Threat Clusters) framework defines ten mutually exclusive, cause-oriented threat clusters anchored in a Bow-Tie risk model. Each cluster is bound to exactly one generic vulnerability, which is what makes it a taxonomy rather than a list. For the full definitions and integration patterns, see V2.1 at tlctc.net.