Blog / Policy Critique

The Control Fixation in the Security Properties

The G7's Software Bill of Materials for AI — Minimum Elements (Évian, 2026) proposes seven clusters to make AI supply chains transparent. Six describe what an AI system is made of. One — Security Properties — describes what an AI system should be defended with. Therein lies the problem.

Bernhard Kreinz
~12 min read

Document: SBOM for AI — Minimum Elements (G7, Feb 2026)
Publishers: BSI · ACN · ANSSI · CSE · CISA · NCSC · NCO · EU Commission
Framework: TLCTC v2.1

▶ Thesis

The Security Properties (SP) cluster fixates on controls. It enumerates encryption, access control, prompt-injection filters, adversarial-robustness training, and so on — but it never says which threat each control answers, never separates the System Risk Event (SRE) the control is meant to prevent from the Data Risk Event (DRE) it is meant to mitigate, and never binds the control to a specific element of the AI system (Model, Dataset, System, Infrastructure). The result is exactly what a Bill-of-Materials is supposed to eliminate: blurred security documentation that cannot deliver the assurance it promises.

01 · What the document actually proposes

The G7 document defines seven clusters (Figure 1 of the original). Six are compositional — they say what is inside an AI system:

  • Metadata — information about the SBOM document itself.
  • System Level Properties (SLP) — the AI system as a whole, its components, data flow, intended application.
  • Models — names, identifiers, hashes, training properties, lineage.
  • Datasets Properties (DP) — provenance, content, sensitivity, statistics.
  • Infrastructure — software dependencies, HBOM links.
  • Key Performance Indicators (KPI) — uptime, latency, plus "security metrics".

The seventh cluster is different. It does not describe an element; it describes a posture:

  • Security Properties (SP) — "the cybersecurity measures that apply to AI models and systems."

That single substitution — from what is in the system to what is done to defend it — is where the document changes register, and where its assurance value collapses.

02 · The Security Properties cluster, verbatim

Here is the full SP cluster as published. In the original layout the AI-specific controls are highlighted in purple; the rest are "general cybersecurity controls":

§2.6 · Security Properties (SP) — Security controls
  • encryption
  • data minimization
  • differential privacy techniques
  • access controls
  • API authentication
  • ip/op anomaly detection systems and filters
  • physical controls
  • technical controls
  • system-level controls (role-based access)
  • administrative measures
  • adversarial robustness training
  • prompt injection controls and tools for LLMs / LLM-based agents
  • input/output filters
  • data-level controls (to curate training data)

The remaining three elements of the cluster — Security compliance, Cybersecurity policy information, Vulnerability referencing — are pointers (to standards, to security.txt, to vulnerability databases). They tell you where assurance might exist; they do not contribute to it.

Now ask the one question a Bill-of-Materials must be able to answer: "Encryption — of what, against which threat, preventing which event, mitigating which consequence?" The SP cluster, as written, cannot answer any of those four.

03 · Three failures, one diagnosis

The Security Properties cluster fails in three independent, compounding ways. Each on its own would weaken assurance. Together they make the cluster cosmetic: it documents that a producer has thought about security without documenting what they thought.

FAILURE 01

No threat view. The cause side is missing.

Every control on the SP list is an answer. None of them states the question. "Encryption" — against an attacker who is where? In the path (#5 MitM)? On disk after physical seizure (#8 Physical)? Reading an over-shared bucket (#1 Abuse of Functions)? Each implies a different cipher, key custody, and threat model.

TLCTC fixes the threat side first: ten cause clusters, ten generic vulnerabilities, one cluster per attack step. The SP cluster begins from controls and never returns to causes — so a reader cannot tell which threat is covered, which is partly covered, and which is silently uncovered.

▶ Effect: controls become rituals, not countermeasures.

FAILURE 02

SRE and DRE are conflated. The bow-tie is folded.

In TLCTC's Cyber Bow-Tie the causal chain is cause cluster → SRE → DRE → BRE:

  Cause (Threat Cluster #1 … #10) → Central event (SRE: Loss of Control) → Tech outcome (DRE: C · I · Av · Ac) → Business outcome (BRE / Impact, cascading)

A control sits on an arrow, never on a node. Preventive controls reduce cause→SRE likelihood; mitigating controls reduce SRE→DRE or DRE→BRE impact. The SP cluster never declares which arrow each control sits on.
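The arrow placement the paragraph describes can be made concrete. Below is a minimal sketch, in Python, of a control record that must declare which bow-tie arrow it sits on; the field names and the two example declarations are illustrative assumptions, not part of TLCTC or the G7 schema.

```python
# Hypothetical sketch: a control declared on a bow-tie arrow, never on a node.
# Field names ("arrow", "threat_cluster", "element") are illustrative assumptions.

from dataclasses import dataclass

ARROWS = {"cause->SRE", "SRE->DRE", "DRE->BRE"}  # preventive vs. mitigating placement


@dataclass(frozen=True)
class Control:
    name: str            # e.g. "encryption"
    threat_cluster: int  # TLCTC cause cluster #1..#10 the control answers
    arrow: str           # which bow-tie arrow the control reduces
    element: str         # Model | Dataset | System | Infrastructure

    def __post_init__(self) -> None:
        # A control that names no arrow is not a falsifiable claim.
        if self.arrow not in ARROWS:
            raise ValueError(f"unknown bow-tie arrow: {self.arrow!r}")


# "Encryption", declared twice -- each declaration is a *different* claim:
at_rest = Control("encryption (disk, AES-256)", 8, "SRE->DRE", "Infrastructure")
in_path = Control("encryption (TLS 1.3)", 5, "cause->SRE", "System")
```

The point of the sketch is the validation step: a bare control name (the G7 style) cannot even be constructed, because the arrow field is mandatory.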

The KPI cluster makes the conflation explicit. It defines Security metrics as "robustness (resilience against third-party manipulation)" — collapsing actor, cause, central event, and consequence into a single mood-word. "Third-party manipulation" could be #10, #9, #4, or #1; the resulting "robustness" therefore measures nothing falsifiable.

▶ Effect: you cannot tell whether a control prevents compromise or limits damage — so you cannot tell whether you have prevention or limitation.

FAILURE 03

The cluster is scope-blind. Controls float free of elements.

The SLP, Models, DP, and Infrastructure clusters carefully distinguish four kinds of thing: the system as a whole, the model, the dataset, the infrastructure underneath. Each has its own attack surface, its own producer, its own lifecycle.

The SP cluster ignores all four. "Access controls" — on what? The model weights file (Models)? The fine-tuning corpus (DP)? The vector store (SLP)? The GPU node (Infrastructure)? Each binding implies different controls, different operators, different incident playbooks. Listing "access controls" once, unbound, means the producer can claim the property by having any one of them — exactly the gap a Bill-of-Materials is supposed to close.

A genuine SBOM-AI would cross-tabulate: every control declared against the element it scopes to and the cluster threat it addresses. The G7 document declines to.

▶ Effect: compliance is provable, coverage is not.

04 · Case study: prompt injection controls

The most consequential entry in the SP cluster is the AI-specific bullet "prompt injection controls and tools for LLMs or LLM-based agents." It is the only line that names a class of attack at all. It is therefore the best test of the cluster.

Test case · "prompt injection controls"
▶ What the SBOM-for-AI says

Producer declares the SP element "prompt injection controls and tools for LLMs or LLM-based agents". End of declaration.

A reader cannot tell: is the control a system-prompt hardening template, an output classifier, an in-context tool-call gate, an indirect-injection retriever scrubber, a guardrail model, or all of the above? Is it at the model level, the system level, the dataset level (training-time backdoor), or the infrastructure level (memory store)? Does it address direct prompts from the human or content fetched from third-party tools?

▶ What a TLCTC-shaped SP entry would say

"Prompt injection" is not one threat. It is at least three different TLCTC steps depending on the path:

  • #1 Direct injection of the user prompt. The instruction-following capability of the LLM is being abused as designed — no implementation flaw, no exploit code. Pure Abuse of Functions. The control must constrain or wrap the legitimate instruction surface. Element: Model + System.
  • #3 Indirect injection via retrieved content. The LLM is now in client role — it consumes external content (a fetched page, an email, a tool result) and is misled by it. This is structurally a client-side threat (Exploiting Client generic vulnerability: insufficient validation of consumed content). The control must filter, label, or isolate untrusted content. Element: System Data Flow + Model.
  • #10 Injection via a trusted MCP / tool / plugin provider. The third-party feed is honoured as authoritative at the Trust Acceptance Event. The control must verify provenance and constrain trust scope at the SP — not at the model. Element: Infrastructure + System.

A reader of a TLCTC-shaped SP entry would learn: which of the three the producer covers, at which element, preventing which SRE (e.g. "agent loss of tool-call integrity"), and mitigating which DRE (e.g. "[DRE: C] on retrieved private context"). The same English phrase — "prompt injection controls" — now carries an answer to all four questions.
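The three paths above can be written down as structured SP entries. The sketch below is illustrative only: the dictionary keys and the coverage helper are assumptions about what a TLCTC-shaped schema could look like, not a published format; the values are taken from the three bullets above.

```python
# Illustrative only: the three prompt-injection paths as TLCTC-shaped SP entries.
# Keys ("control", "threat", ...) are assumed, not from a published schema.

sp_entries = [
    {"control": "system-prompt hardening + tool-call allow-list",
     "threat": "#1 Abuse of Functions (direct prompt injection)",
     "element": ["Model", "System"],
     "prevents_sre": "agent loss of tool-call integrity",
     "mitigates_dre": None},
    {"control": "retrieval scrubber + untrusted-content tagging",
     "threat": "#3 Exploiting Client (indirect injection via retrieved content)",
     "element": ["System Data Flow", "Model"],
     "prevents_sre": "agent loss of tool-call integrity",
     "mitigates_dre": "[DRE: C] on retrieved private context"},
    {"control": "tool-provider provenance verification at the TAE",
     "threat": "#10 Supply Chain (trusted MCP / tool / plugin provider)",
     "element": ["Infrastructure", "System"],
     "prevents_sre": "acceptance of a tampered tool feed",
     "mitigates_dre": None},
]


def covered_threats(entries):
    """Which injection paths does this SBOM actually claim to cover?"""
    return sorted(e["threat"].split()[0] for e in entries)


print(covered_threats(sp_entries))  # → ['#1', '#10', '#3']
```

The same English phrase, "prompt injection controls", now decomposes into three separately auditable claims; an uncovered path shows up as an absent entry rather than as silence.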

05 · The shape of an SP cluster that would deliver assurance

The fix is structural, not editorial. The SP cluster does not need more controls; it needs a three-axis schema: threat × element × event-side. Below is a fragment of how the same controls already in the G7 list would look if they were re-bound to the elements the rest of the SBOM already defines.

Threat (cause cluster) × element (Model · Dataset · System · Infrastructure):

#1 Abuse of Functions — e.g. direct prompt injection, MFA-fatigue agent loop
  • @ Model: system-prompt hardening, tool-call allow-list (reduces cause → SRE)
  • @ System: input filters, rate-limit on agent loops (reduces SRE → DRE)
  • @ Infrastructure: role-based API auth (reduces cause → SRE)

#3 Exploiting Client — e.g. indirect injection via retrieved content
  • @ Model: output classifier, untrusted-content tags (reduces cause → SRE)
  • @ System: retrieval scrubber, content provenance (reduces cause → SRE)

#7 Malware (incl. backdoors) — e.g. weight tampering, poisoned dependency
  • @ Model: model hash + signature (detects cause → SRE)
  • @ Dataset: data-level curation, dataset hash (reduces cause → SRE)
  • @ Infrastructure: code signing, runtime allow-list (detects cause → SRE)

#1 / #3 data leakage — e.g. memorisation extraction, system-prompt exfil
  • @ Model: differential privacy, output filter (reduces SRE → DRE: C)
  • @ Dataset: data minimisation (reduces SRE → DRE: C)
  • @ Infrastructure: egress filter (reduces SRE → DRE: C)

#10 Supply Chain (TAE) — e.g. compromised MCP / tool / model registry
  • @ Model: producer signing, lineage attestation (detects cause → SRE)
  • @ Dataset: dataset provenance verification (detects cause → SRE)
  • @ System: tool allow-list at TAE (reduces cause → SRE)
  • @ Infrastructure: container provenance (SLSA) (detects cause → SRE)

#4 Identity Theft — e.g. API-key abuse, session replay against the agent
  • @ System: session binding, per-tool tokens (reduces cause → SRE)
  • @ Infrastructure: API authentication, key rotation (reduces cause → SRE)

Every cell is now a falsifiable claim. An auditor can ask: "Show me the system-prompt hardening for #1 @ Model" and receive an artefact or a "not implemented". The G7 SP list, as currently written, has no cells; it has only the row labels.
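The auditor's query can itself be sketched. The matrix content below mirrors a few cells of the fragment above; the artefact reference string is a hypothetical example, not from the G7 document.

```python
# Sketch of an auditor's query against a threat x element matrix.
# Cell values are hypothetical artefact references for illustration.

matrix = {
    ("#1", "Model"): "system-prompt hardening template (artefact: sp-hardening.md)",
    ("#1", "System"): "input filters, rate-limit on agent loops",
    ("#7", "Dataset"): "data-level curation, dataset hash",
}


def audit(matrix, threat, element):
    """A falsifiable claim: either an artefact reference or an explicit gap."""
    return matrix.get((threat, element), "not implemented")


print(audit(matrix, "#1", "Model"))    # an artefact reference
print(audit(matrix, "#4", "Dataset"))  # → not implemented
```

An empty cell is as informative as a filled one: "not implemented" is an answer a flat control list can never give, because a flat list has no cells to be empty.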

Note also that this is purely additive. The G7 document already names every other axis — Models, Datasets, System, Infrastructure are already first-class clusters of the SBOM. The threat axis is the one missing dimension. The SP cluster needs ten labels, not ten more bullets.

06 · Why the document writes this way (and why it is still wrong)

The Discussion section closes with a sentence that, read carefully, is an admission:

"an SBOM for AI by itself is not sufficient for increasing cybersecurity along the supply chain. To ensure substantial protection of the AI supply chain, it is necessary to connect the SBOM for AI to cybersecurity tools, such as vulnerability scanning and management tools, security advisories and bulletins…" — G7, SBOM-for-AI Minimum Elements, §3 Discussion

The authors know the SP cluster does not carry assurance. They defer it to "cybersecurity tools" downstream. But the SP cluster is precisely the slot where the SBOM could express the threats the downstream tools are expected to detect against. By writing the slot as a flat list of control names, the document forecloses that interpretation. The downstream tools will inherit the same blurred categories.

A second, related symptom: the document explicitly removes "level of decision making or autonomy of an AI system" from the minimum set, deferring it to "safety requirements" that "may be addressed differently across different jurisdictions". This is the same reflex — defer the cause-side framing, keep the control-side list. The result is that an agentic AI system, where loss of tool-call integrity is the dominant SRE, has no place to declare that fact in its SBOM.

What gets lost in the blur

The G7 SBOM-for-AI is a transparency document. The six compositional clusters are good at what they do — they give a reader the parts list, the provenance, the dependencies. They will support the vulnerability-scanning and lineage-checking use cases the document promises.

But the Security Properties cluster is asked to do something the rest of the document does not do: bear an assurance claim. An assurance claim has a shape — it names a threat, an event it prevents, a consequence it limits, and a part it applies to. The SP cluster delivers none of these. It delivers an aesthetics of security: a list of nouns that look like protection.

A Bill-of-Materials that names parts but not the threats those parts face, nor the events those threats cause, nor the consequences those events produce, is not a Bill of Security — it is a Bill of Reassurance.

The fix is small, structural, and additive: replace the flat "Security controls" element with the threat × element matrix already implied by the other six clusters, and place each control on a specific bow-tie arrow (cause → SRE, SRE → DRE, DRE → BRE). The rest of the document continues to work. The SP cluster starts to.

Until then, the G7 has produced exactly what the cause-side taxonomy was designed to prevent: blurred security documentation without the assurance it purports to deliver.

References & Glossary

Document analysed: "Software Bill of Materials for AI — Minimum Elements", G7 Cybersecurity Working Group, Évian 2026. Published by BSI · ACN · ANSSI · CSE · CISA · NCSC · NCO with the EU Commission. Source: cisa.gov/resources-tools/resources/software-bill-materials-ai-minimum-elements.

SRE = System Risk Event (Loss of Control / System Compromise — Bow-Tie central event) · DRE = Data Risk Event (C / I / Av / Ac) · BRE = Business Risk Event (cascading business consequence) · TAE = Trust Acceptance Event (where #10 is placed).

TLCTC Framework · v2.1 · CC BY 4.0