Many security leaders have been taught a governance documentation hierarchy, explicitly or by osmosis. The CISSP curriculum covers policy, standard, procedure, and guideline as distinct layers[^1], ComplianceForge sells a six-layer version[^2], and ISO 27001 requires documented policies and procedures[^3].
And yet most organisations produce one or two document types in practice. There is typically a “policy” that is actually a mixture of governance intent, technical requirements, and sometimes step-by-step procedures bundled into a single document that nobody reads end to end. There may be a “procedure” that combines operational criteria with work instructions, and auditors ask for “policies and procedures” as though those are the two things, reinforcing the collapsed structure that our GRC tooling then models.
Governance documentation actually has two main jobs: it should steer security choices downward through the organisation, so that teams can make good decisions without escalating every question, and it should rigorously support monitoring whether operations are effective, so that leadership knows whether the security programme is actually performing or merely performative (in other words, busy with little management of risk to show for it).
The models that we are taught appear to be structurally incomplete, as if nobody has invested the time into examining whether they’re fit for purpose.
I’ve reviewed enough organisations’ documentation to think the problem is not discipline or effort but missing concepts: the hierarchy as commonly taught and used simply misses the foundational ideas needed to support either steering or monitoring.
There are six clear gaps that I see, and they explain most of the practical failures in organisations that have tried to follow the standard governance documentation playbook.
No concept of invariants
The industry describes policies as “high-level statements of management intent” or “what and why without how.” These are vague, aspirational framings that usually produce documents nobody can test against. A policy that says “we are committed to protecting customer data” is not verifiable, because nobody can determine whether it holds or doesn’t hold on any given Tuesday; it expresses aspiration rather than a constraint.
A document built on aspiration generally can’t steer choices. An engineering team facing a design decision about where to store payment data gets no actionable guidance from “we are committed to protecting customer data.” They need a constraint they can test their design against.
Most security documentation sits somewhere between aspiration and constraint, at the level of intention. “Customer data should be encrypted at rest” is an intention: directional, and you know what the organisation wants, but “should” leaves room for interpretation and exception. An auditor can check whether you intended to encrypt; they can’t definitively determine whether the intention holds in all cases, because the document itself allows for ambiguity.
What’s missing is the concept of invariants: conditions that must always be true or must never be true. “Customer payment data must never be stored unencrypted at rest” is an invariant, and its binary quality is precisely what makes it useful for both jobs; it steers the design decision (the team knows what they can’t do) and it’s testable (you can verify whether it holds). The gradient from aspiration through intention to invariant is the gradient from unverifiable to automatically checkable.
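To make that binary quality concrete, here is a minimal sketch of an invariant as a machine-checkable assertion over system state. The inventory schema and field names are hypothetical illustrations, not any real tool’s format:

```python
# Sketch: an invariant is a binary, testable assertion over system state.
# The DataStore schema and field names here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class DataStore:
    name: str
    holds_payment_data: bool
    encrypted_at_rest: bool

def invariant_holds(inventory: list[DataStore]) -> bool:
    """INV-001: Customer payment data must never be stored unencrypted at rest."""
    return all(
        s.encrypted_at_rest for s in inventory if s.holds_payment_data
    )

inventory = [
    DataStore("orders-db", holds_payment_data=True, encrypted_at_rest=True),
    DataStore("analytics-cache", holds_payment_data=True, encrypted_at_rest=False),
]

print(invariant_holds(inventory))  # False: the cache violates the invariant
```

The point is not the Python but the shape: the assertion either holds or it doesn’t, so both a design review and an automated check can answer the same question. “We are committed to protecting customer data” offers nothing equivalent to test.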
This is not a novel idea from software engineering but a sixty-year-old concept from mathematics that software engineering adopted because it works, and that the smart contract community adopted more recently because invariants make systems verifiable in adversarial environments[^4]. Security governance hasn’t made the same move, despite arguably having the same fundamental need: testable assertions at every layer of the hierarchy. The reasons for this are probably cultural and workload-driven rather than technical.
Without invariants, policies are aspirational documents, and auditors can’t test against aspiration, so they tend to default to checking whether procedures were followed. It should be clear that low-evidence assessment of whether procedures were followed, months after execution, might have unpleasantly low correlation with baseline reality or indeed outcomes.
Effect or cause?
The documentation structure may very well create the audit checkbox behaviour everyone complains about, which is worth sitting with for a moment because it means our complaints might be very badly misdirected.

No distinct process layer
The standard documentation model that I see in most organisations jumps from “what must be true” (the standard) to “how to do it” (the procedure), with no concept of a process layer that has its own distinct purpose: entry criteria, exit criteria, time and effort metrics, and quality measures that describe the boundaries of the work rather than how the work is performed.
This is the gap that makes monitoring epistemically unreliable.
Without a process boundary definition layer, monitoring still happens, but against invented KPIs and activity counts that might or might not reflect whether operations are effective. Think of the process layer as instrumenting the boundaries of a black box: you care about what goes in, what comes out, how long it takes, and whether the output meets the required quality, but you may not need to prescribe how the work inside the box is performed, as long as the criteria are met. This is how mature engineering organisations often think about operational performance, and it should sound very familiar to anyone who has worked with SLOs.
What the process layer defines
The process layer answers questions like: what starts the clock, what stops it, where the data comes from, how outcome attainment is calculated, what the operational response is when targets aren’t met, and what capacity or quality indicators are tracked alongside primary targets. These are measurement definitions, not method descriptions. ISO 9001 has maintained this distinction since the 1990s: a process is “a set of interrelated or interacting activities which transforms inputs into outputs” with defined measurable objectives, while a procedure is “a specified way to carry out an activity or a process.” The security industry uses both words but has largely collapsed the distinction.[^8]

Without this layer, organisations can’t, in practice, instrument process boundaries, because the boundaries don’t exist as a separate concept. I’ve asked “what’s your average time-to-remediation for critical vulnerabilities?” in enough intake meetings to know that most teams can tell you how many tickets they closed last month but cannot answer a basic process performance question. Security teams end up measuring activity (tickets closed, reviews completed, scans run) rather than performance (did the process produce the right outcome within target?), and activity measurement tells you people are busy rather than whether operations are effective.
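As a sketch of what boundary instrumentation looks like, the fragment below computes target attainment and mean time-to-remediation from entry and exit timestamps alone, without knowing anything about how the work inside the box was done. The ticket data, the 14-day target, and the attainment formula are illustrative assumptions, not drawn from any standard:

```python
# Sketch: measuring a process at its boundaries rather than its internals.
# The 14-day target and the ticket timestamps are illustrative assumptions.

from datetime import datetime, timedelta

TARGET = timedelta(days=14)  # exit criterion: critical vulns remediated within 14 days

tickets = [
    # (entry: vulnerability confirmed critical, exit: fix verified in production)
    (datetime(2024, 3, 1), datetime(2024, 3, 10)),
    (datetime(2024, 3, 5), datetime(2024, 3, 30)),
    (datetime(2024, 3, 8), datetime(2024, 3, 20)),
]

durations = [closed - opened for opened, closed in tickets]

# Performance, not activity: how often did the process meet its exit criterion?
attainment = sum(d <= TARGET for d in durations) / len(durations)
mean_days = sum(d.days for d in durations) / len(durations)

print(f"target attainment: {attainment:.0%}")        # share closed within target
print(f"mean time-to-remediation: {mean_days:.1f} days")
```

Note that nothing here counts tickets closed: the measurement is defined entirely by the entry event, the exit event, and the target, which is exactly what the process layer is supposed to specify.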
No connection between document structure and measurability
The industry generally treats measurement as a separate concern, something layered on top of existing documentation through KPIs, metrics dashboards, and ISMS effectiveness reviews. The idea that documentation architecture determines what you can and cannot measure is largely absent from the conversation.
Consider for a moment what happens when process criteria are embedded inside a combined policy-procedure document. You can’t extract those criteria for measurement without first restructuring the documentation, and you can’t automate measurement against criteria that exist only as prose inside a document that also contains governance intent and step-by-step instructions. The documentation structure is the prerequisite for measurability, and treating it as a parallel workstream could explain why so many security measurement programmes aren’t worth the squeeze.
The same structural problem makes drift invisible. If you can’t measure process performance against defined criteria, you can’t detect when the operation has drifted from what the documentation describes. Drift becomes visible only at audit time, by which point it’s an audit finding rather than an operational correction.
The typical pattern looks something like this: the organisation invests in a dashboard, discovers it can only display activity counts because there are no process criteria to measure against, and concludes that “security is hard to measure.” Security is not particularly hard to measure, but it becomes difficult when the documentation doesn’t provide anything measurable, and most documentation doesn’t, because the structure was never designed with monitoring in mind. And because of that, we never really put in the hard thinking work to understand the measures and targets we should have.
Audit tests what the documentation gives it
ISO 27001 Stage 2 audits explicitly test “implementation and effectiveness” of controls[^5], which in practice usually means: is the control implemented as documented, and does evidence exist that it operated a few times? Auditors often check whether real-world practices follow documented procedures and flag where informal shortcuts have gradually replaced defined workflows.
This is procedural compliance testing, and there is nothing particularly wrong with it as far as it goes. Which is not that far.
But the better question, whether the process produced the right outcome within its criteria, often can’t be asked because the documentation structure doesn’t provide criteria to test against. Auditors aren’t making an error here; they are testing against the best artefacts available to them, and if the documentation gives them procedures, procedural compliance is what they will test in practice. Give them process criteria with defined targets and they could test effectiveness instead.
Shift-left means tooling, not ownership
The entire shift-left security conversation is about moving SAST, DAST, and SCA scanning earlier in the CI/CD pipeline, and the tooling conversation is well developed. Search for “shift-left security” and you will find dozens of guides on integrating security testing tools into development workflows, with almost no discussion of who should own what.
This is a steering problem, and it goes beyond documentation. If you want teams to own the security outcomes in their domain, they need to own the methods, the documentation of those methods, and the responsibility for meeting the criteria set by the governance layers above them. That’s almost certainly impossible when work instructions are embedded inside board-approved policy documents. An engineering team that wants to update its vulnerability triage method may have to wait for a governance review cycle, because the method description lives in the same document as the governance intent it implements.
Separating the layers is a prerequisite for shifting real ownership to teams, and without that separation, shift-left can only mean “scan earlier” rather than “let teams own and improve the methods they use, within constraints they don’t control,” which is arguably a much more valuable interpretation.
Governance documentation is not machine-readable
The entire governance stack usually lives in prose: Word documents, PDFs, and wiki pages where nothing is machine-readable, nothing can be automatically checked, and nothing feeds directly into the tooling that engineers use daily.
This breaks both of the jobs for governance documentation simultaneously. Prose governance can’t steer choices at the speed software teams make decisions, because nobody is reading a 40-page policy document before a design review. And it can’t tell you whether operations are effective without a human manually assembling evidence for each review. In organisations where engineers define infrastructure as code, policy as code, and deployment as code, the security governance that constrains what the entire operation must and must not do lives in a format that no system can consume.
NIST recognises the problem. Their OSCAL project provides machine-readable formats for security control documentation[^6], and the policy-as-code movement (Open Policy Agent, HashiCorp Sentinel, AWS Config Rules) already expresses policy-like assertions in machine-readable formats at the infrastructure layer[^7]. But these tools largely operate bottom-up: they define rules at the infrastructure level with no formal connection to the governance documentation above them.
Nobody appears to be working top-down: expressing governance invariants in machine-readable formats so that infrastructure-level checks can trace back to the governance assertions they implement. The top-down half is missing. It will likely remain missing as long as governance documents express aspiration rather than invariants, because aspiration is not something a machine can parse; “we are committed to protecting customer data” has no operational meaning a system can check, but “customer payment data must never be stored unencrypted at rest” does.
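A sketch of what that top-down traceability could look like: governance invariants held as machine-readable records that infrastructure-level checks reference by ID, so orphaned checks and uncovered invariants both become queryable. The schema, invariant IDs, and check names are hypothetical illustrations, not any real tool’s format:

```python
# Sketch: governance invariants as machine-readable records that
# infrastructure checks reference by ID. Schema and IDs are hypothetical.

GOVERNANCE_INVARIANTS = {
    "INV-001": "Customer payment data must never be stored unencrypted at rest",
    "INV-002": "Production access must always require multi-factor authentication",
}

# Each infrastructure check declares which governance invariant it implements.
INFRA_CHECKS = [
    {"id": "chk-rds-encryption", "implements": "INV-001"},
    {"id": "chk-s3-encryption", "implements": "INV-001"},
    {"id": "chk-legacy-backup", "implements": "INV-999"},  # dangling reference
]

def orphaned_checks(checks, invariants):
    """Checks whose 'implements' field points at no known invariant."""
    return [c["id"] for c in checks if c["implements"] not in invariants]

def uncovered_invariants(checks, invariants):
    """Invariants that no infrastructure check claims to implement."""
    implemented = {c["implements"] for c in checks}
    return [i for i in invariants if i not in implemented]

print(orphaned_checks(INFRA_CHECKS, GOVERNANCE_INVARIANTS))
print(uncovered_invariants(INFRA_CHECKS, GOVERNANCE_INVARIANTS))
```

None of this is possible while the governance layer exists only as prose: the `implements` link needs a stable, machine-readable assertion on the governance side to point at.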
Other disciplines solved this decades ago
What makes these gaps particularly striking is that adjacent disciplines solved both problems long ago. ISO 9001 quality management has maintained a clear hierarchy since the 1990s[^8]: policy at the top, then process (explicitly defined as a high-level construct with measurable objectives and inputs and outputs), then procedures and work instructions below that. The process layer exists specifically to support monitoring, and it is arguably the single concept the security industry most needs to borrow. COSO has separated governance intent from control activities in financial controls since 1992[^9], and accounting practice has maintained a principles, policies, and procedures hierarchy for as long as the profession has existed[^10].
The concept of a technical standard as a precise, testable specification predates all of these by decades. BSI was founded in 1901, ANSI in 1918, and ISO itself in 1947, formalising what engineering disciplines had been doing since industrialisation: expressing requirements with enough precision that compliance is independently verifiable without asking the author what they meant. A structural steel specification says “minimum yield strength 250 MPa,” not “use strong enough steel.” Standards exist to steer choices; “systems should be patched in a timely manner” fails at that job because “timely” is not a constraint anyone can design against.
The security industry teaches a documentation hierarchy but has not adopted what quality management, financial controls, engineering, and accounting established decades ago. The hierarchy as taught is missing the concepts that make either job possible.
What this costs
These gaps connect to problems that most security leaders will likely recognise: the 40-page policy that steers nobody’s choices, the audit finding for “not following procedure” when the outcome was correct, the security team that can count activities but cannot tell you whether operations are effective, the documentation that drifts from reality because updating it requires governance review, the team that stopped improving its methods because the change process is too heavy, and the governance assertions that can’t be automatically verified because they exist only as prose.
These are not separate problems but symptoms of cargo-culted documentation architectures that were never designed to support the two things governance documentation actually needs to do.
The hierarchies we use aren’t wrong, but they aren’t fit for purpose.
The concepts needed to support steering and monitoring (invariants at each layer, a distinct process measurement surface, documentation ownership aligned to change cadence, and machine-readability as a design goal) are missing from the standard teaching. Adjacent disciplines have had them for thirty years.

The next article in this series provides a model for restructuring: four layers with invariants at each level, a proper process measurement surface, and ownership aligned to change cadence. It is not a novel framework but largely an application of what quality management and engineering have been doing for decades, adapted for security governance, with invariants as the connective tissue that makes the whole thing testable.
[^1]: ISC2 CISSP Common Body of Knowledge, Domain 1: Security and Risk Management. See e.g. Destination Certification, “Security Policies, Standards, Procedures, and Guidelines: A CISSP Guide.”
[^2]: ComplianceForge, Hierarchical Cybersecurity Governance Framework (HCGF). Defines six layers but still treats procedures as “step-by-step instructions” with no distinct process measurement layer. See “Policy vs Standard vs Control vs Procedure.”
[^3]: ISO/IEC 27001:2022, Clause 7.5: Documented Information. Requires documented information for ISMS effectiveness but does not prescribe a policy/standard/process/procedure hierarchy. See DataGuard, “ISO 27001:2022 Clause 7.5.”
[^4]: Feist, J., “The call for invariant-driven development.” Trail of Bits Blog, February 2025. blog.trailofbits.com
[^5]: ISO 27001 Stage 2 audit practice: SecureSlate, “ISO 27001 Audit: How Controls Are Tested.” Also ISOQAR, “ISO 27001 Audit Process Explained.”
[^6]: NIST, Open Security Controls Assessment Language (OSCAL). pages.nist.gov/OSCAL. See also NIST CSWP 53 (December 2025), “Charting the Course for NIST OSCAL.”
[^7]: Open Policy Agent (OPA), CNCF-graduated project. openpolicyagent.org
[^8]: 9000 Store, “ISO 9001 Processes, Procedures and Work Instructions.” Also ISO 10013:2021, “Quality management systems — Guidelines for documented information.”
[^9]: COSO, “Internal Control — Integrated Framework” (1992, updated 2013). Principle 12: “the organization deploys control activities through policies that establish what is expected and in procedures that put policies into action.”
[^10]: Carrtegra, “How Documented Accounting Policies and Procedures Provide Consistency.”
