From intention to impact: a four-layer model for security governance documentation

Four layers, four audiences, invariants at every level. A model for documentation that steers choices and tells you whether operations are effective.

The previous article identified six structural gaps in how the security industry teaches, builds and maintains governance documentation.

The gaps explain a cluster of practical issues that most security (and GRC!) leaders will probably recognise: documentation nobody reads, audits that test procedure-following rather than outcomes, security operations that can count activities but not measure performance, and maintenance processes so heavy that teams stop updating their own methods.

This article provides a model that addresses those gaps. I call it the “i2i Security Governance Documentation model”, and it’s yours to adopt, adapt, change and use.

It is not a highly novel framework, and that may be its most useful property. The core ideas come from ISO 9001 quality management, which has maintained a clear policy-process-procedure-work instruction hierarchy since the 1990s[1], from COSO’s separation of governance intent and control activities in financial controls[2], and from the engineering tradition of expressing requirements as precise and testable specifications.

What the security industry has been missing is not imagination but the application of concepts that adjacent disciplines established decades ago, combined with invariants as the connective tissue that makes each layer testable. And, of course, more than a few minutes between alerts, incidents, and urgent board presentations in which to think about how to make things better.

Start from the audiences

Most governance documentation is organised around document types: here is a policy, here is a procedure, maybe even here is a work instruction.

A more useful starting point is to ask: “Who needs governance documentation?” and “What do they need from it?”.

The two tasks for security governance documentation

  • Steer security choices downward through the organisation, so that teams can make good delegated decisions without escalating every question, and
  • Support monitoring of whether operations are effective, so that leadership knows whether the security programme is actually performing or merely performative

The board and executive leadership need to understand intent and constraints without operational detail. They need to know what the organisation has committed to, what risk boundaries exist, and whether those commitments are being met. They do not need to know how a vulnerability gets triaged or which tool runs the access review.

The security function and design leads need precise, testable requirements to design controls against. A technical architect deciding how to implement encryption at rest, for example, needs a specification exact enough that two reasonable people would agree on whether a given implementation meets it. “Encrypt sensitive data” is not that specification. “AES-256 encryption or equivalent for data items defined as sensitive data, keys managed via dedicated KMS, no application-layer key storage, no unencrypted data logged or otherwise persisted” might well be.
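A specification that precise can be checked mechanically. The sketch below is hypothetical: the field names (`cipher`, `key_source`, and so on) are invented for illustration and do not come from any real standard or tool.

```python
# Hedged sketch: verify a recorded data-store configuration against the
# encryption-at-rest specification quoted above. Field names are hypothetical.
ALLOWED_CIPHERS = {"AES-256"}  # "or equivalent" would extend this set


def meets_encryption_standard(config: dict) -> list[str]:
    """Return a list of violations; an empty list means the config conforms."""
    violations = []
    if config.get("cipher") not in ALLOWED_CIPHERS:
        violations.append(f"cipher {config.get('cipher')!r} not in allowed set")
    if config.get("key_source") != "dedicated-kms":
        violations.append("keys must be managed via a dedicated KMS")
    if config.get("app_layer_key_storage", False):
        violations.append("no application-layer key storage permitted")
    if config.get("logs_plaintext_data", False):
        violations.append("unencrypted data must not be logged or persisted")
    return violations


# Two reasonable people (or two scripts) reach the same verdict:
ok = {"cipher": "AES-256", "key_source": "dedicated-kms"}
bad = {"cipher": "AES-128", "key_source": "app-config"}
```

The point is not the code itself but the property it demonstrates: given the same specification and the same recorded state, any two verifiers agree.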

Operations management (including engineering and security management) needs to know whether its processes are performing: not whether people are busy, but whether the operation is producing the right outcomes within defined targets. This requires measurement criteria that exist as separate, extractable artefacts rather than prose buried inside a combined policy-procedure document.

Practitioners need to know how to do the work. Step-by-step, for the specific task, written for someone who is competent but may not have done this particular thing before. They do not need governance preamble or measurement criteria cluttering their instructions.

Those are four distinct needs with four different audiences, four different change cadences, and four different natural owners. When one or two documents try to serve all four, each audience gets something that is mostly not for them, and the document tends to become too long to read, too tangled to maintain, and too ambiguous to measure against.

Four audiences, four needs, four change cadences. When one document tries to serve all four, each audience gets something that is mostly not for them.

A four-layer cake

Separating by audience and purpose produces four layers. Each layer carries invariants appropriate to its level of abstraction, intentions appropriate to its audience, and a distinctive contribution the other layers do not provide.

What about 3 layers? Or 5?

Those could work too! There are indeed other models, such as the ISC2 CISSP CBK’s five-layer hierarchy, so maybe one of those wins?

Seriously: the goal is to have as few layers as possible, but to make each of them count; and, where possible, to support automation, reduce redundancy throughout the documentation set, and delegate ownership to the appropriate team.

I believe that the insights here can be applied to any model, but you may end up with at least one layer more if key concerns haven’t already been identified when originally specifying the documentation model.

Policy

The policy layer expresses governance intent and organisational invariants. Invariants at this level are principled and durable: “customer payment data must never be stored unencrypted at rest” or “privileged access to production systems must always require multi-factor authentication and only be enabled during a limited and temporary time-window associated with an approval.” These are conditions that must hold regardless of how the organisation changes its tools, its team structure, or its operational methods.

Policy invariants include boundary assertions: scope (what the policy covers), applicability (who it binds), and accountability (who answers for what). These are often treated as administrative boilerplate, but they are testable. “This policy applies to all systems that process payment data” is verifiable against an asset inventory. “The CISO is accountable for the security programme” is verifiable against an organisational chart and role definition.
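Because boundary assertions like scope are verifiable against an asset inventory, they can be checked as a query rather than read as boilerplate. A minimal sketch, assuming a hypothetical inventory format with invented field names:

```python
# Hedged sketch: test the boundary assertion "this policy applies to all
# systems that process payment data" against a hypothetical asset inventory.
def systems_missing_policy(inventory: list[dict], policy_id: str) -> list[str]:
    """Systems in scope (they process payment data) but not bound to the policy."""
    return [
        asset["name"]
        for asset in inventory
        if asset.get("processes_payment_data")
        and policy_id not in asset.get("bound_policies", [])
    ]


inventory = [
    {"name": "checkout-api", "processes_payment_data": True,
     "bound_policies": ["POL-CRYPTO-01"]},
    {"name": "billing-batch", "processes_payment_data": True,
     "bound_policies": []},
    {"name": "marketing-site", "processes_payment_data": False},
]
```

An empty result means the scope assertion holds; anything returned is a testable gap between what the policy claims and what the inventory shows.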

Policies also carry rich intentions: why these invariants matter, what risk they address, what obligations they fulfil. The intentions help people exercise judgment in situations the invariants don’t explicitly cover, which in practice is most situations. “We’re evaluating a new third-party integration; does the data residency invariant apply to a managed SaaS?” The intention (contractual obligation to specific customers regarding data location) helps answer that without escalating to the CISO or risk officer for an analysis.

Conformance at this layer points downward: it is demonstrated by the existence of standards that cover the policy’s scope, with passing conformance results from those standards. The policy doesn’t know about AWS Config rules or KMS keys. It knows that there should be standards below it that refine its invariants, and that those standards should have conformance mechanisms that are passing.

A good policy is short, principled, and stable enough to survive a reorganisation, a technology migration, and a change of security leader without revision. Written for the board, executive leadership, and regulators. Owned by senior leadership. Changes rarely.

Standard

The standard layer provides testable specifications and implementation invariants. The word “standard” has a specific meaning in engineering that the security industry has largely lost: a precise specification that two parties can independently verify against without asking the author what they meant. A structural steel specification says “minimum yield strength 417 MPa,” not “use strong enough steel.” Security standards should carry the same rigour.

BSI was founded in 1901, ANSI in 1918, ISO in 1947. Engineering disciplines have been expressing requirements as precise testable specifications for over a century. The security industry uses the word “standard” but frequently lacks the concept behind it: “systems should be patched in a timely manner” would not survive review in any serious engineering discipline.

Standard invariants are more specific refinements of policy invariants, and traceable to them. “Encryption at rest uses AES-256 or equivalent” refines the policy invariant about data protection. “IAM policies must never use wildcard actions on production resources” refines the policy invariant about access control. Some standard invariants are detailed specifications with considerable depth; a complete AWS encryption standard might run to several pages, but it is still fundamentally a set of invariants (conditions that must hold) expressed with enough precision to verify against.

Standards also carry the layer’s distinctive contribution: performance expectations, expressed as targets with tolerances. “Critical vulnerabilities must be remediated within 72 hours of confirmed classification; remediation rate must meet 95% attainment against this target.” The target and tolerance form a single unit. A target without a tolerance is either an absolute invariant (100% attainment, no exceptions) or an aspiration with no way to judge whether it is being met. The tolerance is, in effect, an error budget: the 5% failure allowance defines how much the organisation is willing to accept.
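Treating the target and tolerance as one unit can be made literal in a data structure, so the error budget is explicit rather than implied. A minimal sketch with invented names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PerformanceExpectation:
    """A target and its tolerance form a single unit: the error budget is explicit."""
    name: str
    target_hours: float      # e.g. remediate within 72 hours
    attainment_floor: float  # e.g. 0.95 -> a 5% error budget

    def is_met(self, within_target: int, total: int) -> bool:
        """True when attainment over the period meets or exceeds the floor."""
        return total > 0 and within_target / total >= self.attainment_floor


critical_remediation = PerformanceExpectation(
    name="critical-vuln-remediation", target_hours=72, attainment_floor=0.95)

# 96 of 100 remediations inside 72 hours -> 96% attainment, within budget;
# 94 of 100 -> 94%, the error budget is exhausted and the tolerance is breached.
```

A target without the `attainment_floor` field would be exactly the ambiguity the article describes: either an unstated absolute invariant or an unmeasurable aspiration.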

The SRE community formalised this concept as “error budgets” applied to service reliability. The same logic applies to security process performance: the tolerance defines how much failure is acceptable before the operation needs to change its approach. A later article in this series develops the connection in detail.

Standard intentions explain design rationale: why this specification rather than alternatives. “This specific capability uses customer-managed KMS keys rather than AWS-managed keys because our data processing agreements require key revocation capability.” These help an architect decide what to do in situations the standard doesn’t explicitly cover.

Conformance at the standard layer asserts three distinct things, and this matters because most organisations only check one of them, at best. Design conformance verifies that architecture and design artefacts specify the standard’s invariants. Configuration conformance verifies that the recorded system state matches the design. Functional conformance verifies that the system’s actual behaviour matches its recorded state[3]. A security group configuration may say port 22 is closed while a network ACL at a different layer allows the traffic through. Configuration conformance passes; functional conformance fails. Most automated conformance checking operates at the configuration layer, asking “what does the system say about itself?” and rarely asking “does the system actually behave that way?”

Three types of conformance and missing testing

Design conformance checks architecture against the standard. Configuration conformance checks system state against the design. Functional conformance checks actual behaviour against recorded state. Most organisations automate configuration management, so configuration conformance is generally tested only at deploy time; functional conformance is rarely tested at deploy time or afterwards, even though it is the only type that catches the gap between what a system reported at deploy time and what it does now.

Written for people who design controls, architect systems, and build processes. Owned by the security function under delegated authority from the governance layer. Changes when requirements change.

Process

The process layer is where the security industry has the most conspicuous gap, and it is arguably the single most valuable concept to borrow from ISO 9001 quality management.

ISO 9001 defines a process as “a set of interrelated or interacting activities which transforms inputs into outputs” with defined measurable objectives, distinct from a procedure which is “a specified way to carry out an activity or a process.” The security industry uses both words but has largely collapsed the distinction.

The process layer treats each process instantiation as a black box and instruments its boundaries. It does not describe the steps that happen inside, but rather the outcomes (the “what”, not the “how”). I’ve found this to be the single hardest idea to communicate because the instinct, especially among security professionals, is to define the steps. The process layer deliberately does not. It defines what goes in, what comes out, how quickly, how reliably, and how you know.

The process layer has five elements, all boundary instrumentation:

Input invariant and input criteria. The input invariant states that every well-formed input must be accepted into the process. A vulnerability report that arrives with sufficient data (affected asset, severity classification, identifier, discovery timestamp) must be taken up into vulnerability management regardless of its source, whether that is an automated scan, a manual report, or an external advisory. The input criteria define what “well-formed” means, and they function as a case dispatch: different input types may have different criteria, but all must meet a minimum bar to satisfy the input invariant.

Output invariant and output criteria. The output invariant states that every accepted input must eventually produce a defined output. Nothing silently disappears inside the box. A vulnerability that enters triage must eventually produce a remediation record, a documented exception, or a reasoned reclassification. The process may produce multiple outputs at different boundary points: an acknowledgement (“we received this, here’s a reference”), intermediate notifications (classification decision, ownership assignment), and a final output (remediation evidence or exception record). Each output type has its own criteria defining what it must contain, and these criteria are conditional on the output type.

Performance expectations. Three families, each with targets and tolerances. Timeliness measures how quickly each output appears at the boundary, and each output type may have its own expectation (acknowledgement within one hour, final remediation within 72 hours). Throughput measures whether the process can handle its input rate without degrading. Quality measures whether outputs are correct and durable; a vulnerability closed and then reopened is a failed-quality output, and a quality expectation like “fewer than 0.1% of closures reopened within 30 days” measures something fundamentally different from time-to-remediation.

Measurement definitions. Per expectation: what starts and stops the clock, where the data comes from, how attainment is calculated, the reporting cadence, and what response is triggered when a tolerance is breached. The standard says “72 hours, 95% attainment.” The process document says “clock starts when a Jira vulnerability ticket moves to confirmed critical, clock stops on verified deployment to production, measured weekly, reported monthly, attainment below 90% for two consecutive periods triggers capacity review.” Measurement definitions can be tuned based on operational reality without changing the targets the standard requires.
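That measurement definition can be implemented directly from the clock rules it states. The sketch below is hypothetical: the field names stand in for whatever the ticketing system actually records, and are not a real Jira schema.

```python
from datetime import datetime, timedelta

TARGET = timedelta(hours=72)


def remediation_duration(ticket: dict) -> timedelta:
    """Clock starts when the ticket is confirmed critical, stops on verified
    production deployment. Field names are hypothetical stand-ins."""
    return ticket["deployed_verified_at"] - ticket["confirmed_critical_at"]


def attainment(tickets: list[dict]) -> float:
    """Fraction of tickets remediated within the 72-hour target."""
    within = sum(1 for t in tickets if remediation_duration(t) <= TARGET)
    return within / len(tickets)


tickets = [
    {"confirmed_critical_at": datetime(2024, 3, 1, 9, 0),
     "deployed_verified_at": datetime(2024, 3, 3, 9, 0)},   # 48h: within target
    {"confirmed_critical_at": datetime(2024, 3, 1, 9, 0),
     "deployed_verified_at": datetime(2024, 3, 5, 9, 0)},   # 96h: breach
]
```

The clock definitions and data source can be tuned here without touching the standard; the 72-hour target itself stays where the standard put it.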

Conformance criteria. How violations of the input invariant, output invariant, and performance expectations are detected. At this layer, conformance criteria point to operational systems: queries against the ticketing system for unacknowledged inputs, stalled work items that never produced an output, missed time targets, and quality failures detected retrospectively.
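A conformance query for the output invariant can be as simple as finding accepted inputs that never reached a terminal state. Again a hedged sketch: the states and fields below are invented stand-ins for a real ticketing system.

```python
from datetime import datetime, timedelta

# Hypothetical terminal states: the three defined outputs of the process
TERMINAL_STATES = {"remediated", "exception-approved", "reclassified"}


def output_invariant_violations(items: list[dict], now: datetime,
                                max_age: timedelta) -> list[str]:
    """Accepted inputs that have produced no defined output within max_age:
    work that has silently disappeared inside the black box."""
    return [
        item["id"] for item in items
        if item["state"] not in TERMINAL_STATES
        and now - item["accepted_at"] > max_age
    ]


now = datetime(2024, 3, 10)
items = [
    {"id": "VULN-1", "state": "remediated", "accepted_at": datetime(2024, 3, 1)},
    {"id": "VULN-2", "state": "triage",     "accepted_at": datetime(2024, 3, 1)},
]
```

Run on a schedule against the live ticketing system, a query like this is the process layer's detection mechanism for silent disappearance.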

Written for the people who need to know whether the operation is functioning, and owned by the operational function. Changes when outputs or measurement mechanisms change.

A good process description gives you everything you need to measure performance without telling anyone how to do the work.

Procedure or Work instruction

The work instruction layer describes methods: how to do the specific task in a way that maintains the invariants and meets the performance expectations defined in the layers above. “Open Jira ticket, assign to owning team, set priority to P1, link to the advisory, notify the on-call engineer.”

Work instructions carry their own invariants, but these are method-level execution constraints: “never push directly to main” or “always verify the ticket is linked to the advisory before closing.” They also carry curated intentions, selected for the practitioner’s decision-making needs rather than the full governance rationale. “We verify the ticket link because downstream reporting depends on it” is useful context for someone deciding whether to skip a step in unusual circumstances. The full risk rationale from the policy layer is not needed here.

Work instructions should include conformance checking as a method step. The practitioner verifies their own configuration conformance at point of change: after provisioning an S3 bucket, run the compliance check and verify all rules return compliant. This is self-verification, part of the method. The governance assertion at the standard or process layer is that such self-verification must be built into the way of working. Independent verification (the process-layer conformance check and the security function’s functional conformance testing) is a separate concern.
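Point-of-change self-verification can be a small rule set the team runs against its own work. The sketch below is illustrative only: the rule names and configuration fields are invented, not a real AWS compliance check.

```python
# Hedged sketch of self-verification as a method step: after provisioning a
# storage bucket, the practitioner runs the team's own compliance rules.
RULES = {
    "encryption-enabled": lambda cfg: cfg.get("encryption") == "enabled",
    "public-access-blocked": lambda cfg: cfg.get("public_access") == "blocked",
}


def self_verify(bucket_config: dict) -> dict[str, bool]:
    """Evaluate every rule; all values must be True before the change is done."""
    return {name: rule(bucket_config) for name, rule in RULES.items()}


new_bucket = {"encryption": "enabled", "public_access": "blocked"}
```

The governance assertion is satisfied by the existence and routine use of such a check, not by the security function running it.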

Written for the practitioner and preferably owned by the team doing the work. It changes when the method changes, which may be frequently as teams adopt new tools, improve workflows, or find better techniques.

A good work instruction can be followed by a competent person who hasn’t done this specific task before. And the critical test for layer separation: if the team switches from console provisioning to Terraform, the work instruction changes completely but nothing above it changes. The policy invariants still hold. The standard invariants still hold. The process measurement still works. Only the method changed.

If the process layer is functioning correctly, an auditor will not need to deep-dive into procedures or work instructions, and record keeping can generally be kept very limited and at process level.

Guidelines at every layer

Every layer has a companion document type: the guideline. Guidelines express how to exercise judgment within the constraints set by the primary document.

Policy guidelines help people interpret governance intent in novel situations: “when evaluating a new third-party integration, here’s how to think about the data residency invariant in the context of multi-region architecture.” Standard guidelines help architects choose between valid approaches: “AES-256 is required; here’s guidance on application-level vs storage-level vs KMS-managed encryption and the tradeoffs of each.” Process guidelines help operators handle edge cases: “when a vulnerability is classified critical but the affected system is scheduled for decommission within the remediation window, here’s how to manage that.” Work instruction guidelines are technique how-tos, troubleshooting guides, and worked examples.

The principle: policies, standards, processes, procedures and work instructions are fairly rigid; guidelines are advisory. A guideline never overrides the invariant or performance expectation it supports. It helps people navigate situations the primary document doesn’t explicitly cover. Ownership follows the same pattern as the primary documents at each layer.

For the governance-as-code story that the next article develops: guidelines are probably the one artefact type that stays as prose permanently. They are about judgment, context, and nuance, and you probably cannot express “here’s how to think about this tradeoff” as a machine-readable assertion.

Ownership and change cadence

The shift-left argument, properly made

Shift-left doesn’t mean making engineering teams do security work. It means engineering teams own what is naturally theirs (methods used to do repeatable work, configuration management) while the security function owns what is naturally theirs (control invariants, functional verification of security mechanisms). This only works if the layers are separated.

The ownership model follows from the layers rather than being imposed on them. Policy is owned by senior leadership and changes rarely because governance intent should be durable. Standards are owned by the security function under delegated authority and change when requirements change. Processes are owned by the operational function and change when outputs or measurements change. Work instructions are owned by the team doing the work and change as often as the team improves its methods.

Configuration conformance can and probably should be owned by the engineering teams, especially if they own their own configuration management tools and procedures. It is self-verification: the team checks that its own work meets the standard. They have every incentive to run it, because it catches their mistakes before someone else does. The security function doesn’t need to run configuration checks. It needs to assert that configuration checking exists and is operating (that’s a standard-layer invariant about the verification mechanism) and it needs to own functional conformance, the independent verification that controls actually work, which is a different skill set and a different relationship to the system.

Each layer absorbs a different kind of change and insulates the layers above. A work instruction change (new tool, improved workflow) doesn’t touch the process, standard, or policy, so long as it still meets the organisational requirements. A process measurement change (different reporting cadence, adjusted attainment response) doesn’t touch the standard or policy. A standard change (revised threshold) doesn’t touch the policy. The policy should survive a cloud provider migration. The standard could very well survive a tooling change but maybe not a provider change. The work instruction survives neither, and that’s fine because the team that owns it can update it without triggering governance review.

When one document tries to serve all four layers, this insulation doesn’t exist at all. A team that wants to update its vulnerability triage method has to wait for a governance review cycle, because the method description lives in the same document as the governance intent it implements. Teams stop updating and just do what they think is right. People come and go, priorities change, and workflows adjust organically, disconnected from organisationally agreed outcomes. Documentation drifts from reality, and the document becomes a historical artefact that everyone maintains for audit purposes while actual practice diverges.

Most security leaders who have to deal with these kinds of issues are interested in solving the problem better, and I can confirm from experience that this model works a lot better than the alternative.

Delegated authority, not unchecked authority

Layer separation doesn’t mean the security function can unilaterally weaken targets without oversight. The standard layer operates under delegated authority: the security function can refine targets within the risk appetite expressed by the policy layer. Splitting a 72-hour remediation window into 72 hours for internet-facing and 168 hours for internal-only systems might be an acceptable refinement and still aligned with policy; the governance intent is preserved and the standard is more precise. Changing 72 hours to 30 days on internet-facing systems is a material change to risk posture and would need escalation to governance and perhaps a clear alteration of policy.

The test is relatively straightforward: does this standard change alter the risk the organisation is accepting? If yes, it escalates. If no, it’s within delegated authority. This is the same pattern as financial delegation: a budget holder allocates within their budget without board approval, but exceeding the budget requires escalation. The invariant concept makes the boundary testable: if a proposed standard change would weaken or remove a policy invariant’s practical effect, that’s a material change regardless of how it’s framed.

The relationship to continuous assurance

This is the foundational piece for everything that follows in this series.

Invariants at every layer tell you exactly what to monitor. Each policy invariant becomes a high-level check (does the standard layer cover this, and is it passing?). Each standard invariant becomes a specific control check. Each process input and output invariant becomes a boundary gate check. Performance expectations give you the targets. Measurement definitions tell you how to calculate attainment. Conformance criteria tell you where to look for violations.
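The downward-pointing conformance described at the policy layer becomes a traceability lookup once each layer's invariants have identifiers. A minimal sketch; the identifiers and check results are invented for illustration:

```python
# Hedged sketch: a policy invariant passes when every standard check that
# refines it exists and is passing. All identifiers are hypothetical.
standard_results = {
    "STD-CRYPTO-AWS/kms-keys": True,
    "STD-CRYPTO-AWS/no-plaintext-logs": True,
}

# Which standard checks refine which policy invariant
refines = {
    "POL-01/payment-data-encrypted": [
        "STD-CRYPTO-AWS/kms-keys",
        "STD-CRYPTO-AWS/no-plaintext-logs",
    ],
}


def policy_invariant_passing(invariant: str) -> bool:
    """An invariant with no refining checks fails: coverage itself is asserted."""
    checks = refines.get(invariant, [])
    return bool(checks) and all(standard_results[c] for c in checks)
```

Note the deliberate choice that an invariant with no refining checks fails: absence of coverage is itself a conformance finding, which is exactly the policy-layer assertion described earlier.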

Together, these populate a continuous assurance capability: not continuous monitoring (which most organisations already have in some form) but continuous verification that operational state satisfies governance intent, with the traceability built into the documentation structure rather than assembled manually for each audit.

If your documentation doesn’t express invariants at each layer, you don’t know what to monitor. If it doesn’t separate process measurement from policy intent, you can’t tune operational targets without reopening governance documents. The documentation hierarchy is the prerequisite. The next article explores what becomes possible when that hierarchy is expressed as code.


  1. ISO 9001 distinguishes process from procedure: a process defines measurable objectives, inputs, and outputs; a procedure describes how to carry out an activity. See 9000 Store, “ISO 9001 Processes, Procedures and Work Instructions.” ↩︎

  2. COSO, “Internal Control — Integrated Framework” (1992, updated 2013). Principle 12: “the organization deploys control activities through policies that establish what is expected and in procedures that put policies into action.” ↩︎

  3. The conformance testing distinction (design, configuration, functional) parallels verification and validation in systems engineering: verification asks “did we build the thing right?” (configuration), while validation asks “did we build the right thing?” (functional). Neither alone is sufficient. ↩︎

Ready to restructure your governance documentation?

I help scaling companies build governance structures that are measurable, maintainable, and actually useful. Let's talk about what yours should look like.

Book a 30-minute conversation