What does good security actually look like?

You achieved SOC2. You run pen tests. Does that mean you're secure?

You got SOC2 because customers demanded it. You run annual pentests because the board expects it. Both were the right calls, and if you’re honest with yourself, you suspect that the certificate and your actual security posture are two different things.

You’re probably right. SOC2 measures whether you documented your controls and whether an auditor found evidence they exist. It doesn’t measure whether those controls work under pressure, whether your engineers follow them under deadline, or whether your team would notice an intrusion. A clean pentest tells you that a specific testing firm, with a specific scope, in a specific time window, didn’t find critical vulnerabilities. It doesn’t cover what they didn’t look at or what appeared the week after they finished.

SOC2 (System and Organisation Controls 2) is an audit framework that evaluates how a company manages data based on five “trust service criteria”: security, availability, processing integrity, confidentiality, and privacy. A SOC2 Type I report confirms controls exist at a point in time. A Type II report confirms they operated effectively over a period (usually 6-12 months). It’s the most commonly requested compliance certification for B2B SaaS companies selling to enterprise customers.
A pentest or penetration test is a controlled, authorised security assessment where skilled testers simulate real-world attacks on systems, applications, networks, or devices to find vulnerabilities an attacker could exploit.

None of this is an argument against compliance. You need it, your customers require it, and the discipline of going through the process has real value. But if compliance is the only lens through which you evaluate your security, you’re looking at a photograph and hoping it’s a video.

The more useful question is: what does genuine security look like in a scaling company? The answer is surprisingly observable.

It’s observable

The signals are simple, and you can see them without a report or a dashboard.

Engineers tend to mention security implications during design discussions without anyone prompting them. Someone shares a suspicious email in a Slack channel and three people respond with “yeah, spotted that one too, pretty well-targeted” rather than silence or embarrassment. A developer notices an overly broad access permission during a code review and flags it, not because a policy requires it, but because it looked wrong.

When something goes wrong, whether it’s a misconfiguration, a near-miss, or a genuine incident, the first question is “what happened and what do we learn?” not “whose fault is this?” People report mistakes early because they’ve learned that transparency leads to fixes, not punishment.

A quick test

Ask your engineering team when the last security near-miss was. If nobody can remember one, you probably don’t have great security; you have a reporting problem, because near-misses happen constantly. The question is whether your culture surfaces them or buries them.

These behaviours aren’t the result of training programmes or policy documents. They come from an environment where security is treated as a shared responsibility and where people feel safe raising concerns, and that environment doesn’t happen by accident.

It’s distributed

The most important thing a security hire can achieve is not a set of controls but an organisation that makes good security decisions without them in the room.

This sounds counterintuitive. You hired a security person so that security would be someone’s job. But if security is only their job, you’ve created a bottleneck at best and a single point of failure at worst. The goal is an organisation where security thinking is embedded in how everyone works, with the security hire as the person who enables that rather than gatekeeps it.

I saw this at a large UK financial institution with over 2,000 engineers, whose security team had been pushing code analysis adoption through mandates for years. Just 26% of repositories had any SAST or DAST tooling configured, and nothing was moving.

SAST (Static Application Security Testing) analyses source code for vulnerabilities without running the application. DAST (Dynamic Application Security Testing) tests running applications from the outside, simulating attacks. Together they catch different classes of security issues. SAST finds problems in how code is written; DAST finds problems in how the application behaves.
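To make the distinction concrete, here is a minimal Python sketch of the kind of flaw a SAST tool flags. The first function interpolates untrusted input straight into a SQL string, a pattern static analysis catches just by reading the code; the parameterised version is what the tool would suggest instead. (The table and data are invented for illustration.)

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SAST flags this statically: untrusted input is interpolated
    # straight into the SQL string. The injection risk is visible in
    # how the code is *written*, without running anything.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The parameterised form passes input out-of-band, so the
    # database never interprets it as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo data (invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                # classic injection payload
print(find_user_unsafe(conn, payload))  # every row leaks
print(find_user_safe(conn, payload))    # no rows leak
```

A DAST scanner, by contrast, would only find the same flaw by firing payloads like the one above at the running application and observing how it responds.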

The fix wasn’t more mandates. They baselined coverage by team and published the results as league tables, visible to everyone and updated regularly. Within six months, coverage moved from 26% to 97%. The tools had been available all along. What changed was that engineering teams could see where they stood relative to their peers, and nobody wanted to be at the bottom. Security shifted from something the security team nagged about to something engineering teams owned.

Good security isn’t a team, a tool, or a certificate. It’s whether your organisation makes sound security decisions when nobody’s watching.

That pattern scales down: at a 60-person startup, you don’t need league tables across 2,000 engineers, but the principle holds. Make security visible, give teams ownership, and measure progress openly. A simple dashboard showing which services have security controls configured and which don’t will do more for your security posture than a 40-page policy document nobody reads, and you’ll develop the kind of culture where people want to build things the right way.
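A dashboard like that can start as something very small. The sketch below is one way to do it, assuming your repos are checked out locally; the marker file names and team mapping are placeholders for whatever scanners you actually run. It checks each repository for a recognised scanner configuration and prints a league table per team, best coverage first:

```python
from pathlib import Path

# Hypothetical marker files; swap in the configs your tools actually use.
TOOLING_MARKERS = [
    ".semgrep.yml",                   # Semgrep rules
    "sonar-project.properties",       # SonarQube config
    ".github/workflows/codeql.yml",   # CodeQL workflow
]

def has_security_tooling(repo: Path) -> bool:
    """True if the checked-out repo contains any recognised scanner config."""
    return any((repo / marker).is_file() for marker in TOOLING_MARKERS)

def league_table(repos_by_team: dict[str, list[Path]]) -> list[tuple[str, float]]:
    """(team, percentage of repos with tooling) pairs, best coverage first."""
    rows = []
    for team, repos in repos_by_team.items():
        covered = sum(has_security_tooling(repo) for repo in repos)
        rows.append((team, round(100 * covered / len(repos), 1)))
    return sorted(rows, key=lambda row: row[1], reverse=True)
```

Run it weekly and publish the output in a shared channel. The point is visibility, not sophistication.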

It’s proportionate

Good security matches your stage. This is where founders often go wrong in the other direction: having seen the compliance light, they over-engineer.

A Series A company with 60 people that has clear asset documentation, sensible access controls, and engineers who think about security during design reviews is measurably more secure than a Series C company with SOC2, ISO 27001, and a trust portal but engineers who routinely bypass controls because those controls don’t fit how they actually work.

Over-engineering security at the wrong stage doesn’t just waste money; it often creates resistance, because engineers learn to see security as an obstacle, find workarounds, and stop engaging with the security hire. When you actually need their cooperation on something that matters, whether that’s responding to an incident or preparing for a customer audit, you’ve already burned through your goodwill.

The overcorrection trap

The founders who overcorrect on security are often the ones who had a scare: a near-miss, a failed customer review, a board question they couldn’t begin to answer. The instinct is to lock everything down, but the result is usually an engineering team that resents security and a set of controls that look good in an audit but don’t reflect how work actually happens.

Proportionate security means your controls match your actual risks, your actual stage, and your actual team. It means being honest about what matters right now versus what will matter in 18 months, and building for where you are rather than where you might be.

Five conversations that tell you where you stand

You don’t need an audit to assess this. You need five conversations this week:

Ask an engineer about the last security near-miss. If they can name one and tell you what changed afterwards, your reporting culture is working. If they look blank or uncomfortable, it isn’t.

Ask your CTO how they’d explain your security posture to a customer. If they can do it clearly and confidently in under two minutes, your security story is coherent. If they hedge, deflect, or reach for the SOC2 certificate, the story hasn’t been built yet.

Ask someone who joined in the last three months what they learned about security during onboarding. If the answer is “nothing specific,” security isn’t embedded in how people join your organisation.

Ask your security person (or whoever owns security in practice) what they’d change if they had full authority. The gap between what they’d do and what they’re able to do tells you how much organisational friction exists around security, or how checked out they may already be.

Ask yourself: when was the last time security came up in a leadership discussion that wasn’t triggered by a customer request or an incident? If security only surfaces reactively, it’s not yet part of how your organisation thinks. It’s something that happens to you rather than something you do.

What you're looking for

You’re not looking for perfect answers. You’re looking for signs that security is a living part of how your organisation operates, not a set of documents that satisfied an auditor. The difference between those two things is the difference between being secure and hoping you’re secure.

Culture keeps you secure

The compliance work you’ve done isn’t wasted: it opened deals, it forced discipline, and it gave you a foundation that would have taken much longer to build without the pressure of an audit. But if it’s the only thing you’ve built, you’re relying on documentation to do the work of culture, and documentation doesn’t respond to incidents, catch mistakes, or make decisions under pressure. Your people do that.

Good security is what happens when your whole organisation, not just the security hire or the compliance artefacts, makes sound security decisions as part of daily work. It’s observable in behaviour, distributed across teams, and proportionate to your actual risks.

The founders who get this right aren’t the ones who spend the most on security. In my experience, they’re the ones who build organisations where security is something everyone does rather than something done to them.

Not sure where your security actually stands?

Book a 30-minute conversation. We'll talk about what's working, what's not, and what to focus on next.