You made a few security decisions last week. At least half a dozen of them.
An engineer spun up a new database and someone chose a SaaS vendor and ticked “yes” on the terms of service without reading the data processing agreement. A contractor got access to your production environment because it was faster than figuring out a staging setup, and your CTO decided that fixing the authentication bug could wait until after the release.
None of these felt like security decisions. All of them were.
This is how most startups handle security: not through neglect, but through a thousand small choices that nobody recognised as choices at the time, each one quietly setting the terms for what happens next.
You don’t need a security team or a compliance framework to change this. You need three questions.
What are we actually protecting?
This sounds obvious. It probably isn’t.
Ask your CTO what your most sensitive data is, then ask an engineer, then ask whoever handles customer onboarding. You will very likely get three different answers, and the gap between those answers is your actual security risk. Not some theoretical threat model; the basic fact that your own team doesn’t agree on what matters.
Threat modelling is useful, but at your stage it doesn’t need to be a formal exercise. It needs to be a conversation. Sit your technical leadership in a room for an hour and answer one question: if someone broke in tomorrow, what would hurt us most?
A threat model is simply a structured way of thinking about what could go wrong, who might cause it, and what you’d lose. At an early-stage startup, it can be a whiteboard session; it doesn’t require a consultant or a framework.
This site has a couple of frameworks that may help, but a straightforward discussion with sticky notes and a whiteboard will get you a surprising way down the path. The only preparation required is assembling the right people.
For most startups between 50 and 200 people, the answer falls into one of three buckets: customer data, intellectual property, or access credentials that could compromise either. Once you know which bucket matters most, you’ve already made better security decisions than companies ten times your size.
The test
Can your CTO, in a few sentences, tell a prospective customer what data you hold, where it lives, and who can access it? If the answer is no, or if it takes more than three sentences, you haven’t answered this question yet.

What are we willing to risk?
Every startup carries risk. You accepted risk when you chose a single cloud region because multi-region was too expensive. You accepted risk when you gave five engineers admin access to production because role-based access control would take two weeks to set up. You accepted risk when you skipped encryption at rest because the performance hit wasn’t worth it at your scale.
These were probably the right calls. The problem isn’t that you made them; it’s that you may not have made them consciously.
This is risk appetite, and it changes with your stage. At Series A with 40 people and no enterprise customers, your tolerance for certain risks is legitimately higher because you’re moving fast, resources are scarce, and the consequences of most security gaps are still contained. That’s fine, provided you’ve actually decided.
The problem comes when nobody writes it down. Eighteen months later, you’re in a customer security review and someone asks why your production data isn’t encrypted at rest. The answer you really don’t want to give is “we never got round to it.” That’s a very different answer from “we assessed the risk at our stage and chose to prioritise other things first; here’s our timeline for addressing it.”
The first answer loses deals. The second one can win them.
The trap
Founders often avoid explicit risk conversations because they feel like admitting weakness. The opposite is closer to the truth. Investors and enterprise customers tend to respect founders who can articulate what risks they carry and why. It signals maturity, not vulnerability.

What’s the minimum we need to do right now?
This is the question founders seem afraid to ask. It sounds unserious. In front of a security consultant, it feels like asking a doctor “what’s the least amount of exercise I can get away with?”
It’s probably the most strategically mature question you can ask.
A Series A company with 60 people applying the security controls of a 500-person pre-IPO company isn’t being thorough; it’s likely wasting money, slowing down engineering, and building processes few people will follow because they’re designed for an organisation that doesn’t exist yet.
Security, like everything else in a startup, is a resource allocation problem: you have limited engineering time, limited budget, and a product to ship and sell. The question isn’t whether security matters, because it does. The question is what security work creates the most protection for the least drag on your business at your current stage.
At 50 people with no enterprise customers, that might mean:
- Get your credentials out of your source code,
- Turn on multi-factor authentication everywhere, and
- Make sure the core team knows what you’d do if you got breached.
That’s it. Less than a week of work, not a six-month programme.
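The first item on that list is also the easiest to automate. As a rough illustration only (not a substitute for a dedicated scanner such as gitleaks or trufflehog), a few lines of Python can flag the most common credential shapes before they reach your repository; the patterns below are examples, not an exhaustive set:

```python
import re

# Illustrative patterns for common credential shapes. A real deployment
# should use a purpose-built secret scanner with a maintained rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)(?:password|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return every pattern match found in a blob of text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

def scan_file(path: str) -> list[str]:
    """Scan one file; skip anything that isn't readable UTF-8 text."""
    try:
        with open(path, encoding="utf-8") as f:
            return find_secrets(f.read())
    except (UnicodeDecodeError, OSError):
        return []
```

Wiring something like this into a pre-commit hook or CI step is an afternoon of work, and it catches the embarrassing cases before they ever need rotating.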
At 150 people with enterprise pipeline, the list grows, but not by as much as you might think. Document your practices, get a handle on privileged access and vendor access, implement joiner and leaver processes, start monitoring and responding to security events proactively, build a basic incident response plan. You’re still probably not talking about a dedicated security team.
Minimum viable security is not about cutting corners but about doing the right things in the right order instead of trying to do everything at once and doing none of it well.
Think of minimum viable security like technical debt: you manage it deliberately, you know where it is, and you have a plan for when to address it. Carefully managed, tech debt can fuel growth rather than hinder it.
Security debt works the same way, except the interest payments are less predictable and occasionally catastrophic.
The reframe
You’re not asking “how little can we get away with?” You’re asking “what gives us the highest security return on the time we invest this quarter?” That’s a resource allocation decision, not a corner-cutting exercise.

The question nobody asks
What happens if you don’t ask the first three questions and just carry on as you are for another 12 months?
This is where most founders go quiet, not because they’re negligent but because they don’t know. Security debt is invisible right up until the moment it isn’t.
Here’s what typically happens: security problems compound, but not linearly. They compound like financial debt, slowly at first, then with increasing force.
At 50 people, fixing a security problem is a conversation: your CTO can change a policy over lunch, an engineer can fix a misconfigured database in an afternoon, and the blast radius is small because the team is tight and everyone knows what’s going on. That changes fast.
By 200 people, that same problem is a project involving multiple teams and a change management process, because you can’t just push to production any more. The engineer who set up the original system left six months ago and nobody fully understands the configuration, so you’re looking at weeks, not hours.
At 500 people, it’s a programme: external consultants, board reporting, audit remediation, customer notification if data was involved, and legal review. The thing that would have taken an afternoon at 50 people now takes six months and a six-figure budget. I’ve watched this progression enough times that I can roughly predict the budget from the headcount at which the problem was first ignored.
The maths
This isn’t hypothetical. Companies routinely spend 10-50x more fixing security problems at scale than it would have cost to address them early. The technical fix itself may not be harder; the organisational complexity around it has exploded.

The compounding isn’t just cost but opportunity. Every person-month you spend on remediation at 300 people is a person-month your engineering team isn’t shipping features, and every customer security review that turns up surprises is a deal that slows down or dies while your board meeting turns from a status update into a fire drill.
And the founders who end up in the worst position are usually the ones who cared about security but never found the time. They weren’t reckless; they were busy, the business was growing, and there was always something more urgent.
Until there wasn’t.
What to do with this
You don’t need to solve everything today. You need to spend an hour (one hour, that’s all) with your CEO, CTO, and a couple of people from product and engineering, and answer three questions:
What are we actually protecting? Get specific: name the data, name the systems, name the consequences.
What are we willing to risk? Be honest. Write down the risks you’re currently carrying and decide which ones you’re accepting deliberately versus which ones you didn’t know about.
What’s the minimum we need to do this quarter? Pick two or three specific actions that materially reduce your biggest risk, not a security transformation.
Then write it down, not in a policy document few people will read but in a shared doc your leadership team can reference. One page covering what you’re protecting, what you’re accepting, and what you’re doing about it this quarter.
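One way to structure that page (every heading and detail below is a placeholder, not a prescribed format):

```markdown
# Security position — this quarter (review quarterly)

## What we’re protecting
- Customer PII in the production database
- Source code and deployment credentials

## Risks we’re accepting, deliberately
- No encryption at rest yet — revisit when the first enterprise deal closes
- Five engineers hold production admin access — role-based access planned next quarter

## What we’re doing this quarter
1. Move all credentials out of source code into a secrets manager
2. Enforce multi-factor authentication on every account
```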
That one page is worth more than a SOC2 certificate you’re not ready for, more than a security hire you can’t yet support, and more than any tool you could buy, because it forces the conversations that everything else depends on.
It also turns out to be the foundation for every security conversation you’ll have with people outside your company. When a customer sends a security questionnaire, when an investor’s technical team asks about your posture, when a regulator wants to understand how you handle data: the answers all trace back to these three questions. The companies that handle those conversations well aren’t the ones with the most controls but the ones who can articulate their position clearly because they’ve actually thought it through.
You’re making security decisions every week whether you mean to or not. The only question is whether you’re making them on purpose.
