
The Security Debt Crisis: When "Move Fast and Break Things" Breaks Your Company

August 13, 2025

Let’s Have an Honest Conversation About Security Debt

It’s 11 PM on a Thursday. You’re staring at a pull request that’s a beautiful, horrifying mess. The goal is to get the big new feature out for the Friday demo with that enterprise client—the one that could make or break your quarter. The code works, but you know, deep in your gut, that it’s held together with duct tape and wishful thinking.

There’s a hardcoded API key because getting secrets management set up would take another day. The new database query is a lovingly hand-crafted string that looks suspiciously like a SQL injection vector. And the permissions on the new S3 bucket are basically “allow everything for everyone,” because you spent two hours fighting with IAM policies and finally just gave up to get the damn thing working.
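
For contrast, here’s roughly what the non-shortcut version of the first two items looks like. This is a minimal sketch assuming a Python service and a SQL database; the environment variable, table, and column names are invented for illustration. (Standing up a real secrets manager is its own project, but even reading the key from the environment keeps it out of source control.)

```python
import os
import sqlite3  # stand-in for whatever database driver you actually use

# Read the key from the environment (or, better, a secrets manager) instead of
# hardcoding it. The variable name here is purely illustrative.
API_KEY = os.environ["PAYMENTS_API_KEY"]

def find_user(conn: sqlite3.Connection, email: str):
    # Parameterized query: the driver handles quoting, so user input can't
    # rewrite the SQL the way a hand-built string can.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE email = ?",
        (email,),
    )
    return cursor.fetchone()
```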

In the PR description, you type the familiar lie: // TODO: Refactor and secure this post-launch. Someone on your team slaps a ✅ on it, the CTO says “ship it,” and you all hold your breath as the build goes green. You tell yourselves you’ll fix it next sprint.

You won’t. And that’s how your company starts to die.

I’ve Seen This Movie Before. It Doesn’t End Well.

A few years ago, I was advising a promising Series A health-tech startup. They had just landed a massive client and were scrambling to integrate. The dev team did exactly what I described above. They cut corners. They pushed a “temporary” fix with a known vulnerability in a third-party library because patching it broke three other dependencies and nobody had time for that.

The call didn’t come from the client’s CISO on Monday morning. That’s the sanitized, movie version.

The real call came from a junior support engineer on Saturday afternoon. “Hey,” he said, his voice shaking, “a customer just emailed us a link to a pastebin site with what looks like… our user list?”

It wasn’t a sophisticated attack. It was some script kiddie running an automated scanner who found the three-month-old vulnerability. They used it to get a foothold, then found the ridiculously permissive API endpoint the team had created for the “quick” integration.

By Monday, the story was on TechCrunch. By Tuesday, the churn graph looked like a cliff. A third of their users vanished in a week, and the support tickets from the rest were just… brutal. The big enterprise client? Their legal team sent a one-line email terminating the contract.

The company didn’t shut down overnight. It was a slow, painful bleed. But that weekend was the mortal wound. This wasn’t bad luck. It was the bill for months of security debt finally coming due.

Security Debt: It’s Not “Technical Debt”

Everyone loves to compare security debt to technical debt, calling it its “dangerous cousin.” Let’s be clearer.

Technical debt means your feature velocity slows down. Your developers get grumpy. You spend a few sprints refactoring instead of building new stuff. It’s a leaky faucet.

Security debt means a reporter is calling your CEO for comment while your lawyers tell you not to say anything. It’s a pipe bomb in your server room.

It’s every // TODO: Add auth here comment. It’s every ignored Dependabot alert. It’s every “we’ll do a pen test after the next funding round” promise. You’re not saving time; you’re taking out a high-interest loan from a loan shark who will break your company’s kneecaps to collect.

How We Got Here: Praising Speed, Ignoring the Wreckage

“Move fast and break things” is the mantra we all grew up on. But we forget it was coined when Facebook was a place to poke your friends, not a platform influencing global elections. The stakes are higher now. We’re building apps that handle people’s medical records, their life savings, their private conversations.

But the pressure is the same. Your product manager needs to hit their quarterly goal. The VCs on your board are asking about user growth, not your patch management policy. So we normalize cutting corners.

We skip a proper code review because the sprint ends tomorrow. We use a cool new open-source library without checking its security history. We tell ourselves we’re a small target, that hackers are only interested in Google or Amazon. It’s a comforting lie. In reality, automated scanners are constantly knocking on every door on the internet, looking for an easy way in. They don’t care if you’re a Fortune 500 or a three-person startup in a garage.

The Anatomy of a Disaster

(A note before we dig in: this is a fictionalized case study. The metrics and trends are composites inspired by patterns reported in the HHS Breach Portal and the Verizon 2025 Data Breach Investigations Report (DBIR); all characters, companies, and events are entirely fictional.)

Let’s look at that health-tech company again. Their disaster was a cocktail of three seemingly small shortcuts:

  1. The Outdated Library: They used a popular framework for authentication but were one major version behind—the one that patched a critical remote code execution vulnerability. Why? Because upgrading would have been a two-day project.
  2. The “Temporary” Permissions: To get two services talking to each other quickly, a DevOps engineer set cloud storage permissions to be world-readable (undoing that takes only the few lines sketched after this list). He even set a calendar reminder to fix it the following Monday. The breach happened on Saturday.
  3. The Plaintext “Metadata”: They encrypted the core medical records (good!) but stored a bunch of “metadata” in plaintext in the same database—things like patient names and email addresses. They figured it wasn’t as sensitive. The attackers couldn’t read the records, but they could match the user list to the encrypted files, creating a terrifying data package they sold on the dark web.
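
That second shortcut is also the cheapest one to undo. As a point of reference, here is a minimal sketch, assuming the storage in question was AWS S3 and the team used boto3; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Turn off every flavor of public access on the bucket. Properly scoped
# policies for the two services still take real IAM work, but this takes
# "world-readable" off the table while you do it.
s3.put_public_access_block(
    Bucket="example-patient-exports",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```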

None of these decisions were made by idiots. They were made by smart, stressed-out people trying to hit a deadline. The problem is, these shortcuts don’t exist in a vacuum. They combine and compound each other until they form a perfect, company-ending storm.

The two weeks they “saved” cost them everything.

Why We Keep Making These Mistakes

If it’s so obviously catastrophic, why do we keep doing it?

It’s not because we’re bad engineers. It’s because of the culture. We look at the survivors—the giants who moved fast and got lucky—and we think we can follow their path. We don’t see the graveyard of companies that tried the same thing and got wiped out.

And let’s be honest about incentives. You get a promotion for shipping that killer feature that drove a 10% increase in MRR. You don’t get a bonus for patching a vulnerability that might have been exploited. It’s impossible to celebrate a disaster that you prevented. Security is a cost center until it isn’t. And by then, it’s a catastrophe.

The Human Fallout Is the Real Cost

When that company went under, the headlines were about the HIPAA fines and the lost investment. But that’s not what I remember.

I remember the junior dev who had written the code with the original vulnerability. He was a good kid, smart and eager. It wasn’t his fault—his PR had been rushed through review by seniors who knew better. But he blamed himself. He quit the industry for two years.

I remember the support tickets. People begging to know whether their cancer diagnosis or therapy notes were now public. Their lives were permanently impacted because the team wanted to save a few days of development time.

That’s the real cost. It’s not about money. It’s the knot in your stomach when you realize your work—something you were proud of—caused real harm to people. That feeling doesn’t go away.

So, How Do We Stop the Bleeding?

You can’t just stop and declare a “security sprint.” Your product manager will laugh you out of the room. You have to be pragmatic.

  1. Start with the Easy, Big Wins: You know what they are. Turn on MFA for everything—your cloud provider, your source control, your email. It’s annoying for 30 seconds a day and stops 99% of opportunistic account takeovers. Set up automated dependency scanning today. It’s a solved problem. Let a bot find the low-hanging fruit for you (one way to wire that into CI is sketched after this list).
  2. Bake It In: Argue for a “debt tax”: 10% of every sprint is dedicated to maintenance, refactoring, and security. Don’t call it “security work.” Call it “improving stability” or “reducing outage risk.” Frame it in terms of business value: “Fixing this now prevents the kind of sev-1 outage that would cost us a week of dev time next quarter.”
  3. Ask the Hard Questions: In your next planning meeting, ask: “Do we know what sensitive data this feature touches?” “What’s our plan for patching this if a zero-day drops?” “Who on this team owns the security of this component?” If the answer is a blank stare, you’ve found security debt.
  4. Change Your Definition of “Done”: “Done” doesn’t mean “it works on my machine.” Done means it’s tested, documented, monitored, and secured. Make security part of the process, not an afterthought.
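
To make the dependency-scanning half of item 1 concrete, here is a minimal sketch of a CI gate. It assumes a Python project with a requirements.txt and the pip-audit tool available on the PATH; substitute whatever scanner fits your stack:

```python
"""Fail the build when requirements.txt contains known-vulnerable packages."""
import subprocess
import sys

def main() -> int:
    # pip-audit exits non-zero when it finds published vulnerabilities,
    # so its return code can gate the pipeline directly.
    result = subprocess.run(["pip-audit", "-r", "requirements.txt"])
    if result.returncode != 0:
        print("Known-vulnerable dependencies found; failing the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Run it as a required check on every pull request, so nobody has to remember to do it.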

This isn’t about achieving perfect security. That’s impossible. It’s about building a culture where you can move fast and sleep at night. It’s about treating security not as a feature, but as a fundamental requirement for building things that last.

Because the most disruptive companies aren’t just the ones that move fast. They’re the ones that are still around in five years to see the impact. Don’t let your big idea end up as a cautionary tale on someone else’s blog post. Pay your debts.
