
Red Advocates: Breaking Products Before Hackers Do

July 31, 2025

Note: Company and app names in this case study have been changed. This analysis is based on a composite of real data breach incidents to illustrate common security and legal risks in app development.

Let’s Talk About a Stupid Little Thing I Call “Red Advocacy”

So, you’re building the next big thing. You’re moving fast, shipping code, and your VCs are happy. The mantra is “move fast and break things,” right?

I’ve been there. I’ve been the founder, the first engineer, the guy pushing code at 2 AM fueled by lukewarm coffee and the fear of a competitor launching first. And I can tell you that “break things” is a great way to end up on the front page of TechCrunch for all the wrong reasons.

Remember that anonymous ‘dating’ app, spill? Yeah, they ended up leaking thousands of user ID photos from an unsecured S3 bucket. A classic. But it gets more subtle. I consulted for a FinTech startup, let’s call them “PaySphere,” who were so focused on their slick payment flow they didn’t realize you could create a negative balance in one account, transfer the “money” to another, and cash out. They lost about $8,000 in three days before they even knew what hit them.
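The PaySphere flaw boils down to a missing balance check on transfers. Here’s a minimal sketch of the guard that was absent; the function name and in-memory balances are hypothetical, and a real system would do this inside a database transaction:

```python
def transfer(balances: dict, src: str, dst: str, amount: float) -> None:
    """Move funds between accounts, refusing anything that would drive
    the source account negative. Purely illustrative: real money needs
    a DB transaction, not a Python dict."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if balances.get(src, 0) < amount:
        # This is the check PaySphere was missing.
        raise ValueError("insufficient funds")
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
```

Two lines of validation. The expensive part wasn’t writing them; it was having someone whose job was to ask whether they existed.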

Here’s how I explain it to CEOs: You know how you have a CFO who’s constantly asking “What’s the three-year TCO on this?” about every decision? You need someone asking “How can this blow up in our faces?” with the same frequency and authority. Your engineers think in terms of “Does it work?” Your red advocate thinks in terms of “How will it break, and what’s the blast radius when it does?” It’s the same paranoid mindset that makes good CFOs invaluable, just applied to product risk instead of financial risk.

In both cases, they were missing one person. The person in the room paid to be a paranoid jerk. The one who constantly asks the question nobody wants to hear: “That’s a cool feature, but how are trolls, criminals, and bored teenagers going to weaponize it?”

I call this role a Red Advocate. And you need one. Yesterday.

Why This Stuff Actually Matters (Beyond the Obvious)

Look, everyone knows a data breach is bad. You get fined, users leave, blah blah blah. That’s the generic slide in every security consultant’s deck. But let me tell you what it really feels like.

It feels like your star engineer, the one who should be building your next killer feature, instead spending three straight weeks combing through CloudTrail logs and patching a hole that should never have existed. It’s the call with your lawyer where they use phrases like “gross negligence” and “statutory damages under CCPA.”

I once saw a promising B2B SaaS company, “ConnectSphere,” completely lose a Series A funding round. They had a great product, solid traction. During due diligence, the investor’s tech team found a simple IDOR vulnerability in their API. You could literally change the user ID in the URL (/api/v1/user/123/data) to /124 and pull down another client’s entire dataset.
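The fix for an IDOR like that isn’t a prettier URL; it’s an ownership check on every single lookup. Here’s a minimal sketch (the record store and function names are hypothetical stand-ins for whatever their API actually did):

```python
# Hypothetical data store keyed by the user ID that appears in the URL.
RECORDS = {
    123: {"owner": "alice", "data": "alice's dataset"},
    124: {"owner": "bob", "data": "bob's dataset"},
}

def fetch_user_data(authenticated_user: str, user_id: int) -> dict:
    """Return a record only if the authenticated caller owns it.
    The vulnerable version skipped this check entirely, so editing
    /123 to /124 in the URL returned another client's data."""
    record = RECORDS.get(user_id)
    if record is None or record["owner"] != authenticated_user:
        # Same error for "missing" and "forbidden" so attackers
        # can't probe which IDs exist.
        raise PermissionError("not found")
    return record
```

The authorization check belongs in the handler, not in the client, because the URL is attacker-controlled input like everything else.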

The deal didn’t just get repriced; it evaporated. The investors walked. Trust wasn’t just broken with users; it was broken with the people who were about to write them a $10 million check. That’s not a “hidden cost,” that’s a death blow.

Okay, So What the Hell is a Red Advocate?

This isn’t your QA person. QA checks if the product works as intended. The Red Advocate’s entire job is to make the product work in ways it was never intended to.

Think of it like a red teamer from the cybersecurity world, but for your whole product, not just your network. They’re not just looking for an SQL injection (though they should be, please god, look for those). They’re thinking about the weird, squishy, human problems too.

This person should be your most product-aware, constructively pessimistic team member. Their job is to be the official, sanctioned “what if” person who challenges every assumption.

What These People Actually Do

It’s not just about running some automated scanner like OWASP ZAP and calling it a day. It’s a mindset.

They live in the design docs. Before a single line of code is written, they’re the one asking the PM: “So we’re collecting users’ dates of birth. Why? What’s the absolute worst thing that happens if that entire database leaks? Do we really need it?”

They think like a grifter. I worked with a red advocate who spent a day trying to figure out how to abuse our free trial. He discovered you could sign up with [email protected], [email protected], etc., and get infinite trials. Simple fix, but no one had thought to try it.

They love breaking things. They’re the one who will upload a 10GB file where you expect a 2MB profile picture, just to see what happens. They’ll paste a block of Chinese characters into a “First Name” field.

They know their tools, but they’re not tool-obsessed. Yeah, I run Burp Suite Pro and write custom SQLmap tamper scripts, but the biggest vulnerabilities I find are usually business logic flaws that no scanner catches. Like the time I found a race condition in a payment processing flow—if you clicked “Pay Now” fast enough (or wrote a simple Python script to do it), you could complete a purchase before the inventory check finished. The fix was literally adding a database transaction wrapper with a SELECT FOR UPDATE lock. Five lines of code. But it took someone thinking adversarially to find it, not someone running automated tests.
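The same idea, making the check-and-decrement atomic, can also be expressed as a single conditional UPDATE. Here’s a sketch against a hypothetical inventory table, demoed in SQLite since SELECT FOR UPDATE itself needs a server database like Postgres:

```python
import sqlite3

def purchase(conn: sqlite3.Connection, item_id: int) -> bool:
    """Attempt to buy one unit. The WHERE clause makes the stock
    check and the decrement a single atomic statement, so two rapid
    'Pay Now' requests can't both win the last unit."""
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - 1 "
        "WHERE id = ? AND stock > 0",
        (item_id,),
    )
    conn.commit()
    # rowcount is 1 only if stock was actually available.
    return cur.rowcount == 1
```

The vulnerable flow did a SELECT, decided in application code, then wrote; anything that races between the read and the write can be exploited.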

This can be annoying. Product managers might feel attacked. Engineers might get defensive about their “perfect” code. That’s why it has to be a formal role. Their job description is to be annoying for the greater good. (And honestly, sometimes that person is just me, fueled by too much coffee and a healthy dose of paranoia.)

So How Do You Actually Do This Without a FAANG Budget?

Right now you’re thinking, “Great. Another thing to spend money on. We can barely afford our AWS bill.” I get it. You don’t need to go hire a $250k/year CISO from Google.

Last time I ran one of these, the PM said “What if someone uploads a really big file?” The engineer said “We have a 10MB limit.” Then we spent twenty minutes figuring out what happens when someone uploads exactly 9.99MB of malicious JavaScript disguised as a CSV. Spoiler alert: Bad things. Really bad things.
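A size limit alone misses that case: you also have to ask whether the bytes are plausibly what the filename claims. Here’s a deliberately crude content sniff, purely illustrative, not a substitute for a real parser or sandboxed handling:

```python
MAX_UPLOAD = 10 * 1024 * 1024  # the 10MB limit from that design review

def looks_like_csv(payload: bytes) -> bool:
    """Cheap plausibility check for an 'is this really a CSV?' gate.
    Crude by design: a real system would parse the file with a CSV
    library and never execute or render the upload."""
    if len(payload) > MAX_UPLOAD:
        return False
    head = payload[:1024].lstrip()
    # Obvious markup/script payloads masquerading as data.
    if head.startswith((b"<script", b"<!DOCTYPE", b"<html")):
        return False
    try:
        text = head.decode("utf-8")
    except UnicodeDecodeError:
        return False
    # A CSV header row should split into at least two comma fields.
    first_line = text.splitlines()[0] if text else ""
    return "," in first_line
```

The point isn’t this particular heuristic; it’s that “we have a limit” answers the size question and nothing else.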

The key is to start. Anywhere.

The Numbers Game (For the Spreadsheet People)

Your CFO wants metrics? Here are some:

The average cost of a data breach for companies under 500 employees is $4.40 million (IBM, 2025). The average salary of a senior security engineer is around $165K. Even if you hired someone full-time and they prevented just ONE incident in three years, you’re looking at an 8.9x ROI. That’s better than most SaaS metrics your investors care about.
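For the spreadsheet people, the back-of-envelope math behind that 8.9x, using the figures above:

```python
# Figures from the paragraph above.
breach_cost = 4_400_000       # IBM 2025 average, companies under 500 employees
annual_salary = 165_000       # senior security engineer
years = 3

# One prevented breach vs. three years of fully loaded salary.
roi = breach_cost / (annual_salary * years)
print(round(roi, 1))  # -> 8.9
```

Quibble with the salary number or add benefits overhead and the multiple moves, but it stays comfortably above 1x.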

But here’s the number that really matters to engineers: Time to recovery. I’ve seen teams spend 3-6 months of development time dealing with breach aftermath—forensics, patches, compliance reports, customer communications. Those are features that never got built, customers that never got onboarded, competitors that gained ground while you were playing defense.

One startup I worked with calculated that their breach response consumed 2,400 engineering hours. At a blended rate of $75/hour (junior + senior devs), that’s $180K in internal costs alone, not counting the external lawyers, consultants, and PR firms.

It’s a Culture Thing, In the End

A red advocate is a role, but adversarial thinking is a culture. It’s about creating an environment where pointing out a flaw is rewarded, not punished.

You’ll get pushback. “You’re slowing us down!” “That’s an edge case, no one will ever do that.”

My response is always the same: “Someone will. And they won’t be as nice about it as I’m being.”

Building a resilient company isn’t just about product-market fit or a solid tech stack. It’s about surviving your own success. As you grow, you become a bigger, juicier target. The silly little bugs you ignored at 100 users become catastrophic vulnerabilities at 1,000,000 users.

So, who’s your paranoid jerk? If you can’t answer that question, you might want to figure it out. Before someone else does it for you.

Here’s what I tell every startup I work with: You don’t need to become a security company. You just need to not become a security cautionary tale. Your engineers want to build cool stuff. Your executives want sustainable growth. A red advocate helps you do both by making sure “cool stuff” doesn’t become “expensive stuff” and “sustainable growth” doesn’t become “sudden death.”

Start simple. Pick one person. Give them permission to be professionally paranoid. See what happens.

The worst case? You waste a few hours thinking about problems that never materialize. The best case? You prevent a company-ending disaster that absolutely would have materialized.

Those seem like pretty good odds to me.
