Field Notes · Production Readiness · Security

What “production-ready” actually means for a B2B SaaS

A five-minute laptop demo and a system you’d let a paying client touch are not the same thing. Here’s the gap, broken into the unsexy pieces that actually matter.

Kashan Ali · Cofounder · Forward Deployed Engineer
3 min read · May 14, 2026

There’s a moment in every early-stage product where the demo works, the screenshots look great, and the founder’s first instinct is to ship it. Then a real client logs in, hits an edge case the demo never touched, and you realize “production-ready” was doing a lot of heavy lifting in your head.

We just ran a five-phase production audit on our own client portal. The codebase passed every test. The marketing site looked polished. The backend served traffic. By the standards of “does it work?”, it was ready.

It wasn’t. We found 46 gaps.

This isn’t a critique of how we built it — it’s how every system that has ever shipped looks when you actually score it. The gaps cluster in a few unsexy places that demos systematically hide. Here’s what we saw, organized into the dimensions we use as a checklist before we’d let a paying client touch anything.

Security beyond the login screen

Most teams nail authentication — strong passwords, two-factor, the works. The harder bar is authorization: once a user is logged in, does every action they take check whether they’re actually allowed to do this thing to this object?

We found one place where an internal user could remove a teammate from a project they had no admin role on. The fix was nine lines. The bug had been live for two months. Nobody noticed because in the demo, there was only one user clicking around.

You don’t catch these by writing more tests. You catch them by walking every action a user can take and asking: “if a hostile or careless user did this, what would happen?”
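To make that concrete, here’s a minimal sketch of that per-object check as an Express route handler. The helpers (getProjectRole, removeMember) and the auth middleware setting req.userId are hypothetical stand-ins for your own stack, not our actual code:

```typescript
import express from "express";

const app = express();

// Hypothetical data-layer helpers; swap in your own queries.
async function getProjectRole(userId: string, projectId: string): Promise<string | null> {
  return null; // e.g. SELECT role FROM project_members WHERE user_id = ? AND project_id = ?
}
async function removeMember(projectId: string, memberId: string): Promise<void> {}

app.delete("/projects/:projectId/members/:memberId", async (req, res) => {
  // Authentication already told us who this is (assume middleware set req.userId).
  const actorId = (req as any).userId as string;

  // Authorization is the separate question: is this user an admin of *this* project?
  const role = await getProjectRole(actorId, req.params.projectId);
  if (role !== "admin") {
    return res.status(403).json({ error: "forbidden" });
  }

  await removeMember(req.params.projectId, req.params.memberId);
  res.status(204).end();
});
```

The fix really is a few lines. The work is asking the question on every route, not just this one.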

A reliable record of who did what

If something goes wrong tomorrow — a deletion that shouldn’t have happened, a credential that leaked — can you reconstruct exactly who did what, when, with proof? “Should have logged that” is the worst phrase in incident response.

Think of it like security camera footage. We had the cameras installed — but seven important parts of our system weren’t actually being recorded. Project status changes, credential reveals, document submissions: all happening silently. The fix is mechanical: walk every important action and turn the recording on. The discipline is doing it before you need to look something up, not after.
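The helper itself is tiny; the discipline is the call sites. A minimal sketch, with a hypothetical Db interface and illustrative event names:

```typescript
// A minimal append-only audit writer. The Db interface and table name are
// illustrative stand-ins for whatever data layer you already have.
interface Db {
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}

interface AuditEvent {
  actorId: string;                    // who
  action: string;                     // did what, e.g. "project.status_changed"
  targetId: string;                   // to which object
  metadata?: Record<string, unknown>; // any context you'll want as proof later
}

async function audit(db: Db, event: AuditEvent): Promise<void> {
  // Timestamped server-side and awaited before the action returns,
  // so a crash can't swallow the record.
  await db.insert("audit_log", { ...event, at: new Date().toISOString() });
}

// The call you add at each previously silent action:
async function revealCredential(db: Db, actorId: string, credentialId: string) {
  await audit(db, { actorId, action: "credential.revealed", targetId: credentialId });
  // ...then actually return the secret to the caller.
}
```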

Compliance promises you can’t deliver

Our privacy policy publicly told users they could download all their data, and that they could delete their account anytime. Both are required under European privacy law (GDPR Articles 15 and 17). Neither was actually built — a user wanting to do either would have had to email support and wait.

This isn’t a legal-hair-splitting problem. It’s a trust problem. The moment a customer notices the mismatch between what your policy says and what your product does, you’ve burned credibility you can’t easily get back.

If you write it in your privacy policy, you ship the button.
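In practice, “ship the button” is usually two small self-serve endpoints. A sketch assuming Express; the paths and the helpers over your data layer are illustrative, not a fixed API:

```typescript
import express from "express";

const app = express();

// Hypothetical helpers over your own data layer.
async function collectUserData(userId: string): Promise<object> { return {}; }
async function deleteUserAccount(userId: string): Promise<void> {}

// Article 15-style access: hand users everything you hold on them.
app.get("/api/me/export", async (req, res) => {
  const userId = (req as any).userId; // from your auth middleware
  const data = await collectUserData(userId);
  res.setHeader("Content-Disposition", "attachment; filename=my-data.json");
  res.json(data);
});

// Article 17-style erasure: self-serve, no support ticket required.
app.delete("/api/me", async (req, res) => {
  await deleteUserAccount((req as any).userId);
  res.status(204).end();
});
```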

Knowing something’s broken before your customers do

We had error tracking installed, but the credentials weren’t plugged in. We had alert code written, but the destination (the Slack channel that should ping us) wasn’t connected. The site could go down at 3am and the only person who’d know is whoever happened to refresh the page next.

That’s not a monitoring strategy. That’s hope.

The fix isn’t expensive: finish wiring the credentials, point a free uptime monitor at the site, hook the alert into a channel someone actually watches. Total cost: fifteen minutes and zero dollars on the free tiers. The reason teams skip it is that nothing’s broken right now, so it doesn’t feel urgent. It always is, the moment something goes wrong.
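The last-resort alert is about this much code. A sketch assuming a standard Slack incoming webhook in SLACK_WEBHOOK_URL; the message and exit behavior are illustrative:

```typescript
// A process-level safety net: if an uncaught error escapes, ping a channel
// someone actually watches before letting the supervisor restart the process.
async function alertSlack(message: string): Promise<void> {
  const url = process.env.SLACK_WEBHOOK_URL;
  if (!url) return; // exactly the failure mode above: wired but unplugged
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: message }),
  });
}

process.on("uncaughtException", (err) => {
  // The process is in an unknown state; alert, then exit and let it restart.
  void alertSlack(`Portal crashed: ${err.message}`).finally(() => process.exit(1));
});
```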

The 80% that isn’t features

The pattern across all of this: production-ready isn’t about shipping more features. It’s about the supporting infrastructure that lets the features survive contact with reality. Audit logging. Automated cleanup of expired data. Rate limits on public endpoints. Backups you’ve actually tested restoring (most teams have backups; very few have ever tried to restore one). A written runbook for what to do at 2am when something’s on fire. Tests that catch when you accidentally break a working flow.
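To pick one off that list: a rate limit on a public endpoint can start as a few lines of middleware. This fixed-window version is an in-memory, single-process sketch (behind a load balancer you’d back it with a shared store like Redis), and the numbers are illustrative:

```typescript
import type { Request, Response, NextFunction } from "express";

const WINDOW_MS = 60_000;   // 1-minute windows
const MAX_REQUESTS = 30;    // per key, per window
const hits = new Map<string, { count: number; windowStart: number }>();

// Keyed per IP here; an API key or user id is often a better key.
export function rateLimit(req: Request, res: Response, next: NextFunction) {
  const key = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(key);

  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now });
    return next();
  }
  if (++entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: "too many requests" });
  }
  next();
}
```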

Every one of these is unsexy. Every one of these is what separates a system you’d let a paying client touch from a really good demo.

How we ship it without it taking three months

We use a phased audit pattern: critical gaps that block customer onboarding first, then hardening that prevents incidents, then polish that improves trust. Each phase ships independently. Each phase has machine-verifiable proof — tests pass, scans clean, the system behaves the way the policy says it does.
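One way to make “the system behaves the way the policy says it does” machine-verifiable is a smoke test that fails whenever a policy promise isn’t actually shipped. A sketch using Node’s built-in test runner; the endpoint and environment variables are assumptions carried over from the export sketch above:

```typescript
import test from "node:test";
import assert from "node:assert/strict";

const BASE = process.env.PORTAL_URL ?? "http://localhost:3000";

test("the policy's data-export promise is actually shipped", async () => {
  const res = await fetch(`${BASE}/api/me/export`, {
    headers: { Authorization: `Bearer ${process.env.TEST_USER_TOKEN}` },
  });
  // 200 means the button exists; 404 means the policy is writing
  // checks the product can't cash.
  assert.equal(res.status, 200);
});
```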

The whole pass took us three days, partly because we used parallel AI agents working on different concerns in isolated branches so they couldn’t step on each other. But the AI is the multiplier, not the trick. The actual leverage is knowing what to look for — having the checklist of boring things that always need fixing.

If you’re sitting on a B2B SaaS that works, but you wouldn’t quite let a real client touch it yet, the gaps are probably the ones above. Walk them in order. Ship the fixes one phase at a time.

// Want more?

Subscribe and we’ll email the next one straight to you.

Sign up below in the footer — one email a quarter, the actual work, no marketing fluff.