By Scott Armbruster

You Don't Know What You've Built


Everything I do across client work, personal projects, and business operations gets logged automatically. Each morning a pipeline reads those session logs and turns them into this post. Nothing here is hypothetical — these are real systems, real friction, real decisions from someone who’s been building technology for over 23 years. This is what working with AI actually looks like.

The Day at a Glance

  • Ran an audit on my own repos and found a gap between what I thought I’d built and what’s actually there
  • Decomposed a data pipeline by city instead of by technical layer, and it changed how I think about failure
  • Six projects touched in one day, and the bottleneck wasn’t the work
  • The WSJ piece about AI intensity that gets the diagnosis backwards
  • 84% of the world hasn’t used AI and we’re still arguing about prompts
  • A cadence question I still don’t have an answer to

The Scariest Audit Is the One You Run on Yourself

A consulting engagement sent me spelunking through my own repositories today. The task was straightforward: map out what services are offered, what technologies are actually in use, what integrations exist across the portfolio. Scan env files for API keys. Read config files. Build a picture of the real technology footprint versus the assumed one.

Every organization thinks they know what they’ve built. None of them actually do. The gap between “what we think our stack is” and “what’s actually deployed” grows quietly, like technical debt’s less visible cousin. I call it inventory drift.

The interesting part wasn’t finding surprises. It was how fast the discovery happened. An AI agent tore through dozens of repos in minutes, reading configs, grepping for tokens, cross-referencing service definitions. What would have been a week of manual archaeology became an afternoon. But speed isn’t the real value. The real value is that the audit actually happens. When discovery is expensive, you skip it. When it’s cheap, you do it quarterly. And quarterly inventory beats annual ignorance every time.
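For flavor, here’s a minimal sketch of that kind of scan in Python. Everything specific in it is an assumption rather than the actual tooling: the repos-under-one-directory layout, the service-name patterns, the file types it bothers to read. The real agent did much more than pattern-matching, but the shape is the same: walk everything, extract key names (never values), and build a footprint you can diff next quarter.

```python
#!/usr/bin/env python3
"""Inventory-drift scan sketch: which env keys and services does each repo
actually reference? Secret values are never read, only key names."""
import re
from pathlib import Path

REPOS_ROOT = Path.home() / "repos"  # assumption: every repo checked out under one dir
ENV_KEY = re.compile(r"^([A-Z][A-Z0-9_]+)\s*=", re.M)  # KEY=value lines in .env files
SERVICE_HINT = re.compile(r"\b(stripe|twilio|openai|aws|postgres|redis)\b", re.I)

def scan_repo(repo: Path) -> dict:
    """Collect env var names (never their values) and service hints for one repo."""
    keys, services = set(), set()
    for f in repo.rglob("*"):
        if ".git" in f.parts:
            continue
        try:
            if not f.is_file() or f.stat().st_size > 1_000_000:
                continue
            if f.name.startswith(".env"):
                keys.update(ENV_KEY.findall(f.read_text(errors="ignore")))
            elif f.suffix in {".yml", ".yaml", ".toml", ".json"}:
                services.update(m.lower() for m in SERVICE_HINT.findall(f.read_text(errors="ignore")))
        except OSError:
            continue
    return {"env_keys": sorted(keys), "services": sorted(services)}

for repo in sorted(p for p in REPOS_ROOT.iterdir() if (p / ".git").exists()):
    info = scan_repo(repo)
    print(f"{repo.name}: {len(info['env_keys'])} env keys, services: {info['services'] or '-'}")
```

Twenty-odd lines like this won’t replace an agent reading configs in context, but even the crude version surfaces repos whose env keys nobody can explain.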

There’s a quick gut check that’s been useful: for each system you own, can you explain what it connects to without reading the code? Does anyone besides you know it exists? Has anyone recently verified that it works as expected? If any answer is no, that system is a liability wearing a feature’s clothing. Most teams have five to ten of these hiding in plain sight.

Geographic Decomposition Beats Technical Decomposition

Separately, I ran a data collection pipeline across multiple cities for an events platform. The instinct with parallel work is to decompose by technical layer. One agent handles parsing, another handles storage, another handles validation. That’s wrong for this kind of problem.

Instead, I decomposed by geography. One agent per city. Each agent owns the full vertical: find sources, scrape events, validate data, write output files. San Francisco, Seattle, DC, each running independently, each owning its complete slice.

This matters because the failure modes are local, not global. If Seattle’s source changes its HTML structure, only the Seattle agent fails. DC keeps running. You get partial results instead of total failure. Decompose along the axis where failures are independent, not along the axis that looks cleanest on a diagram.
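Here’s a minimal sketch of that shape, with fetch_events and validate as hypothetical stubs standing in for the real scraping and schema checks. The point is structural: each city runs its full vertical inside its own worker, and an exception in one slice becomes a line in the failure report instead of aborting the batch.

```python
"""One worker per city, each owning the full vertical (fetch -> validate -> write).
fetch_events and validate are placeholder stubs; the real pipeline does source
discovery, scraping, and schema checks behind those names."""
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

CITIES = ["san-francisco", "seattle", "dc"]

def fetch_events(city: str) -> list[dict]:
    """Stub standing in for source discovery + scraping for one city."""
    return [{"city": city, "title": f"sample event in {city}"}]

def validate(event: dict) -> bool:
    """Stub standing in for schema and date validation."""
    return bool(event.get("title"))

def run_city(city: str) -> str:
    """Full vertical for one city; any exception stays inside this slice."""
    valid = [e for e in fetch_events(city) if validate(e)]
    with open(f"{city}-events.json", "w") as fh:
        json.dump(valid, fh, indent=2)
    return f"{city}: {len(valid)} events written"

ok, failed = [], []
with ThreadPoolExecutor(max_workers=len(CITIES)) as pool:
    futures = {pool.submit(run_city, c): c for c in CITIES}
    for fut in as_completed(futures):
        try:
            ok.append(fut.result())
        except Exception as exc:  # Seattle's broken selector lands here;
            failed.append(f"{futures[fut]}: {exc}")  # DC and SF still finish

print("ok:", ok)
print("failed:", failed)
```

The try/except around each future is the whole design: a layer-based split would have put every city downstream of one shared parser, and one bad source would have poisoned all three.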

Six Projects, One Day, No Heroics

The day touched six distinct workstreams. Quality audits across a content portfolio. The technology discovery audit. The geographic event pipeline. News curation across five categories. A visualization tool for a client. And writing this post. Roughly 800 interactions total.

This wasn’t a grind. It was possible because each workstream had clear boundaries and its own context. My job was sequencing and judgment. Which work matters right now. What quality bar applies. When something needs my eyes versus when it can run autonomously.

That sequencing skill, knowing what to look at and what to trust, is the actual bottleneck now. Not the doing.

From the Vault

“AI Isn’t Lightening Workloads. It’s Making Them More Intense” (WSJ). The framing is backwards. AI doesn’t make work more intense. It makes more work possible, which is a different problem entirely. The intensity comes from humans who haven’t adjusted their scope to match new capacity. When you can suddenly touch six projects in a day, the discipline isn’t doing all six. It’s deciding which three actually matter. The WSJ is diagnosing a management failure as a technology problem.

“6.8 Billion People Haven’t Used AI — The Real Market Opportunity”. This one stopped me. We spend so much time in the builder bubble arguing about prompt techniques and model benchmarks while 84% of the world just wants painful tasks to disappear. No prompts, no learning curve. This is the pattern every technology cycle: the money isn’t in selling tools to builders. It’s in solving specific problems for people who will never know or care what’s under the hood.

“Don’t Mistake AI Visibility for Actual Control”. This connects directly to the audit work today. Dashboards create the illusion you understand your systems. The discovery audit proved otherwise. Real knowledge requires digging, not watching. A monitoring dashboard tells you what’s failing right now. An audit tells you what’s been silently wrong for months.

The part I keep circling back to: if cheap discovery changes how often you audit, and frequent audits change what you know about your own systems, then the compounding effect isn’t in any single audit. It’s in the habit of looking. Monthly feels too infrequent. Weekly feels like checking the locks twice. Somewhere in there is a rhythm that catches inventory drift before it becomes inventory fiction, and I’m still searching for it.