By Scott Armbruster

The Constraint Document Nobody Writes


Every action I take, whether client work, personal projects, or business operations, gets captured in session logs automatically. Each morning a pipeline reads those logs and distills them into this post. Twenty-three years of building technology, and this is what it actually looks like when you work with AI every day. Not theory. Not hot takes. Just the work.

The Day at a Glance

  • Same content pipeline, ten wildly different domains, and the code is identical across all of them
  • The most important file in the system turns out to be one most teams skip entirely
  • Splitting research, writing, and auditing into separate passes instead of one big prompt
  • News curation across five categories revealed a problem I hadn’t anticipated
  • Operating in domains where my expertise is shallow and not knowing where the line is

One Pipeline, Ten Worlds

Today looked like this: content flowing through the same pipeline across sites covering ADHD productivity, philosophy, road cycling, self-help books, travel tools, AI reviews, fitness apps, bucket list planning, and a couple of news digests. Different audiences. Different expertise levels. Different tones entirely.

The pipeline shape is identical everywhere. Research the niche, scan existing content for gaps, write a draft, audit the result. Four stages. Same orchestration code. Same tooling. Nothing custom per site.
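If I sketched the shape in code, it would be something like this. Simplified, and the names are made up for illustration: run_stage stands in for the actual model call, and BRAND_VOICE.md stands in for wherever the constraint document actually lives.

```python
from pathlib import Path

STAGES = ["research", "scan", "write", "audit"]  # the four passes, in order

def run_stage(stage: str, brand_voice: str, context: str) -> str:
    # Stand-in for a model call: stage-specific instructions plus the
    # site's brand voice document go in, text comes out.
    return f"[{stage} output, shaped by the brand voice, building on prior context]"

def run_site(site_dir: Path) -> str:
    # The only per-site input is the constraint document at the repo root.
    brand_voice = (site_dir / "BRAND_VOICE.md").read_text()
    context = ""
    for stage in STAGES:
        context = run_stage(stage, brand_voice, context)  # each pass feeds the next
    return context  # the audited draft

# The same loop runs for every site. Nothing custom per niche.
for site in sorted(Path("sites").iterdir()):
    run_site(site)
```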

So what makes the output different?

One file. A brand voice document sitting at the root of each repository. It specifies tone, banned phrases, audience assumptions, category structures, what “good” looks like for that particular niche. The cycling site speaks to serious amateur riders who already know what a power meter does. The philosophy site assumes readers are looking for practical application, not academic discourse. The ADHD productivity site avoids the condescending tone that plagues most productivity advice.
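If you flattened one of those documents into data, the fields would look roughly like this. The actual file is prose, and every value below is invented to show the shape, not quoted from a real site:

```python
from dataclasses import dataclass

@dataclass
class BrandVoice:
    tone: str                  # how the site talks to its reader
    audience: str              # who the reader is assumed to be
    banned_phrases: list[str]  # the tics the auditor should flag
    categories: list[str]      # the site's category structure
    quality_bar: str           # what "done well" means in this niche

cycling = BrandVoice(
    tone="direct and gear-literate, no beginner hand-holding",
    audience="serious amateur riders who already know what a power meter does",
    banned_phrases=["game-changer", "ultimate guide"],
    categories=["gear", "training", "events", "nutrition"],
    quality_bar="specific numbers (watts, grams, prices) over vague superlatives",
)
```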

After a lot of iteration, this is the thing I keep landing on: the constraint document matters more than the code, the prompts, or the model. The specification of what “done well” means for your specific context. Most teams skip it entirely. They write elaborate prompts, tune temperatures, experiment with models, and never once write down what good output actually looks like for their domain. Then they wonder why results feel generic.

The constraint document is where domain knowledge lives. Not in the pipeline. Not in the model. In the file that tells the system what matters here, in this specific context, for this specific audience.

Why Three Roles Beat One Big Prompt

Something I’ve refined over months that showed its value again today: the pipeline doesn’t use one big prompt that tries to do everything. It uses three distinct roles with different objectives.

The researcher reads the brand voice, scans existing posts, checks what topics are covered and where gaps exist. Its only job is understanding the landscape and recommending what to write next. It doesn’t write a single word of content.

The writer takes that research and produces a draft. It reads the same brand voice document but with a different instruction set. Create, don’t analyze.

The auditor gets the draft and evaluates it against the brand standards. Check frontmatter. Verify the tone matches. Flag anything that drifts from the site’s established patterns.
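Sketched as code, the split is just three instruction sets sharing one brand voice document. The prompt wording and the call_model hook here are illustrative stand-ins, not the real ones:

```python
ROLES = {
    "researcher": "Read the brand voice and existing posts. Map coverage, "
                  "find gaps, recommend the next topic. Write no content.",
    "writer":     "Take the research and the brand voice. Produce a complete "
                  "draft. Create, don't analyze.",
    "auditor":    "Evaluate the draft against brand standards. Check "
                  "frontmatter, verify tone, flag drift from established patterns.",
}

def run_role(role: str, brand_voice: str, payload: str, call_model) -> str:
    # Same model, same brand voice; only the objective changes per pass.
    return call_model(system=ROLES[role] + "\n\n" + brand_voice, user=payload)

def produce_post(brand_voice: str, existing_posts: str, call_model) -> str:
    research = run_role("researcher", brand_voice, existing_posts, call_model)
    draft = run_role("writer", brand_voice, research, call_model)
    return run_role("auditor", brand_voice, draft, call_model)
```

Each pass gets one objective and one success criterion. That separation is the whole point.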

The monolithic approach, where one prompt researches, writes, and self-evaluates, consistently produces worse output. Not because the model can't handle complexity, but because conflicting objectives in a single pass create mediocre compromises. The researcher optimizes for thoroughness. The writer optimizes for engagement. The auditor optimizes for consistency. Those are fundamentally different goals, and asking one pass to serve all three produces work that's okay at everything and great at nothing.

The heuristic that's worked for me: split when the task has competing objectives. "Be thorough" and "be concise" are two roles. "Be creative" and "be consistent with existing work" are two roles. Three is usually right. Four starts adding coordination overhead. Two often means you merged the researcher into the writer, which works until your content starts repeating itself.

The Curation Inversion

News digests ran across five categories today. Cycling, travel, politics, markets, outdoors. Different problem than content creation. With creation, you’re generating from constraints. With curation, you’re filtering from abundance. Thirty articles down to six. Fifteen down to five.

What caught me off guard: the curation guidance matters more than the source material. Telling the system “prioritize gear announcements over race results” for cycling versus “focus on policy impact, not political drama” for news produces wildly different digests from the same underlying approach. The editorial judgment isn’t in the selection algorithm. It’s in the criteria you feed it.
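A sketch of that inversion, with the criteria strings taken from above but the structure, counts, and call_model hook invented for illustration:

```python
CRITERIA = {
    "cycling":  ("prioritize gear announcements over race results", 6),
    "politics": ("focus on policy impact, not political drama", 5),
}

def curate(category: str, articles: list[str], call_model) -> str:
    guidance, keep = CRITERIA[category]
    prompt = (
        f"From the {len(articles)} articles below, pick the {keep} most worth "
        f"a reader's time. {guidance}. Return titles with one-line reasons.\n\n"
        + "\n".join(articles)
    )
    return call_model(prompt)  # same selection code everywhere; only the criteria differ
```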

Same pattern as the brand voice documents, really. The people who get the most from these tools aren’t the ones learning prompting techniques. They’re the ones who can articulate what good looks like in their domain. That’s a skill most people have never had to externalize before. You just knew what a good article looked like. Now you have to write it down.

Still wrestling with something though. Running content across ten niches means operating in domains where my expertise is shallow. The constraint documents encode quality standards, but they can’t encode genuine depth of knowledge. The cycling content is better than the philosophy content because I actually ride. The pipeline can’t close that gap. Not sure anything can. And I haven’t figured out where the line is between “good enough to be genuinely useful” and “technically correct but missing the thing only a real practitioner would catch.”