Proven Innovation

Notes

Inside the rebuild week

The site rebuild took a week. Here's what was actually inside it — the fodder doc upstream, the polish loops downstream, the rails built alongside.

Published 29 April 2026


A week is the honest number for this site’s rebuild. It is honest only because of what ran around it.

/about makes a positioning claim: a craft practice with AI in the tool belt, not an AI practice dressed up as craft. This note is the receipt. Three things made the week earn its name — the fodder doc that ran upstream, the polish loops that ran downstream, and the rails the rebuild quietly built for itself along the way.

Before the model wrote a word

The site copy didn’t start with a prompt. It started with a fact base.

ContentFodder.md is the upstream input — a single research-grade document synthesised from six parallel passes over my client projects, my personal projects, the non-code artefacts, and the Claude Code chat logs from both Windows and WSL. The model did the synthesis. The inputs were already mine. The output is a few hundred lines that read like a specification: who the practice is, who the clients are, what the work is, what the voice sounds like, what the forbidden vocabulary is, which numbers can be quoted and which can’t.

Then — and only then — the copy prompt ran against it. The decomposition was deliberate. One prompt drafts the content from the fodder. A second prompt builds the site from the content. Two passes, each narrow.

The reason that order matters: AI is good at synthesis and poor at conjuring facts. Give it a fact base, and it synthesises. Don’t, and it invents. Across a decade of writing software for clients I’ve found the same is true of human writing — the bad copy is the copy without research behind it. The model doesn’t change that rule. It just makes both halves cheaper if you do the prep.

After the build came the polish

The week didn’t end when the site built green. The week ended after the polish loops.

A few of those loops are mechanical. A grep against the forbidden-vocabulary list — about twenty words long, the kind of language that flags a site as written by someone selling rather than someone working. A check for calibrated qualifiers — the sentence-level habit of saying “usually” or “most of the time” instead of “always” or “never”. A sentence-length distribution check, because my own writing tends to sit at fourteen to eighteen words per sentence, and AI drift bumps it past twenty without anyone noticing.
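Those three mechanical loops are simple enough to sketch. The sketch below is illustrative, not the actual tooling: the word lists are hypothetical stand-ins (the real forbidden-vocabulary list is about twenty words and lives with the site copy), and the function name is mine for this example.

```python
import re
from statistics import mean

# Hypothetical stand-ins for the real lists; the actual forbidden-vocabulary
# list is ~20 words of selling-not-working language.
FORBIDDEN = {"synergy", "leverage", "world-class", "cutting-edge"}
ABSOLUTES = {"always", "never", "guaranteed"}  # candidates for "usually"/"most of the time"

def polish_report(text: str) -> dict:
    """Run the three mechanical checks and return a small report."""
    words = set(re.findall(r"[a-z'-]+", text.lower()))
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(re.findall(r"[A-Za-z'-]+", s)) for s in sentences]
    return {
        # Loop 1: the forbidden-vocabulary grep.
        "forbidden": sorted(words & FORBIDDEN),
        # Loop 2: uncalibrated absolutes that want a hedged qualifier.
        "absolutes": sorted(words & ABSOLUTES),
        # Loop 3: mean sentence length; the target band is roughly 14-18 words,
        # and AI drift tends to push it past 20.
        "mean_sentence_len": round(mean(lengths), 1) if lengths else 0.0,
    }
```

Each check only flags; none of them rewrites. The report is a list of places to look, and overruling a flag stays a human decision.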

The last loop is the editorial pass — the one only I can run. It’s the part that catches the quiet AI-flavoured patterns: the unprompted summary at the start of each section, the polished-but-empty topic sentence, the trailing “in conclusion” instinct. Most of the time those patterns aren’t wrong; they just aren’t mine. The grep can’t see them. I can.

This note is going through the same loops. The skill that drafted it surfaces five measurable critiques to me as a numbered report. I overrule a flag or fix it. The publish gate is a small checklist on a separate doc — a verbatim line of mine present, every number traceable, the editorial pass run. If any answer is no, I hold publish.

None of that is elaborate; it’s just how I work. Running multiple AI sessions concurrently has become the normal. While watching and steering this rebuild, I was also progressing client work in parallel, typically for two or three clients at once.

Rails built alongside the rebuild

The third thing inside the week is the part that surprised me.

While the site itself was being drafted and built, Claude Code was also building infrastructure. Not for the brochure site — for the work that would happen after the brochure site went live. Six workstream documents covering the SEO and LLMO programme. A reference document for the notes ledger you’re reading from now. A project-scoped skill called /proven-notes-author whose only job is to interview me, draft a note, and run the five measurable critiques against it. A pair of scheduled routines: one fires every quarter to draft a topic memo, the other once at the four-week mark to re-probe search.

This is the part that’s hard to convey from outside. The model didn’t just type the site. It built the rails it would later run on. The skill that drafted this note is the same skill the rebuild week produced. The reference document the skill reads is the same document the rebuild week wrote. The note you’re reading is the end-to-end test of the lot.

Most of the time, when consultants describe their tooling, the tooling is something they bought and learned. This is the inverse — tooling I built because I understood the problem better than any vendor would, with a model doing the typing in tight loops alongside me. The work is mine. The cadence is faster. That’s what AI-augmented practice looks like in the day-to-day, not in the keynote.

What the receipt is for

/about puts the claim plainly: a craft practice with AI in the tool belt, not an AI practice dressed up as craft. The week-long rebuild is one piece of evidence for that. The case studies on /work are a different piece — a custom flight-ops platform sustained over a decade, nineteen vendor billing feeds reconciled into a single invoice, three branches running on a Jiwa surface with its own bespoke pricing matrix. Those are the engagements. The rebuild is the workflow.

Read the engagements next. They’re where the work actually lives.