A DIM Factory applied to a dormant content business. Two engines, ninety days, one decisive outcome.
- Monthly sessions: 16,500 unique visitors
- Across ~210 customers
- Conversion 10x below niche benchmark
- Google Organic traffic loss, 2023-2025
This trial isn't a relaunch. It's a system build — the factory and flywheel that should have surrounded the content from day one.
The content factory and the demand gen flywheel are tested in parallel. Each can pass or fail on its own.
| | Demand gen passes | Demand gen fails |
|---|---|---|
| Content factory passes | SCALE — full system works, SFN moves forward | KILL SFN, port factory — methodology transfers to Daybreak |
| Content factory fails | RARE — funnel converting without quality content suggests one-off luck | KILL CLEAN — documented learnings, no ongoing cost |
It works on any vertical. Stepfamilies, home services, B2B SaaS — same engine, different content. This is the portable IP. Daybreak and future projects inherit whatever survives.
It tests whether the factory's output actually drives audience behavior. SFN-specific in execution, but the playbook for matching factory output to audience demand transfers to any market we touch next.
One unified factory. Three sub-systems. Different lights, same operation. DIM, not DARK — human-in-the-loop, by design.
AI content factory: ramping from 10-15 quality assets by Day 30 to 60-90 over the full trial. Multi-model QA pipeline. Editor validates clinical accuracy.
Acquisition + conversion engine. Higgsfield for premium creative, lifecycle platform for nurture, automation orchestration.
An automation system any tech-savvy operator can run. SOPs, decision logs, agentic workflows. The flywheel spins without daily intervention.
SFN is the test bed. The factory is the IP. Whatever works here ports to Daybreak and beyond.
Once direction is set, your ship runs on autopilot. Founders steer, operator executes, the system runs without daily founder time.
Each spoke is wired through agentic systems. AI agents handle routine work; humans intervene only at high-risk decisions.
The agency site case study: I defined PSI score targets, Claude Code iterated autonomously, scores went 83/84/100/83 → 94/94/96/100 overnight. I was not in the inner loop. Detail on next slide.
| Component | Currently filled by | Replaceable with |
|---|---|---|
| Operator | Mick (full-time, 90 days) | Trained operator + Mick as paid consultant from Day 91 |
| Editor | Contracted editor (2-5 hrs/wk) | Any qualified editor with content review experience |
| Course platform | TBD (leaning Kajabi; test drive confirms) | Course content portable; structure migrates with effort |
| Customer lifecycle platform | Operator's custom CRM ($150/mo) | Any major automation platform with similar capabilities |
| Avatar / talking head | Synthesia (post-test) | Voice + face training data portable to competitors |
| AI models | OpenAI + Anthropic + multi-model QA stack | Multi-model design = no single vendor lock-in on intelligence layer |
Every decision is made with replacement in mind. If a vendor raises prices, fails, or stops serving our needs, the swap is documented, planned, and possible. Phase 2 operator inherits a system, not a personality.
Google PageSpeed Insights (PSI) is the industry-standard benchmark for site performance, accessibility, best practices, and SEO. It's how Google measures real-world user experience and increasingly factors into search ranking.
Per HTTP Archive, CrUX data, and 2026 page speed statistics.
Built from scratch by AI, optimized by Claude Code without manual coding. I was not in the inner loop.
Tested on mobile, May 15 2026. The AI-built agency site beats SFN's professionally built Wix site by 41 points on performance, 8 on accessibility, 19 on best practices, and 15 on SEO.
This is the DIM Factory pattern in three sentences. I defined the constraint (PSI score target). The system iterated. I came back later. The site outperforms the majority of professionally built websites — at zero marginal cost per iteration.
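The pattern generalizes to any measurable constraint. A minimal sketch of the loop — the function names (`measure_psi`, `apply_optimization_pass`) are hypothetical stand-ins for the real agent and PSI run, not actual APIs:

```python
# Hypothetical sketch of the DIM loop: human defines the constraint,
# the system iterates unattended, human returns at the gate.
# measure_psi / apply_optimization_pass are illustrative stubs only.

def measure_psi(site: dict) -> int:
    """Stub: stand-in for a real PageSpeed Insights run."""
    return site["score"]

def apply_optimization_pass(site: dict) -> dict:
    """Stub: stand-in for an agent's autonomous change + redeploy."""
    return {"score": site["score"] + 3}  # pretend each pass gains points

def dim_loop(site: dict, target: int, max_passes: int = 50) -> dict:
    # Human-defined constraint: iterate until target met or budget spent.
    for _ in range(max_passes):
        if measure_psi(site) >= target:
            break  # gate reached; human review happens here
        site = apply_optimization_pass(site)
    return site

result = dim_loop({"score": 83}, target=94)
print(measure_psi(result))  # → 95
```

The human touches only two points: setting `target` before the loop and reviewing at the break. Everything between is the dim part of the factory.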
Three platforms tested on free tier. Same script. Output quality on free tier may differ from paid output — noted in the table below.
| Platform | Output Quality (operator notes) | Custom Audio + Avatar Import |
|---|---|---|
| HeyGen | Free tier: generic. Paid tier reportedly stronger — supports 4K video, generally sharper and more "professional" looking. | Restricted on free tier |
| Colossyan | Worse than HeyGen on free tier. Not worth pursuing further. | Restricted on free tier |
| Synthesia | Best lip sync. Most natural delivery. Capped at 1080p — less sharp than HeyGen 4K, but more believable as a talking head. | Full support on paid tiers |
Why: lip sync quality and delivery naturalism matter more than 4K resolution for our use case (clinical/emotional content, not visual production). The avatar's job is to be believed, not to look sharp.
Cost reality (Synthesia): Creator tier at $59-64/mo on a 12-month annual contract = $708-768 committed Day 1. Enterprise estimated at $1,000+/mo × 12 = $12,000+ committed Day 1. Sales call after Friday confirms the actual number.
⚠ Annual contract continues regardless of Day 90 outcome. Unused minutes do NOT roll over at renewal.
Recommendation flipped twice during research as new information surfaced. The honest read now: agentic capability is equivalent on both platforms, so the decision comes down to funnel/CRO infrastructure (Kajabi wins) vs operator conveniences (Thinkific wins). Side-by-side test drive before final lock.
DropInBlog's MCP server is tied to the DropInBlog account, not the host platform. It works identically on Thinkific (1-click app install) and Kajabi (Custom Code paste). The agentic content workflow we proved on the agency site runs the same way on either platform. This is NOT a differentiator — it's a tie.
Native landing pages, native abandoned cart recovery, CRO-optimized templates, coaching scheduler, premium funnel infrastructure. Thinkific gives us none of these — we'd build them on the agency-site architecture, which is real work that competes with content production for operator time. Kajabi handing us these out of the box is meaningful.
Lesson-level webhooks (vs Kajabi's purchase-event only), included API (vs add-on or Pro tier), open app marketplace, 1-click DropInBlog install. These are real but smaller — the kind of advantages that matter to the operator but not to the customer experience.
UX feel, theme customization friction, content publishing workflow rhythm, abandoned cart automation quality. Both platforms have free trials. Side-by-side test drive before Day 1 of the trial — recommendation confirmed (or flipped) based on what we actually feel, not what research predicts.
Walled gardens like Kajabi and Thinkific own your funnels, your automations, and your page designs. If we want full agentic automation long-term, the destination is open-source (BuddyBoss + LearnDash, or fully custom). Benefits of going open-source: full ownership of content + automation logic, no vendor price-hike exposure (Kajabi raised prices Sept 2025), unlimited MCP/agentic workflow potential, and no walled-garden migration cost when the platform inevitably stops serving us.
For the trial: walled garden is the right choice. Speed beats ownership at 90 days. For Phase 2 scale: open-source migration becomes a real conversation.
If Kajabi wins the test drive, its native funnels handle most of this. If Thinkific wins, marketing landing pages get built on the same architecture as the agency site — Next.js or plain HTML on Hostinger, deployment via GitHub, optimization via agentic workflow. We already proved that stack works overnight at 94/94/96/100. DropInBlog MCP integrates either way.
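If the Thinkific path is taken, the agency-site deployment pattern could be captured as a CI workflow. A minimal sketch, assuming a Node-based build and SSH access to the Hostinger host — the secret names and paths here are hypothetical placeholders, not the actual setup:

```yaml
# Hypothetical GitHub Actions sketch: build the landing pages and
# sync the static output to the Hostinger host over SSH/rsync.
# DEPLOY_KEY, DEPLOY_HOST, DEPLOY_PATH are placeholder secrets.
name: deploy-landing-pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build   # Next.js static export or plain HTML build
      - name: Sync build output to host
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          rsync -az -e "ssh -i key -o StrictHostKeyChecking=no" \
            ./out/ "${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}"
```

The agentic optimization loop commits to `main`; every commit redeploys automatically, which is what makes overnight PSI iteration possible without a human in the inner loop.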
Two paths. The variable is Synthesia tier, locked after sales call.
| Category | Ideal Path | Worst Case Path |
|---|---|---|
| Operator (full-time, 90 days) | $18,000 | $18,000 |
| Editor (2-5 hrs/wk @ $75/hr) | $3,400 | $3,400 |
| Software stack (incl. DropInBlog $25/mo) | $4,675 | $4,675 |
| Branding (logo refresh, brand assets) | $500 | $500 |
| Ad spend (Day 60+, founders select $1K/$2K/$3K) | $2,000 | $2,000 |
| Subtotal — 90-day operating | $28,575 | $28,575 |
| 10% project buffer | $2,858 | $2,858 |
| 90-day operating total | $31,433 | $31,433 |
| + Synthesia annual commitment ⚠ | $708 (Creator) | $12,000+ (Enterprise) |
| Total Day 1 commitment | $32,141 | $43,433+ |
Synthesia plans are annual contracts paid monthly. Even if SFN is killed at Day 90, the remaining 9 months continue to invoice. If Enterprise tier is required (sales call confirms), the true cost is $12,000+, not $3,000 over 90 days. Creator tier ($708-768/yr) is what we hope works for our use case.
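The table's arithmetic can be sanity-checked with a quick sketch. All figures come from the budget table above; the Synthesia tier is the only variable between the two paths:

```python
# Budget sanity check for the two Day 1 commitment paths.
# All figures are taken from the budget table above.
OPERATING = {
    "operator": 18_000,   # full-time, 90 days
    "editor": 3_400,      # 2-5 hrs/wk @ $75/hr
    "software": 4_675,    # incl. DropInBlog $25/mo
    "branding": 500,
    "ad_spend": 2_000,    # Day 60+, founder-selected
}

subtotal = sum(OPERATING.values())       # 90-day operating subtotal
buffer = round(subtotal * 0.10)          # 10% project buffer
operating_total = subtotal + buffer

# Synthesia annual contract is the only variable between paths.
ideal = operating_total + 59 * 12        # Creator tier, $59/mo low end
worst = operating_total + 1_000 * 12     # Enterprise estimate, $1,000+/mo

print(subtotal, operating_total, ideal, worst)
# → 28575 31433 32141 43433
```

The worst-case figure is a floor, not a ceiling: Enterprise is quoted as $1,000+/mo, so $43,433 is the minimum Day 1 commitment on that path.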
Platform lock, avatar pipeline, voice clone, wedge candidate selected, DIM factory v0 manual recipe
10-15 quality assets shipped (quality over volume). Funnel live. Wedge product build initiated.
Production scaled. Wedge product live. First customers. Ad spend activation decision.
Decision based on documented metrics. Operator handoff package complete regardless of outcome.
Factory producing 10-15 quality assets. Manual recipe documented. Editor rhythm established.
Production scaled to volume. Organic conversion ≥ 1% (10x baseline). Wedge product validated. Ad spend decision.
Revenue threshold met (TBD Friday). Operator system handoff-ready. Methodology documented.
Quality over volume from day one. Production starts manually, ramps to automation. Cumulative 60-90 quality assets by Day 90 is the trial target — not per month.
Operator rolls onto Daybreak full-time. SFN content archived. No wind-down cost. DIM Factory methodology transfers to Daybreak regardless.
Cost beyond trial: $0 operating · Synthesia annual continues
Trial extends 60-90 days. Operator continues full-time (cannot reduce mid-trial — still in production mode). All software stack continues.
Cost: ~$8,500/mo · ~$17-25K added
Second operator hired. Therapist HITL editor formalized. Founding operator transitions to paid consultant role.
Year 1: ~$90K standalone
Each scenario ends with documented methodology. The content factory architecture, multi-model QA pipeline, operator handoff package, and lifecycle automation playbook are reusable IP. Built for SFN, but they work on any niche. Daybreak inherits whatever survives. So do any future projects we touch.
The system is designed to be operator-led and run as a flywheel without founder involvement. Weekly check-ins during early weeks until the ship is sailing in the right direction; then advisory-only at gates. Optional paid role available (e.g., editorial director) if founders want deeper involvement.
A separate one-page Operator Ownership Agreement formalizes the above for signature.
| Risk | Mitigation |
|---|---|
| AI content flagged as low-quality by Google | Multi-model QA pipeline + Human-In-The-Loop (HITL) editor for clinical accuracy |
| Wedge product selection wrong | Voice-of-customer-driven research, validated before build, single primary lane (Rule of 100) |
| Synthesia Enterprise expense unknown | No real mitigation. Public pricing info is thin; could be $1K+/mo. Sales call after Friday gets a firm number. Founders approve actual cost — plan adapts or restructures. |
| Synthesia content moderation strict | Documented user reports of accounts banned for legitimate content. Worth flagging — pre-test moderation tolerance during onboarding. |
| Mobile app deferred — perceived as cut scope | Calendar reality, not doubt about value. App Store + Google Play review takes 6-8 weeks; production build runs 8-12 weeks. An app started Day 1 wouldn't ship before Day 90. Pre-commit to Day 91 activation if Day 90 validates. |
| Platform choice locked in error | Lock decided BEFORE Day 1 via parallel side-by-side trial. We do NOT pivot at Day 45 — wastes too much trial time. |
| Paid traffic doesn't scale | Deferred until Day 60 organic conversion proves. Founders approve activation amount. |
| Speaker rights on founder archive | Already mitigated — course guides cite sources for all derivative content. |
A 90-day trial dies when the team optimizes the wrong thing, adds the wrong tool, or builds the wrong feature on Day 45. We agree this is a real risk. Active measures are already in place.
The order: warm/founder channels → partner channels → content channels → paid ads. Each lane proves before the next activates.
The content factory (POC) is the IP we're proving. Demand gen (GTM) is the test bed. Each measured independently.
Mobile app: Day 91 pre-commit. Ad spend: Day 60 founder approval. WordPress migration: Day 90 + 2-year commitment. Every "we should also..." goes to a parking lot.
Scope creep protection lives in the Operator Ownership Agreement, the operator manual, and the POC PRD. Triple-documented = enforceable, not aspirational.
Once these land, work begins Friday afternoon. Day 1 of 90 starts when budget is approved.