Analytics · May 12, 2026 · 13 min read

How to run a 90-day AI-assisted growth audit (without producing a useless PDF)

TL;DR

AI can compress the “read everything and synthesize” phase of a growth audit from weeks to days—if you treat the audit as an execution system, not a deck. Search Engine Journal (SEJ) describes a 90-day audit approach that uses AI for context and synthesis, then turns it into a collaborative plan. The practical lesson for business owners: AI is not the consultant; it’s the accelerator. To avoid hallucinations and wrong decisions, you need:

  • a real data room (primary sources),
  • a verification workflow (“no claim without evidence”),
  • a 90‑day backlog with owners and measurement,
  • and a lightweight AI risk checklist (NIST AI RMF is a good reference).

Sources: SEJ (topic signal) + NIST AI RMF 1.0.

Why most audits fail

Most audits are delivered as:

  • a 60–120 slide deck,
  • a long list of “recommendations,”
  • with unclear ownership and no timeline,
  • and no measurement plan.

The result is predictable: the deck gets archived, and nothing changes.

Audits become useless for five reasons:

1) the audit is treated as a deliverable, not a system,

2) information is scattered and hard to synthesize,

3) the team lacks a shipping rhythm,

4) nobody owns the decisions and risk,

5) there is no consistent truth set of data.

AI can help with synthesis, but it cannot replace ownership and data. If data is missing, AI will fill gaps with plausible assumptions. That’s the root of hallucination risk.

What AI is good at (and what it’s bad at)

What AI can do well (with good inputs)

  • summarize documents and transcripts quickly,
  • extract recurring themes from reviews and support logs,
  • propose draft plans and checklists for humans to refine,
  • highlight connections (e.g., “Landing page leak → paid traffic waste”),
  • speed up first-pass research and organization.

What AI does poorly (and must be controlled)

  • it can invent details when inputs are incomplete,
  • it can sound confident when it’s wrong,
  • it can generalize “best practices” that don’t fit your context,
  • it can produce lots of text without decisions.

Conclusion: use AI for speed-to-context, not for truth. Truth comes from your primary data.

The data room: what to collect before you prompt anything

Before prompts, create a folder (or workspace) of primary inputs. The goal: every conclusion can be checked.

1) Business context (1–2 pages)

  • what you sell, to whom, at what price,
  • rough margins (if available),
  • sales cycle length (B2B),
  • the 90-day goal (e.g., +20% revenue, +30% qualified leads).

2) Funnel definitions

  • what counts as a lead,
  • what counts as a qualified lead,
  • what counts as a customer,
  • typical time-to-qualification.

3) Analytics & tracking exports

  • GA4 export (or access),
  • Google Search Console export (queries/pages),
  • Ads export (cost/conversions/value),
  • documentation of events and conversions.

4) Website & content inventory

  • top pages by outcomes (not only traffic),
  • money pages (pricing, services, product pages),
  • recent changes (redesigns, tracking changes, migrations).

5) Voice of customer (VoC)

  • reviews,
  • sales call notes/transcripts (if any),
  • top support tickets,
  • objection emails (“why people don’t buy”).

6) Competitive set (minimal)

  • 3–5 real competitors,
  • their positioning,
  • their conversion paths (not just blog volume).

This data room is the audit fuel. Without it, AI becomes storytelling.
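For reference, a minimal data room layout might look like this (folder names are suggestions, not a required structure):

  data-room/
    01-business-context.md
    02-funnel-definitions.md
    03-analytics-exports/        (GA4, GSC, Ads CSVs + tracking documentation)
    04-website-inventory.csv     (top pages by outcomes, money pages, recent changes)
    05-voice-of-customer/        (reviews, call notes, support tickets, objection emails)
    06-competitors.md            (3–5 competitors, positioning, conversion paths)

If every prompt and every conclusion points back to a file in this folder, verification stays cheap.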

The 90-day framework: Sprint 0 + 3 execution sprints

Instead of a “big audit,” run an audit that turns into weekly shipping.

Sprint 0 (Days 1–7): baseline and truth

Goal: establish a trustworthy baseline.

1) confirm tracking correctness (dedupe, definitions),

2) define 3 KPIs (outcome, efficiency, capacity),

3) create an initial backlog (max 30 items) with owners and metrics.

Sprint 1 (Days 8–30): fix leaks (high impact, low regret)

Typical leak fixes:

  • pricing page clarity,
  • landing page friction,
  • missing proof and trust pages,
  • broken tracking,
  • offer mismatch.

Pick 3–7 deliverables. Ship weekly.

Sprint 2 (Days 31–60): build advantage (content + distribution + conversion)

Goal: own the decision questions your buyers ask.

  • build a money question library,
  • publish 3–5 definitive pages with sources and proof,
  • add distribution loops (newsletter/partners/community).

Sprint 3 (Days 61–90): stabilize and institutionalize

Goal: make it repeatable.

  • weekly KPI review,
  • changelog of decisions and releases,
  • minimum dashboard,
  • quarterly maintenance schedule.

How to avoid hallucinations: verification, sources, validation

This is the part most teams skip. Don’t.

1) No claim without a source

Any factual claim that influences spend, product, or messaging must have:

  • a link to a primary source,
  • or a reference to your own export/report.

If it can’t be sourced, it’s a hypothesis—label it as such.

2) AI produces hypotheses; humans produce decisions

Use AI to propose:

  • what patterns might exist,
  • what questions to ask,
  • what tests to run.

Then validate with data.

3) Use output formats that force evidence

For every backlog item:

  • problem,
  • evidence (report link),
  • hypothesis,
  • change,
  • risk,
  • metric,
  • owner,
  • deadline.

If “evidence” is missing, the item stays in research.
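If you track the backlog in a tool or a script, this rule is easy to enforce mechanically. A minimal sketch in Python (the field names mirror the list above; the schema itself is illustrative, not prescribed):

  from dataclasses import dataclass, field

  @dataclass
  class BacklogItem:
      problem: str
      hypothesis: str
      change: str
      risk: str
      metric: str
      owner: str
      deadline: str
      evidence: list[str] = field(default_factory=list)  # links to reports, exports, or primary sources

      @property
      def status(self) -> str:
          # No evidence attached -> the item stays in "research" and is not scheduled into a sprint.
          return "ready" if self.evidence else "research"

An item only flips to “ready” when someone attaches at least one evidence link.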

4) Run a lightweight internal red team

Have someone ask:

  • “how do we know that?”
  • “what if it’s wrong?”
  • “what data would falsify it?”

This simple practice reduces hallucination risk dramatically.

Risk management: a small-team checklist inspired by NIST AI RMF

NIST AI RMF 1.0 is a strong reference for thinking about AI risk. You don’t need enterprise compliance—just a few rules.

Govern

  • who approves public claims?
  • who approves high-risk changes (tracking, pricing, legal claims)?

Map

  • what data sources are used?
  • what data is sensitive?
  • where could bias appear (one-sided feedback sources)?

Measure

  • what sanity checks detect errors?
  • what validations exist before shipping changes?

Manage

  • how do we roll back bad changes?
  • how do we document learning?

The objective is not paperwork. It’s preventing expensive decisions based on false premises.

Prioritization: impact, effort, risk (no theater)

Score each initiative on:

  • Impact (1–5),
  • Effort (1–5),
  • Risk (1–5).

Then choose:

  • Sprint 1: high impact / low effort / low risk,
  • Sprint 2: high impact / medium effort,
  • Sprint 3: stabilization work that prevents regression.

This prevents “we did 30 tiny things and nothing changed.”
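If you want the scoring to be explicit rather than argued in a meeting, here is a minimal sketch (the formula and the example scores are assumptions; adjust them to your context):

  def priority(impact: int, effort: int, risk: int) -> float:
      # Higher impact pushes an item up; higher effort and risk push it down.
      return impact / (effort + risk)

  initiatives = [
      {"name": "Rewrite pricing page", "impact": 5, "effort": 2, "risk": 1},
      {"name": "Fix conversion deduplication", "impact": 4, "effort": 2, "risk": 2},
      {"name": "Full site redesign", "impact": 4, "effort": 5, "risk": 4},
  ]

  for item in sorted(initiatives, key=lambda i: priority(i["impact"], i["effort"], i["risk"]), reverse=True):
      print(item["name"], round(priority(item["impact"], item["effort"], item["risk"]), 2))

The pricing page and the deduplication fix land in Sprint 1; the redesign sinks to the bottom, which is the point.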

What a complete (but lean) audit includes: 6 areas that drive 80% of growth

A useful growth audit is not just SEO or just Ads. In practice, revenue growth is a system:

1) Offer and positioning

  • Is it clear what you sell and for whom?
  • Is the value proposition specific or generic?
  • Are you attracting the right buyer or “everyone”?

2) Conversion (landing pages + funnel)

  • Are money pages clear and proof-heavy?
  • Is the next step frictionless?
  • Do pages answer objections (price, timeline, risk)?

3) Measurement (tracking + definitions)

  • Are conversions deduplicated?
  • Are definitions stable and documented?
  • Do you measure quality, not just volume?

4) Acquisition (SEO + Ads + distribution)

  • Which channels produce qualified demand?
  • Where is waste occurring (queries, landing pages, audiences)?
  • What content influences decisions?

5) Retention (email, follow-up, upsell)

  • What happens after the first lead or purchase?
  • Do you have lifecycle messaging?
  • Are you leaking customers post-purchase?

6) Operations (shipping rhythm + approvals)

  • Can you ship 3–5 changes per week?
  • Who approves high-risk changes?
  • Can you roll back quickly?

If you cover these six areas, your audit becomes a growth engine—not a PowerPoint.

Prompt library (controlled, source-first)

Use prompts that force honesty.

Prompt 1: summarize exports with evidence

“Given this GA4/GSC/Ads export, summarize the top 10 patterns. For each: cite the columns used, state what’s missing, and do not invent causes.”

Prompt 2: generate testable hypotheses

“Propose 10 hypotheses we can test in 2 weeks. For each: metric, tool, risk, success criteria. If data is missing, list what’s needed.”

Prompt 3: money page audit

“Audit page X as a landing page: promise clarity, unanswered objections, missing proof, CTA timing, and one A/B test idea. No fabricated results.”

Prompt 4: 90-day editorial plan (after money questions exist)

“From these money questions, propose 10 definitive pages. For each: structure, primary sources to cite, and examples we should include.”

These prompts keep AI in the right role: accelerator, not fiction engine.
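If you run these prompts through an API rather than a chat window, put the honesty rules in the system message so they apply to every request. A minimal sketch, assuming the OpenAI Python SDK (the model name and file path are placeholders; any provider works):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  with open("data-room/03-analytics-exports/gsc_queries.csv") as f:
      export = f.read()

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {"role": "system", "content": (
              "Only summarize the data provided. Cite the columns used for every pattern. "
              "If data is missing, say what is missing. Do not invent causes, numbers, or trends."
          )},
          {"role": "user", "content": "Summarize the top 10 patterns in this export:\n\n" + export},
      ],
  )
  print(response.choices[0].message.content)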

A simple AI usage policy for audits (small-team friendly)

If you want “no hallucinations” to be real, write and enforce 6 rules:

1) AI must not invent numbers or trends—only summarize your exports.

2) AI must not make guarantees—only hypotheses with conditions.

3) Every backlog item requires evidence (report or source).

4) Separate insights from execution: every action needs owner, risk, metric.

5) Human sign-off is required for high-risk changes (tracking, pricing, legal claims, migrations).

6) Keep a decision log: what we chose and why.

This is enough to reduce risk dramatically without bureaucracy.

AYSA execution: approval-first, evidence-first

In AYSA, an audit is not a report. It’s a workflow:

1) define the business objective,

2) build the data room,

3) generate hypotheses and backlog,

4) approval-first (no high-risk changes without sign-off),

5) ship weekly,

6) verify outcomes and document learning.

This builds authority through process, not claims.

Execution snapshot

  • Audit (ready): technical and content risks prepared for review.
  • Research (running): keyword gaps and competitor movement in progress.
  • Content (ready): answer-ready briefs and page updates prepared.
  • Technical SEO (warning): redirect and sitemap actions need approval.

Example 90-day backlog (structure, not a universal truth)

Below is an example of how to structure a backlog so it can actually be executed.

Sprint 0 (setup)

  • Fix conversion deduplication (Owner: dev/ops; Metric: stable event counts)
  • Export top pages by outcomes (Owner: analytics; Metric: complete inventory)
  • Define “qualified lead” in CRM (Owner: sales; Metric: consistent tagging)

Sprint 1 (leaks)

  • Rewrite pricing page for clarity + objections (Owner: marketing; Metric: Conversion Rate)
  • Build one definitive landing page for the core offer (Owner: marketing+design; Metric: CPQL, cost per qualified lead)
  • Reduce form friction (Owner: dev; Metric: completion rate)
  • Add internal links between money pages (Owner: SEO; Metric: click depth)

Sprint 2 (advantage)

  • Publish 3 definitive decision pages (Owner: content; Metric: assisted conversions)
  • Publish 1 case study (Owner: marketing; Metric: pipeline influence)
  • Start a newsletter distribution loop (Owner: growth; Metric: return visits)

Sprint 3 (stabilize)

  • Build a minimal dashboard (Owner: analytics; Metric: weekly report)
  • Run the weekly ritual consistently (Owner: lead; Metric: 10/10 weeks)
  • Set quarterly maintenance cadence (Owner: content; Metric: pages refreshed)

The backlog becomes your audit. Not the deck.

Pitfalls (and how to avoid them)

1) “AI said so” becomes the argument

If there is no evidence, it’s not in the backlog. Period.

2) Too many initiatives at once

Shipping beats complexity. Three weekly deliveries beat one huge “audit project” that never lands.

3) Conversion changes without parallel reporting

If you change conversion definitions, report old vs new for a short window so you don’t misread performance.
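A minimal sketch of what parallel reporting can look like with a raw event export (column names like event_name and lead_quality are assumptions about your data; adjust to your schema):

  import pandas as pd

  events = pd.read_csv("events_last_90_days.csv", parse_dates=["timestamp"])
  events["week"] = events["timestamp"].dt.to_period("W")

  # Old definition: any form submit. New definition: form submit from a qualified lead.
  events["old_def"] = events["event_name"].eq("form_submit")
  events["new_def"] = events["old_def"] & events["lead_quality"].eq("qualified")

  weekly = events.groupby("week")[["old_def", "new_def"]].sum()
  print(weekly)  # report both counts side by side until the team trusts the new number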

4) Audit forever, ship never

If you don’t ship something in the first 7–14 days, you built a consulting project, not a growth system.

How to know it’s working after 30 days

The proof is not “we learned a lot.” The proof is:

  • you shipped consistently (at least 1–2 changes/week),
  • you have a minimal dashboard for outcome + efficiency,
  • you can explain what changed using data,
  • the backlog evolves based on results, not opinions.

Templates + checklists

The goal of templates is not “process theater.” The goal is speed with safety: you can ship weekly without guessing.

Initiative card template

  • Problem
  • Evidence
  • Hypothesis
  • Change
  • Risk
  • Metric
  • Owner
  • Deadline

If you want to level up this card, add two optional fields:

  • Dependencies (what must happen first),
  • Rollback plan (how to revert if it fails).

KPI definition template (so everyone means the same thing)

  • Outcome KPI: (e.g., revenue, qualified leads)
  • Efficiency KPI: (e.g., CPQL, ROAS)
  • Capacity KPI: (e.g., 3–5 deliveries/week)
  • Definition: what counts, what doesn’t, and where it is measured
  • Reporting cadence: weekly + monthly

If you don’t define KPIs explicitly, teams argue about numbers instead of improving them.
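For the two efficiency KPIs named above, the math is simple enough to keep inside the definition itself. A minimal sketch with placeholder numbers:

  ad_spend = 4000.0            # total paid spend for the period
  qualified_leads = 80         # leads that match your CRM definition of "qualified"
  revenue_from_ads = 12000.0   # revenue attributed to paid traffic

  cpql = ad_spend / qualified_leads   # cost per qualified lead
  roas = revenue_from_ads / ad_spend  # return on ad spend

  print(f"CPQL: {cpql:.2f}")  # 50.00
  print(f"ROAS: {roas:.2f}")  # 3.00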

Weekly 30-minute ritual

1) What shipped last week?

2) What changed in outcomes?

3) What ships this week (3–5 items)?

4) What’s blocked?

Add one rule: if a task has no owner, it’s not a task. It’s a wish.

“Proof pack” template (to avoid decision drift)

When you propose a major change (pricing, positioning, channel mix), attach a one-page proof pack:

  • What we observed (data)
  • Why it matters (impact)
  • What we’ll change (action)
  • How we’ll measure success (metric + window)
  • What could go wrong (risk + mitigation)

This keeps the audit grounded in evidence.

Rapid checklist

  • data room exists,
  • KPIs defined,
  • backlog has owners + metrics,
  • verification rules enforced,
  • weekly shipping rhythm exists.

If those five are true, your audit is a system, not a deck.

Quick start: what to do in the next 2 hours

If you want to start immediately, here’s a two-hour kickoff that produces real output:

1) Write a 1-page business context (offer, ICP, price, 90-day goal).

2) Export three datasets: the last 90 days of GA4, GSC, and Ads data.

3) List your top 10 pages by outcomes (not traffic).

4) Create a backlog with 10 items maximum:

  • 3 leak fixes (landing page clarity, friction, tracking),
  • 3 measurement fixes (conversion definitions, dashboards),
  • 4 advantage items (definitive pages, proof assets).

5) Assign owners and deadlines.

If you do only this, you’re already ahead of most audits—because you can ship next week.
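For step 3, a minimal sketch assuming a page-level GA4 export with page_path, sessions, and conversions columns (adjust the file name and columns to your actual export):

  import pandas as pd

  pages = pd.read_csv("ga4_pages_last_90_days.csv")
  top_by_outcomes = pages.sort_values("conversions", ascending=False).head(10)
  print(top_by_outcomes[["page_path", "conversions", "sessions"]])

Sorting by conversions instead of sessions is the whole trick: it surfaces the pages that do the work, not the ones that merely get visits.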

Closing: what “audit success” really looks like

An audit succeeds when it changes behavior:

  • the team ships weekly,
  • decisions are made with evidence,
  • and learning is documented so you don’t repeat mistakes.

AI helps because it reduces time-to-context. But the audit’s value is determined by execution, not by how good the deck looks. Build the system, keep it honest, and let results prove the work.

Deep research (practical): how to avoid the “useless PDF” trap

Most “audit PDFs” fail because they optimize for appearance instead of utility. They contain:

  • generic best practices,
  • too many recommendations,
  • no ordering, no owners, no verification.

If you want to produce something that actually helps, keep one rule:

  • Every “insight” must convert into either (a) a backlog card with evidence and a metric, or (b) a research question with a defined next step.

This reframes the audit into a decision system.

What a week should look like during a good audit

If your audit is working, week-to-week output is tangible:

  • 1 landing page improved (clarity + proof),
  • 1 measurement fix shipped (conversion or dashboard),
  • 1 content asset updated or published (definitive page),
  • 1 distribution action executed (newsletter/partner share),
  • and a 30-minute review that decides what happens next.

You don’t need “perfect strategy.” You need consistent delivery.

How AI helps without becoming the decider

Use AI to:

  • summarize inputs,
  • draft options,
  • generate checklists,
  • identify missing information.

Do not use AI to:

  • decide budgets without data,
  • invent customer motivations,
  • write claims you can’t verify,
  • recommend major changes without a rollback plan.

That separation is what keeps audits safe and effective.

Sources

  • Search Engine Journal (SEJ): the 90-day AI-assisted audit approach referenced in the TL;DR.
  • NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1.

If your audits historically turned into “lots of insights, little action,” AYSA’s operating model is designed to fix that: data room first, evidence-first backlog, approvals on high-risk changes, weekly shipping, and a verification loop that keeps the team honest. It’s less glamorous than a big deck, but it produces compounding improvements—because every week ends with something shipped and something learned.

That’s the point of a 90-day audit: not to prove you’re smart, but to make the business measurably better.

Start with Sprint 0. If you can’t trust your conversion numbers, you can’t trust anything else. Once tracking, definitions, and exports are stable, AI becomes genuinely useful: it speeds up synthesis and helps you structure work. But the only thing that makes the audit real is execution: owners, deadlines, and a weekly rhythm you actually keep.

If you only remember one constraint: keep the backlog small. Ten clear items with evidence and owners beats fifty vague “opportunities.” The smaller the backlog, the faster you ship, and the faster you learn. That learning loop is what turns a 90-day audit into growth—not the document you produce.

That’s also how you avoid the most expensive failure mode: making confident decisions on unverified assumptions.

Ship, measure, and adjust. Week after week. That’s what makes the audit real.

For 90 days. Then keep going.

Always.

Written by

Marius Dosinescu

Marius Dosinescu is the founder of AYSA.ai, an ecommerce and SEO entrepreneur focused on making organic growth execution accessible to businesses. He built FlorideLux.ro, founded Adverlink.net and writes about SEO, AEO, AI visibility, authority building and practical website growth.
