PPC · May 12, 2026 · 13 min read

Google Ads is limiting historical reporting: what to export before June 1, 2026 (and how to avoid breaking your data warehouse)

TL;DR

Starting June 1, 2026, Google Ads enforces a retention policy that limits access to granular reporting to a rolling window of 37 months, while monthly/quarterly/yearly aggregates remain available for up to 11 years. This affects reporting in the UI, the API, and certain BigQuery Data Transfer Service behaviors (especially backfills). If you want multi‑year, day‑level history for seasonality, pacing, audits, or QBRs, the practical solution is not “we’ll pull it later.” It’s:

  • Export → store immutably → protect reruns/backfills before the deadline.

Sources: Search Engine Journal coverage (topic signal) and the official Google Ads Developer Blog announcement.

What exactly changed (and what remains available)

Think of this as a two-tier retention system:

1) Granular data: 37 months

Per the official policy, granular reporting is limited to a 37-month lookback window. In practice, “granular” typically means anything you use for day-to-day diagnostics and pacing:

  • daily trends,
  • weekly views for operational patterns,
  • time-based segmenting that requires a fine time dimension.

After the window, you should assume you will not be able to retrieve the same day-level detail on demand from Google Ads (UI or API). (See the Google Ads Developer Blog post.)

2) Aggregates: up to 11 years

For monthly, quarterly, and yearly aggregation levels, the retention window is extended up to 11 years. That’s enough for long-horizon, executive-level trend reporting—but it’s not enough for diagnostics that require day-level “what happened on Tuesday?” detail.

3) BigQuery Data Transfer Service implications

The official announcement also calls out impacts on BigQuery Data Transfer Service, including backfill behavior for Google Ads and Search Ads 360 transfers. The key operational takeaway is: you cannot design your warehouse pipeline around the assumption that you can “just backfill the last 5–10 years” at daily granularity after June 1, 2026.

This article is not about finding loopholes. It’s about designing a measurement system that doesn’t collapse when “re-pull history” stops working.

Why this becomes painful only when you need it most

Most teams won’t notice the change on June 2. They notice it later—when they need history urgently.

1) You don’t need multi-year daily data every week

During stable periods, teams live in 30–90 day windows. The pain appears when you need to:

  • explain a multi-week anomaly from years ago,
  • compare the same seasonal window across multiple years,
  • audit a newly inherited account,
  • rebuild dashboards after a tool change,
  • defend budget decisions in a QBR.

2) Tools hide where the data comes from

Looker Studio dashboards and connector-based reporting feel “live.” If those dashboards query Google Ads directly (or rely on a pipeline that assumes unlimited backfills), a retention change can cause:

  • charts that silently truncate older time ranges,
  • charts that shift time granularity unexpectedly,
  • empty partitions or missing rows in your warehouse.

3) Warehouse reruns can do more damage than the retention limit itself

The bigger risk is operational. A poorly designed rerun/backfill can:

  • overwrite previously correct historical partitions,
  • create “holes” (0 rows) that look like performance drops,
  • duplicate rows (inflating spend) if dedupe keys aren’t enforced.

If you store history, you must also store it safely.

What data is actually worth saving (and what isn’t)

The worst move is exporting “everything” without a purpose. You’ll pay for storage, maintenance, and confusion. The best move is saving a minimal, decision-grade dataset that enables:

1) diagnosis,

2) comparability,

3) auditability,

4) rebuildability.

A. The minimum viable archive (business-first)

At a minimum, keep daily performance at a stable entity level you can maintain (often campaign-level is enough):

  • date
  • account_id
  • campaign_id (+ campaign_name snapshot, optionally)
  • cost
  • impressions
  • clicks
  • conversions (your chosen primary conversion)
  • conversion_value (if relevant)

Then calculate derived KPIs in your BI layer:

  • CPA = cost / conversions
  • ROAS = conversion_value / cost

Why? Because pipelines and APIs change. Your KPI definitions should be under your control.

B. If you rely on true seasonality and pacing diagnostics

If you run seasonal promos or have capacity constraints, save:

  • daily data for the windows you routinely compare (e.g., holiday periods),
  • a business calendar table (promo windows, site releases, tracking changes),
  • notes about major account structure changes (campaign renames, merges).

C. What usually isn’t worth saving long-term

Unless you have a concrete analysis use-case, avoid exporting and storing:

  • every possible segment dimension “just in case,”
  • unstable views that frequently change definition,
  • highly granular, high-cardinality datasets you can’t maintain.

AYSA rule of thumb: don’t collect data for ego. Collect it to make decisions and verify outcomes.

A 7/30/90-day export plan

Here’s a pragmatic plan that works for SMBs, agencies, and most in-house teams.

In 7 days: inventory + risk map

1) List every place Google Ads data is pulled:

  • Looker Studio dashboards
  • connector tools (Supermetrics, etc.)
  • scripts / custom jobs
  • BigQuery Data Transfer
  • manual exports
  • CRM and offline conversion imports

2) Classify each flow:

  • live (queries Google Ads directly)
  • extract (periodically saves data)
  • warehouse (central storage with defined schema)

3) Identify all reports that request:

  • daily/weekly data older than 37 months,
  • “rebuild from scratch” behavior.

4) Decide what you truly need to preserve at daily granularity beyond 37 months.

In 30 days: export “minimum viable history”

Pick one of these paths:

  • simple: scheduled CSV exports + immutable storage (works short-term)
  • correct: a warehouse table model that prevents accidental historical loss

If you choose the “correct” path, start small:

  • one fact table partitioned by date
  • one entity snapshot table
  • one business calendar table

In 90 days: harden, automate, document, communicate

1) Automate exports and loads.

2) Add validations (row counts, spend sums, missing dates, duplicates).

3) Document the pipeline (what is stored, why, and how it’s protected).

4) Communicate the change to stakeholders:

  • what Google changed
  • what you’ve archived
  • what will differ in reporting after the deadline

BigQuery: how to prevent gaps and overwrites

Even if you don’t use BigQuery, the principles apply to any warehouse.

1) Partition by date

Google Ads data naturally fits date partitioning. This provides:

  • predictable query costs
  • easy range refresh
  • the ability to freeze older partitions
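
In BigQuery, date partitioning is declared when the table is created. A hedged DDL sketch, kept as a string constant next to the pipeline code (the dataset and table names are placeholders, not an official schema):

```python
# BigQuery DDL for a date-partitioned fact table.
# Dataset/table names are illustrative placeholders.
CREATE_FACT_TABLE = """
CREATE TABLE IF NOT EXISTS ads.ads_daily_campaign_performance (
  date DATE NOT NULL,
  account_id STRING NOT NULL,
  campaign_id STRING NOT NULL,
  cost NUMERIC,
  impressions INT64,
  clicks INT64,
  conversions NUMERIC,
  conversion_value NUMERIC
)
PARTITION BY date
"""
```

Partitioning by the `date` column means a refresh can target one day's partition without touching the rest of history.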

2) Use an append-only staging → merge pattern

The safest pattern is:

  • load into a fresh staging table
  • merge into a final fact table using dedupe keys (e.g., date + campaign_id)
  • keep run_id / load logs

This makes it hard to destroy history by accident.
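
To make the pattern concrete, here is a warehouse-agnostic sketch in plain Python: stage new rows, then merge into the fact store keyed on (date, account_id, campaign_id), so a rerun upserts instead of duplicating rows or blindly overwriting history. All names are illustrative.

```python
DEDUPE_KEY = ("date", "account_id", "campaign_id")

def merge_staging(fact: dict, staging_rows: list, run_id: str) -> dict:
    """Upsert staged rows into the fact store, keyed by DEDUPE_KEY.

    Rows whose key already exists are replaced (a corrected rerun);
    all other history is untouched; every write is tagged with run_id.
    """
    for row in staging_rows:
        key = tuple(row[k] for k in DEDUPE_KEY)
        fact[key] = {**row, "run_id": run_id}
    return fact

fact = {}
day1 = [{"date": "2026-01-01", "account_id": "a1", "campaign_id": "c1", "cost": 100.0}]
merge_staging(fact, day1, run_id="run-001")
# A rerun of the same day updates in place instead of duplicating:
merge_staging(fact, [{**day1[0], "cost": 101.0}], run_id="run-002")
```

In a real warehouse the same logic is a MERGE statement from the staging table into the fact table, but the invariant is identical: one row per dedupe key, with a run log telling you which load wrote it.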

3) Freeze old history after June 1, 2026

Post-deadline, avoid any pipeline behavior that “rebuilds everything.” Instead:

  • freeze older partitions (read-only policies or governance rules)
  • write reruns to staging tables
  • merge only when validations pass

4) Add minimal monitoring

Start with three alerts:

  • a job requests daily data older than 37 months,
  • a partition becomes empty after a rerun,
  • spend totals jump unexpectedly (duplicate rows).
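
These three alerts can start as plain functions called by the load job before any merge; the thresholds below are illustrative defaults you should tune, not recommendations from Google.

```python
from datetime import date

RETENTION_MONTHS = 37  # granular lookback window per the policy

def months_back(d: date, today: date) -> int:
    """Whole calendar months between d and today."""
    return (today.year - d.year) * 12 + (today.month - d.month)

def check_range(start: date, today: date) -> bool:
    """Alert 1: flag jobs requesting daily data older than the retention window."""
    return months_back(start, today) <= RETENTION_MONTHS

def check_partition(row_count_after: int) -> bool:
    """Alert 2: flag partitions that come back empty after a rerun."""
    return row_count_after > 0

def check_spend(before: float, after: float, tolerance: float = 0.10) -> bool:
    """Alert 3: flag spend totals that jump beyond tolerance (possible duplicates)."""
    return before == 0 or abs(after - before) / before <= tolerance
```

Each function returning False is a reason to block the merge and page a human, not a reason to retry automatically.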

5) A minimal data model that keeps you sane

If you want something maintainable for years:

1) ads_daily_campaign_performance (fact, partitioned by date)

  • keys: date, account_id, campaign_id
  • metrics: cost, impressions, clicks, conversions, conversion_value

2) ads_entities_snapshot (dimension snapshot)

  • campaign_id → campaign_name, status, bidding strategy (where relevant)

3) business_calendar (your truth table)

  • date → promo_window, site_release, tracking_change notes

4) conversion_definitions (documentation + governance)

  • what “conversion” meant at time X
  • when tracking changed

With these 4 pieces, you can reconstruct decisions and avoid reporting fiction.

API & dashboards: how to adapt after June 1, 2026

We won’t invent undocumented API behaviors here. Instead, use robust, product-agnostic rules:

1) Split reporting into “recent” vs “historical”

  • Recent (≤ 37 months): daily/weekly trends, pacing, diagnostics.
  • Historical (> 37 months): monthly/quarterly/yearly aggregates only.
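
One way to enforce the split is a small router that picks the granularity from the requested start date. The 37-month cutoff follows the policy described above; the function names are our own, not from any API.

```python
from datetime import date

def months_between(start: date, today: date) -> int:
    """Whole calendar months between start and today."""
    return (today.year - start.year) * 12 + (today.month - start.month)

def pick_granularity(start: date, today: date) -> str:
    """Route a report to daily detail or monthly aggregates,
    based on the 37-month granular retention window."""
    return "daily" if months_between(start, today) <= 37 else "monthly"

print(pick_granularity(date(2024, 1, 1), date(2026, 6, 1)))   # within the window
print(pick_granularity(date(2019, 11, 1), date(2026, 6, 1)))  # beyond the window
```

Dashboards built on this rule degrade gracefully: old ranges switch to monthly instead of silently returning nothing.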

2) Normalize time granularity in dashboards

It’s okay for a dashboard to show:

  • last 90 days daily,
  • last 5 years monthly,

as long as the UI makes the granularity difference explicit.

3) Keep KPI calculations in BI

Prefer computing CPA/ROAS in BI so:

  • you control the formulas,
  • schema changes don’t break your interpretation,
  • you can re-attribute conversion definitions cleanly.

4) Standardize report “types” (so teams stop improvising)

Define four standard report templates:

1) operational pacing (14–90 days daily)

2) macro trend (3–5 years monthly)

3) promo impact (window vs baseline)

4) tracking health (conversion stability, anomaly alerts)

When report templates are standardized, retention becomes a constraint—not a crisis.

Keeping KPI comparability (and communicating the change)

This is as much management as it is data engineering.

1) Separate “data availability” from “business performance”

Losing the ability to pull daily data from 2019 doesn’t change what happened in 2019. It changes your ability to retrieve it on demand. That’s a governance problem, not a performance problem.

2) Set an internal retention policy

Example policy:

  • store daily data for as long as you can (ideally 4–5 years if you export in time),
  • store monthly aggregates for 10+ years,
  • store conversion definitions and tracking changes permanently.
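
An internal policy is easiest to enforce when it lives as machine-readable config next to the pipeline. A sketch, with the horizons as example values for your own decision, not a standard:

```python
# Illustrative internal retention policy; horizons are examples, not a standard.
RETENTION_POLICY = {
    "daily":   {"keep_months": 60,  "frozen_after": "2026-06-01"},
    "monthly": {"keep_months": 120, "frozen_after": None},
    "conversion_definitions": {"keep_months": None, "frozen_after": None},  # keep forever
}

def must_keep(level: str, age_months: int) -> bool:
    """True if data at this level and age must still be retained."""
    keep = RETENTION_POLICY[level]["keep_months"]
    return keep is None or age_months <= keep
```

Any cleanup job can then call `must_keep` before deleting, so the policy is applied in code rather than remembered by whoever runs the job.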

3) Build a QBR pack that does not depend on live UI pulls

A good QBR pack contains:

  • monthly trend charts,
  • annotated events (promos, site changes, tracking changes),
  • decisions and outcomes.

Daily granularity is a diagnostic tool, not the final deliverable.

Deep dive: real failure modes (and how to prevent them)

Below are the five scenarios where this retention change tends to become a real incident. Use them as a checklist to harden your system.

Scenario 1: “We switched agencies / tools and need a multi-year audit”

When a new team takes over, they’ll want to understand:

  • what was tested historically,
  • when tracking changed,
  • when budget strategy changed,
  • how seasonal peaks were managed.

If your only source of truth is “live pulls,” the audit becomes guesswork. The fix is simple: keep a minimal daily archive plus a business calendar/changelog so you can explain what changed and when.

Scenario 2: “Tracking broke and we need to pinpoint when”

If conversions suddenly spike or disappear, day-level history is what lets you pinpoint the exact change window and correlate it with releases. After June 1, 2026, if that window is outside 37 months and you have no archive, you’ll be forced into monthly aggregates that can hide the break.

Prevent it with:

  • daily warehouse storage,
  • a tracking health report,
  • alerts for sudden conversion distribution changes.

Scenario 3: “We rebuilt the warehouse and re-ran backfills”

This is where teams accidentally destroy good history. The pattern looks like:

  • “delete and re-import everything,”
  • rerun pulls return empty for older daily ranges,
  • the pipeline overwrites partitions with empties,
  • you now have gaps that look like performance collapses.

The prevention is also simple:

  • never overwrite old partitions blindly,
  • always stage → validate → merge,
  • freeze old history after the deadline.

Scenario 4: “We compare this year vs 4 years ago, daily”

Some businesses compare specific seasonal windows day-by-day (holiday promos, launches, annual events). If that’s you, you should proactively export those windows at daily granularity now.

Your retention strategy doesn’t have to be “daily for 10 years.” It can be:

  • daily for 37 months+,
  • plus daily for “key comparison windows” you know you’ll reuse.

Scenario 5: “We changed conversion definitions and now nothing matches”

When you change conversion actions (or start importing offline quality), your KPIs change. If you don’t store:

  • conversion definitions over time,
  • tracking changes over time,

you’ll misinterpret history.

This is why a conversion_definitions table (even a simple document) matters.

Practical implementation notes: make the archive boring

“Boring” is the goal. A boring archive is stable, predictable, and hard to break.

1) Keep the schema stable

Avoid tying your core history table to fragile dimensions. A safe base is:

  • date
  • campaign_id
  • cost / impressions / clicks
  • conversions / conversion_value

If you need more detail, add it as separate tables, not as exploding dimensions inside the fact table.

2) Store names as snapshots, not truth

Campaign names change. If you store only campaign_name and use it as a key, you will eventually break reporting. Store IDs as keys; store names in a snapshot table so you can still debug and label charts.

3) Prefer “append-only” + dedupe keys

At minimum, use a dedupe key such as:

  • date + account_id + campaign_id

Then build a simple “dedupe/merge” process in the warehouse. Even if you start with CSVs, keep the idea: never assume a rerun is safe.

4) Validation beats sophistication

Your first validations can be extremely simple:

  • total daily cost in warehouse roughly matches totals you see in your main reporting source,
  • row counts are within expected ranges,
  • no missing dates for active campaigns.

If validation fails, block the merge.
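
A merge gate can be a single function that runs all three checks and returns the failures; any non-empty result blocks the merge. The tolerance and the check names below are illustrative.

```python
from datetime import date, timedelta

def validate_batch(rows, expected_cost, start, end, cost_tolerance=0.05):
    """Run the three minimal validations; return a list of failures (empty = OK to merge)."""
    failures = []
    total_cost = sum(r["cost"] for r in rows)
    if expected_cost and abs(total_cost - expected_cost) / expected_cost > cost_tolerance:
        failures.append("cost mismatch vs reporting source")
    if not rows:
        failures.append("row count out of range (0 rows)")
    days = {r["date"] for r in rows}
    d = start
    while d <= end:
        if d not in days:
            failures.append(f"missing date: {d}")
        d += timedelta(days=1)
    return failures

# Blocking the merge is then one line in the load job:
# if validate_batch(batch, expected_cost, start, end): raise RuntimeError("merge blocked")
```

Start this crude and tighten later; a blunt gate that actually runs beats a sophisticated one that is never wired in.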

Stakeholder comms template (copy/paste)

If you manage clients, leadership, or internal stakeholders, you’ll save time by communicating the change clearly. Here’s a template you can reuse:

  • What changed: Google Ads will limit access to granular reporting to 37 months starting June 1, 2026; aggregated monthly/quarterly/yearly data remains available longer.
  • What this means: we may not be able to re-pull daily history older than the window from the UI/API after that date.
  • What we’re doing: we are exporting the daily history we need into our own archive/warehouse and adding safeguards so reruns/backfills cannot overwrite old partitions.
  • What won’t change: ongoing reporting for recent periods and decision-making will continue normally; trend reporting for long horizons will still work using monthly aggregates.
  • Risks to watch: tool migrations or “rebuild everything” pipelines can create gaps if not controlled.
  • Timeline: export plan complete before June 1, 2026; monitoring and documentation added as part of the rollout.

This turns the retention update from a surprise into a managed project.

FAQ

“Is monthly aggregation enough?”

Monthly is often enough for macro trends and executive summaries. It is not enough for diagnostics: pacing, day-of-week patterns, release impact, or tracking incident analysis.

“Can we just export from the UI?”

Manual export works as a backup, but it’s hard to keep consistent:

  • different report presets,
  • inconsistent columns,
  • partial windows,
  • human error.

If you export manually, treat it as an emergency plan and still implement consistency checks and versioning.

“What does ‘no hallucinations’ mean here?”

It means we do not invent undocumented API behavior. We rely on the official retention policy for the rules, and we use standard, proven warehouse patterns for the implementation.

How AYSA executes this (approval-first, evidence-first)

In AYSA, platform and measurement changes are treated as operational projects with risk:

1) Confirm the change (sources, scope, timeline).

2) Audit every data pull path (dashboards, connectors, scripts, warehouse loads).

3) Define the minimal archive required for business decisions.

4) Approval-first plan: what to export, where it lives, what gets frozen, what reruns are blocked.

5) Implement + validate (spend totals, missing dates, duplicates, sanity checks).

6) Monitor + document: alerts and a changelog so you can explain history later.

This is how you avoid a “data incident” and turn the retention change into a footnote.

Final checklist

If you want a single rule: what you don’t store yourself eventually doesn’t exist.

  • Have you identified every dashboard or connector that queries Google Ads live?
  • Have you decided what daily/weekly history you must preserve beyond 37 months?
  • Do you have an export owner and deadline (before June 1, 2026)?
  • Is your warehouse protected against historical overwrites (append-only + merge)?
  • Do you have at least 3 alerts (old-range queries, empty partitions, duplicates)?
  • Have you communicated the change to stakeholders (what changes, what doesn’t)?

If you do nothing else, do this: stop treating historical data as something you can always re-download later. Treat it like inventory. If you don’t warehouse it, protect it, and document it, it will disappear right when you need it to answer the hardest questions.

About the author

Marius Dosinescu is the founder of AYSA.ai, an approval-first SEO/AEO/AI Search execution platform focused on evidence, shipping, and verification.

More: https://aysa.ai/ • Blog: https://aysa.ai/blog/

If you want help operationalizing the export safely (without building a fragile one-off), AYSA’s workflow approach is designed for exactly this kind of change: confirm the policy, inventory dependencies, execute with approvals, and then validate the data so leadership can trust the numbers again.

The goal is not a perfect warehouse. The goal is a durable archive you can still use two years from now, when a board question forces you to explain “what happened back then” with evidence, not opinions.

That’s what “measurement maturity” looks like in practice, even for SMBs.
