AI Search

AI Overviews Change Every 2 Days (But Never Change Their Mind)

Louise Linehan
Louise is a Content Marketer at Ahrefs. Over the past ten years, she has held senior content positions at SaaS brands: Pi Datametrics, BuzzSumo, and Cision. By day, she writes about content and SEO; by night, you'll find her playing football or screaming down the mic at karaoke.
Just how stable are AI Overviews? If you manage to get your brand mentioned or cited in them, can you take the rest of the month off? Or do you have to fight for ongoing visibility? 

To find the answers, our data scientist, Xibeijia Guan, analyzed over 43,000 keywords—each with at least 16 recorded AI Overviews—over the course of a month.

She extracted this data from Brand Radar, our new AI visibility tool that tracks hundreds of millions of prompts and queries across seven different AI assistants.

Ahrefs Brand Radar dashboard view showing results for Toyota, versus competitors Honda, Nissan, Chevrolet, Ford, Volkswagen

The results reveal a surprising paradox in how Google’s AI operates—a constant state of change on the surface, but a deep, underlying stability.

The content of the AI Overviews we studied changed drastically over the month of our analysis.

In fact, we found that AI Overviews have a 70% chance of changing from one observation to the next.

This is known as the “Pointwise Change Rate”, and is calculated by dividing the number of changes observed by the number of consecutive pairs.

# of changes observed / # of consecutive pairs

  • Number of consecutive pairs: The total number of times we compared two sequential AI Overview responses for the same search query.
  • Number of changes observed: A count of how many of those comparisons resulted in the AI Overview content being different from the previous version.
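The study's exact implementation isn't published, but the metric itself is simple to sketch. Here's a minimal Python version, using hypothetical snapshot text rather than data from the study:

```python
def pointwise_change_rate(snapshots):
    """Share of consecutive snapshot pairs whose content differs."""
    pairs = list(zip(snapshots, snapshots[1:]))
    changes = sum(1 for prev, curr in pairs if prev != curr)
    return changes / len(pairs)

# Hypothetical AI Overview texts captured for one query over time
snapshots = ["Answer v1", "Answer v1", "Answer v2", "Answer v3"]
print(pointwise_change_rate(snapshots))  # 2 changes / 3 pairs ≈ 0.67
```

Averaging this rate across all tracked keywords is what produces the 70% figure above.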

Here’s an example of that flux in action.

Below are two AI Overviews for the query “renters insurance”, captured two minutes apart in incognito mode.

For easy comparison, one is in light mode…

Google search results page for "renters insurance" showing AI Overview with detailed explanation of coverage types including Personal Property Coverage, Tenants' Liability Coverage, and Additional Living Expenses, plus common exclusions and optional extras.

And the other in dark mode…

Google AI Overview for "renters insurance" in dark mode showing detailed coverage information including What Renters Insurance Covers section with bullet points for Personal Property Coverage, Tenants' Liability Coverage, and Additional Living Expenses, plus What is Not Typically Covered section.

It’s immediately obvious that the phrasing and content of each overview is different.

For instance, the opening paragraph of the dark mode AI Overview lists out the types of events that renters insurance covers (e.g. fire, theft, or flood)…

Google AI Overview for "renters insurance" query showing dark theme interface with definition explaining it's insurance for tenants' personal belongings and liability coverage, with "fire, theft, or flood" highlighted in orange.

Whereas the light mode AI Overview focuses more on whose responsibility it is to obtain renters insurance…

Google AI Overview for "renters insurance" showing concise definition explaining it's optional insurance protecting tenants' belongings and providing liability coverage, noting it's the tenant's responsibility as landlord insurance only covers building structure and landlord's items.

Other differences include the use of examples, the level of detail, and the overall structure.

Our research revealed that AI Overviews have an average persistence of 2.15 days, meaning a given overview's content typically survives just over two days before changing.

Ahrefs research findings for 43,000 keywords. Title: AI Overviews change every 2.15 days. Image shows two cartoon calendars side by side. The first reads "Nov 1st" and shows an AI Overview for "renters insurance". The second reads "Nov 3rd" and shows a different, longer AI Overview for "renters insurance". An arrow points from the first calendar to the second, with text reading "2.15 days"

Since our checks weren’t daily, the real change rate is likely even higher.

Even if your content gets cited in AI Overviews, you’re not guaranteed ongoing visibility.

Our research shows citation flux is common.

In fact, between consecutive responses, Xibeijia found that only 54.5% of URLs overlap on average.

This works out as approximately 1 URL change every time the same AI Overview query is re-run.

Meaning that, from one observation of an AI Overview to the next, nearly half (45.5%) of the cited sources are entirely new.
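One plausible way to compute this URL overlap is a Jaccard-style set comparison (the study's exact formula may differ); here's a sketch with hypothetical citation lists:

```python
def citation_overlap(prev_urls, curr_urls):
    """Jaccard-style overlap between two sets of cited URLs."""
    prev, curr = set(prev_urls), set(curr_urls)
    if not prev | curr:
        return 1.0  # two empty citation lists count as identical
    return len(prev & curr) / len(prev | curr)

# Hypothetical citation lists from two consecutive captures
before = ["forbes.com/a", "fortune.com/b", "reddit.com/c"]
after = ["forbes.com/a", "fortune.com/b", "nbcnews.com/d"]
print(citation_overlap(before, after))  # 2 shared / 4 total = 0.5
```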

To illustrate this, here’s an example of the query “Best protein powder”, captured in Ahrefs’ SERP Overview tool via Keywords Explorer.

Ahrefs SERP overview comparison for "best protein powder" between October 12th and November 1st, 2025, showing 9 changes in Top 10 with 68% SERP similarity. Green highlighting shows maintained/improved positions (Fortune, Forbes articles), red shows declined positions (Reddit, NBC News articles).

Forbes and Fortune showed up consistently between October and November, but the third URL changed.

Initially, a Reddit comment about protein powders took second place, but three weeks later it was replaced by Fortune’s “best” list, and a new article from NBC on “protein shake safety” entered the third spot.

Here’s one more example for the query “renter’s insurance”—each AI Overview was captured just a week apart.

Ahrefs SERP overview comparison for "renter's insurance" between October 21st and 28th, 2025, showing 5 changes in Top 10 results with 93% SERP similarity, 3 declined positions, and 2 new entries. Green highlighting indicates maintained positions while red shows declined.

The first AI Overview returned three citations, but only two of those carried over to the second capture, where a further ten citations joined the list.

It’s clear that AI Overview visibility doesn’t follow the same consistency patterns as traditional search rankings.

Your brand can be cited today, and gone tomorrow.

Entity representation in AI Overviews is nearly as volatile as citations.

We define entities as specific, identifiable named items that appear in the text of the AI Overview—for example: people, organizations, locations, and brands.

Of the AI Overviews we studied, 37% contained entities—with each of those displaying roughly three entities per response.

Image title reads: When AI Overviews include entities, they feature three on average. Subtitle reads: Research by Ahrefs. 43,000 keywords studied. Image shows an illustrated view of an AI Overview about Ahrefs. Arrows highlight three entities. Company entity (Ahrefs), Person entity (Dmytro Gerasymenko), Location entity (Singapore)

By studying entity overlap, we were able to measure how often real-world information stays the same between two sequential AI Overview responses for the same search query.

The formula we used was:

# of common entities / # of total entities across consecutive pairs

  • Common entities: This is the count of the named things (people, organizations, or locations) that appeared identically in both of the consecutive AI Overviews being compared.
  • Total entities across consecutive pairs: This is the total count of all unique entities found across the two sequential AI Overviews being compared.
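A minimal sketch of that formula in Python, with hypothetical entity sets rather than data from the study:

```python
def entity_overlap(prev_entities, curr_entities):
    """Share of named entities common to two consecutive AI Overviews,
    out of all unique entities found across both."""
    prev, curr = set(prev_entities), set(curr_entities)
    return len(prev & curr) / len(prev | curr)

# Hypothetical entity sets from two consecutive responses
before = {"Ahrefs", "Singapore", "Dmytro Gerasymenko"}
after = {"Ahrefs", "Singapore", "Google"}
print(entity_overlap(before, after))  # 2 common / 4 unique = 0.5
```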

From this, we were able to calculate the percentage of named entities that remained consistent when the AI Overview changed—otherwise known as the “entity overlap”.

This worked out as 54%—or approximately 1 entity change for every AI Overview update.

Meaning that the remaining 46% experienced volatility—just a 0.5% difference in flux versus citations.

It could be a coincidence, but one theory is that Google regenerates URLs and entities at a similar rate.

This constant swapping of text, sources, and subjects means that you can often get a different AI Overview answer just by refreshing the page.

Here’s Despina Gavoyannis from our blog team experiencing exactly that…

Slack message from despina dated July 17th at 1:21 PM explaining that refreshing search results when getting an undesired AI Overview produces different answers with different citations, noting more variation occurs for topics without strong consensus or well-documented responses.

While words are in constant flux, the underlying meaning of the AI Overview is incredibly consistent.

We measured the “Semantic stability” between consecutive AI Overview responses and found an average cosine similarity score of 0.95, where 1.0 represents a perfect match.

Image of a temperature gauge dial (semi circle). Number on the gauge range from 0.0 (red) to 1.0 (green), with the dial pointing to 0.9. Image title reads: Consecutive AI Overviews show 0.95 cosine similarity score. Subtitle reads: Research by Ahrefs. 43,000 keywords studied.

This score indicates an extremely high degree of semantic consistency.

It’s like asking two different experts the same question—you’ll get different wording, different phrasing, and maybe different examples, but the fundamental answer is the same.
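Cosine similarity compares embedding vectors of the two responses: 1.0 means they point in exactly the same direction in semantic space. A minimal sketch (the vectors here are made up; real sentence embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for two consecutive AI Overviews
v1 = [0.8, 0.6, 0.1]
v2 = [0.7, 0.7, 0.1]
print(round(cosine_similarity(v1, v2), 3))  # ≈ 0.99: reworded, same meaning
```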

My earlier “renters insurance” example illustrates this.

Though each AI Overview differed in length, language, and structure, they covered largely the same topics and themes—like personal property coverage, liability protection, and common exclusions.
Side-by-side comparison of Google AI Overview for "renters insurance" in light mode (left) and dark mode (right), with orange arrows pointing from light version's "What it Covers" and "Common Exclusions and Optional Extras" headers to corresponding "What Renters Insurance Covers" and "What Is Not Typically Covered" sections in dark version.

In other words, AI Overviews are continuously rephrasing a stable, underlying consensus drawn from their sources—this is the nature of probabilistic large language models.

They don’t change their “opinion” on a topic day to day.

The core message remains the same, even if the text, citations, and entities switch in and out.

Our CMO, Tim Soulo, had a theory that Google might cache AI Overviews belonging to popular keywords to save on computational resources.

In fact, his hypothesis sparked this whole study…

Slack message from timsoulo dated July 17th at 8:10 AM discussing an interview with Kevin Indig about Google generating AI Overviews for searches, with highlighted text questioning whether it makes sense to cache them for popular queries and wondering about consistency of text and citations.

But the findings disprove this.

Firstly, we’d expect to see far more stability across AI Overview content if some were being cached.

But, as we already know, consecutive AI Overviews showed different content 7 out of 10 times.

Secondly, Xibeijia measured the actual relationship between a keyword’s search volume and its AI Overview change rate, and found a Spearman correlation of -0.014.

Image shows temperature gauge (horizontal line) ranging from -1.0 (strong negative), to +1.0 (strong positive), with a highlight pointing at -0.014, just near the middle "0" (no correlation). Title reads: Search volume and AI Overview changes show no correlation

A correlation this close to zero indicates there is likely no relationship between the two variables—hugely popular search queries are just as likely to have their AI Overview text change as very niche ones.
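Spearman correlation is just Pearson correlation applied to ranks, which makes it robust to the heavy skew of search volume data. A self-contained sketch with hypothetical volumes and change rates (not the study's data):

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the two rank sequences."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: monthly search volumes vs. per-keyword change rates
volumes = [100, 5400, 90, 120000, 880]
change_rates = [0.62, 0.72, 0.70, 0.66, 0.74]
print(round(spearman(volumes, change_rates), 3))  # ≈ 0.1: essentially no relationship
```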

So, it’s unlikely Google caches popular AI Overviews—at least based on our data.

Wrapping up

AI Overviews are both dynamic and stable at the same time.

The surface details, like the exact wording, URLs cited, and entities mentioned all switch constantly—but the underlying meaning and the core topics stay the same.

This changes how we can think about AI-generated search results.

They’re not static like traditional search results, but they’re not random either.

While you should expect your brand mentions and citations in AI Overviews to be volatile, there’s still a way to show up consistently.

Rather than focusing on individual prompts or queries, you need to become an authority on the themes associated with your core topics.

You can understand which themes AI ties to your brand using Ahrefs Brand Radar.

Just drop in your brand, and head to the “Topics” report. This will show you which themes individual AI responses ladder up to.

For example, Ahrefs is most closely linked to the topics of “SEO tools” and “SEO software” in AI Overview responses.

Ahrefs Brand Radar interface showing Topics tab with AI Overviews filter selected, displaying 2,539 results dated November 3rd, 2025, with top topics being "seo tools" (177 AI responses, 120K volume), "seo software" (131 responses, 90K volume), and "keyword research" (116 responses, 54K volume).

Tracking AI visibility over a volume of answers will also help you see past the variance of AI responses.

Two side-by-side line graphs comparing prompt tracking methods. Left graph titled "Individual prompt tracking" shows sporadic data with red X marks at 100% on Day 5 and 0% on Day 15, with no Day 30 data point. Right graph titled "Aggregate prompt tracking" shows consistent analysis with a green line trending upward from approximately 5% on Day 5, through 40% on Day 15, to 60% on Day 30, with green dots marking each data point.

By focusing on aggregated visibility and AI Share of Voice, you can:

  • See if AI consistently ties you to a category—not just if you appeared once.
  • Track trends over time—not just snapshots.
  • Learn how your brand is positioned against competitors—not just mentioned.
Ahrefs Brand Radar dashboard showing Ahrefs with 84.1% AI Share of Voice leading competitors across multiple metrics, including a mentions comparison table across AI platforms and a time-series graph tracking brand mentions.

Winning the topic, not the query, is the best way to stay visible—even when AI answers are changing daily.