
How to conduct an AI search audit

A step-by-step guide to conducting an AI search audit and acting on your findings

Lily Grozeva

10/31/25

7 min read


Search KPIs are shifting from links to mentions. ChatGPT, Gemini, and Claude don't rank pages; they generate responses, which means visibility now depends on being included and described accurately in those answers.


Traditional SEO audits don't cover this: SEO metrics can't fully predict AI visibility, outdated content risks unwanted amplification in answers, and your competitive positioning may be ignored entirely.


To be clear, AI search doesn’t replace SEO; it extends it. But the growth of AI search means we need to optimize across both ecosystems. 


Graph comparing Google and AI search users. Google's users in blue remain stable, while AI search users, in green, steadily increase.
The growth of AI search compared to Google

That’s why I thought it was important to develop a framework for AI search audits. For the past seven years, I’ve led teams optimizing visibility for US tech companies, combining classic SEO fundamentals with AI-enhanced workflows. 


My framework consists of two parts:


  • Auditing how your brand is represented in AI. Analyze how page-level content is represented and summarized in AI search results, as well as the data sources feeding those results (as reported by tools like Profound, Peec, and RankScale).

  • Auditing your website content. Analyze the content that drives better representation and perception across AI models to help define what's working.




How to audit brand visibility in AI 


When ChatGPT exploded at the end of 2022, my team and I began testing how brands actually show up in AI-driven discovery, Google’s AI Overviews, and LLM platforms like ChatGPT, Gemini, and Perplexity.


When I started running AI search audits, I realized a single metric couldn’t explain why some brands showed up while others vanished. 


Out of this experimentation came a repeatable AI search audit process, a framework we’ve been refining since the beginning of this year to help clients measure and improve their visibility where search results are now answers, not links.


Over time, I built a 7-lens AI visibility framework that looks at:

  • Inclusion
  • Answer presence
  • Accuracy
  • Tone and sentiment
  • Differentiators
  • Trust and grounding
  • Brand safety

These are different lenses of the same camera.


Flowchart titled "7-Lens AI Visibility Framework" with seven steps: Inclusion, Answer Presence, Accuracy, Tone & Sentiment, Differentiators, Trust & Grounding, Brand Safety.


01. Inclusion 


Inclusion is baseline visibility. It checks whether the model even recognizes your brand and can surface it at all in answers. In traditional SEO, it corresponds to indexability. 


To check inclusion, start with the basics. 


Test bot access with tools like Knowatoa or your server log files. If you're a Wix user, you can see which AI crawlers are visiting your site in your SEO Reports.


Blue line chart displays AI bot traffic over time with a drop-down menu showing filters for five selected bots. Time span: Mar 19-Apr 1.
Track how often AI bots crawl your site in Wix's SEO Reports

If access fails, work on fixing it. When granted access, run branded and non-branded prompts in ChatGPT, Claude, Gemini, and Perplexity, or another LLM platform you know your ideal customer profile frequents.


Note: I've noticed clients struggle to choose the right prompts for monitoring a topic. Data on the conversations people have in LLMs is still unavailable, so your best bet is to:


  • Write down your primary personas, main use cases for your core offerings, brand name, product/services, leadership, and so on.


  • Open an LLM of your choice and prompt it to use the above information to build ten non-branded and five branded (comparison/recommended-list) prompts your personas are likely to use when they're in the awareness, consideration, or retention phase.


You can write more than 15 prompts, but these are enough to run an inclusion check and pass this phase. If you're doing this for the first time, keep it simple and expand later.
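If it helps to make this step concrete, here's a minimal Python sketch that composes a starter prompt set from personas and use cases. The personas, use cases, brand, and competitor names are all hypothetical placeholders; in practice you'd have an LLM draft these for you.

```python
from itertools import product

def build_prompt_set(personas, use_cases, brand, competitors):
    """Compose a starter prompt set: non-branded discovery prompts
    plus branded comparison/recommendation prompts."""
    non_branded = [
        f"What are the best tools for {uc} for a {p}?"
        for p, uc in product(personas, use_cases)
    ]
    branded = [f"Is {brand} a good option for {uc}?" for uc in use_cases]
    branded += [f"{brand} vs {c}: which should I choose?" for c in competitors]
    # Cap at the 10 non-branded / 5 branded prompts suggested above
    return non_branded[:10], branded[:5]

non_branded, branded = build_prompt_set(
    personas=["startup founder", "IT manager"],        # hypothetical personas
    use_cases=["automating invoices", "expense tracking", "payroll"],
    brand="ExampleCo",                                 # hypothetical brand
    competitors=["RivalSoft", "AcmeTools"],            # hypothetical rivals
)
print(len(non_branded), len(branded))
```

Swap in your own personas and offerings; the template strings are just one way to phrase awareness- and consideration-stage questions.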


Run the 15 prompts manually or use a tool like RankScale, Peec, or Knowatoa. (I personally use these, but there are many more emerging tools you could consider.)


Log whether your brand is mentioned or absent. 
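A lightweight way to keep that log is a small script that flags whether your brand name appears in each answer and writes the results to CSV for your tracker spreadsheet. The prompts, answers, and brand below are illustrative.

```python
import csv
import io
import re
from datetime import date

def log_inclusion(rows, prompt, model, answer, brand):
    """Append one inclusion-check result: was the brand mentioned at all?"""
    mentioned = re.search(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE)
    rows.append({
        "date": date.today().isoformat(),
        "model": model,
        "prompt": prompt,
        "brand_mentioned": bool(mentioned),
    })

rows = []
log_inclusion(rows, "best pottery studios near me", "chatgpt",
              "Top picks include The Pottery Place and Clay & Create Studio.",
              brand="The Pottery Place")
log_inclusion(rows, "best pottery studios near me", "gemini",
              "Clay & Create Studio is a popular choice.",
              brand="The Pottery Place")

# Write to CSV so results can live in the tracker spreadsheet
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```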


If you're not ready to invest in these tools, try this LLM brand visibility tracker spreadsheet to get started.


Optimization in this step involves strengthening entity signals (Knowledge Graph, Wikipedia) and seeding citations on authoritative surfaces (industry blogs, Reddit, Capterra, G2, Gartner).



02. Answer presence


Answer presence is competitive positioning. AI search visibility tools examine where and how often your brand appears in response to key prompts, as well as whether models exclude, include, or misclassify you compared to others.


The two are easy to confuse:


  • Inclusion is existence

  • Answer presence is representation


Wix’s AI Visibility Overview tracks how often your site is mentioned and cited in AI responses and compares your visibility to your competitors.


Bar chart showing "The Pottery Place" leading with a 55% visibility score. Four competitors follow: "Clay & Create Studio" at 25%, "Wheel & Kiln Workshop" at 20%, "MudWorks Studio" at 10%, and "Artisan Clay Studio" at 0%. Blue bars indicate scores.
Compare your visibility against competitors in Wix's AI Visibility Overview

Having this information helps you see the gap between your target and actual visibility, and start building content and industry representation to close it.



03. Accuracy


To check accuracy, run prompts about your brand and compare model answers against your official facts: leadership names, pricing, deployment model, integrations, and features. 

This is pretty straightforward, and any approach—manual or tool-automated—will do the job.


Log every mismatch or vague response. 


To optimize, keep About pages, pricing tables, docs, and Wikidata entries current with crisp, dated statements. Regularly track outputs across models, and use lightweight scripts that compare answers against a central "claims registry" of canonical facts.
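One way such a script could look, assuming a hand-maintained registry of canonical facts; the fields and values below are hypothetical:

```python
# A minimal "claims registry" check: compare a model answer against
# canonical facts and flag anything missing or contradicted.
CLAIMS = {
    "ceo": "Jane Doe",              # hypothetical canonical facts
    "starting_price": "$29/month",
    "deployment": "cloud-only",
}

def audit_answer(answer: str) -> list[str]:
    """Return the registry fields whose canonical value is absent
    from the model's answer (a mismatch or a vague response)."""
    return [field for field, value in CLAIMS.items()
            if value.lower() not in answer.lower()]

answer = "ExampleCo is a cloud-only platform led by CEO Jane Doe."
print(audit_answer(answer))  # pricing never appears -> ["starting_price"]
```

Plain substring matching is crude (a model can state a wrong price without repeating your exact string), but it's enough to surface answers that need a manual read.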



04. Tone and sentiment


To check tone and sentiment, review how LLMs describe your brand: is the framing positive (“trusted leader”), neutral (“offers X features”), or dismissive (“basic option”)? 


Track adjectives, qualifiers, and context. 


Optimization involves incorporating seed content with authoritative, confidence-building language, highlighting analyst/customer praise, and ensuring that case studies and FAQs reinforce your desired positioning.


Brand perception report for The Pottery Place. Positive sentiment noted. Strengths: diverse offerings, community focus. Improvements needed.
Assessing brand perception in Wix's AI Visibility Overview

05. Differentiation


Differentiation prompts are very bottom-of-funnel and a high priority for optimization. They should address your core offering and competitive advantage.


Run competitive prompts with RankScale ("best tools for…," "alternatives to X") across your targeted LLMs.


Capture which brands appear with yours, how often, and what visibility score and sentiment they’re given. 


Scoreboard showing Visibility and Sentiment Scores for multiple brands on July 1. AMPECO leads with 100% visibility and 85% sentiment.
Image credit: RankScale

Look for clustering patterns. 


Are you grouped with enterprise leaders, budget players, or niche tools? Note any misframings.

Score your positioning: strong if you’re in the right cluster with accurate framing, weak if absent, misclassified, or grouped with irrelevant competitors. 
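That rubric can be expressed as a tiny helper. This is a sketch assuming you've already recorded presence, cluster, and framing accuracy for each prompt; the cluster labels are illustrative.

```python
def score_positioning(present: bool, cluster: str,
                      target_cluster: str, framing_accurate: bool) -> str:
    """Directional positioning score following the rubric above:
    strong if present, in the right cluster, and accurately framed;
    otherwise weak, with the reason."""
    if not present:
        return "weak: absent"
    if cluster != target_cluster:
        return "weak: wrong cluster or irrelevant competitors"
    if not framing_accurate:
        return "weak: misframed"
    return "strong"

print(score_positioning(True, "enterprise", "enterprise", True))  # strong
print(score_positioning(True, "budget", "enterprise", True))
```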


To optimize differentiators:


  • Create comparison pages, listicles, and “best tools for” content that clearly state your unique edge 

  • Seed citations on industry blogs, directories, and Reddit to frame your advantages 

  • Add FAQ/QAPage schema to product and solution pages 

  • Refresh copy to prevent misclassification (e.g., SaaS vs on-prem) 

  • Get media coverage for your products and brand


Adjust your content and outreach efforts until you’re consistently grouped with the right competitors and accurately represented.



06. Trust and grounding


To check trust and grounding, review LLM answers for whether they cite your site or credible third parties versus making vague, hallucinated claims. Look at citation domains (via Peec/RankScale) and test whether your content has short, factual, quotable statements.


Citation and Reference Analysis list with URLs, first and last seen dates (30/07/2025), and the number of appearances, some highlighted.
Image credit: RankScale

Optimization involves tasks such as creating liftable claims (“As of 2025, we support SOC 2 Type II”), updating Wikidata, and aligning documents and FAQs with structured sources.



07. Brand safety


To check brand safety, run the prompts you already defined or build new ones using the process we outlined at the beginning. Make sure you cover 10 to 15 branded prompts across company, product, and leadership names, and scan for risky associations. For example:


  • The wrong industry category

  • Outdated product info 

  • Negative incidents 

  • Collisions with similarly named companies 


Again, you can do that manually or with any of the best AI tracking tools.


Optimization involves keeping brand pages, Wikidata, and press kits updated, addressing old narratives with fresh, authoritative content, and monitoring for confusing overlaps. 


An additional step would be to set up a quarterly check for these as new LLM model versions emerge.



How to audit your website content for AI search 


Now, let’s move to the execution engine, a repeatable process for auditing how your website content can be optimized to be surfaced in AI responses.

 


01. Crawl & harvest content for buyer-relevant topics


This step ensures you’re auditing the most important content on your website. 


Start by mapping business goals and product areas to the topics buyers actually search for or ask LLMs about. 


It’s not about keywords in the old SEO sense. It’s about buyer-relevant concepts like product integrations, compliance, pricing, workflows, and industry pain points. 


Crawling the content of your website reveals what’s already represented; comparing it to goals exposes gaps. The aim is to build a clean, prioritized topic list before entity analysis.


How to do this in practice:


  • Tools: Sitebulb or Screaming Frog (for crawls), ChatGPT/Gemini (to cluster page topics), spreadsheets or Notion for organizing.


  • Approach:

    • Crawl the site and extract page titles, H1s, and meta descriptions

    • Pull product docs, pricing, and blog categories

    • Translate these into a topic list aligned with revenue goals (e.g., “SOC 2 compliance,” “Microsoft 365 integration”)


  • Keep it lean: 20 to 40 topics max for small sites.


Log these for later.
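As a sketch of turning a crawl export into that topic list: the snippet below parses a Screaming Frog-style CSV and dedupes H1s into topic candidates. Column names vary by tool and version, and the URLs and titles here are made up.

```python
import csv
import io

# A Screaming Frog-style export, pasted inline for illustration
export = """Address,Title 1,H1-1
https://example.com/,ExampleCo | Cloud Invoicing,Automate your invoices
https://example.com/pricing,Pricing | ExampleCo,Simple pricing
https://example.com/soc2,SOC 2 Compliance | ExampleCo,SOC 2 compliance
"""

topics = []
for row in csv.DictReader(io.StringIO(export)):
    # Use the H1 as the topic candidate; fall back to the page title
    topic = (row.get("H1-1") or row["Title 1"]).strip()
    if topic and topic not in topics:
        topics.append(topic)

print(topics)
```

From here you'd hand the list to an LLM for clustering, then trim it to the 20 to 40 topics that map to revenue goals.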



02. Analyze your entities


Entity analysis means identifying the specific concepts, products, platforms, or standards your buyers expect to see linked with your brand, then checking how well your content reflects them. 


In practice, crawl your site, extract entities with NLP tools (spaCy, Google NLP, or ChatGPT/Gemini), and measure frequency. 


I talk more about this in this LinkedIn post and on this webinar with Sitebulb.


The idea is to classify each entity as underrepresented, appropriate, or overrepresented, and rate its relevance to your topics. 


LLMs learn from consistent signals, so missing high-relevance entities (e.g., “Microsoft 365,” “compliance”) creates visibility gaps you must fix first. These gaps will be at the core of your updated content plans.
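Here's a minimal sketch of the frequency bucketing, using plain substring counting instead of a full NLP pipeline; the target entities, thresholds, and page text are illustrative.

```python
import re
from collections import Counter

# Entities you expect buyers to associate with the brand (examples)
TARGET_ENTITIES = ["Microsoft 365", "compliance", "multi-cloud"]

def classify_entities(pages: list[str], low: int = 2, high: int = 20) -> dict:
    """Count entity mentions across page text and bucket each entity as
    underrepresented, appropriate, or overrepresented.
    The low/high thresholds are illustrative; tune them to your site size."""
    counts = Counter()
    text = " ".join(pages).lower()
    for entity in TARGET_ENTITIES:
        counts[entity] = len(re.findall(re.escape(entity.lower()), text))
    return {
        entity: ("underrepresented" if n < low
                 else "overrepresented" if n > high
                 else "appropriate")
        for entity, n in counts.items()
    }

pages = [
    "We integrate with Microsoft 365 and Google Workspace.",
    "Compliance is built in: SOC 2 and ISO 27001. Compliance reports ship monthly.",
]
print(classify_entities(pages))
```

A real pass would use spaCy or Google NLP to extract entities you didn't anticipate; this version only measures the ones you already care about.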


Spreadsheet listing entities with columns: Frequency, Status, Relevance to Core Topics, Content Opportunity. Highlighted sections in red.


03. Align with LLMs


This step checks whether your content teaches models what you want them to know. After extracting entities, use ChatGPT, Claude, or Gemini as alignment helpers. 


Feed in spreadsheets with topic clusters and sample content (or, if your website has fewer than 5,000 pages, all of your scraped body text), then ask the model to rate how well each entity supports the target topics: clear, partial, or weak alignment. You can use a prompt similar to the example below.


Text-based image outlining instructions for an AI Search auditor. It details input types, task instructions, rating scales, and output format.
Gemini Advanced

The goal isn’t perfection but directional scoring to see if entities reinforce your positioning or create noise. This helps prioritize updates by fixing weak or misaligned entities before scaling content further.


For a small B2B tech site, here’s how LLM alignment scoring can work in practice.


  • Pick your entities (from Step 2). For example: “Microsoft 365,” “compliance,” “multi-cloud.”


  • Gather sample content. Pull key pages (About, product docs, case studies, blogs). You don’t need every page, just a representative set.


  • Evaluate with an LLM. Paste the content (or chunks) into ChatGPT/Gemini with prompts like the one above.


Tools you’ll need:


  • Sitebulb/Screaming Frog (crawl + export text)

  • ChatGPT, Gemini, Claude (evaluation, scoring)

  • Google Sheets/Excel (building a score tracker)


It’s essentially a manual QA layer, but for fewer than 5,000 pages, it’s quick and gives you directional clarity.
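To keep the score tracker simple, you can roll the per-page ratings up into one directional score per entity; the weights and ratings below are arbitrary examples.

```python
# Map LLM alignment ratings (clear/partial/weak) to numeric weights
WEIGHTS = {"clear": 1.0, "partial": 0.5, "weak": 0.0}

def alignment_score(ratings: list[str]) -> float:
    """Average one entity's per-page ratings into a 0..1 score."""
    return round(sum(WEIGHTS[r] for r in ratings) / len(ratings), 2)

ratings_by_entity = {
    "Microsoft 365": ["clear", "clear", "partial"],   # example ratings
    "multi-cloud":   ["weak", "partial", "weak"],
}
scores = {e: alignment_score(r) for e, r in ratings_by_entity.items()}
print(scores)  # entities scoring low go to the top of the fix list
```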



04. Test your content in the answer layer


This step maps how your brand actually shows up in AI-generated answers. Run the prompts you work with across multiple models, including ChatGPT, Claude, Gemini, and Perplexity, using an AI visibility tool for automation or manual testing if resources are limited. 


Capture inclusion/exclusion, order of appearance, and snippets. 


The outcome is a visibility map. Do you show up, how are you positioned, and who consistently outranks or clusters near you?


Heatmap titled AI Search Visibility Map Example with categories and AI names. Visibility scores range 0-3, colored from red to green.
An AI search visibility map

A visibility map helps you see, at a glance, where your brand shows up across prompts and models, and where it doesn’t. Use it to prioritize fixes, track progress over time, and explain competitive positioning clearly to stakeholders.
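A visibility map can be as simple as a dictionary keyed by (prompt, model), printed as a grid; the prompt categories and 0-3 scores below are made up.

```python
# One 0-3 visibility score per (prompt category, model), as in the
# heatmap above. Scores here are illustrative.
scores = {
    ("best tools for X", "ChatGPT"): 3,
    ("best tools for X", "Gemini"): 1,
    ("alternatives to X", "ChatGPT"): 2,
    ("alternatives to X", "Gemini"): 0,
}
models = ["ChatGPT", "Gemini"]
prompts = sorted({p for p, _ in scores})

# Print the grid, then average per model to spot the weakest surface
print(f"{'prompt':<20}" + "".join(f"{m:>10}" for m in models))
for p in prompts:
    print(f"{p:<20}" + "".join(f"{scores[(p, m)]:>10}" for m in models))

avg = {m: sum(scores[(p, m)] for p in prompts) / len(prompts) for m in models}
print(avg)
```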



05. Scorecard: inclusion, accuracy, sentiment, and extractability


The scorecard is where you turn raw findings into a decision tool, mostly for prioritization. The inputs come from entity analysis, LLM scoring, and answer-layer testing; use them to build a concise dashboard that shows strengths, risks, and priorities, so leadership can see gaps at a glance and act on them.

 

AI Search Audit Scorecard table with five topics, listing strengths, risks, and priorities. Emphasizes accuracy and improvement actions.
An AI search audit scorecard
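A minimal sketch of the roll-up behind such a scorecard, with illustrative topics and 0-3 lens scores, sorted so the weakest topics surface first:

```python
# Per-topic lens scores (0-3); topics and numbers are illustrative
scorecard = {
    "SOC 2 compliance": {"inclusion": 3, "accuracy": 1, "sentiment": 2},
    "Microsoft 365":    {"inclusion": 2, "accuracy": 3, "sentiment": 3},
    "pricing":          {"inclusion": 1, "accuracy": 1, "sentiment": 2},
}

# Lowest total score = biggest gap = highest priority
priorities = sorted(scorecard, key=lambda t: sum(scorecard[t].values()))
for topic in priorities:
    print(topic, scorecard[topic])
```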

Why AI audits matter


With tools, metrics, and scorecards in place, you now have a repeatable system to track coverage, accuracy, sentiment, and authority across models. 


The next step is consistency: running this audit quarterly, fixing gaps fast, and building the kind of presence that keeps your brand visible and trusted in the AI-driven search era.

 
 
