10 Prompts to Audit Your SEO and AI Visibility With Claude

Written on 1/3/2026
Modified on 2/3/2026

Key points of the article

  • Claude can handle part of a technical SEO audit, but not the strategic analysis that goes with it
  • The most useful prompts are the ones that leverage Claude's native features: file upload, web search, Projects, Artifacts
  • What makes this different from other prompt lists: we integrate AI visibility analysis (AEO/GEO) into every category
  • These prompts are a starting point. They don't replace a full audit or the judgment of an expert who knows your market

Claude has become a daily tool for a lot of marketers and SEOs. But between the "50 magic prompts" listicles and LinkedIn threads promising to replace an entire agency, it's hard to tell what actually works.

What we see at Vydera, using Claude every day on client audits: the most useful prompts aren't the longest or most complex. They're the ones that leverage the right Claude features (file upload, web search, Artifacts) to answer a specific question.

Here are 10 prompts we use regularly to analyze a site from both an SEO and AI visibility angle. For each one, we explain why it's useful and what to look for in the results.

A note before we start: these prompts work with Claude, but most are adaptable to ChatGPT or Gemini. We chose Claude because its features (Projects, Artifacts, large context window) are particularly well-suited to this type of analysis.

Quick technical audit

1. Analyze heading structure and Title/Meta tags

The prompt

I'm uploading the HTML source of my page [URL]. Analyze the Hn heading structure (H1, H2, H3), the Title tag and Meta Description. Identify issues: missing or duplicate H1, broken hierarchy, Title too long or too short, missing or non-descriptive Meta Description. Present the results in a table.

Claude feature used: File upload (or copy-paste the page's HTML source)

Why it matters: Heading structure is a baseline signal for Google, but also for generative AI platforms that use headings as semantic anchors to extract content. A broken hierarchy reduces your chances of being cited in both cases.

What to look for: Check that there's only one H1 per page, that the hierarchy is logical (no H3 directly after an H1), and that your Meta Description leads with an answer or a clear statement, not a vague hook. AI platforms read the snippet of each page before deciding whether to go deeper.
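If you want to sanity-check Claude's output, the same baseline checks can be scripted. Here is a minimal sketch using only Python's standard library; the `audit` helper and the 30-60 character title range are our own illustrative choices, not official Google limits:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collects Hn headings, <title> length and meta description from raw HTML."""
    def __init__(self):
        super().__init__()
        self.headings = []        # list of (level, text), e.g. (1, "Page title")
        self.title_len = 0
        self.meta_description = None
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6", "title"):
            self._current = tag
        elif tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "description":
                self.meta_description = a.get("content", "")

    def handle_data(self, data):
        text = data.strip()
        if not text or self._current is None:
            return
        if self._current == "title":
            self.title_len += len(text)
        else:
            self.headings.append((int(self._current[1]), text))

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

def audit(html):
    parser = HeadingAudit()
    parser.feed(html)
    issues = []
    h1_count = sum(1 for level, _ in parser.headings if level == 1)
    if h1_count != 1:
        issues.append(f"expected exactly one H1, found {h1_count}")
    levels = [level for level, _ in parser.headings]
    for prev, cur in zip(levels, levels[1:]):
        if cur - prev > 1:  # e.g. an H3 directly after an H1
            issues.append(f"broken hierarchy: H{prev} followed by H{cur}")
    if not 30 <= parser.title_len <= 60:
        issues.append(f"title length {parser.title_len} outside the 30-60 char range")
    if not parser.meta_description:
        issues.append("missing meta description")
    return issues
```

This catches the structural red flags; judging whether a Meta Description actually "leads with an answer" still needs Claude (or you).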

2. Detect internal linking issues

The prompt

Here is my site's XML sitemap [paste or upload]. Analyze the URL structure and identify: potential orphan pages (isolated URLs with no logical link to other sections), internal linking opportunities between thematically related content, and inconsistencies in the site architecture. Suggest a linking plan organized by topic cluster.

Claude feature used: XML sitemap upload + Artifacts to visualize clusters

Why it matters: Internal linking is an underused lever for both SEO and AEO. Generative AI platforms use internal links to understand the relationships between your content and assess your topical authority. A site with well-linked clusters is more likely to be perceived as a go-to source on a topic.

What to look for: Identify important pages that receive no internal links (orphan pages) and articles covering related topics without linking to each other. Claude can suggest thematic groupings, but it's up to you to validate whether those groupings make sense for your audience.
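For large sitemaps, it helps to pre-group URLs before handing them to Claude. Here is a minimal sketch with Python's standard library, assuming a standard sitemaps.org namespace and that the first path segment is a rough proxy for topic (a simplification, not a real cluster analysis):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict
from urllib.parse import urlparse

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def cluster_sitemap(xml_text):
    """Group sitemap URLs by their first path segment as a rough topic cluster."""
    root = ET.fromstring(xml_text)
    clusters = defaultdict(list)
    for loc in root.iter(f"{SITEMAP_NS}loc"):
        url = loc.text.strip()
        segments = [s for s in urlparse(url).path.split("/") if s]
        key = segments[0] if segments else "(root)"
        clusters[key].append(url)
    return dict(clusters)
```

Clusters with a single URL are the first candidates to check for orphan status in your actual internal linking.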

3. Audit existing structured data

The prompt

Here is the source code of my page [URL]. Extract all JSON-LD schemas present. For each schema, check: the type (Organization, Article, FAQPage, Product, etc.), missing required fields, absent recommended fields, and compliance with Schema.org specifications. Give me a completeness score per schema and list the corrections needed.

Claude feature used: File upload + code analysis capability

Why it matters: Structured data has become the most direct signal you can send to AI platforms about who you are and what you publish. An incomplete Organization schema, an Article without an updated date, or a FAQ without FAQPage markup are missed opportunities to stand out against better-marked-up competitors. For a deeper dive, our structured data guide for SEO and AEO covers the priority schemas to implement.

What to look for: Prioritize checking that your homepage has a complete Organization schema (name, URL, logo, description, social profiles), that your articles have an Article schema with author and dates, and that your FAQs have FAQPage markup. These three schemas have the most direct impact on AI visibility.
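The extraction step itself is easy to automate as a first pass before the Claude analysis. A minimal sketch in Python; the `REQUIRED` dictionary below is an illustrative subset of fields we consider important, not the full Schema.org specification:

```python
import json
import re

# Illustrative required fields per type -- a subset, not the Schema.org spec.
REQUIRED = {
    "Organization": ["name", "url", "logo"],
    "Article": ["headline", "author", "datePublished"],
    "FAQPage": ["mainEntity"],
}

def audit_jsonld(html):
    """Extract JSON-LD blocks from HTML and report missing fields per schema."""
    pattern = re.compile(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    report = []
    for block in pattern.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            report.append(("(invalid JSON)", ["unparseable block"]))
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            schema_type = item.get("@type", "(unknown)")
            missing = [f for f in REQUIRED.get(schema_type, []) if f not in item]
            report.append((schema_type, missing))
    return report
```

This only flags absent fields; checking that present fields are accurate and match the visible page content is where Claude's analysis adds value.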

Content analysis and semantic gaps

4. Identify content gaps vs. competitors

The prompt

Enable web search. Compare my site's content [URL] with my 3 main competitors [URL1, URL2, URL3] on the topic [topic]. Identify: subjects they cover that I don't, angles they approach differently, and questions they answer that I don't address. Rank the gaps by priority (estimated search volume + strategic relevance).

Claude feature used: Web search enabled to analyze competitor content in real time

Why it matters: Generative AI platforms break every question into sub-queries. If your content only covers one angle of a topic, you risk being cited on that angle and ignored on everything else. Identifying gaps lets you build complete topic clusters, which increases your exposure surface to both user and AI queries.

What to look for: Don't try to cover everything. Focus on gaps that sit at the intersection of search volume and business relevance. A high-volume topic outside your expertise is worth less than a niche topic you're best positioned to own.

5. Evaluate the "citability" of existing content

The prompt

Here is the full content of my article [paste or upload]. Analyze it from an AI citation perspective. Evaluate: are key facts placed at the beginning of sections or buried in the text? Is each paragraph self-contained and extractable? Does the content include sourced data and statistics? Is the tone factual or overly promotional? Give a citability score out of 10 and list priority improvements.

Claude feature used: File upload or copy-paste content

Why it matters: AI platforms don't index pages, they extract passages. A clear, self-contained paragraph that directly answers a question has a much higher chance of being picked up than a long page where the answer is buried across ten sections. This prompt gives you a quick diagnostic of your content structure from an LLM's perspective. Our article on optimizing content for AI citations covers this in detail.

What to look for: The two most impactful criteria are where key information is placed (top of section, not after three paragraphs of context) and tone. Overly promotional content gets systematically passed over in favor of more objective sources. If Claude flags passages that "sell" rather than "inform," take that seriously.

Competitive audit

6. Map a competitor's SEO profile

The prompt

Enable web search. Run an SEO analysis of the site [competitor URL]. I want to understand: their content strategy (topics covered, publishing frequency), site architecture (depth, category organization), their best-ranking pages, and their internal linking approach. Identify what they do well and what they do poorly. Present the results in a structured Artifact.

Claude feature used: Web search + Artifacts to structure the analysis

Why it matters: Before building your own strategy, you need to understand what your competitors are doing and, more importantly, what they're not doing. Claude with web search can explore a competitor site and give you a quick read on their structure and content strategy.

What to look for: Focus on gaps rather than strengths. What your competitors don't cover is your opportunity. Also check whether they've implemented structured data, whether they have author pages, and whether their content is structured for AI extraction. In our experience, most sites are not yet optimized for AI visibility, which gives an edge to those who start now.

Structured data and Schema

7. Generate a complete JSON-LD schema from content

The prompt

Here is my page content [paste text]. Generate the appropriate JSON-LD schemas for this page. Include at minimum: the main schema matching the content type (Article, FAQPage, Product, LocalBusiness, etc.), the related Organization schema, and BreadcrumbList schema. Each schema must be complete with all Schema.org recommended fields. Give me integration-ready code.

Claude feature used: Code generation + Artifacts for JSON-LD output

Why it matters: Writing JSON-LD schemas by hand is tedious and error-prone. Claude can generate a complete schema in seconds from the actual content on your page. This is one of the highest-ROI optimizations you can make today, especially FAQPage markup, which has the most direct impact on AI visibility. Our Zoo de Guadeloupe case study shows the concrete impact of this implementation.

What to look for: Always validate the generated schema with Google's Rich Results Test before deploying. Claude rarely makes syntax errors, but it can sometimes invent properties that don't exist in the Schema.org spec, or omit required fields. Also verify that the information in the schema matches exactly what's displayed on the page.
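As a reference point for what the output should look like, here is the shape of a minimal FAQPage schema, built in Python so the question/answer pairs are easy to swap out (the content below is placeholder, not a recommendation):

```python
import json

def faq_jsonld(qa_pairs):
    """Build a FAQPage JSON-LD block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

schema = faq_jsonld([
    ("Does Claude replace an SEO crawler?",
     "No, it complements dedicated tools."),
])
# Ready to paste into a <script type="application/ld+json"> tag:
print(json.dumps(schema, indent=2))
```

Whether Claude generates the schema or you build it like this, the validation step (Rich Results Test) stays mandatory.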

AI visibility (AEO/GEO)

This is the section you won't find in other prompt lists. These three prompts specifically target your visibility in AI-generated responses: ChatGPT, Perplexity, Gemini, Google AI Overviews.

8. Audit how AI platforms perceive your brand

The prompt

Enable web search. I want to understand how AI platforms perceive my brand [brand name]. Search for information about [brand name] and analyze: which sources mention us (press, forums, reviews, social media), what is the overall sentiment (positive, negative, neutral), what attributes or qualifiers are associated with our brand, and which competitors are mentioned in the same context. Compare with what you find for [competitor 1] and [competitor 2].

Claude feature used: Web search to explore third-party sources

Why it matters: LLMs build their perception of your brand from the content in which it appears across the web. A brand that's regularly mentioned in trusted sources will be recognized as a legitimate entity and cited more easily in responses. This prompt gives you a snapshot of your "AI reputation": what the models know about you, and where that information comes from. Our article on how AI-generated responses work explains this mechanism in detail.

What to look for: What matters is the diversity and quality of sources, not the quantity. If your brand only appears on your own site and nowhere else, AI platforms will struggle to cite you. Also look at sentiment: if existing mentions are negative (customer reviews, forum comments), that's what AI will reflect in its responses. This is one of the areas we systematically analyze in our AEO audits at Vydera.

9. Test your visibility in AI responses

The prompt

I'm going to give you 10 queries my audience asks AI assistants. For each query, give me a response as an AI assistant would (with web search enabled). Note for each response: is my brand [name] mentioned? If not, which sources are cited instead? Does the cited content structurally resemble what I publish? Here are the queries: [list of 10 natural language queries].

Claude feature used: Web search + Projects (to store results and compare over time)

Why it matters: There's no Google Search Console equivalent for generative AI yet. The most reliable way to assess your AI visibility is still manual testing. This prompt helps you structure it. An important detail: only 6% of sources overlap between ChatGPT and Perplexity for the same prompt. Being visible on one doesn't guarantee visibility on the other. Ideally, test the same queries across multiple platforms. Our article on the differences between SEO and GEO covers this point.

What to look for: Don't just check whether your brand is cited. Analyze who gets cited instead and why. In most cases, cited sources share well-structured content, factual data, and a presence on third-party sources (press, forums, reviews). That's exactly what you need to replicate. For automated tracking, tools like Meteoria, Otterly, or Peec AI let you monitor your AI Share of Voice over time. Our GEO tools comparison covers the available options.

10. Analyze the sources AI cites in your industry

The prompt

Enable web search. For the following 5 queries related to my industry [list queries], identify the 3 to 5 main sources you use to build your response. For each source, indicate: the type (media, blog, forum, review, official site), perceived authority (is it a reference source in the sector?), and what makes this source "citable" (content structure, factual data, Q&A format, etc.). Present the results in a comparison table.

Claude feature used: Web search + Artifacts for comparison table

Why it matters: Before optimizing your content to get cited, you need to understand what AI is citing today in your industry. This prompt maps out the sources LLMs consider trustworthy for your target queries. It's the starting point of any AEO strategy. Our article on the definition of GEO/AEO explains why this mapping is the first step.

What to look for: Spot the patterns. Are cited sources primarily trade publications, review platforms, forums like Reddit, or official websites? If it's media and forums, your strategy needs to include Digital PR and community presence. If it's sites that simply have better content, your structure and topical coverage are what need work.

How to get the most out of these prompts

These prompts are a starting point. To make them truly useful, here are a few practical tips.

Use Claude Projects. Create a dedicated project for your site with your sitemap, key pages, and target queries loaded as Knowledge. Claude will have the context it needs for more precise analyses, without you having to re-explain everything each conversation.

Cross-reference results. A single prompt only gives you one angle. Combine the technical audit (#1-3) with content analysis (#4-5) and AI visibility (#8-10) for a complete picture. The highest-impact issues are often the ones that surface across multiple analyses.

Track over time. Prompts #8, #9, and #10 become valuable when you repeat them monthly. Use a Claude Project to store your results and track how your AI visibility evolves.

Adapt the prompts to your context. The prompts above are intentionally generic. The more context you give Claude (industry, audience, goals, competitors), the more relevant and actionable the results will be.

Why AI prompts don't replace a professional SEO audit

We use these prompts (and variations) daily in our audits. They save time on data collection and structuring, and they help us ask the right questions faster.

But let's be clear: AI doesn't replace human analysis. Claude can extract data, identify patterns, and structure recommendations. What it can't do is understand your market, prioritize actions based on your business constraints, or decide whether a content gap is actually worth filling.

What we see regularly: the same technical audit can lead to radically different recommendations depending on the industry, the site's maturity, and the company's goals. That strategic lens is what separates a checklist of technical fixes from an action plan that drives results.

AI is a tool. A powerful one, but still a tool. The value an agency or expert brings is knowing what to do with the data AI produces, in what order, and why.

If you want to go beyond a surface-level diagnostic, get in touch for a full audit of your Search and AI visibility.

FAQ

Can Claude replace a dedicated SEO tool?

No. Claude doesn't crawl your site like a dedicated SEO tool. It won't give you ranking data, exact search volumes, or detailed backlink profiles. These prompts are complementary: they cover angles traditional tools don't, especially content structure analysis and AI search visibility.

Do these prompts work with the free version of Claude?

Most work with the free version. However, prompts that use web search (#4, #6, #8, #9, #10) and Projects require a Pro subscription. The same goes for uploading large files like full sitemaps or HTML exports.

Is testing in Claude enough to assess visibility on other AI platforms?

Each AI has its own sources and selection logic. Testing in Claude gives you a partial view. For a complete assessment, run the same queries in ChatGPT, Perplexity, and Gemini separately, then compare which sources get cited.

How often should you run these prompts?

For the technical audit (#1-3), once per quarter is enough unless you're making significant changes to your site. For AI visibility (#8-10), once a month is a good cadence. AI responses shift quickly, and content that isn't cited today can start appearing after optimizations.

Can these analyses be automated?

Yes, through the Claude API. You can schedule recurring analyses and store results to track changes over time. That's what we do at Vydera for our clients' AI visibility monitoring.

Thibaut Legrand
Co-founder - Vydera