No llms.txt found at /llms.txt or /.well-known/llms.txt. Create one following the llmstxt.org spec to help AI agents understand your site.
llms.txt is the new robots.txt for AI. It tells AI agents what your site does, what content matters, and where to find it. Without it, AI must guess — and guessing means inaccurate recommendations. Early adopters already see better AI visibility.
We checked for /llms.txt and /.well-known/llms.txt files. We also checked for /llms-full.txt, which provides your complete content in Markdown for AI consumption.
Create a /llms.txt file with a summary of your site for AI agents, and optionally a /llms-full.txt with your complete content in Markdown (like SXSW does with their full program). See llmstxt.org for the specification.
# /llms.txt — Summary for AI agents
# Your Site Name
> Brief description of your website and what it offers.
## Key Pages
- [Products](/products): Product catalog
- [Docs](/docs): Documentation
- [Pricing](/pricing): Pricing plans
## Optional: /llms-full.txt
# Provide your full site content as Markdown
# so AI agents can read everything directly.
# See sxsw.com/sxsw.md for a real-world example.
No JSON-LD / Schema.org structured data found.
Structured data is the language AI agents speak. It lets them identify your business type, products, locations, reviews, and FAQs unambiguously instead of guessing from raw HTML. This directly affects whether an AI recommends you.
We parsed your HTML for JSON-LD script tags containing Schema.org structured data. Any valid structured data types found (Organization, Product, FAQ, etc.) are counted.
Add JSON-LD structured data for your main entity types (Organization, Product, BreadcrumbList, FAQ, etc.).
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Organization",
"name": "Your Company",
"url": "https://example.com",
"description": "What your company does"
}
</script>
sitemap.xml exists but does not contain valid XML sitemap content.
A sitemap is your site's table of contents for AI crawlers. Without it, agents may miss important pages like product pages, documentation, or pricing — reducing the completeness of their recommendations about you.
We fetched /sitemap.xml and checked for a valid XML sitemap with <url> entries and recent <lastmod> dates.
Create an XML sitemap at /sitemap.xml with all important pages and recent lastmod dates. Reference it in your robots.txt.
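A minimal sitemap might look like this (URLs and dates are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/products</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/pricing</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
</urlset>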
# Add to robots.txt
Sitemap: https://example.com/sitemap.xml
React SPA detected with no server-rendered content. Agents cannot read JS-rendered content.
Most AI crawlers don't execute JavaScript. If your content only appears after JS runs (React SPAs, dynamic loading), AI agents see an empty page. This means no product info, no prices, no content — your site effectively doesn't exist for AI.
We compared your page content with JavaScript enabled vs disabled. If more than 50% of visible content disappears without JS, the check fails.
Enable server-side rendering (SSR) or static generation (SSG) so that critical content is available in the initial HTML without JavaScript execution.
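A minimal sketch of server-side rendering, assuming an Express server and React (App and /client.js are placeholders; frameworks such as Next.js provide SSR/SSG out of the box):
// Sketch: render the React tree on the server so the initial HTML
// already contains the content, even for crawlers that never run JS.
import express from "express";
import { createElement } from "react";
import { renderToString } from "react-dom/server";
import App from "./App"; // placeholder root component

const app = express();

app.get("/", (_req, res) => {
  const html = renderToString(createElement(App)); // server-rendered markup
  res.send(`<!doctype html><html><body><div id="root">${html}</div><script src="/client.js"></script></body></html>`);
});

app.listen(3000);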
/.well-known/webmcp returned an HTML page instead of valid JSON. No WebMCP endpoint found. Adding one lets AI agents programmatically interact with your site.
WebMCP (Web Model Context Protocol) is an emerging standard that lets AI agents directly call actions on your website — like booking, purchasing, or querying data. Early adopters get native AI agent integration.
We checked /.well-known/webmcp and /webmcp.json for a valid WebMCP configuration with an actions array.
Add a WebMCP endpoint at /.well-known/webmcp or /webmcp.json to let AI agents interact with your site programmatically.
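WebMCP is still emerging, so the exact schema may differ; as an illustrative sketch only (everything beyond the actions array is an assumption), a /.well-known/webmcp file could look like:
{
  "name": "Your Company",
  "description": "Actions AI agents can call on this site",
  "actions": [
    {
      "name": "search_products",
      "description": "Search the product catalog by keyword",
      "endpoint": "https://example.com/api/search",
      "method": "GET"
    }
  ]
}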
/agents.json returned an HTML page instead of valid JSON. No agents.json found. This file helps AI agents discover what your site offers.
agents.json is a discovery mechanism that tells AI agents what your website can do — similar to how robots.txt tells crawlers what they can access. It helps agents understand your services, API endpoints, and interaction capabilities.
We checked /agents.json and /.well-known/agents.json for a valid JSON configuration.
Create an /agents.json file that describes your site's capabilities for AI agents.
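agents.json is likewise an emerging convention, so treat the shape below as an illustrative sketch (all field names are assumptions) rather than a fixed schema:
{
  "name": "Your Company",
  "description": "What your company offers",
  "capabilities": [
    {
      "name": "product_catalog",
      "description": "Browse and search products",
      "url": "https://example.com/products"
    }
  ],
  "api": {
    "openapi": "https://example.com/openapi.json"
  }
}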
robots.txt AI Bot Check
CAPTCHA Detection
Cookie Consent Wall
Machine-Readable Prices
llms.txt Check
Login-Wall Check
Structured Data Quality
Sitemap Check
JS Dependency
TTFB / Response Time
WebMCP Endpoint
agents.json Discovery
The Free Basic Scan checks static HTML. The Deep Scan simulates how AI agents actually navigate your site — and finds issues invisible to basic analysis.
Tests your pages the way an AI agent experiences them — catches CAPTCHAs, cookie walls, and JS-dependent content that basic HTTP scanning misses.
Scans your homepage plus up to 5 subpages from your sitemap. Problems on inner pages won't slip through.
Tests for API documentation (OpenAPI/Swagger), MCP server endpoints, and form accessibility — the building blocks of true agent interoperability.
No pages added? We'll auto-detect from your sitemap.
Want to scan another website?
Scan another site