Blocking CAPTCHA/challenge page detected. Agents cannot access content. Patterns: challenge-platform.
AI agents cannot solve CAPTCHAs. A full-page CAPTCHA means no AI assistant can read your content, compare your prices, or recommend your services. Every blocked page is a lost opportunity.
We rendered your page as an AI agent would see it and checked the DOM for CAPTCHA providers (reCAPTCHA, hCaptcha, Cloudflare Turnstile) that block content access.
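In outline, the detection works by scanning the rendered HTML for each provider's well-known embed markers. A minimal sketch (the marker strings are assumptions drawn from each provider's public embed code, not the scanner's actual implementation):

```python
# Sketch: flag CAPTCHA providers by their characteristic DOM markers.
# Marker strings are assumptions based on each provider's public embed code.
CAPTCHA_MARKERS = {
    "reCAPTCHA": ["www.google.com/recaptcha", "g-recaptcha"],
    "hCaptcha": ["hcaptcha.com/1/api.js", "h-captcha"],
    "Cloudflare Turnstile": ["challenges.cloudflare.com/turnstile", "cf-turnstile"],
}

def detect_captcha_providers(rendered_html: str) -> list[str]:
    """Return the names of CAPTCHA providers whose markers appear in the HTML."""
    html = rendered_html.lower()
    return [
        name
        for name, markers in CAPTCHA_MARKERS.items()
        if any(marker.lower() in html for marker in markers)
    ]
```

A match only proves a widget is present; whether it blocks the whole page or just one form is what the recommendation below addresses.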
Move CAPTCHA challenges to specific forms (login, signup) instead of blocking the entire page. Use invisible CAPTCHA or challenge-on-interaction.
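For example, with reCAPTCHA v3 the challenge runs invisibly and only on the protected form, so the rest of the page stays readable. A sketch using reCAPTCHA's documented script tag and `grecaptcha.execute` call; `YOUR_SITE_KEY` is a placeholder:

```html
<!-- Load reCAPTCHA v3 only on pages that contain a protected form.
     YOUR_SITE_KEY is a placeholder for your real site key. -->
<script src="https://www.google.com/recaptcha/api.js?render=YOUR_SITE_KEY"></script>

<form id="login" action="/login" method="post">
  <input type="email" name="email" required>
  <input type="password" name="password" required>
  <button type="submit">Sign in</button>
</form>

<script>
  // Invisible, challenge-on-interaction: score the submit action
  // instead of showing a full-page interstitial.
  document.getElementById('login').addEventListener('submit', function (e) {
    e.preventDefault();
    grecaptcha.ready(function () {
      grecaptcha.execute('YOUR_SITE_KEY', { action: 'login' }).then(function (token) {
        // Attach the token; the server verifies it with Google on submit.
        var input = document.createElement('input');
        input.type = 'hidden';
        input.name = 'g-recaptcha-response';
        input.value = token;
        e.target.appendChild(input);
        e.target.submit();
      });
    });
  });
</script>
```

With this pattern, an agent that never submits the form never sees a challenge at all.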
No llms.txt found at /llms.txt or /.well-known/llms.txt. Create one following the llmstxt.org spec to help AI agents understand your site.
llms.txt is the new robots.txt for AI. It tells AI agents what your site does, what content matters, and where to find it. Without it, AI must guess — and guessing means inaccurate recommendations. Early adopters report better AI visibility.
We checked for /llms.txt and /.well-known/llms.txt files. We also checked for /llms-full.txt which provides your complete content in Markdown for AI consumption.
Create a /llms.txt file with a summary of your site for AI agents, and optionally a /llms-full.txt with your complete content in Markdown (like SXSW does with their full program). See llmstxt.org for the specification.
# /llms.txt — Summary for AI agents
# Your Site Name
## About
Brief description of your website and what it offers.
## Key Pages
- [Products](/products): Product catalog
- [Docs](/docs): Documentation
- [Pricing](/pricing): Pricing plans
## Optional: /llms-full.txt
# Provide your full site content as Markdown
# so AI agents can read everything directly.
# See sxsw.com/sxsw.md for a real-world example.
Cookie banner detected (onetrust-banner-sdk), but main content is accessible in HTML.
Cookie walls that require a click before showing content block all AI agents. They see a consent dialog instead of your products, prices, and services. Switch to a non-blocking banner so content is always readable.
We checked your rendered page for cookie consent implementations (OneTrust, Cookiebot, etc.) that hide main content behind a required interaction.
Ensure your main content is accessible in the HTML without requiring cookie consent interaction. Use an overlay banner instead of a blocking wall.
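The difference can be sketched in plain markup (class names here are illustrative, not tied to any consent vendor): the content is always in the HTML, and the banner is a fixed overlay rather than a wall that hides it.

```html
<!-- Main content is present in the HTML regardless of consent state -->
<main>
  <h1>Products</h1>
  <p>Catalog, prices, and descriptions render without any interaction.</p>
</main>

<!-- Non-blocking banner: a fixed overlay at the bottom of the viewport,
     with no full-page backdrop and no script that hides <main> -->
<div class="cookie-banner" role="dialog" aria-label="Cookie consent"
     style="position: fixed; bottom: 0; left: 0; right: 0;">
  We use cookies. <button>Accept</button> <button>Decline</button>
</div>
```

Non-essential tracking scripts should still wait for consent; only the content itself must not.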
No JSON-LD / Schema.org structured data found.
Structured data is the language AI agents speak. It lets them understand your business type, products, locations, reviews, and FAQs unambiguously, instead of guessing from raw HTML. This directly affects whether an AI recommends you.
We parsed your HTML for JSON-LD script tags containing Schema.org structured data. Any valid structured data types found (Organization, Product, FAQ, etc.) are counted.
Add JSON-LD structured data for your main entity types (Organization, Product, BreadcrumbList, FAQ, etc.).
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Organization",
"name": "Your Company",
"url": "https://example.com",
"description": "What your company does"
}
</script>
No sitemap.xml found at /sitemap.xml.
A sitemap is your site's table of contents for AI crawlers. Without it, agents may miss important pages like product pages, documentation, or pricing — reducing the completeness of their recommendations about you.
We fetched /sitemap.xml and checked for a valid XML sitemap with <url> entries and recent <lastmod> dates.
Create an XML sitemap at /sitemap.xml with all important pages and recent lastmod dates. Reference it in your robots.txt.
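A minimal sitemap with the &lt;url&gt; entries and &lt;lastmod&gt; dates the check looks for (URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/pricing</loc>
    <lastmod>2025-01-10</lastmod>
  </url>
</urlset>
```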
# Add to robots.txt
Sitemap: https://example.com/sitemap.xml
No WebMCP endpoint found. Adding one lets AI agents programmatically interact with your site.
WebMCP (Web Model Context Protocol) is an emerging standard that lets AI agents directly call actions on your website — like booking, purchasing, or querying data. Early adopters get native AI agent integration.
We checked /.well-known/webmcp and /webmcp.json for a valid WebMCP configuration with an actions array.
Add a WebMCP endpoint at /.well-known/webmcp or /webmcp.json to let AI agents interact with your site programmatically.
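A sketch of what such a configuration might look like. WebMCP is still an emerging standard, so apart from the actions array the check looks for, every field name below is an assumption:

```json
{
  "name": "Your Site",
  "description": "What AI agents can do here",
  "actions": [
    {
      "name": "search_products",
      "description": "Search the product catalog",
      "endpoint": "https://example.com/api/search",
      "method": "GET"
    }
  ]
}
```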
No agents.json found. This file helps AI agents discover what your site offers.
agents.json is a discovery mechanism that tells AI agents what your website can do — similar to how robots.txt tells crawlers what they can access. It helps agents understand your services, API endpoints, and interaction capabilities.
We checked /agents.json and /.well-known/agents.json for a valid JSON configuration.
Create an /agents.json file that describes your site's capabilities for AI agents.
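An illustrative sketch only; there is no single finalized agents.json schema yet, so every field here is an assumption about what a capability description could contain:

```json
{
  "name": "Your Site",
  "description": "What your site offers to AI agents",
  "capabilities": ["search", "booking"],
  "api": {
    "openapi": "https://example.com/openapi.json"
  }
}
```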
robots.txt AI Bot Check
CAPTCHA Detection
Cookie Consent Wall
Machine-Readable Prices
llms.txt Check
Login-Wall Check
Structured Data Quality
Sitemap Check
JS Dependency
TTFB / Response Time
WebMCP Endpoint
agents.json Discovery
The Free Basic Scan checks static HTML. The Deep Scan simulates how AI agents actually navigate your site — and finds issues invisible to basic analysis.
Tests your pages the way an AI agent experiences them — catches CAPTCHAs, cookie walls, and JS-dependent content that basic HTTP scanning misses.
Scans your homepage plus up to 5 subpages from your sitemap. Problems on inner pages won't slip through.
Tests for API documentation (OpenAPI/Swagger), MCP server endpoints, and form accessibility — the building blocks of true agent interoperability.
No pages added? We'll auto-detect from your sitemap.
Want to scan another website?
Scan another site