AI Agent Readiness Score: How We Calculate It

The AgentSpeed score is a weighted readiness metric that measures how well AI agents like ChatGPT, Claude, and Perplexity can access and understand your website, scored from 0 to 100 across 10 automated checks. When AgentSpeed scans your site, the score tells you at a glance how accessible your content is to agents. But what actually goes into it?
With Google AI Overviews now reaching 1.5 billion users monthly across 200+ countries, and ChatGPT serving 900 million weekly active users (OpenAI, 2025), your site's agent readiness directly affects whether you appear in AI-generated answers. According to Ahrefs (December 2025), 92% of AI Overview citations come from pages ranking in the top 10 — meaning visibility to agents is no longer optional.
This article explains our scoring methodology in full — the checks we run, how we weight them, and what each result means for your site's agent visibility.
How Is the Score Structured?
AgentSpeed organizes its 10 checks into two tiers based on how severely each factor affects agent access.
Tier 1: Agent Killers (70% of score)
These are the factors that fundamentally prevent AI agents from accessing your content. A failure here does not just reduce visibility — it can block agents entirely. Tier 1 accounts for 70% of your overall score because access is binary: an agent can either reach your content or it cannot.
Tier 2: Readiness Checks (30% of score)
These factors affect how well agents can understand and use your content once they have access. A failure here degrades quality and discoverability without necessarily blocking access. Tier 2 accounts for 30% of your score.
What Are the 6 Agent Killers? (Tier 1)
1. robots.txt AI Bot Check
What we check: We fetch your robots.txt file and parse it for 16 major AI user-agents: GPTBot, ChatGPT-User, OAI-SearchBot (OpenAI), ClaudeBot, Claude-Web, anthropic-ai (Anthropic), PerplexityBot, Google-Extended, Applebot-Extended, CCBot, Bytespider, cohere-ai, Diffbot, FacebookBot, YouBot, and Amazonbot.
Pass: All major AI bots are allowed access to your public content.
Warning: Some bots are restricted but the most important ones (GPTBot, ClaudeBot, PerplexityBot) are allowed.
Fail: One or more major bots are explicitly blocked with Disallow: / or equivalent.
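To make the logic concrete, here is a minimal sketch of this kind of check using Python's standard-library robots.txt parser. The bot list and the pass/warning/fail rules mirror the description above; the example.com URL is a placeholder, and AgentSpeed's production parser may handle edge cases differently.

```python
# Sketch only: stdlib robots.txt parsing against the 16 AI user-agents.
from urllib import robotparser

AI_BOTS = [
    "GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "Claude-Web",
    "anthropic-ai", "PerplexityBot", "Google-Extended", "Applebot-Extended",
    "CCBot", "Bytespider", "cohere-ai", "Diffbot", "FacebookBot",
    "YouBot", "Amazonbot",
]
CRITICAL = {"GPTBot", "ClaudeBot", "PerplexityBot"}

def robots_check(site: str) -> str:
    rp = robotparser.RobotFileParser(f"{site}/robots.txt")
    rp.read()  # fetch and parse the live file
    blocked = {bot for bot in AI_BOTS if not rp.can_fetch(bot, f"{site}/")}
    if not blocked:
        return "pass"       # all major AI bots allowed
    if blocked & CRITICAL:
        return "fail"       # one of the most important bots is blocked
    return "warning"        # some bots restricted, critical ones allowed

print(robots_check("https://example.com"))
```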
2. CAPTCHA Detection
What we check: We scan your homepage HTML for signatures of common CAPTCHA implementations: reCAPTCHA (Google), hCaptcha, Cloudflare Turnstile, and generic challenge pages. We look for CAPTCHA scripts, challenge iframes, and interstitial elements.
Pass: No CAPTCHA detected on the homepage.
Warning: CAPTCHA detected but appears to be on a specific element (like a form), not blocking full page access.
Fail: CAPTCHA challenge detected that prevents content access.
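A simplified, signature-based version of this detection might look like the sketch below. The script-host patterns are common fingerprints for these providers, but the page-length threshold used to separate a full interstitial from an element-level CAPTCHA is an assumption, not AgentSpeed's actual heuristic.

```python
# Illustrative CAPTCHA fingerprinting over raw homepage HTML.
import re
import requests

CAPTCHA_SIGNATURES = {
    "reCAPTCHA": r"google\.com/recaptcha|grecaptcha",
    "hCaptcha": r"hcaptcha\.com|h-captcha",
    "Cloudflare Turnstile": r"challenges\.cloudflare\.com/turnstile",
    "generic challenge": r"cf-challenge|captcha-container",
}

def captcha_check(url: str) -> str:
    html = requests.get(url, timeout=10).text
    hits = [name for name, pat in CAPTCHA_SIGNATURES.items()
            if re.search(pat, html, re.IGNORECASE)]
    if not hits:
        return "pass"
    # Assumption: a tiny response alongside a CAPTCHA signature usually
    # means a full-page interstitial rather than a CAPTCHA on one form.
    return "fail" if len(html) < 5_000 else "warning"
```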
3. Cookie Consent Wall
What we check: We fingerprint your page for known cookie consent management platforms (OneTrust, Cookiebot, Usercentrics, iubenda, CookieYes, Termly, TrustArc) and look for elements that suggest content is being blocked rather than overlaid.
Pass: No consent wall detected, or consent mechanism is present but content is accessible in the HTML.
Warning: Consent banner detected but does not appear to fully block content access.
Fail: Blocking consent wall detected — content is not accessible before consent.
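As a rough illustration, CMP fingerprinting can be as simple as searching the HTML for each platform's script host. The hosts below are the usual CDN domains for these providers, to the best of our knowledge; the word-count heuristic for "blocked vs. overlaid" is a deliberate simplification of the analysis described above.

```python
# Illustrative consent-wall check: fingerprint the CMP, then ask whether
# meaningful content is still present in the raw HTML.
import re
import requests

CMP_SIGNATURES = {
    "OneTrust": "cookielaw.org",
    "Cookiebot": "cookiebot.com",
    "Usercentrics": "usercentrics",
    "iubenda": "iubenda.com",
    "CookieYes": "cookieyes",
    "Termly": "termly.io",
    "TrustArc": "trustarc.com",
}

def consent_wall_check(url: str) -> str:
    html = requests.get(url, timeout=10).text.lower()
    if not any(host in html for host in CMP_SIGNATURES.values()):
        return "pass"
    # Crude proxy: a banner overlays content, a wall replaces it.
    visible_words = len(re.sub(r"<[^>]+>", " ", html).split())
    return "warning" if visible_words > 300 else "fail"
```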
4. Machine-Readable Prices
What we check: For commercial websites, we look for pricing data in the page HTML. This includes obvious patterns ("$99/month", "€49", "from £29"), dedicated pricing sections, and Schema.org price markup using Offer and PriceSpecification types.
Pass: Pricing information found in the HTML, either as text or structured data.
Warning: No clear pricing found, but site does not appear to be primarily commercial (blog, portfolio, informational site).
Fail: Commercial intent detected but no pricing information accessible in HTML.
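Both detection paths can be sketched in a few lines: a currency-pattern regex for visible price text, and a string scan of JSON-LD blocks for Offer or PriceSpecification types. The regex and the warning fallback are illustrative; the real check also classifies whether the site is commercial at all.

```python
# Sketch of the pricing check: visible price text OR structured price markup.
import re
import requests

PRICE_PATTERN = re.compile(
    r"[$€£]\s?\d+(?:[.,]\d{2})?(?:\s?/\s?(?:mo|month|yr|year))?", re.I
)

def price_check(url: str) -> str:
    html = requests.get(url, timeout=10).text
    if PRICE_PATTERN.search(html):
        return "pass"  # human-readable price text found
    jsonld = re.findall(r"<script[^>]*ld\+json[^>]*>(.*?)</script>",
                        html, re.S | re.I)
    if any('"Offer"' in b or '"PriceSpecification"' in b for b in jsonld):
        return "pass"  # Schema.org price markup found
    # Deciding between "warning" (non-commercial site) and "fail"
    # (commercial site, no prices) needs intent detection, omitted here.
    return "warning"
```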
5. llms.txt Check
What we check: We attempt to fetch your llms.txt file from two locations: /llms.txt and /.well-known/llms.txt. If found, we validate the content against the specification, checking that it has the required H1 title and contains meaningful content rather than placeholder text.
Pass: Valid llms.txt found at one or both locations.
Warning: llms.txt found but does not follow the spec (missing H1, empty content, etc.).
Fail: No llms.txt found at either standard location.
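This check is simple enough to sketch almost exactly as described. The 100-character floor standing in for "meaningful content" is our own placeholder threshold.

```python
# Minimal llms.txt check: two standard locations, H1 + non-placeholder body.
import requests

def llms_txt_check(site: str) -> str:
    for path in ("/llms.txt", "/.well-known/llms.txt"):
        try:
            resp = requests.get(site + path, timeout=10)
        except requests.RequestException:
            continue
        if resp.status_code != 200:
            continue
        lines = [l.strip() for l in resp.text.splitlines() if l.strip()]
        if lines and lines[0].startswith("# ") and len(resp.text) > 100:
            return "pass"      # meets the spec's basic requirements
        return "warning"       # file exists but does not follow the spec
    return "fail"              # not found at either standard location
```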
6. Login-Wall Check
What we check: We follow HTTP redirects and check whether the final destination is a login or authentication page. We look for authentication redirects, login form elements as primary page content, and paywall indicators.
Pass: Core content is accessible without authentication.
Warning: Authentication detected but appears to be for a specific section, not the main content.
Fail: Page redirects to login or authentication is required to access primary content.
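A hedged sketch of this flow: follow redirects, inspect the final URL, then look at what dominates the page. The URL keywords and the "thin page with a password field" heuristic are assumptions standing in for the real indicators.

```python
# Illustrative login-wall check.
import requests

LOGIN_HINTS = ("login", "signin", "sign-in", "/auth")

def login_wall_check(url: str) -> str:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    final_url = resp.url.lower()
    if any(hint in final_url for hint in LOGIN_HINTS):
        return "fail"      # redirected to an authentication page
    html = resp.text.lower()
    if 'type="password"' in html and len(html) < 20_000:
        return "warning"   # a login form dominates a thin page
    return "pass"          # core content reachable without auth
```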
What Are the 4 Readiness Checks? (Tier 2)
7. Structured Data Quality
What we check: We parse all JSON-LD structured data on your homepage and validate it against Schema.org vocabulary. We look for high-value types including Organization, LocalBusiness, Product, SoftwareApplication, Article, FAQPage, BreadcrumbList, and others. We also check for basic completeness — whether the schema has required fields populated.
Pass: One or more valid Schema.org types found with well-populated fields.
Warning: Schema found but incomplete or using non-standard types.
Fail: No structured data found, or structured data present but invalid.
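Extraction itself is straightforward: pull every ld+json script block, parse it, and collect the @type values. The sketch below skips @graph handling and the field-completeness scoring that the full check would need.

```python
# Sketch: JSON-LD extraction and high-value type matching.
import json
import re
import requests

HIGH_VALUE = {"Organization", "LocalBusiness", "Product", "SoftwareApplication",
              "Article", "FAQPage", "BreadcrumbList"}

def structured_data_check(url: str) -> str:
    html = requests.get(url, timeout=10).text
    blocks = re.findall(r"<script[^>]*ld\+json[^>]*>(.*?)</script>",
                        html, re.S | re.I)
    if not blocks:
        return "fail"          # no structured data at all
    types: set[str] = set()
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            return "fail"      # structured data present but invalid
        for node in (data if isinstance(data, list) else [data]):
            if isinstance(node, dict) and node.get("@type"):
                t = node["@type"]
                types.update(t if isinstance(t, list) else [t])
    return "pass" if types & HIGH_VALUE else "warning"
```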
8. Sitemap Check
What we check: We fetch your sitemap.xml (and check robots.txt for a Sitemap: directive). We verify the file exists, returns a 200 status, and contains valid XML with at least one URL entry. We also check whether the sitemap was modified recently enough to reflect current content.
Pass: Valid XML sitemap found with at least one URL.
Warning: Sitemap found but appears outdated (last modified more than 6 months ago).
Fail: No sitemap found at /sitemap.xml or referenced in robots.txt.
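The core of this check is sketched below. Freshness here is read from lastmod values, which many sitemaps omit, so a real check likely falls back to HTTP headers; the robots.txt Sitemap: lookup is also left out of this sketch.

```python
# Minimal sitemap check: 200 status, valid XML, at least one URL, freshness.
import requests
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def sitemap_check(site: str) -> str:
    resp = requests.get(site + "/sitemap.xml", timeout=10)
    if resp.status_code != 200:
        return "fail"
    try:
        root = ET.fromstring(resp.content)
    except ET.ParseError:
        return "fail"
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    if not root.findall(".//sm:loc", ns):
        return "fail"          # valid XML but no URL entries
    lastmods = sorted(e.text[:10] for e in root.findall(".//sm:lastmod", ns)
                      if e.text)  # ISO dates sort lexicographically
    if lastmods:
        cutoff = datetime.now(timezone.utc) - timedelta(days=180)
        if lastmods[-1] < cutoff.strftime("%Y-%m-%d"):
            return "warning"   # newest entry is over 6 months old
    return "pass"
```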
9. JS Dependency
What we check: We analyze your homepage response to detect single-page application patterns — pages where the main content is loaded via JavaScript rather than present in the initial HTML. We look for near-empty HTML bodies, large JavaScript bundle includes, and common SPA framework signatures.
Pass: Content appears to be in the initial HTML response (server-side rendered or static).
Warning: Possible JavaScript dependency detected — page may have reduced content without JS execution.
Fail: High confidence that critical content is JavaScript-only and invisible to agents that do not execute JS.
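Without executing JavaScript, SPA detection has to lean on heuristics like those sketched here. The word-count thresholds and framework root-element IDs are illustrative assumptions, not the production signatures.

```python
# Heuristic SPA detection: thin initial HTML + a known framework mount point.
import re
import requests

SPA_ROOT_IDS = ('id="root"', 'id="app"', 'id="__next"', 'id="___gatsby"')

def js_dependency_check(url: str) -> str:
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<(script|style).*?</\1>", " ", html, flags=re.S | re.I)
    words = len(re.sub(r"<[^>]+>", " ", text).split())
    if words > 300:
        return "pass"      # plenty of content in the initial HTML
    if words < 50 and any(root in html for root in SPA_ROOT_IDS):
        return "fail"      # near-empty shell: content is likely JS-only
    return "warning"       # thin HTML; content may depend on JS
```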
10. TTFB / Response Time
What we check: We measure the time-to-first-byte from an external server perspective. This is the time between our scan request and the first byte of your server's response.
Pass: TTFB under 800ms.
Warning: TTFB between 800ms and 1800ms — noticeably slow, and some agents may time out.
Fail: TTFB over 1800ms — high risk of agent timeouts and degraded crawl quality.
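Measured from the client side, TTFB can be approximated with a streamed request that times how long the first body byte takes to arrive, as in the sketch below. The thresholds match the ones stated above.

```python
# TTFB approximation via a streamed request (800ms / 1800ms thresholds).
import time
import requests

def ttfb_check(url: str) -> tuple[str, float]:
    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=10) as resp:
        next(resp.iter_content(chunk_size=1), b"")   # block until first byte
        ttfb_ms = (time.perf_counter() - start) * 1000
    if ttfb_ms < 800:
        return "pass", ttfb_ms
    if ttfb_ms <= 1800:
        return "warning", ttfb_ms
    return "fail", ttfb_ms

print(ttfb_check("https://example.com"))
```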
How Is the Score Calculated?
Your overall score is calculated as a weighted average of all 10 checks. Each check contributes points based on its result:
- Pass: Full points for that check
- Warning: Partial points (typically 50% of the check's value)
- Fail: Zero points for that check
The 6 Tier 1 checks are weighted at 70% of the total. The 4 Tier 2 checks are weighted at 30%. Within each tier, checks are weighted roughly equally, with minor adjustments for relative impact.
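As a worked example under the assumption of exactly equal weights within each tier (the actual minor adjustments are not public): a site with five Tier 1 passes and one fail, plus two Tier 2 passes and two warnings, lands around 81.

```python
# Worked scoring example, assuming equal weights within each tier.
TIER1_WEIGHT, TIER2_WEIGHT = 0.70, 0.30
POINTS = {"pass": 1.0, "warning": 0.5, "fail": 0.0}

def overall_score(tier1: list[str], tier2: list[str]) -> float:
    t1 = sum(POINTS[r] for r in tier1) / len(tier1)
    t2 = sum(POINTS[r] for r in tier2) / len(tier2)
    return round(100 * (TIER1_WEIGHT * t1 + TIER2_WEIGHT * t2), 1)

print(overall_score(["pass"] * 5 + ["fail"],
                    ["pass", "pass", "warning", "warning"]))
# 80.8 -- a single Tier 1 fail costs ~11.7 points (0.70 / 6) on its own
```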
What Does Your Score Mean?
80-100 — Agent Ready (green)
Your website is well-configured for AI agent access. The major blockers are absent, and agents can read and understand your content. You are likely being included in AI-generated recommendations and citations.
50-79 — Needs Work (orange)
AI agents can probably access your content but may be missing key signals or encountering partial barriers. You are likely visible but not as prominently as sites in the green zone. There are specific improvements to make.
0-49 — Agent Blocked (red)
One or more critical barriers are preventing AI agents from accessing your content. You are likely invisible to ChatGPT, Claude, Perplexity, and other agents. Each failed Tier 1 check is a significant problem that should be prioritized.
What Is the Deep Scan Difference?
The free AgentSpeed scan uses HTTP requests for all 10 checks, returning instant results that give you a strong signal. The Deep Scan goes further by simulating actual agent navigation using a real browser.
With the Deep Scan, CAPTCHA detection works by fully rendering the page DOM rather than just checking for script tags. Cookie wall detection verifies whether content is actually blocked using scroll-lock and viewport overlay analysis. JS dependency is measured by comparing real JS-on vs JS-off content length. TTFB is measured using Chrome DevTools Protocol for browser-accurate timing.
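For a sense of what the JS-on vs JS-off comparison involves, here is a sketch using Playwright. This is our illustration of the technique, not AgentSpeed's implementation, which may use a different browser driver and measure content differently.

```python
# Sketch: fraction of rendered text that only appears once JavaScript runs.
from playwright.sync_api import sync_playwright

def js_only_fraction(url: str) -> float:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        sizes = {}
        for js_enabled in (True, False):
            ctx = browser.new_context(java_script_enabled=js_enabled)
            page = ctx.new_page()
            page.goto(url, wait_until="networkidle")
            sizes[js_enabled] = len(page.inner_text("body"))
            ctx.close()
        browser.close()
    return 1 - sizes[False] / max(sizes[True], 1)
```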
The Deep Scan also adds three bonus checks not available in the free scan: API documentation detection, MCP server probing, and form accessibility analysis.
Run Your Free Scan
Every website starts somewhere. The free AgentSpeed scan shows you exactly where you stand and what to fix first. It takes two seconds, and no account is required.