5 layers · 100 points
How scoring works
Every orank score is computed from dozens of automated checks across 5 layers. Each layer tests a different dimension of agent readiness.
Discovery
20 pts · Can agents find you?
Sitemap exists
GET /sitemap.xml. Valid XML with <urlset> or <sitemapindex>
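The sitemap check boils down to a small validation step. A minimal sketch (the function name is illustrative, not orank's actual code):

```python
import xml.etree.ElementTree as ET

def is_valid_sitemap(xml_text: str) -> bool:
    """Return True if the document parses as XML and its root element
    is <urlset> or <sitemapindex> (with or without a namespace)."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    # Strip a namespace prefix like "{http://www.sitemaps.org/...}urlset"
    tag = root.tag.rsplit("}", 1)[-1]
    return tag in ("urlset", "sitemapindex")
```

Any non-XML response, or XML with a different root element, fails the check.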
Content without JS
Raw HTML has meaningful text (>500 chars), <h1> present
Not blocked by bot detection
Fetch with GPTBot, ClaudeBot, ChatGPT-User, PerplexityBot UAs
robots.txt AI policy
Parse for GPTBot, ClaudeBot, ChatGPT-User, Google-Extended, PerplexityBot directives
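A simplified sketch of the robots.txt parsing step — full matching (per RFC 9309) also handles wildcards and path scopes, but the AI-policy question is mostly whether a crawler's own group carries a blanket `Disallow: /`:

```python
AI_BOTS = {"GPTBot", "ClaudeBot", "ChatGPT-User", "Google-Extended", "PerplexityBot"}

def ai_bot_policy(robots_txt: str) -> dict:
    """Map each known AI crawler to 'allow', 'disallow', or 'unspecified',
    based on rules in its own user-agent group. Simplified heuristic."""
    policy = {bot: "unspecified" for bot in AI_BOTS}
    current, in_header = [], False
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if not in_header:  # a new group starts here
                current = []
            current += [b for b in AI_BOTS if b.lower() == value.lower()]
            in_header = True
        else:
            in_header = False
            if field == "disallow" and current:
                verdict = "disallow" if value == "/" else "allow"
                for bot in current:
                    if policy[bot] == "unspecified":
                        policy[bot] = verdict
    return policy
```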
llms.txt exists
Probe /llms.txt and /.well-known/llms.txt for existence
llms.txt quality
Validate format, check for markdown links, structured content
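The quality check is heuristic. A sketch of the kind of signals it can look for — the thresholds and field names here are illustrative assumptions, not the exact scoring rubric:

```python
import re

def llms_txt_quality(text: str) -> dict:
    """Heuristic quality signals for an llms.txt file: an H1 title,
    markdown links, and H2 sections. Thresholds are illustrative."""
    md_links = re.findall(r"\[[^\]]+\]\([^)]+\)", text)
    return {
        "has_title": bool(re.search(r"^# \S", text, re.MULTILINE)),
        "has_sections": bool(re.search(r"^## ", text, re.MULTILINE)),
        "link_count": len(md_links),
        "passes": text.strip() != "" and len(md_links) >= 3,
    }
```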
Agent discovery file
Probe /.well-known/agent-skills, /agents.md, /skills.sh
A2A Agent Card
Probe /.well-known/agent-card.json, validate JSON schema
Listed in MCP registries
Query Smithery, mcp.so, Glama, PulseMCP for domain match
Identity
20 pts · Do agents understand you?
llms.txt content quality
Check for structured product description, use cases, constraints in llms.txt
JSON-LD structured data
Parse JSON-LD for SoftwareApplication or Product schema types
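A sketch of the JSON-LD extraction step, assuming regex-based script extraction (a production scanner would use a real HTML parser):

```python
import json
import re

def jsonld_types(html: str) -> set:
    """Collect @type values from <script type="application/ld+json">
    blocks, including nodes inside an @graph array."""
    types = set()
    pattern = r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>'
    for block in re.findall(pattern, html, re.DOTALL | re.IGNORECASE):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        nodes = data if isinstance(data, list) else data.get("@graph", [data])
        for node in nodes:
            if not isinstance(node, dict):
                continue
            t = node.get("@type")
            if isinstance(t, str):
                types.add(t)
            elif isinstance(t, list):
                types.update(t)
    return types

def has_product_schema(html: str) -> bool:
    return bool(jsonld_types(html) & {"SoftwareApplication", "Product"})
```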
Consistent description
Check consistency across meta, og tags, and JSON-LD descriptions
Pricing info accessible
Check /pricing, /plans, schema.org/Offer in JSON-LD, links
Public API/docs linked from homepage
Check homepage anchor tags for docs/API links, verify they resolve
Agent instruction / when-to-use
Check for explicit when-to-use guidance in agent files or llms.txt
Access & Auth
20 pts · Can agents authenticate and act?
Public API with reachable endpoints
Check /api, /docs/api, /api-reference. Probe for 200 response
OpenAPI spec published
Check /openapi.json, /swagger.json, validate schema version
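Schema-version validation can be sketched as follows — a shallow check on the root `openapi`/`swagger` field; a full validator would also walk paths, components, and `$ref`s:

```python
import json

def openapi_version(spec_text: str):
    """Return the declared version string ('3.1.0', '2.0', ...) if the
    document looks like an OpenAPI/Swagger spec, else None."""
    try:
        spec = json.loads(spec_text)
    except json.JSONDecodeError:
        return None
    if not isinstance(spec, dict):
        return None
    version = spec.get("openapi") or spec.get("swagger")
    if isinstance(version, str) and version[:1] in ("2", "3"):
        return version
    return None
```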
OAuth 2.0 support
/.well-known/openid-configuration, scan docs for "OAuth"
Scoped permissions
Scan docs for "scope", "permission"; check OpenAPI securitySchemes
Agent auth documentation
Scan docs for M2M, client credentials, agent-specific auth flows
Developer portal
Check /developers, /console, /dashboard for self-serve key management
Agent Integration
20 pts · Have you built the plumbing?
MCP server/manifest
Check /.well-known/mcp paths, MCP registries, provided mcpUrl
Webhook support
Scan docs for "webhook", check /webhooks path
Markdown content negotiation
GET / with Accept: text/markdown. Check for markdown response
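The decision logic for the negotiation check might look like this — a heuristic sketch, since servers that support markdown negotiation don't always set `text/markdown` explicitly:

```python
def negotiated_markdown(status: int, content_type: str, body: str) -> bool:
    """Decide whether a response to GET / with Accept: text/markdown
    counts as a markdown representation: either the server declares a
    markdown content type, or the body is markdown-shaped rather than
    an HTML page."""
    if status != 200:
        return False
    ct = content_type.split(";")[0].strip().lower()
    if ct in ("text/markdown", "text/x-markdown"):
        return True
    looks_html = body.lstrip()[:1] == "<"
    looks_md = body.lstrip().startswith("#") or "](http" in body
    return not looks_html and looks_md
```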
Streaming support
Check docs for SSE, server-sent events, chunked transfer
Rate limits documented
Scan docs for "rate limit", check OpenAPI for 429 responses
JSON error responses
Probe invalid API path, check response is JSON not HTML
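The pass/fail logic for that probe can be sketched as a small predicate — an agent needs a 4xx status with a parseable JSON body, not an HTML error page:

```python
import json

def is_json_error(status: int, content_type: str, body: str) -> bool:
    """Check that an invalid API path returns a machine-readable JSON
    error rather than an HTML error page."""
    if not 400 <= status < 500:
        return False
    if "text/html" in content_type.lower():
        return False
    try:
        parsed = json.loads(body)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict)
```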
In-agent Experience
20 pts · Can users interact through agents?
MCP app registry listing
Query MCP registries (Smithery, mcp.so, Glama) for UI-enabled entry
ChatGPT app listed
Check ai-plugin.json, homepage for GPT store / ChatGPT references
OpenAI plugin manifest
GET /.well-known/ai-plugin.json. Validate plugin format
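Manifest validation amounts to checking a few core fields. A sketch — the required-field list below is an approximation of the (now-legacy) OpenAI plugin manifest, not an exhaustive one:

```python
import json

# Approximate core fields of the OpenAI plugin manifest.
REQUIRED = ("schema_version", "name_for_model", "description_for_model", "api")

def valid_ai_plugin(manifest_text: str) -> bool:
    """Shallow validation: required fields present and an OpenAPI-backed
    api section with a spec URL."""
    try:
        manifest = json.loads(manifest_text)
    except json.JSONDecodeError:
        return False
    if not isinstance(manifest, dict) or not all(k in manifest for k in REQUIRED):
        return False
    api = manifest["api"]
    return isinstance(api, dict) and api.get("type") == "openapi" and "url" in api
```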
A2UI / generative UI support
Scan docs/integrations for A2UI, generative UI, agent UI references
Verified integration in AI platforms
Check for Claude.ai, Goose, VS Code, Cursor integration mentions
Registry branding
Check MCP/plugin manifest for display name, icon, description
How Scanning Works
Every scan begins by fetching your homepage and immediately fanning out into 33 independent checks across all five layers, running in parallel on the server. Each check has its own 10-second timeout. The full scan completes in under 30 seconds.
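The fan-out-with-per-check-timeout pattern described above can be sketched like this (the check names and scores are placeholders, not orank's internals):

```python
import asyncio

async def run_checks(checks: dict, timeout: float = 10.0) -> dict:
    """Run independent named checks concurrently, capping each at its
    own timeout. A check that times out or raises scores 0."""
    async def guarded(name, coro):
        try:
            return name, await asyncio.wait_for(coro, timeout)
        except Exception:  # includes asyncio.TimeoutError
            return name, 0
    results = await asyncio.gather(*(guarded(n, c) for n, c in checks.items()))
    return dict(results)

async def demo():
    async def fast():
        return 3  # placeholder points for a passing check

    async def slow():
        await asyncio.sleep(60)  # would blow past the per-check cap
        return 3

    return await run_checks({"sitemap": fast(), "mcp_registry": slow()}, timeout=0.05)
```

Because every check is independently guarded, one slow registry or unresponsive endpoint costs at most its own timeout, not the whole scan.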
We probe dozens of well-known paths (/llms.txt, /.well-known/agent-card.json, /sitemap.xml, /openapi.json, /.well-known/ai-plugin.json, /.well-known/openid-configuration), parse raw HTML for JSON-LD structured data and meta consistency, send requests with AI-specific user agents (GPTBot, ClaudeBot, ChatGPT-User, PerplexityBot, and others) to test bot policies, attempt content negotiation with Accept: text/markdown, and query four major MCP registries (Smithery, mcp.so, Glama, PulseMCP) to verify real-world agent discoverability.
OpenAPI specs are validated against schema standards. OAuth flows are detected through OIDC discovery endpoints and documentation scanning. Developer portals, webhook support, rate limiting, and streaming capabilities are each verified through a combination of path probing, response analysis, and documentation keyword extraction.