Modal scores 93/100 (Grade A), placing it among the most AI-agent-ready documentation sites evaluated. It passes 17 of 22 checks (77%), demonstrating strong support for AI coding agents. Three failing checks require attention to reach a perfect score.
# Agent Score Fix Report — Modal

URL: https://modal.com/docs
Score: 93/100 (Grade A)

I need help improving the AI-readiness of the documentation at https://modal.com/docs. Agent Score found 3 failing checks and 0 warnings.

## Failing Checks (3)

- [content-discoverability] Llms Txt Directive: No llms.txt directive found in any of 10 sampled pages
- [observability] Markdown Content Parity: 1 of 10 pages have substantive content differences between markdown and HTML (avg 13% missing)
- [observability] Cache Header Hygiene: 1 of 11 endpoints have aggressive caching or missing cache headers

## Fix Instructions

For each issue above, please:

1. Analyze the documentation site at https://modal.com/docs
2. Implement the specific fix
3. Verify the fix would cause the check to pass

### Common fixes:

- **No llms.txt**: Create /llms.txt following https://llmstxt.org — list all doc pages in markdown format
- **No .md URL support**: Configure your docs platform to serve pages at equivalent .md URLs (e.g. /docs/quickstart.md)
- **No content negotiation**: Return markdown when the request includes an `Accept: text/markdown` header
- **Large page size**: Reduce nav boilerplate, inline scripts, and repetitive markup
- **No sitemap**: Generate /sitemap.xml listing all documentation URLs
- **Auth walls**: Ensure docs pages return 200 without requiring login cookies or tokens
- **No Last-Modified header**: Configure your server/CDN to include Last-Modified response headers
- **Tab content hidden**: Ensure tabbed content is rendered in the HTML (not JS-only) so agents can read all variants

## Run afdocs Locally for More Detail

To get deeper visibility into what's failing, run afdocs against your docs:

    npx afdocs https://modal.com/docs --fixes --verbose

- **--fixes**: Adds "Fix:" lines to the output for each warn/fail check with actionable remediation steps
- **-v, --verbose**: Shows per-page details (specific URLs, character counts, error codes) for checks with issues — useful for per-URL visibility into what's failing
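For the llms.txt fix, the llmstxt.org proposal expects an H1, a blockquote summary, and heading-delimited link sections (the same structure the llms-txt-valid check verifies). A minimal sketch, with illustrative summary text, page titles, and paths rather than Modal's actual ones:

```markdown
# Modal

> Documentation index for the Modal platform (illustrative summary line).

## Docs

- [Quickstart](https://modal.com/docs/quickstart.md): hypothetical entry, shown only to illustrate the link format
- [Examples](https://modal.com/docs/examples.md): another illustrative entry
```

Each link should ideally point at a markdown variant of the page, which is what the llms-txt-links-markdown check samples for.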
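The content-negotiation fix above can be sketched as a small helper. This is a hedged sketch, not Modal's implementation; q-values and wildcard media types are ignored for brevity:

```python
def negotiate(accept_header: str) -> str:
    """Return "markdown" when the client asks for text/markdown, else "html".

    Minimal Accept-header parsing: split the header on commas, drop any
    ";q=..." parameters, and look for an exact text/markdown media type.
    """
    media_types = [part.split(";")[0].strip() for part in accept_header.split(",")]
    return "markdown" if "text/markdown" in media_types else "html"
```

A docs server would call this per request and serve the pre-rendered .md variant whenever it returns "markdown".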
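For cache-header hygiene, one reasonable policy (an assumption on my part, not something the checker mandates) is short-lived public caching plus a Last-Modified validator:

```python
from email.utils import formatdate


def doc_cache_headers(last_modified_ts: float) -> dict:
    """Response headers for a docs page; a sketch of one sane policy.

    max-age=300 with must-revalidate avoids the "aggressive caching"
    finding while still allowing CDN reuse, and Last-Modified gives
    agents a freshness signal (a header the report also asks for).
    """
    return {
        "Cache-Control": "public, max-age=300, must-revalidate",
        # formatdate with usegmt=True emits the RFC 7231 HTTP-date format.
        "Last-Modified": formatdate(last_modified_ts, usegmt=True),
    }
```

The same header values can be set in a CDN or reverse-proxy config instead of application code; the point is that every docs endpoint returns them consistently.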
## Check Results

How your docs scored:
- **markdown-content-parity**: 1 of 10 pages have substantive content differences between markdown and HTML (avg 13% missing)
- **cache-header-hygiene**: 1 of 11 endpoints have aggressive caching or missing cache headers
- **llms-txt-freshness**: No sitemap found; cannot assess llms.txt freshness without a sitemap as ground truth
- **auth-gate-detection**: All 10 sampled pages are publicly accessible
- **auth-alternative-access**: All docs pages are publicly accessible; no alternative access paths needed
- **llms-txt-directive**: No llms.txt directive found in any of 10 sampled pages
- **llms-txt-exists**: llms.txt found at 1 location(s)
- **llms-txt-valid**: llms.txt follows the proposed structure (H1, blockquote, heading-delimited link sections)
- **llms-txt-size**: llms.txt is 17,128 characters (under 50,000 threshold)
- **llms-txt-links-resolve**: All 10 same-origin sampled links resolve (171 total links)
- **llms-txt-links-markdown**: 9/10 same-origin sampled links point to markdown content (90%)
- **markdown-url-support**: 10/10 sampled pages support .md URLs (100%)
- **content-negotiation**: 10/10 sampled pages support content negotiation (100%)
- **rendering-strategy**: All 10 sampled pages contain server-rendered content
- **page-size-markdown**: All 10 pages under 50K chars (median 6K, max 34K)
- **page-size-html**: All 10 sampled pages convert under 50K chars (median 6K, 0% boilerplate)
- **content-start-position**: Content starts within first 10% on all 10 sampled pages (median 0%)
- **tabbed-content-serialization**: No tabbed content detected across 10 sampled pages
- **section-header-quality**: No tabbed content found; header quality check not applicable
- **markdown-code-fence-validity**: All 88 code fences properly closed across 11 pages
- **http-status-codes**: All 10 sampled pages return proper error codes for bad URLs
- **redirect-behavior**: No redirects detected across 10 sampled pages
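The markdown-content-parity result above can be approximated offline. The sketch below uses a naive word-overlap heuristic, assumed for illustration only; the actual check's comparison method is not documented in this report:

```python
def parity_gap(markdown_text: str, html_text: str) -> float:
    """Fraction of HTML-visible words absent from the markdown variant.

    Compares extracted plain text from both representations: every word
    in the HTML text that never appears in the markdown text counts as
    "missing". Returns a value in [0, 1].
    """
    md_words = set(markdown_text.split())
    html_words = html_text.split()
    if not html_words:
        return 0.0
    missing = sum(1 for word in html_words if word not in md_words)
    return missing / len(html_words)
```

Under this heuristic, a return value of about 0.13 on a page would correspond to the "avg 13% missing" wording in the finding above.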