llms.txt and llms-full.txt
llms.txt is a standard for exposing website content to AI developer tools. Fern implements this standard, automatically generating and maintaining llms.txt and llms-full.txt Markdown files so AI tools can discover and index your documentation.
Fern also serves clean Markdown instead of HTML on any page URL when the request includes an Accept: text/markdown header, ensuring agents only receive the content they need. Together, these features reduce token consumption by 90%+.
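As a sketch of how a client might opt into the Markdown variant of a page via content negotiation (the domain below is a placeholder, not a real endpoint):

```python
import urllib.request

# Build a request for a docs page, asking for Markdown instead of HTML.
# The URL is a placeholder for illustration only.
url = "https://docs.example.com/getting-started"
req = urllib.request.Request(url, headers={"Accept": "text/markdown"})

# The server inspects the Accept header and, when it is text/markdown,
# responds with the page's Markdown source rather than rendered HTML.
print(req.get_header("Accept"))  # the header the server will see
```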

Generated files
Fern generates two files for LLMs:
- llms.txt contains a lightweight summary of your documentation site, with each page distilled into a one-sentence description and URL. For sites with API endpoints, it also links to your OpenAPI specification as a standalone, machine-readable file so AI tools can parse your full API schema directly. For sites with WebSocket channels, it also links to your AsyncAPI specification.
- llms-full.txt contains the complete documentation content, including the full text of all pages. For API documentation, this includes your complete API Reference with resolved OpenAPI specifications and SDK code examples for enabled languages.
Both files are available at any level of your documentation hierarchy (/llms.txt, /llms-full.txt, /docs/llms.txt, /docs/ai-features/llms-full.txt, etc.).
Examples: Eleven Labs llms.txt, Cash App llms-full.txt.
Custom files
To serve your own llms.txt or llms-full.txt instead of the auto-generated versions, point to your files under the agents key in docs.yml:
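For example, a minimal sketch in docs.yml (the `agents` key comes from the text above; the nested key names are illustrative assumptions, not confirmed field names):

```yaml
# docs.yml (sketch; nested key names are assumptions)
agents:
  llms-txt: ./overrides/llms.txt
  llms-full-txt: ./overrides/llms-full.txt
```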
Paths are relative to the docs.yml file. The CLI validates that each file exists and uploads it as part of your docs deployment. Your custom files are served at the root-level /llms.txt and /llms-full.txt endpoints. Nested paths (e.g., /api-reference/llms.txt) continue to use the auto-generated output.
You can provide one or both files. Any file you don’t specify falls back to the auto-generated version.
How page descriptions are generated
Both files include page descriptions pulled from frontmatter. Fern uses the description field if present, otherwise falls back to subtitle.
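For instance, a page with both fields set in its frontmatter would contribute the description, not the subtitle, to the generated files:

```yaml
---
title: Webhooks
subtitle: Receive event notifications
description: Configure webhook endpoints to receive real-time event notifications.
---
```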
The output format depends on whether you're requesting an individual page or a section. For individual pages, the description appears in the Markdown output for that page; for sections, both llms.txt and llms-full.txt return the same format, with each page listed alongside its description.
Filter output with query parameters
Filter llms.txt and llms-full.txt output with the lang and excludeSpec query parameters to reduce token usage. Parameters can also be combined.
- `lang`: Filter SDK code examples to a specific language. Common aliases are also accepted: javascript, typescript, js, ts, py, and golang. Case-insensitive.
- `excludeSpec`: Exclude OpenAPI and AsyncAPI specification sections.
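The filters above are plain query parameters, so a combined URL can be assembled like this (the domain is a placeholder):

```python
from urllib.parse import urlencode

base = "https://docs.example.com/llms-full.txt"  # placeholder domain

# Combine both filters: TypeScript-only code examples, no raw API specs.
params = {"lang": "typescript", "excludeSpec": "true"}
url = f"{base}?{urlencode(params)}"
print(url)  # https://docs.example.com/llms-full.txt?lang=typescript&excludeSpec=true
```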
Control visibility
You can exclude whole pages from LLM endpoints (llms.txt and llms-full.txt) by adding noindex: true to the page’s frontmatter. Pages marked noindex aren’t indexed by search engines but remain visible in your sidebar navigation and can still be accessed directly by URL. To also hide a page from navigation, see Hiding content.
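For example, adding the flag to a page's frontmatter:

```yaml
---
title: Internal changelog
noindex: true # omitted from llms.txt, llms-full.txt, and search engines; still in the sidebar
---
```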
Within pages, use <llms-only> and <llms-ignore> tags to control what content is exposed to AI versus human readers on your documentation site. These tags affect all Markdown output intended for AI consumption: LLM endpoints, Copy page, and View as Markdown.
To preview and debug what AI sees for any page, you can append .md or .mdx to the page URL to view its Markdown source.
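A small sketch of that URL transformation (placeholder domain, and assuming any trailing slash is dropped before the extension is appended):

```python
def markdown_source_url(page_url: str) -> str:
    """Append .md to a page URL to view its raw Markdown source."""
    return page_url.rstrip("/") + ".md"

print(markdown_source_url("https://docs.example.com/guides/webhooks"))
# https://docs.example.com/guides/webhooks.md
```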
Content for AI only
Use the <llms-only> tag to show content to AI but hide it from human readers on your documentation site. This is useful for:
- Technical context that’s verbose but helpful for AI, like implementation details or architecture notes
- Code-related metadata that would clutter the human UI
- Cross-references that help AI understand relationships between pages
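A sketch of the tag in a page's MDX source (the content shown is illustrative):

```mdx
Our SDK retries failed requests automatically.

<llms-only>
Implementation note: retries use exponential backoff with a capped number of
attempts. This detail is visible to AI tools but hidden from the rendered page.
</llms-only>
```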
Content for humans only
Use the <llms-ignore> tag to show content to human readers on your documentation site while hiding it from AI. This is useful for:
- Marketing CTAs or promotional content
- Navigation hints meant only for human readers
- Internal comments that should remain only in source files
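A sketch of the tag in a page's MDX source (the link target is a placeholder):

```mdx
<llms-ignore>
Prefer a guided walkthrough? [Book a demo](https://example.com/demo) with our team.
</llms-ignore>

Authentication uses API keys passed in the `Authorization` header.
```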
Per-page directives
Every page served to AI agents is automatically prepended with a default directive that tells agents how to navigate your documentation programmatically:
The URLs in the directive are generated from your site’s domain and basepath. The directive is injected after the frontmatter metadata section but before the page body, so agents see it first even if they truncate the rest of the page. It applies to individual page Markdown (.md/.mdx URLs) and to each page section within llms-full.txt, and human-facing documentation is unaffected.
To override the default, set a custom directive using the agents key in docs.yml:
To disable the directive entirely, set page-directive to an empty string:
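Both cases in docs.yml might look like this (the `agents` and `page-directive` keys come from the text above; the directive text and exact YAML shape are a sketch):

```yaml
# docs.yml — override the default directive (sketch)
agents:
  page-directive: |
    Prefer the Markdown endpoints under /llms.txt when crawling this site.

# To disable it instead, use an empty string:
# agents:
#   page-directive: ""
```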
Analytics and monitoring
The Fern Dashboard provides comprehensive analytics for llms.txt usage including:
- Traffic by LLM provider (Claude, ChatGPT, Cursor, etc.)
- Page-level breakdowns of bot vs. human visitors for Markdown and llms.txt files
This visibility helps you understand LLM traffic patterns and optimize your documentation for AI consumption.
Integrate llms.txt into your docs
Add buttons or navigation links to surface your llms.txt endpoints.
Add a button for SDK docs
Add a button to your SDK docs that links to the llms-full.txt for your API Reference. Use lang to filter code examples to one language, and excludeSpec=true to exclude the raw OpenAPI specification.
This gives users a clean, language-specific output they can feed to AI tools when writing code.
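Using the query parameters described above, such a button could simply link to a filtered URL, for example (path and language are illustrative):

```markdown
[View as Markdown for LLMs](/api-reference/llms-full.txt?lang=typescript&excludeSpec=true)
```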
Add a dropdown for llms-full.txt
Add a dropdown in your navbar that links to different filtered versions of llms-full.txt, making it easy for users to access LLM-optimized documentation for their preferred language.