# Custom robots.txt

> Serve a custom robots.txt at the root of your documentation site to control how search engines and AI crawlers access your content.

By default, Fern serves an auto-generated `robots.txt` at the root of your documentation site that allows all crawlers and points to your `sitemap.xml`. Use the [`agents.robots-txt` key in `docs.yml`](/learn/docs/configuration/site-level-settings#agentsrobots-txt) to serve your own file instead — useful for opting in or out of specific AI crawlers, gating sensitive sections, or signaling preferences with the [Cloudflare Content Signals Policy](https://blog.cloudflare.com/content-signals-policy/).
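
For reference, the auto-generated default looks roughly like the following sketch (illustrative only; the exact output and sitemap URL depend on your site):

```txt
# Illustrative sketch of Fern's default robots.txt:
# allow all crawlers and point them at the sitemap
User-Agent: *
Allow: /

Sitemap: https://docs.example.com/sitemap.xml
```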

`robots.txt` is advisory: compliant crawlers honor your `Disallow` and `Allow` directives, but bots that ignore the protocol still reach those paths. For content that must stay private, [use authentication](/learn/docs/authentication/overview).

<Note>
  `robots.txt` decides which crawlers can reach your site and what AI training signals you broadcast. Its companions, [`llms.txt` and `llms-full.txt`](/learn/docs/ai-features/llms-txt), shape what AI agents receive once they crawl.
</Note>

## Configuration

<Steps>
  <Step title="Point `agents.robots-txt` at your file in `docs.yml`">
    ```yaml docs.yml
    agents:
      robots-txt: ./robots.txt
    ```

    The path is relative to `docs.yml`.
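
    For example, in a hypothetical project layout like this, `./robots.txt` resolves to the file sitting next to `docs.yml`:

    ```txt
    fern/
    ├── docs.yml
    └── robots.txt
    ```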
  </Step>

  <Step title="Write your custom `robots.txt`">
    ```txt robots.txt
    # Allow search engines
    User-Agent: Googlebot
    Allow: /

    # Restrict an AI crawler from a private path
    User-Agent: GPTBot
    Disallow: /private

    # Declare AI usage preferences via Cloudflare Content Signals
    Content-Signal: ai-train=yes, search=yes, ai-input=yes

    # Point crawlers at your sitemap — Fern's default robots.txt includes this,
    # so add it back when you replace the default with a custom file
    Sitemap: https://docs.example.com/sitemap.xml
    ```
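
    To keep your pages searchable while opting out of model training, flip individual signals to `no`. A minimal variant using the same syntax:

    ```txt
    # Searchable, usable as AI input, but not for training
    Content-Signal: ai-train=no, search=yes, ai-input=yes
    ```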

    <Tip>
      Place named bots (e.g., `GPTBot`, `Googlebot`) before any wildcard groups in your file — Fern appends its own `User-Agent: *` block when it serves the file.
    </Tip>
  </Step>

  <Step title="Fern serves your file">
    Your file is served verbatim at `/robots.txt`. Fern appends a managed block at the end that disallows internal API routes:

    ```txt
    # Fern-managed routes — automatically disallowed
    User-Agent: *
    Disallow: /api/fern-docs/
    ```
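
    Putting the pieces together, the served file for the Step 2 example would look roughly like this abridged sketch (the managed block's exact wording may differ):

    ```txt
    # --- your file, served verbatim (abridged) ---
    User-Agent: GPTBot
    Disallow: /private

    Sitemap: https://docs.example.com/sitemap.xml

    # --- appended by Fern ---
    User-Agent: *
    Disallow: /api/fern-docs/
    ```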
  </Step>
</Steps>