Phase 3 · Get Found · 3.4 · ~30 min · Easy

3.4 Robots.txt

Step-by-Step

Check your current robots.txt

Visit yourdomain.com.au/robots.txt. If it exists, review the rules. If not, most platforms create one automatically.
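If you'd rather review the rules from a script, Python's standard library ships a robots.txt parser. A minimal sketch, using a hypothetical sample file rather than your real one:

```python
# Review robots.txt rules locally with the standard-library parser.
# The sample content below is a hypothetical file, not your real one.
from urllib import robotparser

sample = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(sample.splitlines())

# Spot-check paths the way a generic crawler would see them.
print(parser.can_fetch("*", "/services/"))    # True: path is allowed
print(parser.can_fetch("*", "/admin/login"))  # False: blocked by Disallow
```

To review a live file, fetch yourdomain.com.au/robots.txt first and pass its lines to `parse()`.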

Ensure your sitemap is referenced

Your robots.txt should include: Sitemap: https://yourdomain.com.au/sitemap.xml
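The same standard-library parser can confirm the sitemap line is present (Python 3.8+; the file body below is a placeholder):

```python
# Check that a robots.txt body references a sitemap (placeholder content).
from urllib import robotparser

robots_body = """\
User-agent: *
Allow: /

Sitemap: https://yourdomain.com.au/sitemap.xml
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_body.splitlines())

# site_maps() returns the listed sitemap URLs, or None if there are none.
print(parser.site_maps())
```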

Allow AI search crawlers

AI search engines (ChatGPT, Perplexity, Copilot, Gemini) use crawlers to read your site. If they can't crawl you, they can't cite you. Check that your robots.txt does NOT block these crawlers:

  • GPTBot — OpenAI (powers ChatGPT search)
  • PerplexityBot — Perplexity AI
  • ClaudeBot — Anthropic (powers Claude)
  • Google-Extended — Google Gemini (note: blocking this does NOT affect Google Search, only Gemini AI)

If your robots.txt has no Disallow rules for these bots, they’re allowed by default. Don’t add rules blocking them unless you have a specific reason. Being crawlable by AI is how you get cited in AI-generated answers — this is covered in detail in section 5.3 (GEO).
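One way to verify, assuming a permissive robots.txt like the placeholder below, is to ask Python's parser whether each AI crawler may fetch your homepage:

```python
# Check the four AI crawlers against a robots.txt body (placeholder content).
from urllib import robotparser

robots_body = """\
User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_body.splitlines())

for bot in ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]:
    status = "allowed" if parser.can_fetch(bot, "/") else "BLOCKED"
    print(f"{bot}: {status}")
```

Because none of these user agents has its own `User-agent:` group, they all fall back to the `*` rules, which is exactly why a permissive default file leaves them crawlable.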

Verify in Search Console

Google retired the standalone Robots.txt Tester; use the robots.txt report in Search Console (Settings → robots.txt) to confirm your file is being fetched without errors, and check the Page indexing report to confirm no important pages are accidentally blocked.

Example Robots.txt

A well-configured robots.txt for most small business websites:

```
User-agent: *
Allow: /

Sitemap: https://yourdomain.com.au/sitemap.xml
```

That’s it. Simple. Unless you have admin pages, staging environments, or duplicate content to block, don’t overcomplicate it.
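If you do need to block something, the pattern is one Disallow line per path prefix. A hypothetical example (the paths here are assumptions, common on WordPress-style setups):

```
User-agent: *
Disallow: /wp-admin/
Disallow: /staging/
Allow: /

Sitemap: https://yourdomain.com.au/sitemap.xml
```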

You're 29 hours deep. We're impressed.
Most business owners tapped out 10 hours ago.
Next: 3.5 Meta Tags and Open Graph
Back: 3.3 Creating a Sitemap