What Is robots.txt?
robots.txt is a plain-text file that lives at the root of your domain and tells search engine crawlers which URLs they're allowed to fetch. It's the first thing Googlebot, Bingbot and other crawlers request when they visit your site. A clean, well-formed robots.txt keeps crawlers out of private routes, points them at your sitemap, and keeps your crawl budget focused on the pages that actually matter for SEO.
Our free robots.txt generator writes a valid file in seconds. It applies smart defaults for common admin and system routes, lets you allow or block specific crawlers, and automatically adds your sitemap reference.
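For reference, a bare-bones robots.txt needs only a handful of directives. The domain and paths in this sketch are placeholders:

User-agent: *                              # the rules below apply to every crawler
Disallow: /admin/                          # don't crawl anything under /admin/
Allow: /                                   # everything else may be crawled
Sitemap: https://example.com/sitemap.xml   # where crawlers can find your XML sitemap

Each User-agent line opens a group of rules, Disallow and Allow are matched against the URL path, and the Sitemap line can appear anywhere in the file.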
How to Use the Robots.txt Generator
- Enter your website URL (for example https://example.com).
- Optionally list any Allow paths (one per line) — these are public routes you explicitly want indexed.
- List any Disallow paths you want blocked — admin, dashboard, internal search, etc.
- Add a Sitemap URL (defaults to /sitemap.xml) and an optional Crawl delay in seconds.
- Toggle Googlebot, Bingbot and Yandex on or off, then click Generate.
- Copy or download the generated robots.txt and upload it to the root of your site.
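As a rough sketch, a file built from inputs like those above might come out like this; the exact output depends on the options you choose and the paths you enter:

User-agent: *
Allow: /blog/                              # explicitly allowed public route
Disallow: /admin/                          # blocked routes
Disallow: /dashboard/
Crawl-delay: 10                            # optional, in seconds

Sitemap: https://example.com/sitemap.xml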
Smart Defaults Explained
When the Smart defaults option is on, the generator automatically blocks routes that should almost never be indexed:
- /admin/, /dashboard/, /account/ — protected admin areas.
- /login, /signup — authentication pages that produce thin or duplicate content.
- /api/, /cgi-bin/ — service endpoints that shouldn't appear in search.
- /private/, /tmp/, /.well-known/ — internal directories.
The tool will also automatically reclaim accidentally-blocked critical paths like / and /blog/, moving them from Disallow to Allow so you don't deindex your site by mistake.
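Assuming the defaults listed above, the rule group the tool writes would look roughly like this, with the reclaimed critical paths as Allow lines:

User-agent: *
Allow: /                    # reclaimed so the homepage stays crawlable
Allow: /blog/
Disallow: /admin/
Disallow: /dashboard/
Disallow: /account/
Disallow: /login
Disallow: /signup
Disallow: /api/
Disallow: /cgi-bin/
Disallow: /private/
Disallow: /tmp/
Disallow: /.well-known/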
SEO Best Practices for robots.txt
- Don't block CSS or JavaScript — Google needs them to render your page correctly.
- Always include a Sitemap: directive — it's the fastest way to surface new URLs.
- Use noindex meta tags (not just Disallow) for pages you want fully hidden from search.
- Test your file in Google Search Console's robots.txt report (the successor to the old robots.txt Tester) before deploying.
- Remember: Disallow blocks crawling, not indexing — a blocked URL can still appear in search if it's linked externally.
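Putting those rules together, a sitemap-aware file that leaves rendering assets crawlable might look like this sketch (the paths are placeholders):

User-agent: *
Disallow: /search/          # block internal search result pages
# Note: no Disallow rules for /assets/, /css/ or /js/, so Google can still
# fetch the stylesheets and scripts it needs to render your pages.

Sitemap: https://example.com/sitemap.xml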
Frequently Asked Questions
What is a robots.txt file?
robots.txt is a plain-text file at the root of your site that tells search engine crawlers which URLs they can fetch. It's the first file Googlebot, Bingbot and other crawlers request when they visit your domain.
Where do I put my robots.txt file?
It must live at the root of your domain — for example https://example.com/robots.txt. Crawlers ignore robots.txt files placed in any other directory.
Does robots.txt prevent indexing?
Not exactly. Disallow blocks crawling, but a URL can still appear in Google's index if other sites link to it. To prevent indexing entirely, use a noindex meta tag or HTTP header instead.
Should I include my sitemap in robots.txt?
Yes. Adding a Sitemap: directive is the canonical way to point all crawlers — including ones you haven't submitted to manually — at your XML sitemap.
Does Google honor crawl-delay?
Googlebot ignores the Crawl-delay directive. Bingbot, Yandex and most other bots do honor it. To slow Googlebot, use the crawl rate setting in Google Search Console.
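If you do want to slow the bots that respect the directive, scope Crawl-delay to their user-agent groups. A minimal sketch, with placeholder values:

User-agent: bingbot
Crawl-delay: 5              # ask Bing to wait about 5 seconds between fetches

User-agent: Yandex
Crawl-delay: 5

Googlebot simply skips these lines.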