One Line of Text. Entire Site Invisible.
I wish I were being dramatic.
But after 20+ years and 500+ campaigns, I have personally seen robots.txt files with `Disallow: /` on production sites. Blocking. Everything.
The site owner had no idea. For months.
*sighs*
What Robots.txt Does
It tells search engine crawlers which parts of your site they are allowed to visit. It is a set of rules. "Crawl this. Do not crawl that." Google's official robots.txt documentation explains the syntax in detail, and Moz has a solid primer if you want the fundamentals.
It lives at `yoursite.com/robots.txt`. Every major search engine checks it before crawling.
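For reference, here is what a minimal, healthy robots.txt might look like (the paths and sitemap URL are placeholders, not a recommendation for your specific site):

```
# Let all crawlers in, except for the admin area
User-agent: *
Disallow: /admin/

Sitemap: https://yoursite.com/sitemap.xml
```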
The Most Common Mistakes
Leftover staging rules. During development, someone added `Disallow: /` to keep Google away from the staging site. Then they pushed that robots.txt to production. Whoops.
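This is the file that causes the disaster. Two lines:

```
# Fine on a staging server. Catastrophic on production.
User-agent: *
Disallow: /
```

The trailing slash on `Disallow: /` matches every URL on the site, so every compliant crawler turns around at the front door.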
Blocking CSS and JavaScript. Google needs to render your pages. If you block your CSS and JS files, Google sees a broken page and ranks you accordingly.
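The rules usually look harmless, something like this (directory names are hypothetical, check what your own site actually uses):

```
# Looks tidy. Breaks rendering for Googlebot.
User-agent: *
Disallow: /css/
Disallow: /js/
```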
Blocking entire sections. `Disallow: /blog` because you "were not ready yet." Six months later, your blog has 50 posts and zero organic traffic. Nobody remembered the robots.txt rule.
Using robots.txt for deindexing. Blocking a URL in robots.txt does NOT remove it from the index. It prevents crawling, which means Google cannot even see a noindex tag if you add one. You have effectively locked the door and thrown the key inside. We explain the right approach in our deindexing guide.
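The short version of the right approach: leave the URL crawlable in robots.txt, and put the noindex directive on the page itself so Google can actually see it:

```html
<!-- Do NOT block this URL in robots.txt. Let Google crawl it and read this tag: -->
<meta name="robots" content="noindex">
```

Once the page has dropped out of the index, you can block it in robots.txt if you also want to stop the crawling.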
What to Do Right Now
Go to `yoursite.com/robots.txt`. Read it. Do you understand every line?
Make sure your sitemap URL is listed. Make sure you are not blocking anything important. Make sure your staging rules did not sneak into production.
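If you want to check this programmatically rather than by eyeballing the file, Python's standard library ships a robots.txt parser. Here is a small sketch that tests whether a few important paths are crawlable; the rules string, domain, and path list are placeholders you would swap for your own (in practice you would load the live file with `set_url()` and `read()`):

```python
from urllib.robotparser import RobotFileParser

# Example rules; replace with your production robots.txt content
rules = """User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Pages that should always be crawlable -- adjust for your site
important = ["/", "/blog/", "/pricing"]
for path in important:
    allowed = parser.can_fetch("Googlebot", f"https://yoursite.com{path}")
    print(f"{path}: {'OK' if allowed else 'BLOCKED'}")
```

Drop a script like this into CI and a leftover `Disallow: /` never reaches production unnoticed.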
Then add it to your regular audit checklist. seocheckup.app includes robots.txt verification in our 113-task checklist. Free. No credit card. 30 seconds to set up.
Because one wrong line should not cost you six months of traffic.