User-agent: * (all bots) Disallow: / (not allowed to crawl any files under your main directory). I have a blog that was created a long time ago, but it hasn't been indexed yet. I've been reading that the current robots.txt file for all blogs purposely tells the spiders NOT to crawl their blogs, and I've also read that you can't edit the robots.txt file. So how do you get around this so that your blog will be indexed in the SERPs? Thanks.
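For reference, the directives quoted above would look like this as an actual robots.txt file (the path after `Disallow:` is a prefix match, so `/` matches every URL on the site):

```
User-agent: *
Disallow: /
```

A file that only blocked Blogger's search and label pages would instead say `Disallow: /search`, which leaves ordinary posts crawlable.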
Are you sure about that? The robots.txt file for a random blog only disallows the /search URL: http://limagequotidienne.blogspot.com/robots.txt What's your blog URL, exactly?
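If you want to verify what a given robots.txt actually blocks, Python's standard `urllib.robotparser` can evaluate it. Here is a minimal sketch against rules like the ones in the file linked above (the blog URLs are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Blogger-style rules that only block the /search pages.
rules = """User-agent: *
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Ordinary posts are crawlable...
print(rp.can_fetch("*", "http://example.blogspot.com/2010/01/post.html"))   # True
# ...but search/label result pages are not.
print(rp.can_fetch("*", "http://example.blogspot.com/search/label/news"))   # False
```

So a file like that would not stop the blog itself from being indexed; only the search result pages are excluded.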
It really doesn't make sense to block a blog from being indexed. Out of curiosity, what effect does blocking the /search URL have on a blog?