Hi all, I've a new blog that just started on Sunday. It's already been indexed by Google etc., BUT I was checking my logs and saw a referral from blog/robots.txt. On inspection my robots.txt reads:

User-agent: *
Disallow:

I thought the Disallow meant no spiders could index the site, which obviously I don't want, but it has already been indexed? What should the default setting be? Help!
That code doesn't restrict crawlers from visiting your website. An empty "Disallow:" means no directory or path is disallowed, so robots are free to crawl the whole site. "Disallow:" (with nothing after it) and "Allow: /" have the same effect: both let crawlers visit your website without any restriction.
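For reference, here are the common robots.txt patterns so you can see the difference (the /private/ path in the last one is just a made-up example directory):

Allow all crawlers everywhere (what you have now, and the usual default):
User-agent: *
Disallow:

Block all crawlers from the entire site:
User-agent: *
Disallow: /

Block all crawlers from one directory only:
User-agent: *
Disallow: /private/

So the only difference between "index everything" and "index nothing" is that single slash after Disallow. Since yours has no slash, your current file is fine if you want the blog indexed.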