This is what shows up in my webmaster tools for my site:

User-agent: *
Disallow: /search

Does this mean the robots can't search? What exactly is this? lol
That means you have a folder called search, and your robots.txt file is asking robots not to crawl that folder.
Sounds about right! Though not exactly. You most likely have a file in the root of your server called robots.txt (a plain text file, not a folder as such) containing a rule that asks spiders to stay out of part of your site. I say "asks" because the file is only advisory; well-behaved crawlers do obey it, though. If you want to change it, fetch that file onto your local machine (that's your PC, Mac or whatever) and edit it to say something like this:

User-agent: *
Disallow:

# too many repeated hits, too quick
User-agent: litefinder
Disallow: /

# Yahoo. too many repeated hits, too quick
User-agent: Slurp
Disallow: /

# too many repeated hits, too quick
User-agent: Baidu
Disallow: /

The first block (an empty Disallow) lets every other bot crawl everything; the three blocks after it shut out specific bots completely. Upload the edited file back to your server and you should be good to go. There are other directives that may be of use - perhaps others will digress and diversify?
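If you want to sanity-check a robots.txt before uploading it, Python's standard urllib.robotparser can parse one for you. A minimal sketch, assuming a simplified version of the rules above (example.com and the page path are just placeholders):

from urllib.robotparser import RobotFileParser

# Simplified version of the rules above: everyone is allowed
# (empty Disallow), except Slurp, which is shut out of the whole site.
RULES = """\
User-agent: *
Disallow:

User-agent: Slurp
Disallow: /
"""

rp = RobotFileParser()
rp.parse(RULES.splitlines())

# An ordinary crawler is allowed everywhere.
print(rp.can_fetch("Googlebot", "http://example.com/some/page"))  # True

# Slurp (Yahoo) is blocked from every URL on the site.
print(rp.can_fetch("Slurp", "http://example.com/some/page"))      # False

In real use you would point it at the live file with set_url() and read() instead of pasting the rules in as a string.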
Hi oleander, I think this just means that crawlers won't index your pages under the /search directory, if they are decent enough to obey the robots rules. Have a nice day,
Disallow: /search is a prefix match: it blocks search engines from any URL on your site whose path starts with /search. That covers a file in your site's root directory called "search" with no file extension, everything inside a /search/ directory, and even a page called /searchable. The ones saying it blocks just the folder have not read the rule carefully *sigh*. If you wanted it to block only the contents of the search directory on your server, you would need to put Disallow: /search/ ... with a / at the end !! In your case, using one of your blogs as an example, it blocks the search results pages, ie URLs like

thecanadarealestatenews.blogspot.com/search?q=test

which is the rule Blogger puts in its default robots.txt to keep duplicate search result pages out of the index.
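You can see the prefix matching for yourself with Python's standard urllib.robotparser (the blogspot URL is the example above; the other paths and the allowed() helper are made up for illustration):

from urllib.robotparser import RobotFileParser

def allowed(rules, url):
    # Parse a robots.txt snippet and test one URL against it.
    rp = RobotFileParser()
    rp.parse(rules.splitlines())
    return rp.can_fetch("*", url)

SITE = "http://thecanadarealestatenews.blogspot.com"

# Disallow: /search is a prefix match on the URL path.
no_slash = "User-agent: *\nDisallow: /search"
print(allowed(no_slash, SITE + "/search?q=test"))        # False: blocked
print(allowed(no_slash, SITE + "/search/label/news"))    # False: blocked
print(allowed(no_slash, SITE + "/searchable"))           # False: also blocked

# Disallow: /search/ (trailing slash) only covers the directory contents.
with_slash = "User-agent: *\nDisallow: /search/"
print(allowed(with_slash, SITE + "/search?q=test"))      # True: not blocked
print(allowed(with_slash, SITE + "/search/label/news")) # False: blocked

Note the last line of the first group: /searchable is caught too, because the rule matches anything whose path merely starts with /search.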
Search engines will not crawl any URL starting with /search on the server. That is what this line in the robots.txt file says.