Last Saturday my Forex Profiting Tips website was completely de-indexed! I did nothing blackhat (or "grey hat"). Everything about the site is totally legitimate, the content is of the highest quality, and it is all unique to the site. The site was just starting to gain traction in the search results, and then suddenly it was all gone! I submitted a reconsideration request a week ago but have seen no change yet. Does anyone know why this happened? Has this slaying of innocent sites happened to anyone else?
With new sites that have virtually zero traffic like yours, it sometimes happens that you drop out of the index and are back in again a few days or weeks later. G never de-indexes without reason. Always ask yourself "did I do the VERY best possible" in all aspects of publishing - for example, "did I validate my pages?" If you want a SE to visit you, that's EXACTLY like wanting a beautiful girl to date / visit you - you have to make some real and above-average effort.
I have a lot of sites, and I've seen them be given good rankings for a sort of 'trial period' of a few weeks, then lose those rankings (and gradually regain them for real this time), but they have never been de-indexed before. Usually the pages are still there, just lower down in the results, but these are actually gone from the index. Also, the content on this site is among the best of all my sites; it's the one I would least expect to be de-indexed. What results should the tool you linked to give?
I don't see any relation between getting indexed and W3C markup validation. If there is one, how do you explain this: http://validator.w3.org/check?uri=www.google.com Result: Failed validation, 62 errors. Nice finding; maybe you should start on this one.
I was having the same problem for the last few days. The problem is now resolved, but there are still a few pages not yet indexed, and the crawl rate from Google's side is also very, very low.
Thanks vagrant, I hadn't noticed that. I've edited it to a blank robots.txt file. If that doesn't fix it, I'll be back here asking the same question.
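For reference, in case anyone else hits this: instead of a completely blank file, the minimal "allow everything" robots.txt makes the intent explicit:

User-agent: *
Disallow:

An empty Disallow: value means nothing is disallowed, so all well-behaved crawlers may fetch everything - the same effect as a blank file.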
You can also delete the file instead of having a blank robots.txt http://www.google.com/support/webmasters/bin/answer.py?answer=40360&hl=en
It's probably better to remove the file, as on your site it also redirects to robots.txt/ with a trailing slash, and that might cause problems? vagrant.
It's all about validation. Of course there is precise and official online documentation for every aspect of web design - and usually also a validator for everything that matters. For robots.txt, see http://www.robotstxt.org/ (the validator at robotstxt.org is currently unavailable; a Google search will help you find another one, or else just stay within the precisely documented standards).

Since you have nothing disallowed, you have no need for that directive at all. "Disallow: <title></title>" is simply and totally WRONG syntax and never belongs in robots.txt! As a general rule, do only what you EXACTLY KNOW - if you lack precise knowledge, then use GOOGLE search and study first before making major, fatal errors.

In addition, if you have nothing disallowed but want to HELP the major modern SEs find new pages, then you may add a line like "Sitemap: http://www.example.com/sitemapindex.xml" to robots.txt (note the standard requires the full URL, not a relative path) - and of course KNOW / learn what to put into that sitemapindex.xml file. G search will help you, or have a look at my small "How to create your sitemapindex.xml for your robots.txt".
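For anyone wondering what goes into that file, here is a minimal sketch of a sitemapindex.xml per the sitemaps.org protocol - the example.com URLs and file names are placeholders to replace with your own:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <sitemap> entry per sitemap file on your site -->
  <sitemap>
    <loc>http://www.example.com/sitemap1.xml</loc>
    <lastmod>2008-11-01</lastmod>
  </sitemap>
</sitemapindex>

Each <loc> points to a regular sitemap file listing your page URLs; <lastmod> is optional.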