There are a lot of features now, and I found some of them really helpful. I think the new interface is easy to use now.
I just noticed that too. I see they have a preferred domain tool where you can set your site to show in the results with or without the www. I wonder if they will use that to get rid of duplicate content on sites where webmasters haven't set up redirects between the www and non-www versions?
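For anyone who hasn't set that up, the usual fix on Apache is a 301 redirect in .htaccess, roughly like this (example.com is just a placeholder, and this assumes mod_rewrite is enabled):

RewriteEngine On
# If the request came in without the www...
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
# ...send a permanent (301) redirect to the www version
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

The preferred domain tool presumably only changes what shows in Google's results, so a redirect like this is still the safer bet for avoiding duplicate content everywhere else.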
Not horrible, but why would they call URLs that I am specifically blocking with robots.txt a "crawl error"? Seems to me the real error is that these pages have been in my robots.txt for almost a year and some were still last accessed on August 2nd. Also, should I notify them that I found a 404 error while 'crawling' their 'crawl rate' page? And shouldn't that setting just be the same thing the rest of the internet calls Crawl-delay in robots.txt? I think Google is learning from Micro$oft and purposely screwing up the internet so that you have to do things both the de facto standard way AND their made-up way. (M$ IE... we can't support the standards or it will break websites that foolishly do things the way we've always told them to do it.)
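Just so we're all talking about the same thing, here's roughly what I mean in robots.txt (the path is just an example, not my actual file):

User-agent: *
# Pages Google keeps flagging as "crawl errors" even though they're blocked
Disallow: /blocked-directory/
# The de facto directive most other crawlers honor (seconds between requests)
Crawl-delay: 10

As far as I know, Googlebot ignores that Crawl-delay line and expects you to use their crawl rate page instead, which is exactly my point about having to do it their way on top of the standard way.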