I was about to copy a few small bits of Google's HTML and found that the markup on their search pages contains many unquoted tag attributes, heavy use of the font tag, and 238 errors when validated at http://validator.w3.org/ I wonder why, with billions in their budget, they aren't making their pages clean enough to follow the standards set by w3.org?
This shows that a site can be successful without validating. Is there any real downside to invalid code? In the past I have left the pound signs off of color codes, since they can cause trouble with some programming languages. Quotes missing from attributes are another thing I don't see a downside to, and quotes themselves can cause trouble if you're storing HTML in a database.
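For what it's worth, most parsers tolerate unquoted attribute values just fine, which is part of why pages like Google's render despite the validator complaints. Here's a quick sketch using Python's standard-library html.parser (the class and variable names are my own, purely for illustration):

```python
from html.parser import HTMLParser

# Minimal sketch: collect every attribute the parser sees.
class AttrCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.attrs = []

    def handle_starttag(self, tag, attrs):
        self.attrs.extend(attrs)

p = AttrCollector()
# Unquoted attribute values, like those on Google's search pages
p.feed('<font color=red size=2>hello</font>')
print(p.attrs)  # [('color', 'red'), ('size', '2')]
```

Where unquoted attributes do fall apart is values containing spaces: without quotes, something like face=Times New Roman reads as one short value plus stray attributes, which is exactly the kind of breakage that bites when round-tripping HTML through a database.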
We made two sites we administer W3C compliant about three months ago. The code was pretty clean to begin with, with no serious errors, just about 20 snippets to fix. Making the sites compliant made no difference in rankings. That could change, though; no one knows whether the next algo update will be the one that favors sites that validate...
No... as Matt Cutts put it, there are a great number of useful pages out there not created by W3C-compliant web developers (pages from universities, for example). So making indexing decisions based on code compliance makes no sense from their standpoint. :0)
My pages won't validate, but when I use Google Webmaster Tools to see what they think of my site, they report zero HTML errors! I'm glad they don't hold me to a higher standard than they apply to themselves!

best regards
wiz