A recent SEO Refugee thread brought up the subject of whether Google (and the SE bots in general) check CSS files. Testing that is easy: download your log files and search for all requests to the CSS file(s) coming from Googlebot. I usually get a few hits every time I do this (every couple of months or so), but on closer inspection, the hits have always been from a spoofed user agent string, where some clown browses the web pretending to be Googlebot. This is easy enough to accomplish, for example, using Firefox and the user agent switcher extension.

Source: http://ekstreme.com/thingsofsorts/seosem/googlebot-requested-a-css-file

An interesting story explaining how Google discovers hidden text embedded on some black-hat SEO pages (and what happens next). I really recommend reading it (not my site).

FYI,
Ruslan
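For anyone who wants to run the same check, here is a rough Node.js sketch of the log search described above. The log path and the assumption that the user-agent string appears verbatim in each line (as in the common combined log format) are mine; adjust for your server.

const fs = require('fs');

// Print every request for a .css file whose user-agent string
// claims to be Googlebot.
const log = fs.readFileSync('/var/log/apache2/access.log', 'utf8');
for (const line of log.split('\n')) {
  if (line.includes('.css') && line.includes('Googlebot')) {
    console.log(line);
  }
}

As the article points out, a matching string proves nothing on its own, since the user agent is trivially spoofed; looking at where the request actually came from is the "closer inspection" step.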
Googlebot should also incorporate a JavaScript interpreter ... What if I have:

CSS:

.hideSpam { background: #fff; color: #fff }

HTML:

<p id='x' name='x'>SPAMMY CONTENT HERE ....</p>
....
<script type='text/javascript'>
document.getElementById("x").setAttribute('class', 'hideSpam');
</script>

Anyway, G has made progress over time removing spam content from its index, but there's a lot of work left to be done.
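If Googlebot did run JS, the detection step would look something like this after rendering: compare each element's computed text color against its computed background color, which is exactly what the .hideSpam trick above equalizes. A minimal browser-side sketch; the function name is made up, and a real check would also walk ancestor backgrounds and catch display/visibility/off-screen tricks.

// Flag elements whose rendered text color matches their rendered
// background (the classic white-on-white hiding trick).
function findInvisibleText(root) {
  const suspects = [];
  for (const el of root.querySelectorAll('p, div, span')) {
    const style = window.getComputedStyle(el);
    if (el.textContent.trim() && style.color === style.backgroundColor) {
      suspects.push(el);
    }
  }
  return suspects;
}

// For the example above, both values resolve to "rgb(255, 255, 255)"
// once the script has applied the class, so the paragraph is flagged.
console.log(findInvisibleText(document));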
IMHO, that's a useless scenario. If Googlebot does not understand JS, it will still see this:

<p id='x' name='x'>SPAMMY CONTENT HERE ....</p>

And it has been known that sites showing different content to Googlebot and the Google media bot (AdSense) get flagged somehow... so beware.
I don't think my point came across. This is not server-side cloaking (like serving bots one version and real users another one); the above example only affects the visibility of a certain paragraph.
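For contrast, server-side cloaking, which the example above is not, would look roughly like this: the server sniffs the User-Agent header and hands bots an entirely different page. A throwaway Node.js sketch; the port and markup are invented.

const http = require('http');

// Serve one page to anything calling itself Googlebot and another
// page to everyone else, based purely on the User-Agent header.
http.createServer((req, res) => {
  const ua = req.headers['user-agent'] || '';
  res.writeHead(200, { 'Content-Type': 'text/html' });
  if (ua.includes('Googlebot')) {
    res.end('<p>Clean, keyword-rich page shown only to the bot.</p>');
  } else {
    res.end('<p>The page real visitors actually see.</p>');
  }
}).listen(8080);

In the CSS/JS example, every client receives exactly the same HTML; only what is visible after rendering differs.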