So, from a code and content semantics perspective, what do you do with all of this 'hidden text' if the CSS file is not read, so that it appears in the browser window? And at the same time as creating hidden content, do you also devise methods to remove it in the future if the site is ported to a different device or platform?
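Just to make the problem concrete, this is the sort of thing I mean (the class name is only an example): the text is hidden by an external stylesheet, so if that stylesheet is never fetched, the text renders in plain view.

    /* in the external stylesheet -- if this file never loads,
       anything tagged with the class below is fully visible */
    .hidden-keywords { display: none; }

    <!-- in the page itself -->
    <div class="hidden-keywords">keyword keyword keyword</div>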
On top of that, they appear to be able to distinguish text that is hidden as part of a menu system (and then made visible via JavaScript) from text that is hidden to game the engines. How they do it, I really haven't a clue. As far as I can tell, they're definitely not hitting my external style sheets.
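The legitimate case looks roughly like this (the names are illustrative, not from a real site): a submenu that starts out hidden in CSS and is switched on by script, which from the markup alone is hard to tell apart from the spam version above.

    /* submenu starts hidden by default */
    #submenu { display: none; }

    // JavaScript reveals it when the visitor opens the menu
    function showSubmenu() {
        document.getElementById('submenu').style.display = 'block';
    }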
OK, this one was really easy to answer... the answer is/was no. I just grepped through one year of logs for a bunch of sites, and there was not one request from Google to read the CSS.
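If anyone wants to repeat this on their own logs, what I ran was along these lines (the log path and format are whatever your server uses; mine is just an example):

    # find any Googlebot request for a stylesheet in the access logs
    grep 'Googlebot' /var/log/apache2/access.log* | grep '\.css'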
Google does read CSS files and can ban websites because of them. Try it yourself: point the background-image of the body element in your CSS file at a picture in a folder you have disallowed in your robots.txt. Within 1-2 days, a Google search for "site:yoursite.com" will suddenly return only the URL of your homepage and nothing more. (At least this is how it happened to me; my page was indexed with all 18 pages before I changed the CSS file.)

I had just been testing how the site would look with a background image instead of a color, and did not think about the fact that the picture I used was in a folder disallowed by robots.txt. A few days later my website disappeared from Google. I deleted the background-image line from my CSS file and went back to a "real" background color, and my website was back in Google within a few days.

Since I was not sure that this was really the cause, I tried it again. That was 2 days ago, and now my website is gone from Google again; it shows only one line, my homepage URL. Now I am just waiting for my page to come back. (This website has existed for 6 months, PR 0.)
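For anyone who wants to reproduce this, the setup is roughly the following; the folder and file names here are placeholders, not my actual paths:

    # robots.txt -- the image folder is off-limits to crawlers
    User-agent: *
    Disallow: /private/

    /* style.css -- the body background points into the disallowed folder */
    body {
        background-image: url(/private/bg.jpg);
    }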
Exactly, there are so many pros here I feel overwhelmed, heh. By the time I know the answer there are, say, 20 posts with what I was thinking and then some. Well, take their word for it, lol. Peace.
I would guess that Mozilla/5.0 (compatible; Googlebot/2.1; ...) can see the resulting page but the plain Googlebot/2.1 cannot. Plus, they probably have a robot that occasionally looks at pages while identifying itself only as a normal Mozilla/5.0 user, to help them find cloaked pages and the like.
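For reference, the two variants generally show up in access logs looking something like this, so you can count them separately (access.log is a placeholder for whatever your log file is called):

    # the Mozilla-style Googlebot user-agent
    grep -c 'Mozilla/5.0 (compatible; Googlebot/2.1' access.log

    # the plain Googlebot/2.1 user-agent, excluding the Mozilla-style lines
    grep 'Googlebot/2.1' access.log | grep -vc 'Mozilla/5.0'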
IE and Firefox can both read and interpret CSS, but the multi-billion-dollar Google can't? I find that extremely hard to believe.