Most of us know what search engine cloaking is: displaying different content to a search bot than to the human visitor. I was wondering what people thought about (what I've called) reverse cloaking. I call it that because it's not an attempt to game the search engines in any way. In fact, it's an attempt to show certain content only to human users, so as to completely avoid the problem of passing link juice. No content addition and no change in content, just removing a block of content from being indexed. Ignore the fact that you could use nofollow - all I'm asking is, do you see this as being in any way unethical? And what do you think Google would make of it, considering it's only the removal of a certain part of the page (no content addition or alteration)? EDIT: I was thinking of it as an alternative to javascript (which can contain content but isn't indexed), without having to actually resort to using javascript. There's a rough sketch of what I mean below.
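To make it concrete, this is roughly the sort of thing I have in mind. It's only a sketch - the Express setup, the naive bot check, and the "partner-links" block are all made up for illustration - but it shows the idea: the page is identical for bots and humans except that one block is simply left out of what the spider gets to index.

```
// Rough sketch only - an Express-style server with a made-up page
// and a naive user-agent check, purely to illustrate the idea.
import express from "express";

const app = express();

// Very rough bot detection. Real bot lists are longer and
// user-agents can be spoofed, so treat this as illustrative only.
function looksLikeSearchBot(userAgent: string): boolean {
  return /googlebot|bingbot|slurp/i.test(userAgent);
}

// Hypothetical content block we only want human visitors to see,
// e.g. outbound links we don't want passing link juice.
const humanOnlyBlock = `
  <div id="partner-links">
    <a href="https://example.com/partner">Partner site</a>
  </div>`;

app.get("/", (req, res) => {
  const isBot = looksLikeSearchBot(req.get("user-agent") ?? "");

  // Same page either way; the only difference is that the block
  // is omitted entirely from what the spider sees.
  res.send(`
    <html>
      <body>
        <h1>Welcome</h1>
        <p>The main page content, identical for bots and humans.</p>
        ${isBot ? "" : humanOnlyBlock}
      </body>
    </html>`);
});

app.listen(3000);
```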
Now that's a very interesting idea. I can't imagine that the SEs would have an issue with it; after all, you're not changing the content for the spider. Having said that, you could take your modifications to the nth degree and have a very different site for human consumption, so the argument would be exactly how much change would be considered acceptable. For the life of me I can't see why Google doesn't just send, say, one in every six spiders disguised as IE running on XP so that nobody noticed, and then flag any website that serves up differences for investigation. Hmmm. I think it may be worth giving it a try, fella....
I think you hit right upon my concern there. As with most techniques, a lot of it comes down to implementation - good or bad motives, etc. I was thinking that as long as it's limited purely to the removal of a content block, it's not really any different from using javascript, from a search engine's point of view.
I think as long as your intent is honest, it should be fine. For example, consider Experts Exchange. Results from their website appear in many Google results for programming/technical searches. Yet when you click on the link, you are taken to a page which says "Hey, you gotta pay $$ and sign in before you can see it". But if you go back to the Google SERP and click on the Cached version, you can see what it showed Google. While I consider this cloaking, apparently Google does not, probably because it's a paid service. So I think you would be fine as long as your intent is clear and honest. As far as Google reading javascript - to the best of my knowledge, I don't think it does.
Hmmm, after looking at it, there is some evidence to suggest that Google does crawl some javascript links. Agreed that the intent has to be honest; I just wonder how Google would work out what the intention was.
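For comparison, the javascript route I mentioned in the original post would look something like this. Again just a sketch - the /partner-links.html URL and the container id are hypothetical - but it shows why I'd rather avoid it: the block is fetched and injected after page load, so it usually stays out of the index, yet there's no guarantee Google won't eventually execute it.

```
// Sketch of the javascript alternative: the block isn't in the
// HTML at all; it's fetched and injected after the page loads.
// "/partner-links.html" and the container id are hypothetical.
document.addEventListener("DOMContentLoaded", async () => {
  const container = document.getElementById("partner-links");
  if (!container) return;

  const response = await fetch("/partner-links.html");
  if (response.ok) {
    // Inject the human-only block client-side, so a spider that
    // doesn't execute javascript never sees these links.
    container.innerHTML = await response.text();
  }
});
```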