I am starting to have doubts concerning privacy issues with Google. They will end up knowing about every last bit you exchange on the web. http://webaccelerator.google.com/support.html
There is no doubt they know pretty much everything already. More important is how they use that information.
The beginning of the "mark of the beast" perhaps? "Roll up your sleeve so we can check your Google ID"
I don't know. Those services already exist in a number of places. With the resources at Google's disposal, I expect they can do it far more efficiently.
Didn't they already find a bug in this? Something about cached sessions for forum logins? Did they ever fix that?
The mark of the beast on the forehead has to do with our thoughts. The mark of the beast on the right hand has to do with our deeds. But this doesn't exclude programs like this. It can be involved with the beast; it always depends on whose hands it is in.
I guess I overestimated the resources Google was willing to devote to this project: http://webaccelerator.google.com/index.html
Anyway, Google is really dominating the internet today. It can ruin some very good webmasters. What do you say?
I don't think it ruins good webmasters. I think it may cause some distress for webmasters trying to take shortcuts.
I simply meant that if you play by the rules, aim for the long run in your SEO practices instead of the quick buck, and pay attention to the Google guidelines, you will do fine. The "good webmasters" that are "ruined" are those who take shortcuts trying to get to the top in a hurry.
You do kinda have a point. There was a time when, if you had a white page with a black-background table and white text inside it, Google considered it hidden text, although it was perfectly indexable and visible. One could look at the 302 redirect problem in a similar manner; many of the webmasters using the scripts or running a click counter were innocent, as were those caught in a rank hijack. Both should have been ranked somehow, but Google should never have treated a 302 Temporary Redirect as a valid URL if it could not distinguish the link from the actual page. Search engines do dictate, in a way, the scripts we use and the designs we make. Who is to say that tomorrow a script writer won't find a new way of handling data, or a designer a new display technique, perfectly valid, only to have Google or others trip over it and down-rank the site?
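To illustrate what I mean by tripping over a valid technique, here is a rough sketch of how a naive hidden-text check could misfire on exactly that white-page/black-table case. This is purely my own guess at the logic, not Google's actual algorithm, and the element model and color names are made up for the example:

def is_flagged_hidden(page_background, element_chain):
    # Naive check, like the original fix seemed to be: compare the text
    # color of the innermost element against the page background only,
    # ignoring any container backgrounds in between.
    return element_chain[-1]["color"] == page_background

def is_actually_hidden(page_background, element_chain):
    # Saner check: compare the text color against the nearest enclosing
    # element that actually sets its own background.
    effective_bg = page_background
    for element in element_chain:
        if element.get("background"):
            effective_bg = element["background"]
    return element_chain[-1]["color"] == effective_bg

# White page, a table with a black background, white text inside it:
# perfectly visible, but the naive check flags it as hidden text.
chain = [{"background": "black"}, {"color": "white"}]
print(is_flagged_hidden("white", chain))   # True  -> false positive
print(is_actually_hidden("white", chain))  # False -> correctly visible

The naive version only ever compares against the page background, so the black table cell never enters into it; whatever workaround Google eventually found presumably had to account for the nearest enclosing background in roughly this way.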
Yes, there have been a few glitches along the way, but let's be honest here. Google doesn't make a habit of sniffing out "new ways of handling data" or "new techniques for display" and penalizing them just for the hell of it. Those new "scripts" and "designs" you're talking about are developed as attempts to fool search engines and gain an undeserved ranking advantage for that webmaster. If you're the developer, I can understand why you'd be upset when Google discovers that script and slaps you for it. The rest of the world, including not only the regular searcher but also the other webmasters who play by the rules, applauds Google.
Google doesn't necessarily sniff them out... and they are not always developed for unscrupulous means. Google just doesn't seem to understand them. Unfortunately, many techniques that are valid (CSS to position divs, for instance) can also be used unscrupulously. The table problem I wrote of was caused by unscrupulous webmasters using a table background the same color as the text within it; innocent sites using the technique I described got hurt by the original fix. Google eventually found a workaround.

Click tracking with 302s was around before the rank-hijacking bug. It is perfectly valid: 302s are only temporary, and should not be picked up by search engines for that very reason. A 302 server code is also the default for refreshes to another page. Google changed something - perhaps crawling more dynamic URLs to increase the size of the index, or, my latest theory, something related to the missing-index fix. (That's merely a guess from the dating at this point; it may be an early symptom.) Click tracking was never intended to conserve PageRank, hijack sites, or feed content to AdSense by scraping results out of search engines. But that is how it got used by the unscrupulous, and by those unaware of the bug, once Google picked the URLs up.

And Google's fixes have been a bigger mess. At first it would show the title and description for a site under the wrong URL, then it ranked the page the 302 link was on in place of the site it pointed to. At times, Google has dumped both the 302 URL and the site it points to, forcing those with valid reasons for tracking this way (advertising, the fact that other engines respect the 302 as temporary, even some script data-handling duties) to change methods. They stumbled over a technique older than Google itself. Sites caught in a hijack have to run around trying to get the link taken down, or in my case, making complaints to broadband cable access providers about a server running in several different locations around Canada. Changes were dictated to the hijacked site, still with no guarantee that the innocent would be released. And for every server disconnected or link taken down, there were others ready to take its place. It created a lot of work for sites that did nothing wrong and would have been just fine otherwise.

I applaud Google for going after unscrupulous webmasters, but a lot of innocents get hurt in the wake. Google could simply count the 302 URL as a valid inbound link rather than reward the site it is on. That is the simplest fix I have been able to imagine, and I cannot think of how it would hurt anyone.
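To make the mechanics concrete, here is a minimal sketch of the kind of 302 click counter I mean, using only Python's standard library. The hostname, port, and "url" parameter name are my own illustration, and a real tracker would log to a database rather than a dict; this is not any particular site's script.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

click_counts = {}  # stand-in for a real click log or database

class ClickTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [None])[0]
        if target is None:
            self.send_error(400, "missing url parameter")
            return
        # Record the click, then hand the visitor off.
        click_counts[target] = click_counts.get(target, 0) + 1
        # 302 Found is a *temporary* redirect: a crawler that honors it
        # should keep the target page indexed under its own URL, not
        # under this tracking URL.
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ClickTracker).serve_forever()

A link like http://localhost:8000/click?url=http://example.com/ counts the click and sends the visitor on to the target. The hijack bug amounted to the crawler treating the tracking URL, rather than the Location target, as the canonical address of the page; counting it as a plain inbound link to the target instead, as I suggested above, would sidestep that entirely.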