I have a serious problem with two of my websites, whose results in Google were hijacked by a proxy site called unipeak.net (same as unipeak.com), and as a result all of my traffic now goes to this site, which is simply stealing my content.

It all began when I saw that traffic to two of my sites dropped drastically, 8-10 times. I looked into the reason and found that I am losing Google traffic. I checked my rankings and was pretty surprised to see that this site, www.unipeak.net, is ranking right where my sites used to be. It is even ranking #1 for my sites' domain names!

http://www.google.bg/search?hl=bg&q=toursbg.info&meta=
http://www.google.bg/search?hl=bg&q=visionbulgaria.com&meta=

Same here: the copy of my page at unipeak.net is ranking where my site previously ranked:

http://www.google.bg/search?hl=bg&q=vision&btnG=Търсене&meta=lr=lang_bg
http://www.google.bg/search?hl=bg&q=хотели+в+българия&btnG=Търсене&meta=lr=

This proxy site has also parsed and rewritten all of the links on my pages. It's unbelievable; really nobody is coming to my sites now as a result of this hijacking. I investigated a lot and found no explanation of how this site has managed to make Googlebot believe that it is my sites. I found other webmasters complaining of the same theft, and I saw other sites whose pages are hijacked by the same site (unipeak), but no solution to the problem. I tried blocking the IPs of the crawlers of unipeak.net/.com, but it simply didn't work.

I am completely at a loss about how to get my sites back into your index. Is there anything I can do to prevent this from happening and/or get my rankings back? Is there anything you can do about this issue (which seems *very* serious to me if any site could be hijacked this way...)?
Well...you aren't helping things by linking to it. I would edit your post and remove the link to their site.
Get in touch with Google. Sounds like they are doing a redirect of some sort... does 302 redirect hijacking still work with Google?
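For reference, the mechanism itself is trivial to set up; in PHP a hijack page is essentially one line (the target URL below is just a placeholder):

<?php
// Sketch of the classic "302 hijack": the hijacker's URL answers with a
// 302 (temporary) redirect to the victim's page. Google historically
// could end up indexing the victim's content under the *redirecting* URL.
header('Location: http://www.example.com/victim-page.html', true, 302);
exit;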
I guess the question is: if Google can cache pages without permission, can a spammy search engine cache pages too? It seems the only solution is to try to get Google to de-index Unipeak's cached pages. Unipeak is getting away with a completely black-hat system for stealing content. Care to share Unipeak's bots here so others can block them?
Yes, if Google can cache, then every Joe Blow who builds his own search engine can cache. They can essentially hold your entire site on their servers and even outrank us for our own terms. All they have to do is set up redirects. You'll see more of it in the future...

The only way to stop it is to block their spiders, but first you have to find their IP in your logs. Even then, they can simply ignore it. There are also .htaccess blocks that can prevent it (a rough sketch is below), but again, if the spider comes from a new IP, we're out of luck.

To the OP: don't expect Google to do anything. Your best bet is to try to block the bots. Even then it's a bit like closing the barn door after the horse has left.
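Here is the kind of .htaccess block I mean, assuming Apache 2.x with mod_rewrite enabled; the IP address and the user-agent pattern are placeholders you'd replace with whatever actually shows up in your own logs:

# Block a known spider IP outright (Apache 2.2-style syntax).
Order Allow,Deny
Allow from all
Deny from 203.0.113.45

# Refuse anything whose user-agent mentions the proxy
# (only helps when the spider identifies itself honestly).
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} unipeak [NC]
RewriteRule .* - [F,L]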
Unipeak is (or claims to be) a proxy site. I asked it to fetch Google, and it brought it up at the following URL (broken apart here so it isn't clickable):

www.unipeak.net / gethtml.php ? _u_r_l_=aHR0cDovL3d3dy5nb29nbGUuY29tLw==

Obviously, the data from every site visited goes through their server, and they would appear more like a normal browser than like a bot. Whether they are maliciously republishing the data, or the problem is an unintended artifact of their operation, is not clear to me. I see a similar link to the OP's data in the Google search results. If that link were posted and lots of people used it, your server would deliver your data both to the users and to Google.
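Incidentally, that _u_r_l_ parameter is just base64, so you can check what the proxy is fetching with two lines of PHP:

<?php
// Decode the parameter from the link above.
echo base64_decode('aHR0cDovL3d3dy5nb29nbGUuY29tLw==');
// prints: http://www.google.com/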
Strange that I found this; it is a rather long article (not by me, as it was new to me too), but it answers all your questions. I included the author's name but no links. Hope this is within the guidelines! Ward

By Sophie White (c) 2007

There is a current and active way to knock a website out of Google's search engine results. It's simple and effective. This information is already in the public domain, and the more people who know about it, the more likelihood there is that Google will do something about it. This article will tell you how it works, how to get a website knocked out of the search engine rankings, but most importantly, how to defend your own website from having it happen to you.

To understand this exploit, you must first understand Google's duplicate content filter. It's simply described thus: Google doesn't want you to search for "blue widget" and have the top 10 results be copies of the same article on how great blue widgets are. They want to give you ONE copy of the Great Blue Widget article and 9 other different results, just on the off chance that you've already read that article and the other results are actually what you wanted.

To handle this, every time Google spiders and indexes a page, it checks to see whether it already has a page that is predominantly the same, a duplicate page if you will. Exactly how Google works this out, nobody knows, but it is going to be a combination of some or all of: page text length, page title, headings, keyword densities, checking for exactly copied sentence fragments, etc. As a result of this duplicate content filter, a whole industry has grown up around trying to get round the filter. Just search for "spin article".

Getting back to the story: Google indexes a page, and let's say it fails its duplicate content check. What does Google do? These days, it dumps that duplicate page in Google's Supplemental Index. What, you didn't know that Google has two indexes? Well they do: the main one, and a supplemental one. Two things are important here: Google will always return results from its main index if it can, and it will only go to the supplemental index if it doesn't get enough joy from the main one. What this means is that if your page is in the supplemental index, it's almost certain that you will never show up in the Search Engine Results Pages, unless there is next to no competition for the phrase that was searched for.

This all seems pretty reasonable to me, so what's the problem? Well, there's another little step I haven't mentioned yet. What happens if someone copies your page, let's say the homepage of your business website, and when Google indexes that copy, it correctly determines that it's a duplicate? Now Google knows about two pages that it knows are duplicates; it has to decide which to dump in the supplemental index and which to keep in the main one. That's pretty obvious, right? But how does Google know which is the original and which is the copy? They don't. Sure, they have some clever algorithms to work it out, but even if those are 99% accurate, that leaves a lot of problems for the 1% of times they get it wrong!

And this is the heart of the exploit: if someone copies, say, your website's homepage and manages to convince Google that *their* page is the original, your homepage will get tossed into the supplemental index, never to see the light of day in the Search Engine Results Pages again. In case I'm not being clear enough, that's bad!
But wait, it gets worse. It's fair to say that in the case of a person physically copying your page and hosting it, you can often get them to take it down through the use of copyright lawyers and cease-and-desist letters to ISPs and the like, with a quick "Reinclusion Request" to Google. But recently there's a new threat that's a whole lot harder to stop: the use of publicly accessible proxy websites. (If you don't know what a proxy is, it's basically a way of making the web run faster by caching content closer to your internet destination. In principle, they are generally a good thing.)

There are many such web proxies out there, and I won't list any here, but I will describe the process: they send out spiders (much like Google's) that spider your page and take your content, and then they host a copy of your website on their proxy site, nominally so that when their users request your page, they can serve up their local copy quickly rather than having to retrieve it off your server. The big issue is that Google can sometimes decide that the proxy copy of your web page is the original, and yours is not.

Worse again, there's some evidence that people are deliberately and maliciously using proxy servers to cache copies of web pages, then using normal (white and black hat) Search Engine Optimization (SEO) techniques to make those proxy pages rank in the search engines, increasing the likelihood that your legitimate page will be the one dumped by the search engines' duplicate content filters. Danger Will Robinson! Even worse still, some of the proxy spiders actively spoof their origins so that you don't realise it's a spider from a proxy: they pretend to be a Googlebot, for example, or from Yahoo. This is why the major search engines actively publish guidelines on how to identify and validate their own spiders.

Now for the big question: how can you defend against this? There are several possible solutions, depending on your web hosting technology and technical competence.

Option 1 - If you are running Apache and PHP on your server, you can set the webhost up to check for search engine spiders that purport to be from the main search engines, and using PHP and the .htaccess file, you can block proxies from other sources. However, this only works for proxies that are playing by the rules and identifying themselves correctly.

Option 2 - If you are using MS Windows and IIS on your server, or if you are on a shared hosting solution that doesn't give you the ability to do anything clever, it's an awful lot harder, and you should take the advice of a professional on how to defend yourself from this kind of attack.

Option 3 - This is currently the best solution available, and it applies if you are running a PHP or ASP based website: you set ALL pages' robots meta tags to noindex and nofollow, then you implement a PHP or ASP script on each page that checks for valid spiders from the major search engines and, if one is found, resets the robots meta tags to index and follow. The important distinction here is that it's easier to validate a real spider, and to discount a spider that's trying to spoof you, because the major search engines publish processes and procedures to do this, including IP lookups and the like.

So, stay aware, stay knowledgeable, and stay protected. And if you see that you've suddenly been dumped from the Search Engine Results Pages, now you might know why, how, and what to do about it.
About The Author: Sophie White is an Internet Marketing and Website Promotion Consultant at Intrinsic Marketing, an SEO and Pay-Per-Click firm dedicated to supplying better website ROI.
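A footnote from me (Ward) on the duplicate-content part: nobody outside Google knows their actual filter, but a textbook way to approximate "predominantly the same page" is shingling, i.e. comparing overlapping word sequences from the two pages. A toy PHP illustration follows; the 4-word shingle size and the 0.8 threshold are arbitrary assumptions of mine, not Google's numbers:

<?php
// Break text into a set of overlapping 4-word "shingles".
function shingles($text, $size = 4) {
    $words = preg_split('/\W+/', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
    $set = array();
    for ($i = 0; $i + $size <= count($words); $i++) {
        $set[implode(' ', array_slice($words, $i, $size))] = true;
    }
    return $set;
}

// Jaccard similarity: shared shingles divided by total distinct shingles.
function similarity($a, $b) {
    $sa = shingles($a);
    $sb = shingles($b);
    $shared = count(array_intersect_key($sa, $sb));
    $total  = count($sa) + count($sb) - $shared;
    return $total ? $shared / $total : 0;
}

$original = 'The great blue widget is the best widget money can buy today.';
$copy     = 'The great blue widget is the best widget money can buy today folks.';

if (similarity($original, $copy) > 0.8) {
    echo 'Near-duplicate pages: one of them gets demoted.';
}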
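And since the article stops short of code, here is a minimal sketch of its Option 3 in PHP, assuming Apache-style server variables. Every page defaults to noindex,nofollow, and only a visitor that passes the reverse-then-forward DNS check Google publishes for verifying Googlebot gets the indexable version (human browsers ignore the robots meta tag, so they are unaffected):

<?php
// Verify a claimed Googlebot the way Google documents it: the reverse
// DNS of the IP must end in googlebot.com or google.com, and the
// forward lookup of that host must point back at the same IP.
function is_real_googlebot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (stripos($ua, 'Googlebot') === false) {
        return false;                    // doesn't even claim to be Googlebot
    }
    $ip   = $_SERVER['REMOTE_ADDR'];
    $host = gethostbyaddr($ip);          // reverse DNS lookup
    if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
        return false;                    // spoofed user-agent
    }
    return gethostbyname($host) === $ip; // forward-confirm the hostname
}

// Default to staying out of the index; open up only for a verified
// spider. Extend with the equivalent checks for Yahoo, MSN, etc.
$robots = is_real_googlebot() ? 'index,follow' : 'noindex,nofollow';
echo '<meta name="robots" content="' . $robots . '">';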