I'm getting hit at the rate of about twice a minute on the exact same page, from a massive range of different IP addresses (I'm talking Poland to Brazil to China...), and all of the hits are direct (i.e. no referrer). An example is shown below (obviously I see something similar for each image hosted on the page as well):

```
Host: 71.57.47.162 * /
Http Code: 200
Date: Jan 08 03:22:48
Http Version: HTTP/1.1
Size in Bytes: 10216
Referer: -
Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
```

This is happening on two of my sites, and whilst it's way short of having a serious DOS-type effect, it is using my bandwidth and, more to the point, p*ssing me off! Does anyone have a sure-fire way of dealing with this sort of thing? The problem as I see it is that I don't want to stop legitimate visitors accessing my site via bookmarks/links in emails etc.

So far my not-very-elegant solution has been to do this:

```php
<?php
$ua       = $_SERVER["HTTP_USER_AGENT"];
$referrer = isset($_SERVER["HTTP_REFERER"]) ? $_SERVER["HTTP_REFERER"] : "";
// Escape PHP_SELF before echoing it back, otherwise it's an XSS hole
$location = htmlspecialchars($_SERVER["PHP_SELF"]);

if ($ua == "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)" && empty($referrer)) {
    echo "Sorry, you are seeing this page because we've been targeted by spammers. "
       . "If you're human, click <a href=\"" . $location . "\">here</a> for the site.";
    exit;
}
?>
```

The last site this happened to me on, the problem went away a couple of weeks after putting the above code in place. But I don't really want to have to put a 'human check' in place... Ideas anyone?
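One lighter-weight variation on the same idea (a sketch, not a sure-fire fix): pull the check into a small function and answer matching requests with a 403 instead of a click-through page, so humans never see a "prove you're human" step. The function name `is_bot_like` is my own illustrative choice, not anything standard:

```php
<?php
// Hypothetical helper: a request counts as bot-like if it sends the exact
// MSIE 6.0 user-agent string from the logs above AND no referrer at all.
function is_bot_like($userAgent, $referrer) {
    $botUA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)";
    return $userAgent === $botUA && $referrer === "";
}

// Usage sketch at the top of a page:
// if (is_bot_like($_SERVER["HTTP_USER_AGENT"] ?? "", $_SERVER["HTTP_REFERER"] ?? "")) {
//     header("HTTP/1.1 403 Forbidden"); // cheap refusal, no HTML body needed
//     exit;
// }
```

The obvious weakness is the same as the original: the moment the bots rotate user-agents, an exact string match stops working.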
I've had the same experience; see http://forums.digitalpoint.com/showthread.php?t=215225. I use the user-agent method in part to deny access, but since multiple agents are used, ... I now collect all the IP data to set up rules for iptables. In my case almost all of the fake traffic comes from China, from networks usually empty of any genuine human traffic, so blocking an entire class C, B or even A subnet is the most convenient solution for me.