Rather than ask the direct question, I'm going to phrase this slightly differently: if you were Google, how would you detect 3-way link exchanges? Examples of bad answers: "stop them counting", "remove their PR".
A couple of the most obvious ways to detect these kinds of links are to check whether the sites' servers share an IP address, and to compare the registration (WHOIS) information of the domain names.
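As a rough illustration of that first check (my own sketch, not anything Google has documented; the domain names below are placeholders), you could resolve each linking domain and flag groups that end up on the same server IP. A fuller version would also compare WHOIS registrant details, name servers and registration dates.

```python
import socket
from collections import defaultdict

def group_by_ip(domains):
    """Resolve each domain and group together domains that share a server IP."""
    by_ip = defaultdict(list)
    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)   # simple A-record lookup
        except socket.gaierror:
            continue                            # skip domains that do not resolve
        by_ip[ip].append(domain)
    return by_ip

# Placeholder domains that all link to (or are linked from) the same site.
linking_domains = ["example-blog-a.com", "example-blog-b.com", "example-blog-c.com"]
for ip, shared in group_by_ip(linking_domains).items():
    if len(shared) > 1:
        print(f"Suspicious: {shared} all resolve to {ip}")
```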
True, but Google already does this to detect one-way links that are owned by the same person. Traditionally, 3-way links are on 3 separate servers with separate owners.
3-way links are hard to detect sometimes; even plain link exchanges cannot always be detected. Some link exchanges do still count, but too many get discounted. If only a handful of 3-way link exchanges are done, it is nearly impossible for Google or anyone else to detect them. I don't think anyone can do better than what Google is already doing.
Well, Google knows about this, as it is discussed all over the internet in public forums, and it is already dealing with it. If it were me, I would reduce the TrustRank of the offending website.
True, it's virtually impossible to detect three-way link exchanges. Imagine I buy a PR3+ domain, get about 10 three-way links from it, and then sell a few more links to cover the cost of maintaining the domain. Do that 10 times and you get 100 links, plus money to pay for the domains, and if they are hosted on different IPs and so on, no one will be able to detect it. You do need plenty of links from other sources as well for it to work, though. As for the OP: what can you do if you cannot detect them?
If I were Google, I would use a combination of probabilities, eigenvectors and norms to detect them (like the current Google does). How would I deal with them? The same way the current Google does. Cheers, James
Can you please explain this: how do these tools work, or what are they? Otherwise this doesn't really help, does it?
Thanks James. Can you explain the theory behind this? Or are you trying to point out that it is complicated?
They are pretty hard to detect if done wisely and for good reasons. If I were Google, I'd just stop thinking about it.
@GerBot One of the general concepts is that similar traits are analysed across all the sites that link to the same destination, and those traits are converted into a probability that the links were created to artificially boost the reciprocating site's PageRank. These rules are very refined, but there is still a small chance they could mistake a genuine, organic set of links. They easily detect things like microsites being used purely for linking, for instance.
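A toy version of that idea (purely illustrative; the trait names are hypothetical and real systems would use far richer signals) might score a group of sites linking to the same destination by how many traits they share, and treat that fraction as a rough probability that the links are coordinated:

```python
from itertools import combinations

# Hypothetical per-site traits used for the illustration.
TRAITS = ("ip", "registrar", "cms_template", "anchor_text")

def suspicion_score(linking_sites):
    """Crude score in [0, 1]: the fraction of trait values shared between
    pairs of sites that all link to the same destination page."""
    if len(linking_sites) < 2:
        return 0.0
    shared = total = 0
    for a, b in combinations(linking_sites, 2):
        for trait in TRAITS:
            total += 1
            if a.get(trait) and a.get(trait) == b.get(trait):
                shared += 1
    return shared / total

# Three sites linking to the same destination, built from the same template
# and using the same anchor text -- the score climbs accordingly.
sites = [
    {"ip": "10.0.0.1", "registrar": "reg-x", "cms_template": "t1", "anchor_text": "best widgets"},
    {"ip": "10.0.0.2", "registrar": "reg-x", "cms_template": "t1", "anchor_text": "best widgets"},
    {"ip": "10.0.0.3", "registrar": "reg-y", "cms_template": "t1", "anchor_text": "best widgets"},
]
print(round(suspicion_score(sites), 2))
```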
Yes, I was trying to point out that it's complicated, and that it's complicated by design. rosszero pretty much nailed it. Cheers, James
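For anyone still wondering what the "eigenvectors" remark refers to: PageRank is essentially the principal eigenvector of the damped link-graph transition matrix, usually computed by power iteration. Below is a toy version with a deliberate a -> b -> c -> a exchange baked into the demo graph; it illustrates textbook PageRank, not whatever detection machinery Google actually runs.

```python
# Toy PageRank by power iteration: the ranking vector is the principal
# eigenvector of the link graph's damped transition matrix. Links that
# exist only to inflate this vector are what this thread is about.
def pagerank(graph, damping=0.85, iters=50):
    """graph: dict mapping page -> list of pages it links to."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outlinks in graph.items():
            if not outlinks:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outlinks:
                    new[q] += damping * rank[p] / len(outlinks)
        rank = new
    return rank

# A deliberate 3-way exchange (a -> b -> c -> a) plus one honest page d.
demo = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(pagerank(demo))
```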