I recently came across, by accident, a method a lot of webmasters are using to create one-way backlinks. I wanted to share it in the DP Search Engine Optimization section since it's directly related to the SEO value of backlinks.

When you exchange links with someone, you always do the obvious checks: the nofollow attribute, the robots meta tag, and iframe injection. Those methods are used to "block" or withhold value from the outbound links on a page, which would turn your link to them into a one-way link and make it much more valuable to them.

Lately I've started to see a lot of webmasters using the robots.txt file to block their link pages when doing link exchanges. Yes, a web page can still hold a PageRank value if it's only blocked by robots.txt: Google won't crawl or index the page, but inbound link value is still passed to it (I noticed this on my own web site; see the example at the bottom).

So, another thing to check when doing link exchanges is that the page(s) your link appears on aren't blocked via robots.txt, and to re-check it when you verify that your existing exchanges are still valid. I have a feeling most of you will run out, check this, and find at least one webmaster pulling off this manipulative trick.

Example of PageRank passed to a blocked document:

http://www.civicseo.com/ask.html

The above URL shows a PR0 in the Google toolbar. What you should notice is that it's not N/A but PR0, meaning it is receiving link juice from internal/external links. This web page is blocked via my robots.txt file.
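If you want to automate that check, here is a minimal sketch in Python (standard library only, using urllib.robotparser). The partner site and links-page URLs are placeholders you'd swap for the real ones; Googlebot is the user agent to test, since that's the crawler whose access matters for link juice.

Code:
# Minimal sketch (Python 3, standard library only) of the robots.txt check.
# The partner URLs below are placeholders; swap in the real links page and
# robots.txt of the site you're exchanging with.
from urllib import robotparser

PARTNER_LINKS_PAGE = "http://www.example.com/links.html"  # page that carries your link
ROBOTS_URL = "http://www.example.com/robots.txt"

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # fetch and parse the partner's robots.txt

# If Googlebot is disallowed from the links page, your "reciprocal" link is
# effectively one-way: the page can still hold PageRank, but Google never
# crawls it, so it passes nothing on to you.
if rp.can_fetch("Googlebot", PARTNER_LINKS_PAGE):
    print("OK: the links page is crawlable")
else:
    print("Warning: the links page is blocked by robots.txt")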
Ditto, that's why I'm reminding people; there are very few honest people out there. No problem, put it to good use, not bad!
Good article. So when we exchange a link, the PR should be at least 1; otherwise, we don't exchange. I have added your article to my site, thanks.
You missed the whole point: PageRank is not a way to tell if this method is being used. I would assume that if I looked around hard enough I could find a PR6 or PR7 page that's blocked by robots.txt. You want to manually check the robots.txt file.
It has nothing to do with PageRank. Nofollow links are used to make sure that search engine spiders don't follow those links and thus preserve your link juice. A similar effect can be achieved with robots.txt: disallowing the page works like a page-wide nofollow. So when you check the individual links on the page for a nofollow attribute, you won't find any, but in reality the robots.txt file is telling the crawlers to stay off the page and its outbound links.

There is one catch with this approach: none of the links on the blocked page will be followed by the crawler, not even the link back to your own site. So if you are doing a link exchange from an index page or getting a blog post link, in most cases you won't have to worry about this, since those pages are rarely blocked.
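To make the "you won't find a nofollow on the link itself" point concrete, here is a rough Python sketch of a fuller per-page audit. MY_DOMAIN, PAGE_HTML, and the class name are placeholders (in practice you'd fetch the partner page yourself); it checks both the rel attribute on the anchor pointing back at your site and a page-wide robots meta nofollow, the meta-tag flavour of the same trick.

Code:
# Rough sketch (Python 3) of a fuller per-page audit. MY_DOMAIN and PAGE_HTML
# are placeholders; in practice you'd fetch the partner page yourself. It checks
# the rel attribute on the link back to your site AND a page-wide robots meta
# nofollow, since the anchor itself can look perfectly clean.
from html.parser import HTMLParser

MY_DOMAIN = "example.com"
PAGE_HTML = "<html>... fetched partner page source ...</html>"

class LinkAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.page_nofollow = False     # page-level robots meta with "nofollow"
        self.my_link_nofollow = None   # rel="nofollow" on the anchor back to us (None = link not found)

    def handle_starttag(self, tag, attrs):
        attrs = {k: (v or "") for k, v in attrs}
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "nofollow" in attrs.get("content", "").lower():
                self.page_nofollow = True
        if tag == "a" and MY_DOMAIN in attrs.get("href", ""):
            self.my_link_nofollow = "nofollow" in attrs.get("rel", "").lower()

audit = LinkAudit()
audit.feed(PAGE_HTML)
print("Link back to us found / rel=nofollow:", audit.my_link_nofollow)
print("Whole page is meta robots nofollow:  ", audit.page_nofollow)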
Excellent post. It makes a lot of sense, and yes, there are some people who do it. Should I name them? lol
Also add to that the nofollow robots meta tag. View the page's source code and look in the meta tags for something like <meta name="robots" content="nofollow">. This will not pass link juice to any link on the page, yet even the Firefox SEOQuake nofollow highlighter will not put a line through the links, making them appear to be good dofollow links. An example of this is the PR8 W3C supporters page; at first it looks like an excellent backlink, but it's actually useless: http://www.w3.org/Consortium/sup

The other one I have seen a lot is people cloaking their link page: showing the links to visitors, but serving a page with no links to Google. If they allow the page to be cached, it's easy enough to spot when you view the page cache and don't see any links, but most people are more cunning than that and add a noarchive tag so you can't view what Google sees.

If you are using Firefox, I advise a plugin called User Agent Switcher. You can put any user agent in it, such as Googlebot, and view the page exactly as the crawler sees it and spot the cloaking. I won't give any examples because I don't want to "out" people here, but there's one site in particular I found which has link exchanges with over 500 websites. Googlebot sees zero outbound links and 500 one-way links pointing to them; needless to say, the site is in the top 5 for all its keywords.
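For anyone who'd rather script the user-agent comparison than click through a plugin, here's a rough Python sketch of the same idea: fetch the links page once as a normal browser and once pretending to be Googlebot, then diff the outbound links each version contains. The URL is a placeholder.

Code:
# Rough sketch (Python 3, standard library only) of the user-agent comparison:
# fetch the links page once as a normal browser and once as Googlebot, then
# diff the outbound links each version contains. The URL is a placeholder.
import re
import urllib.request

LINKS_PAGE = "http://www.example.com/links.html"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch_links(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")
    # crude href extraction; good enough for a quick side-by-side comparison
    return set(re.findall(r'href="(https?://[^"]+)"', html))

browser_links = fetch_links(LINKS_PAGE, BROWSER_UA)
googlebot_links = fetch_links(LINKS_PAGE, GOOGLEBOT_UA)

hidden = browser_links - googlebot_links
if hidden:
    print("Links shown to visitors but hidden from the Googlebot user agent:")
    for link in sorted(hidden):
        print("  ", link)
else:
    print("Both versions show the same outbound links.")

Bear in mind the sneakier sites cloak by Googlebot's IP range rather than by user agent, so a clean result here doesn't prove anything, but it catches the lazy cases.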