Avoiding duplicate content

Discussion in 'Search Engine Optimization' started by Liminal, Aug 9, 2005.

  1. #1
    Hello all,

    I am involved in an SEO development project for two ecommerce sites. The two sites are hosted on the same server and have different URLs, but they sell the same products (the product detail pages contain the same text). One site serves US customers and the other serves Canadian customers. There is a link to the Canadian version in the header of every page on the more heavily trafficked US site.

    In order to avoid a duplicate content penalty from Google (and possibly others), the US site is optimized for search engines (meta tags, title, SE-friendly URLs) while the Canadian site is not. Moreover, the Canadian site does not let search engines spider it, by including the following directives in its robots.txt:

    User-agent: *
    Disallow: /

    So the only way for someone to find the Canadian site is via a direct static backlink or via the link in the header of the US site (see above).

    Is this a proper way of handling things of that sort?

    Thanks a lot for any advice
    James
     
    Liminal, Aug 9, 2005 IP
  2. #2
    I've heard people complain about Google ignoring the robots.txt file, but I can't say for sure from first-hand experience.
     
    sypher, Aug 9, 2005 IP
  3. #3
    Well, that SHOULD keep spiders away from the website, but as Sypher said, it's not always the case.

    You might want to add a robots meta tag (NOINDEX, NOFOLLOW) to your Canadian website too ...
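
    Just as a rough example, it would go in the <head> of every page template on the Canadian site:

    <meta name="robots" content="NOINDEX, NOFOLLOW">

    Keep in mind that if robots.txt already blocks the spiders completely, they may never fetch the pages at all and so never see the tag, so this is more of a belt-and-braces measure.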
     
    Philarmon, Aug 13, 2005 IP