Block Pages and URLs Using a Robots.txt File

Discussion in 'Search Engine Optimization' started by loosediamonds, Aug 26, 2011.

  1. #1
    Can anyone tell me how I can block my website's URLs and pages with the help of a robots.txt file? I need this because of duplicate URLs on my website.
    If anyone knows, please share your knowledge so I can resolve my problem.

    Thanks!
     
    loosediamonds, Aug 26, 2011 IP
  2. stevenyoung

    stevenyoung Peon

    Messages:
    341
    Likes Received:
    4
    Best Answers:
    0
    Trophy Points:
    0
    #2
    Block or remove pages using a robots.txt file
    A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access the pages of a site, they check whether a robots.txt file exists that prevents them from accessing certain pages. (All reputable robots will respect the directives in a robots.txt file, although some may interpret them differently. However, a robots.txt file is not enforceable, and some spammers and other troublemakers may ignore it. For this reason, we recommend password-protecting confidential information.)
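    To make that concrete, here is a minimal robots.txt sketch; the directory names /duplicate/ and /print/ are placeholders assumed for illustration, not paths from the original post:

    ```
    # Applies to all crawlers ("*" matches every user agent)
    User-agent: *
    # Block two hypothetical directories that hold duplicate content
    Disallow: /duplicate/
    Disallow: /print/
    ```

    Note that the file must sit at the root of the domain (e.g. http://example.com/robots.txt), or crawlers will not find it.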
     
    stevenyoung, Aug 26, 2011 IP
  3. danielwood

    danielwood Member

    Messages:
    115
    Likes Received:
    1
    Best Answers:
    0
    Trophy Points:
    28
    #3
    If you have duplicate pages on your website and want to block them through robots.txt, add the following lines to your robots.txt file (replace /filepath with the path you actually want to block):

    User-agent: *
    Disallow: /filepath
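    A quick way to sanity-check a rule like the one above is Python's built-in urllib.robotparser. This is just a sketch: the path /duplicate-pages/ and the domain example.com are placeholders for illustration, not values from the thread.

    ```python
    from urllib import robotparser

    # Hypothetical robots.txt content blocking one duplicate-content directory
    rules = """\
    User-agent: *
    Disallow: /duplicate-pages/
    """

    # Parse the rules directly from a list of lines (no HTTP fetch needed)
    rp = robotparser.RobotFileParser()
    rp.parse(rules.splitlines())

    # URLs under the disallowed path are blocked; everything else is allowed
    print(rp.can_fetch("*", "http://example.com/duplicate-pages/item1.html"))  # False
    print(rp.can_fetch("*", "http://example.com/unique-page.html"))            # True
    ```

    This lets you confirm a directive does what you intend before uploading the file to your server.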
     
    danielwood, Aug 26, 2011 IP
  4. linda009

    linda009 Peon

    Messages:
    45
    Likes Received:
    1
    Best Answers:
    0
    Trophy Points:
    0
    #4
    Great!
    My site once landed in the spam box, so I used the same code in my robots.txt file, and it worked for my site.
     
    linda009, Aug 26, 2011 IP