Hello, what does this mean in robots.txt?

User-agent: *
Disallow: /
Allow: /$

Does it allow only the index page of the domain? Also, if I add:

Allow: /category/

will it allow http://www.domain.com/category/catname/ ? Thank you.
I am a bit confused by the first part: you already disallow all pages, and then add a rule with the $ expression, so I don't know whether it will allow only the index page. The second part is clearer: Allow: /category/ permits the category pages, but that rule by itself doesn't disallow the other pages; they are already blocked by the Disallow: / line.
User-agent: * applies the rules to all search engine crawlers.
Disallow: / blocks the whole site from crawling.
Allow: /$ uses $ to anchor the match to the end of the URL, so it allows only the root URL (the index page); note that the $ wildcard is honoured by major crawlers such as Googlebot and Bingbot, but not necessarily by every robot.
Allow: /category/ allows everything under /category/ to be crawled.
Check all the details here: http://www.robotstxt.org/
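For example, here is a minimal sketch of the first two rules, assuming a crawler that supports Allow and the $ wildcard (such as Googlebot):

User-agent: *
Disallow: /    # block every URL by default
Allow: /$      # except the URL that ends right after the host, i.e. http://www.domain.com/

With these two rules, http://www.domain.com/ is crawlable, while URLs such as http://www.domain.com/page.html and http://www.domain.com/category/ are blocked.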
Allow: /category/ will definitely allow all search engines to crawl http://www.domain.com/category/catname/, because Allow rules match by prefix, so the rule covers every URL that starts with /category/.
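Putting all three rules together, a sketch of the full file would look like this (the catname path is just the illustration from the question):

User-agent: *
Disallow: /            # block everything by default
Allow: /$              # allow the index page only
Allow: /category/      # allow /category/ and everything beneath it

Under the longest-match rule used by Googlebot, http://www.domain.com/category/catname/ matches the longer Allow: /category/ rule rather than the shorter Disallow: /, so it stays crawlable.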
a) User-agent: * means the rules apply to all robots or search engine crawlers.
b) Disallow: / means the whole website is not allowed to be crawled by crawlers.
c) Allow: /$ uses $, a wildcard character that matches the end of the URL.
To block URLs that end with .asp, you could use the following entry:

User-agent: Googlebot
Disallow: /*.asp$

Hope it will help you and enhance your knowledge. Have a nice time.
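As a further illustration of how the $ anchor behaves (the page names here are made up for the example):

User-agent: Googlebot
Disallow: /*.asp$    # blocks /page.asp and /dir/list.asp
                     # does NOT block /page.aspx or /page.asp?id=1, since those URLs do not end in .asp

Without the trailing $, Disallow: /*.asp would also block the .aspx pages and the .asp URLs with query strings, because the rule would then match any URL containing .asp.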