The robots.txt file is used to tell search engine crawlers which pages they should NOT crawl. For example, if you don't want any of your pages crawled, your robots.txt would look like this:

User-agent: *
Disallow: /

Alternatively, if you want all of your pages crawled, your robots.txt should look like this:

User-agent: *
Disallow:

For more information about robots.txt, read this guide: http://www.webgnomes.org/blog/robots-txt-file-guide-that-wont-put-you-to-sleep/
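To see how a crawler actually reads these rules, here is a minimal sketch using Python's standard urllib.robotparser module; the user-agent name "MyCrawler" and the example URL are made up for illustration:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()

# The "block everything" example from above.
rp.parse([
    "User-agent: *",
    "Disallow: /",
])
print(rp.can_fetch("MyCrawler", "https://example.com/page.html"))  # False

# The "allow everything" example (empty Disallow).
rp.parse([
    "User-agent: *",
    "Disallow:",
])
print(rp.can_fetch("MyCrawler", "https://example.com/page.html"))  # True

In a real crawler you would point set_url() at your site's /robots.txt and call read() instead of parsing hard-coded lines.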
Robots.txt is used to tell crawlers which URLs on your site they may crawl and which they must skip, so you can point them at the pages that matter and keep them away from the ones that don't. Note, however, that robots.txt works at the URL/path level; deciding which individual links pass link equity ("link juice") is handled by the rel="nofollow" attribute or a robots meta tag, not by robots.txt.
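To make that distinction concrete, here is a minimal sketch of the link-level side using Python's standard html.parser; the sample HTML is invented for illustration. A crawler that honors nofollow skips those links while parsing the page, independently of any robots.txt rules:

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects hrefs, skipping links marked rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # nofollow is a per-link hint in the HTML itself;
        # robots.txt plays no part in this decision.
        if "nofollow" in (attrs.get("rel") or "").lower():
            return
        if "href" in attrs:
            self.links.append(attrs["href"])

parser = LinkExtractor()
parser.feed('<a href="/keep">A</a> <a href="/skip" rel="nofollow">B</a>')
print(parser.links)  # ['/keep']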
It's something like a pass, or a set of instructions for search engine bots: a list of do's and don'ts in plain .txt format.