A robots.txt file is used to restrict search engine bots from crawling certain pages. If you want to hide something from search engines, it is a simple first step (note that it only asks well-behaved bots to stay away; it is not a security measure). You can create a robots.txt file with a tool such as Google Webmaster Tools: specify the pages you want to block from crawling and download the file.
You can also use the robots.txt generator at http://tools.seobook.com/robots-txt/ to create a robots.txt file.
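A generated file typically looks something like this (the directory and page names below are just illustrative examples, not paths from any real site):

```
User-agent: *
Disallow: /admin/
Disallow: /private-page.html
```

The file must be placed at the root of the site (e.g. http://www.example.com/robots.txt), otherwise crawlers will not find it.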
Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol. It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:

User-agent: *
Disallow: /

The "User-agent: *" line means this section applies to all robots. The "Disallow: /" line tells the robot that it should not visit any pages on the site.
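The check a well-behaved crawler performs can be sketched with Python's standard urllib.robotparser module. This is a minimal illustration, not a full crawler: the rules are parsed from an in-memory string (the same "User-agent: * / Disallow: /" example as above) so no network request is made, and the bot name "MyBot" is just a placeholder.

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt from above: all robots, whole site disallowed.
rules = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# "Disallow: /" for every user agent means no page may be crawled,
# so a polite crawler would skip this URL.
print(rp.can_fetch("MyBot", "http://www.example.com/welcome.html"))  # False
```

In a real crawler you would call rp.set_url("http://www.example.com/robots.txt") followed by rp.read() to fetch the live file instead of parsing a string.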