If you want to block all of the website's content from web crawlers, use:

    User-agent: *
    Disallow: /

If you want crawlers to be able to access all of the website's content, you do not need a robots.txt file at all.
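That said, many site owners keep a robots.txt anyway (a missing file typically just produces 404s in the server logs). An empty Disallow value explicitly allows everything and is equivalent to having no file:

    User-agent: *
    Disallow: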
Robots.txt is a simple text (not HTML) file that you place in your website's root directory. By defining a few rules in this text file, you can instruct robots not to crawl certain files or directories within your site, or not to crawl the site at all; for more information, search for "robots.txt" on Google.
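For example, a robots.txt at the site root might look like the sketch below (the directory and file names are made up for illustration). Each Disallow line gives a path prefix that compliant crawlers should skip:

    User-agent: *
    Disallow: /private/
    Disallow: /tmp/
    Disallow: /drafts/old-page.html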
Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol. It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:

    User-agent: *
    Disallow: /

The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
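To illustrate that check, here is a minimal sketch of how a well-behaved crawler written in Python could consult robots.txt before fetching a page, using the standard library's urllib.robotparser. The URLs are the example.com addresses from above, and the user agent name "MyCrawler" is just a placeholder:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt before crawling anything.
    rp = robotparser.RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()

    # can_fetch() applies the robots.txt rules for the given user agent
    # to the URL. With "User-agent: *" and "Disallow: /", this is False.
    if rp.can_fetch("MyCrawler", "http://www.example.com/welcome.html"):
        print("Allowed to crawl this page")
    else:
        print("robots.txt disallows this page")

Note that robots.txt is purely advisory: polite crawlers perform a check like this, but nothing forces a robot to obey it.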