Robots.txt is just a file where you tell search engine bots which pages to allow or disallow for crawling. If you don't want to disallow anything from any search engine, you can leave it blank, or simply not upload one (but that is not recommended). You can read about it in detail here - type this - robotstxt.org/robotstxt.html (just avoided a link in the post).
A robots.txt file is a file that stops web crawlers like Googlebot from crawling certain pages of your website. For example, if you disallow /contact-us in your robots.txt file, the URL... www.yourdomain.com/contact-us ...will not be crawled by Google. One caveat: a blocked URL can still show up in Google's index if other sites link to it, because robots.txt controls crawling, not indexing. Here's an article from Google that explains more about the uses of robots.txt files and how to use them: https://support.google.com/webmasters/answer/6062608?hl=en
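As a sketch of what that looks like, here is a minimal robots.txt for the /contact-us example above (the path is just the placeholder from the example, not a real rule you need):

```
# Rules below apply to every crawler
User-agent: *
# Ask crawlers not to fetch this path
Disallow: /contact-us
```

The file just lives at the root of your domain, e.g. www.yourdomain.com/robots.txt, and crawlers fetch it before crawling the site.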
Mainly, people use the robots.txt file when they want to keep pages of their website, or the whole website, out of search engines. But to be precise, robots.txt only stops crawling; it does not deindex pages by itself. If some of your website pages no longer exist and you want them removed from search engines, it is better to serve a 404/410 or add a noindex meta tag - a page blocked in robots.txt can actually stay in the index, because Google can no longer crawl it to see the noindex.
So if I put a list of disallowed pages in the robots.txt file, will it work for all bots, not just Google?
You must search for the names of the other search engines' bots and put those names in robots.txt. For example, the name of Google's crawler bot is "Googlebot", and you can disallow Googlebot from crawling your website while allowing the others to crawl it...
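To illustrate the per-bot idea above, a robots.txt with separate sections per user agent might look like this (Googlebot blocked, everyone else allowed - purely an example, not a recommendation):

```
# Section that only Google's crawler obeys
User-agent: Googlebot
Disallow: /

# Section for all other crawlers; an empty
# Disallow means nothing is blocked
User-agent: *
Disallow:
```

Each crawler picks the most specific section that matches its name, so Googlebot follows the first block and ignores the wildcard one.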
Robots.txt is a file that tells a search engine's bot which pages to crawl and which not to. In a single line: to control search engine bot activity on your website, you create a robots.txt file.
If I explain it to you in a simple way, its main purpose is to tell the search engine's crawler whether or not to crawl a particular page.
You can tell search engines what to visit on your website and what not to! For a simple blog you may not need it, but for a larger website, for example one written in PHP with admin or script pages, robots.txt has its place!
If you do not want search engines to crawl specific pages on your website, admin pages for example, then you can do it through robots.txt.
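For the admin-pages case mentioned above, a sketch might look like this (the paths are hypothetical examples; use whatever your site actually has):

```
User-agent: *
# Keep crawlers out of the admin area. Note this is
# not a security measure: robots.txt is publicly
# readable, so it should never be the only thing
# hiding a sensitive URL.
Disallow: /admin/
Disallow: /login/
```

Anyone can fetch your robots.txt and read these paths, so pages that must stay private need real authentication, not just a Disallow rule.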
robots.txt tells search engines which URLs on your website they may or may not crawl. Link-level control is a separate mechanism: the "rel" attribute on an individual link can be set to "nofollow", which asks crawlers not to follow that particular link (there is no "dofollow" value; following is simply the default when the attribute is absent). People often disallow sensitive areas such as login or transaction pages in robots.txt, but remember the file is publicly readable, so never rely on it to protect anything confidential.