Here you go: https://en.wikipedia.org/wiki/Robots_exclusion_standard. tl;dr: it tells search engine spiders which parts of a website they can access and which are off limits.
Robots.txt is how a website communicates with web crawlers and other web robots. The standard specifies how to inform a robot about which areas of the website should not be processed or scanned.
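For example, a minimal robots.txt might look like the sketch below (the paths, user-agent names, and sitemap URL are just placeholders):

```
# Example robots.txt, served from https://example.com/robots.txt
User-agent: *                 # rules for all crawlers
Disallow: /private/           # don't crawl anything under /private/
Disallow: /tmp/

User-agent: Googlebot         # rules for one specific crawler
Allow: /private/public-report.html

Sitemap: https://example.com/sitemap.xml
```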
The robots.txt file sits at the root of the website and lists the sections of your site you don't want search engine crawlers to reach. Webmasters use the robots.txt file to instruct search engine robots on how to crawl and index their pages.
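If you're writing a crawler of your own, Python's standard-library urllib.robotparser can fetch a site's robots.txt and tell you whether a given URL is allowed. A minimal sketch, assuming a hypothetical domain and user-agent name:

```python
from urllib.robotparser import RobotFileParser

# Point the parser at the site's robots.txt and download/parse it.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# can_fetch() returns True if the rules allow this user agent to crawl the URL.
print(parser.can_fetch("MyCrawler", "https://example.com/private/page.html"))
print(parser.can_fetch("MyCrawler", "https://example.com/index.html"))
```

Well-behaved crawlers check this before every request; note that robots.txt is only a convention, so it restricts polite bots rather than enforcing access control.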