A search engine robot is a software program designed by search engines to seek out data on the internet, where literally billions of web pages exist. When a spider visits your website, it gathers the data you have published there. The first thing the robot looks for on your web server is a file named "robots.txt". This file, which resides in your root directory, tells the spider whether it is allowed to read your web pages. See the section on robots.txt for details on what exactly this file can do to improve your chances of getting indexed.

If the robot is allowed to read your pages, it does so and then reports the data back to the search engine that sent it. The search engine then places that data in a database that customers use to find web pages. Such databases exist at Google.com, Inktomi, Teoma and other popular engines. All search engines have their own robots that scour the web for data.
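As a rough illustration (the paths and the robot name here are placeholders, not rules you should copy verbatim), a robots.txt file in your root directory might look like this, allowing all robots everywhere except one private folder while explicitly giving Googlebot full access:

User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow:

An empty Disallow line means nothing is blocked for that robot, and the rules under "User-agent: *" apply to any robot not named more specifically.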
Googlebot is the search bot software used by Google; it collects documents from the web to build a searchable index for the Google search engine. For more information, see en.wikipedia.org/wiki/Googlebot.
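To make the crawl-then-index flow described above concrete, here is a minimal sketch of how a robot might check robots.txt before fetching a page. This is not Googlebot's actual code; the site address and robot name are hypothetical, and it uses only the Python standard library.

import urllib.robotparser
import urllib.request

SITE = "https://www.example.com"   # hypothetical website being crawled
USER_AGENT = "ExampleBot"          # hypothetical robot name

# Step 1: read robots.txt from the site's root directory.
parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()

# Step 2: fetch a page only if robots.txt allows this robot to read it.
page = SITE + "/index.html"
if parser.can_fetch(USER_AGENT, page):
    with urllib.request.urlopen(page) as response:
        html = response.read()
    # Step 3: in a real robot, this data would be reported back to the
    # search engine and stored in its index database.
    print(f"Fetched {len(html)} bytes from {page}")
else:
    print("robots.txt disallows reading this page.")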