Crawling is the process by which a search bot, also called a crawler or spider (a piece of software), discovers and indexes the pages of a website.
A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, worms, Web spiders, Web robots, or, especially in the FOAF community, Web scutters. The process itself is called Web crawling or spidering.

Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also automate maintenance tasks on a Web site, such as checking links or validating HTML code, and they can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks on the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies, as in the sketch below.
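As a rough illustration of that seed/frontier loop, here is a minimal sketch using only Python's standard library. The seed URL, the `max_pages` limit, and the simple breadth-first visiting policy are assumptions made for the example; a real crawler would also respect robots.txt, rate-limit its requests, and apply smarter frontier policies.

```python
import urllib.request
import urllib.parse
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=10):
    """Breadth-first crawl: visit seed URLs, harvest links into the frontier."""
    frontier = list(seeds)   # URLs waiting to be visited (the crawl frontier)
    visited = set()          # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)
        if url in visited:
            continue
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue         # skip pages that fail to load
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)  # grow the frontier
    return visited

if __name__ == "__main__":
    print(crawl(["https://example.com/"]))
```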
Web crawling is also called spidering; many sites, search engines in particular, use spidering as a means of keeping their data up to date.
A "crawl" is when a search engine spider (also known as a crawler) moves through your page, reads the text and URLs, and stores them in its huge index of already-crawled pages. From that index, results are displayed when someone later searches for that page.
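To illustrate the "read the text, store it in an index, serve searches from it" idea, here is a toy sketch in Python. The `pages` dictionary, the URLs, and the `build_index`/`search` helpers are all hypothetical; real search engines add tokenization, ranking, and far more machinery.

```python
from collections import defaultdict

def build_index(pages):
    """pages: dict of URL -> page text. Returns word -> set of URLs."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)  # record that this URL contains the word
    return index

def search(index, query):
    """Return URLs whose text contains every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())  # intersect per query word
    return results

pages = {
    "https://example.com/a": "web crawlers index pages",
    "https://example.com/b": "spiders crawl the web",
}
index = build_index(pages)
print(search(index, "web"))  # both URLs contain "web"
```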
When a search engine's software visits a web page, we say the crawler is crawling that page; this technique is known as a crawl.
A search engine spider, known as a crawler, crawls your web pages so they can be ranked, which makes crawling very important from a PageRank (PR) point of view.