What is the logic behind Google's crawl? How does it work? I would really like to understand this technology in depth. How do they collect the data?
When a webmaster publishes their site, Googlebot discovers new and updated pages to be added to the Google index. If we provide a sitemap on our site, Google can crawl and index all the pages more easily. Google uses a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Now, I hope you have an idea of how Google crawls pages...
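To make the idea concrete, here is a minimal sketch of that kind of crawl loop in Python, using only the standard library. The names (seed_urls, max_pages) and the breadth-first strategy are my own illustrative choices, not how Googlebot is actually implemented; a real crawler also respects robots.txt, crawl delays, and per-site limits.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl: fetch a page, discover its links, queue new ones."""
    frontier = deque(seed_urls)   # URLs waiting to be fetched
    seen = set(seed_urls)         # avoid fetching the same URL twice
    pages = {}                    # url -> raw HTML
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        except Exception:
            continue              # skip pages that fail to fetch
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return pages
```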
Can we develop this type of algorithm, or a crawler, for a small search engine? What would the flow chart for that look like?
Google crawl means fetching the data from a website; once the data has been fetched, it is indexed. Once the data is indexed, your website can get traffic from users through search results, and step by step those visitors can become clients.
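As a rough sketch of that "fetch then index" step, the snippet below builds a simple inverted index (word -> urls) from the pages returned by the crawl() sketch earlier in this thread. The crude tag stripping and tokenizing here are illustrative assumptions, not what Google actually does.

```python
import re
from collections import defaultdict

def build_index(pages):
    """pages: dict of url -> HTML. Returns word -> set of urls containing it."""
    index = defaultdict(set)
    for url, html in pages.items():
        text = re.sub(r"<[^>]+>", " ", html)           # strip tags crudely
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

def search(index, query):
    """Return urls containing every word in the query (simple AND search)."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results
```

So the flow chart is basically: seed URLs -> fetch page -> extract links -> add new URLs to the queue -> index the fetched text -> serve results to searchers.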