Can someone explain the crawl process? My understanding is that a bot is sent out from a server, keeps a continuous link back to that server, and transfers data back as it checks links. That's the basics, I think. Now my question is: will it crawl an entire site based on links and URLs, or just links? And if you have OBLs (outbound links), can the bot potentially leave your site and crawl others? Looking for someone to fill in the gaps.
As far as I understand, the Googlebot is constantly crawling the web. When it reaches a site, it reads the content and follows the internal links throughout the site, then determines that site's position based on relevancy, keyword density, backlinks, and whatever else is in its algorithm. It's much better to have an XML sitemap associated with your website in Google Webmaster Tools: when the Googlebot gets to your site, it already knows exactly which links to look at, which have higher priority, which were last updated, and so on, which helps a lot. As for external links from your site to another, I believe the bot puts them into a queue; once it finishes its preset routine, it then goes on to its list of found links and crawls those too (see the sketch below).
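To make that "internal links first, external links queued" idea concrete, here's a minimal sketch of such a crawl loop. This is a hypothetical illustration, not Googlebot's actual implementation: the `crawl()` function, queue names, and page limit are my own, and a real crawler would also respect robots.txt, rate limits, and sitemaps. It assumes the third-party `requests` and `beautifulsoup4` packages are installed.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=50):
    """Breadth-first crawl: internal links are followed immediately,
    external (outbound) links are deferred to a separate queue."""
    start_host = urlparse(start_url).netloc
    internal = deque([start_url])   # links within the starting site
    external = deque()              # outbound links found along the way
    seen = set()

    while internal and len(seen) < max_pages:
        url = internal.popleft()
        if url in seen:
            continue
        seen.add(url)

        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip unreachable pages

        # Extract every anchor href and resolve it against the page URL
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == start_host:
                internal.append(link)   # same site: crawl in this pass
            else:
                external.append(link)   # other site: queued for later

    return seen, external
```

This also answers the OBL question from the first post: following an outbound link simply moves the bot onto another site's URL space, so yes, the bot can "leave" your site through your external links.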
Google crawls pages more frequently when they have more backlinks, i.e. higher PR.
The crawler will visit, index, cache, and rank your website pages (including your home page) much more often if you have a lot of quality links pointing to them.
Thanks for the info. I understand the high PR and backlinks part; I want to know how the bot reacts. Can the bot be trapped? What keyword density is too high?
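For anyone trying to measure it: keyword density is usually computed as occurrences of the keyword divided by total word count, as a percentage. A rough sketch of that calculation is below; note there is no official "too high" number from Google, and this simple version only handles single-word keywords.

```python
import re
from collections import Counter

def keyword_density(text, keyword):
    """Occurrences of `keyword` divided by total word count, as a percentage."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    count = Counter(words)[keyword.lower()]
    return 100.0 * count / len(words)

sample = "SEO tips: good SEO starts with content, not keyword stuffing."
print(f"{keyword_density(sample, 'seo'):.1f}%")  # 2 of 10 words -> 20.0%
```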