
How Do Google Web Crawlers Work?

Discussion in 'Google' started by Danielnash, Apr 8, 2010.

  1. #1
    Please share your views here on "How do Google web crawlers work?"


    Thanks
     
    Danielnash, Apr 8, 2010 IP
  2. Soraya Marques

    Soraya Marques Peon

    Messages:
    64
    Likes Received:
    1
    Best Answers:
    0
    Trophy Points:
    0
    #2
    I think the faster your site is, the faster your content gets indexed.

    My blog is on shared web hosting, so can I push the web crawler to its limit?


    Kiss ;)
     
    Soraya Marques, Apr 8, 2010 IP
  3. Sxperm

    Sxperm Notable Member

    Messages:
    4,386
    Likes Received:
    142
    Best Answers:
    0
    Trophy Points:
    225
    #3
    Google's web crawler works by crawling through a website and following its links on to other websites, and so on. Information about each website is gathered on every crawl. The crawler uses this information to compare websites page by page, and sends all the data back to Google's data centers.

    You may consider to read here for more information in depth - How does a Web Crawler work?
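    Following links is the core mechanic described above. As a rough sketch using only Python's standard library (the page snippet and URLs are made up for illustration), a crawler parses each fetched page, collects every href, and resolves it against the page's own URL, which is how it hops from one site to the next:

    ```python
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        """Collects the href of every <a> tag, resolved against the page URL."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        # Relative links stay on this site; absolute links
                        # may lead the crawler to a different site entirely.
                        self.links.append(urljoin(self.base_url, value))

    page = '<a href="/about">About</a> <a href="http://other.example.com/">Other</a>'
    parser = LinkExtractor("http://example.com/index.html")
    parser.feed(page)
    print(parser.links)
    # → ['http://example.com/about', 'http://other.example.com/']
    ```

    A real crawler would fetch the page over HTTP first, but the link-extraction step works the same way.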
     
    Sxperm, Apr 8, 2010 IP
  4. kamalchandel

    kamalchandel Active Member

    Messages:
    127
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    71
    #4
    If google had to sequentially scan every word in (say) 500,000 documents to find all mentions of the word(s) you are searching for, the process could take hours. The purpose of an index, therefore, is to optimize the speed and performance of a search query by breaking down documents into the keywords they contain.

    This whole process is not too dissimilar to the old card indexes you might remember in public libraries. To read through every book in the library looking for the words “gulf war” might take you a lifetime. Even scanning the titles might take you several hours. However, flicking through a card index took a matter of minutes and normally helped you to locate the resources you needed.
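    The card index maps directly onto what search engines call an inverted index: a table from each word to the documents that contain it, so a query only touches the posting lists for its words instead of scanning every document. A minimal sketch (the three documents are made up for illustration):

    ```python
    from collections import defaultdict

    documents = {
        1: "the gulf war began in 1990",
        2: "the war ended quickly",
        3: "gulf coast weather report",
    }

    # Build the inverted index: word -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.split():
            index[word].add(doc_id)

    # A query intersects the posting sets instead of scanning every document.
    def search(*words):
        postings = [index.get(w, set()) for w in words]
        return set.intersection(*postings) if postings else set()

    print(search("gulf", "war"))  # → {1}: only document 1 contains both words
    ```

    Real indexes also store positions, handle stemming, and rank results, but the intersection of posting lists is the step that turns "hours" into "minutes".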
     
    Last edited: Apr 8, 2010
    kamalchandel, Apr 8, 2010 IP
  5. SOULZRIPPER

    SOULZRIPPER Well-Known Member

    Messages:
    2,382
    Likes Received:
    39
    Best Answers:
    0
    Trophy Points:
    165
    #5
    Google's web crawler is like a hungry baby. Feed it something tasty and it wants more :p
     
    SOULZRIPPER, Apr 8, 2010 IP
  6. Intense3.2

    Intense3.2 Peon

    Messages:
    8
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #6
    LOL - interesting.
     
    Intense3.2, Apr 8, 2010 IP
  7. Danielnash

    Danielnash Active Member

    Messages:
    62
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    61
    #7
    I have been studying Googlebot, and I finally reached these conclusions.

    A search engine spider (also known as a crawler, robot, SearchBot, or simply a bot; Google's is called Googlebot) is a program that most search engines use to find what's new on the Internet. The program starts at a website and follows every hyperlink on each page, crawling from one website to another. Search engines may run thousands of instances of their web crawling programs simultaneously, on multiple servers. When a web crawler visits one of your pages, it loads the page's content into a database. Once a page has been fetched, the text of your page is loaded into the search engine's index, a massive database of words.

    There are basically three steps involved in the web crawling procedure. First, the search bot starts by crawling the pages of your site. Then it indexes the words and content of the site, and finally it visits the links (web page addresses, or URLs) found on your site. When the spider doesn't find a page, that page will eventually be deleted from the index, although some spiders will check a second time to verify that the page really is offline.
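    Those three steps can be sketched as a breadth-first crawl loop. This is a toy illustration, not Googlebot's actual code: the `fetch` function and the tiny in-memory "web" below are stand-ins for real HTTP requests.

    ```python
    from collections import deque

    def crawl(start_url, fetch, max_pages=100):
        """Breadth-first crawl: fetch a page, index its words, queue its links.

        `fetch` is a stand-in for a real HTTP fetch; it should return
        (text, links) for a URL, or None if the page is missing.
        """
        index = {}                      # url -> set of words on that page
        queue = deque([start_url])
        seen = {start_url}
        while queue and len(index) < max_pages:
            url = queue.popleft()
            page = fetch(url)
            if page is None:            # missing page: keep it out of the index
                index.pop(url, None)
                continue
            text, links = page
            index[url] = set(text.split())   # step 2: index the words
            for link in links:               # step 3: follow the links
                if link not in seen:         # visit each URL only once
                    seen.add(link)
                    queue.append(link)
        return index

    # Tiny in-memory "web" instead of real HTTP requests:
    web = {
        "a": ("google crawler", ["b", "c"]),
        "b": ("web index", ["a"]),
        "c": ("search bot", []),
    }
    index = crawl("a", lambda url: web.get(url))
    print(sorted(index))  # → ['a', 'b', 'c']
    ```

    The `seen` set is what keeps a crawler from looping forever on sites that link back to each other.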

    When a spider visits your website, it looks for a file called "robots.txt". This file tells the spider which parts of the website to index and which parts to ignore, and it is the main way to control what crawlers do on your site.
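    For reference, a typical robots.txt looks something like this (the paths here are hypothetical examples, not recommendations):

    ```text
    # Rules for every crawler
    User-agent: *
    Disallow: /admin/

    # Rules just for Google's crawler
    User-agent: Googlebot
    Disallow: /private/
    ```

    Note that robots.txt is a voluntary convention: well-behaved crawlers such as Googlebot respect it, but it is not an enforcement mechanism.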
     
    Danielnash, Apr 14, 2010 IP
  8. gaurav17kumar

    gaurav17kumar Guest

    Messages:
    311
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #8
    I agree with you. Very helpful.
     
    gaurav17kumar, Apr 15, 2010 IP
  9. John2010

    John2010 Guest

    Messages:
    218
    Likes Received:
    1
    Best Answers:
    0
    Trophy Points:
    0
    #9
    Google finds pages on the World Wide Web and records their details in its index by sending out its web crawlers, or robots. These crawlers make their way from page to page and site to site by following text links.
     
    John2010, May 28, 2010 IP