Google and Crawl-delay

Discussion in 'Google' started by forinnov, Jun 10, 2015.

  1. #1
    On Friday, June 5, around 14:00, our server, an HP ProLiant (above-average config, not top-end) hosting a regional media website (~100,000 pages in the index, up to 25,000 hosts per day, listed in Google News, Yandex News and others), went down.

    The reason: a huge number of hits and downloads (primarily of images) from subnets belonging to Yandex.

    I fixed it quickly, having remembered the Crawl-delay directive for robots.txt.

    In my roughly five years of practice, this is the first time it has happened. I had never used Crawl-delay before.

    I understand that the Crawl-delay value can be fractional, but two things remain unclear:

    1. What is the maximum value that can be used without any harm to indexing by Google? (It is currently set to 0.1.)
    2. Is it possible to write the rule under `User-agent: *`, i.e. for all search engines?
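
    For reference, this is the kind of robots.txt I mean; the delay values here are purely illustrative, not a recommendation:

    ```
    # Per-bot rule (Yandex documents support for Crawl-delay,
    # including fractional values)
    User-agent: Yandex
    Crawl-delay: 2

    # Catch-all rule for every other crawler -- this is the case
    # my second question is about
    User-agent: *
    Crawl-delay: 0.5
    ```

    Note that, as far as I know, Googlebot ignores the Crawl-delay directive entirely and its crawl rate is adjusted in Search Console instead, so the `*` rule may only affect other bots.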

    P.S. That evening I happened to ping other regional media sites, and many of them were down (the cause is unknown, perhaps a coincidence).
     
    forinnov, Jun 10, 2015 IP