http://infolab.stanford.edu/~backrub/google.html — this is the original paper from Stanford. I'd like to talk to someone who knows Google well. My questions after reading it: 1. Is the cached copy of a page that Google shows retrieved from the repository? 2. The crawler downloads (crawls) a page often (several times a day for a PR7 page), but the cached copy only updates once every few days. Every time a page is crawled, does the storeserver put it into the repository? If yes, the data in the repository would be very fresh. Questions 1 and 2 are related.
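The flow described in question 2 (crawler downloads a page, storeserver compresses it and writes it into the repository, keyed by docID) could be sketched roughly like this. All names here (`Repository`, `crawl_and_store`, etc.) are made up for illustration and are not the paper's actual interfaces:

```python
import time
import zlib


class Repository:
    """Holds the latest compressed copy of each crawled page, keyed by docID."""

    def __init__(self):
        self._pages = {}  # docID -> (fetch_time, compressed_html)

    def store(self, doc_id, html, fetch_time):
        # Each new crawl overwrites the previous copy, so the repository
        # always holds the freshest version the crawler has seen.
        self._pages[doc_id] = (fetch_time, zlib.compress(html.encode()))

    def fetch(self, doc_id):
        fetch_time, blob = self._pages[doc_id]
        return fetch_time, zlib.decompress(blob).decode()


def crawl_and_store(repo, doc_id, fetch_page):
    """Storeserver step: download a page and write it into the repository."""
    html = fetch_page(doc_id)
    repo.store(doc_id, html, time.time())


repo = Repository()
# Simulate crawling the same page twice; the second crawl replaces the first.
crawl_and_store(repo, 42, lambda _: "<html>old version</html>")
crawl_and_store(repo, 42, lambda _: "<html>new version</html>")
_, html = repo.fetch(42)
print(html)  # the repository holds only the latest crawl
```

If the real repository works like this (overwrite on every crawl), then its data would indeed be fresher than the public cache, which would mean the cache is not served straight from the repository on every request.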
Crawl rates and indexation are based on a variety of factors, backlinks being an important one (along with age, freshness, and so on). This determines how often a bot visits a site and how much of it gets indexed. Google says it does a data refresh every day or so. Beyond that, I'm not sure what the question is.