If Google frowns upon duplicate content pages, why do so many alternate Wikipedia and DMOZ sites show up in the SERPs?
I think some of them have different text in the title, or a different page structure, even though the main data from those directories is the same.
It is reasonable to assume Google might be creating a type of checksum for each page and then comparing checksums. If the checksums are identical, it handles the page as a duplicate.
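Just to illustrate the checksum idea (this is only a guess at how a duplicate filter might work, not Google's actual method): a minimal Python sketch where two mirror pages hash identically once the title and markup are stripped out, but look different if the whole HTML is hashed, which fits the earlier point about titles and page structure differing while the directory data stays the same.

```python
import hashlib
import re

def page_checksum(html: str, body_only: bool = True) -> str:
    """Checksum a page the way a duplicate filter plausibly might:
    strip markup, normalize whitespace and case, then hash the text."""
    if body_only:
        # Drop the <title> so cosmetic title changes don't alter the hash
        html = re.sub(r"<title>.*?</title>", "", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", html)               # remove tags
    text = re.sub(r"\s+", " ", text).strip().lower()   # normalize whitespace/case
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

# Two hypothetical DMOZ-style mirrors: same directory data, different title
page_a = "<html><title>Open Directory - Sports</title><body>College Wrestling News</body></html>"
page_b = "<html><title>My Mirror: Sports Links</title><body>College Wrestling News</body></html>"

# Hashing only the body text flags them as duplicates...
print(page_checksum(page_a) == page_checksum(page_b))  # True
# ...but hashing the raw HTML lets them slip through as "different" pages
print(page_checksum(page_a, body_only=False) == page_checksum(page_b, body_only=False))  # False
```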
Yeah, bling is correct. http://www.google.com/search?q=college+wrestling+news&hl=en&lr=&ie=UTF-8&start=10&sa=N The 6th and 8th results from the top are nearly identical.
Yes, there are a lot of identical pages at the top, and they are all related to Wikipedia, Wikipages, DMOZ, etc.
Because they are trusted domains. A source like Wikipedia is something that is going to benefit the average Internet user, so why not display as much of it as possible for the search term supplied? This isn't even comparable to websites that made duplicate content just to cheat the search engines before the duplicate content filter was put into place.