Yes, content is king, because AdSense treats it as a priority. But how does AdSense rate content as real versus scraped?
Not entirely sure what you're trying to prove here. Content is king? Is it really? Have you managed to make money from a site with only content and no visitors? If you know how, then please tell us. Content is important, but no more so than traffic.
Content and traffic are two sides of the same coin. A website is useless without both. One can come before the other, and in the long term it doesn't really matter which is first. You must promote websites (i.e. drive traffic to them) nowadays for them to become successful. Content without traffic is a pointless waste of time. Traffic without content may generate income in the short term, but for longer-term success you need both parts of the puzzle. You can think of it like a car: content is the bodywork, traffic is the engine. Without the engine the car might look good but isn't going anywhere. Without the bodywork, you might move, but after a while you'll get mighty uncomfy.
I think what the person was trying to say is that content is extremely important, but how does Google distinguish scraper content from real, unique content? At least that's how I understand the post...
I don't think G can really tell the difference between scraper content and genuine content at present. If they could, there wouldn't be quite so much crap appearing in the SERPs.
I think they can do that. I mean, they definitely have the technology for it, but maybe the scale is stopping them. One of the patent papers submitted by Google (TrustRank, or something like that) basically describes a mechanism for dealing with scraper or spam sites. They also look at the age of the pages, how long they have been on the internet, how often they are modified, and when other pages linked to them. They also have access to domain info! It's just a matter of time before Google cleans up the spam. Again, it won't be 100% clean, but it should be decent enough.
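Just to illustrate how signals like those could be combined (this is purely my own toy example, not anything from Google's patents; the weights, thresholds and function names are all made up), a rough scoring function might look like this:

from datetime import date

def trust_score(first_crawled: date, last_modified: date,
                inbound_link_ages_days: list[int],
                domain_registered: date) -> float:
    """Toy trust score: older pages, older domains and older inbound links
    score higher. Weights and thresholds are arbitrary, purely illustrative."""
    today = date.today()
    page_age = (today - first_crawled).days
    domain_age = (today - domain_registered).days
    # Credit pages that other sites have been linking to for a long time.
    avg_link_age = (sum(inbound_link_ages_days) / len(inbound_link_ages_days)
                    if inbound_link_ages_days else 0)
    # Treat a page that was just rewritten wholesale as slightly more suspect.
    days_since_modified = (today - last_modified).days
    churn_penalty = 1.0 if days_since_modified > 30 else 0.5
    return churn_penalty * (0.4 * page_age + 0.3 * domain_age + 0.3 * avg_link_age)

# Example: a page crawled in 2003 on a domain registered in 2001,
# with two inbound links that are roughly one to two years old.
print(trust_score(date(2003, 1, 1), date(2005, 6, 1), [400, 800], date(2001, 1, 1)))

The real systems are obviously far more involved, but the point stands: age-related signals are cheap to compute once you already have crawl and domain data.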
I'm taking a guess here. I think it has something to do with word density. When the bot visits a page, it records the number of words, the number of occurrences of each word, etc. Later this can be compared with another page. If the pages have the same word density, they could be regarded as scraper/duplicate content. (At least that's how I would do it.)
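A rough sketch of that idea (again, just my own illustration, not anything Google has published): build a word-frequency profile for each page and compare the profiles; near-identical profiles suggest duplicated text. The 0.95 threshold is arbitrary.

import re
from collections import Counter
from math import sqrt

def word_profile(text: str) -> Counter:
    """Count how often each word appears on the page."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency profiles (1.0 = identical)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two pages with near-identical word profiles get flagged as likely duplicates.
page_a = "Content is king because quality content brings repeat visitors."
page_b = "Content is king, because quality content brings repeat visitors!"
if similarity(word_profile(page_a), word_profile(page_b)) > 0.95:
    print("Pages look like duplicate/scraper content")

Of course a scraper that shuffles or rewrites the text would slip past something this simple, which is probably why nobody relies on word counts alone.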