When visiting a website in the browser, it would be so nice to have a plugin that can spot duplicate content, just like Copyscape does. That would help identify copied and scraped sites when editing directory submissions. Copying text from every site and searching Copyscape or Google is a time-consuming task. Does anyone out there know of a tool or browser plugin that does this job?
I may have an advantage as a writer, but sites with stolen or borrowed copy just smell wrong to me. It's like a sixth sense telling you the content doesn't match the template. I suspect that the more sites you review, the more you'll develop that sixth sense too.

When a site smells wrong, I visit an older page and run a Google search for a sentence that seems uniquely worded. Even if that comes up clean, I might search a few more phrases from different pages. The pitfall is that if the site owner stole the content and was clever enough to rewrite it, no program or technique is likely to lead you to the original. The other gotcha is determining whether the site in question is the thief or the victim; again, that takes a bit of research and sometimes a bit of intuition.

Another option is to check the about page (assuming there is one), since that's the page most likely to have been written by the site owner. If the quality of the writing there differs hugely from the rest of the site, the rest of the content was probably written by someone else. Folks who steal about pages almost always leave at least one reference to the site they stole it from in the verbiage.

Other than Google or Copyscape, there's really no easy way to determine whether the content on a site is copied from article directories or stolen outright.
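If you want to semi-automate the "search a uniquely worded sentence in quotes" step, it only takes a few lines of scripting. Here's a minimal Python sketch, not a polished plugin: it assumes the third-party requests and beautifulsoup4 packages are installed, and the URL is just a placeholder for whatever page you're reviewing. It grabs a longish sentence from the page and opens an exact-phrase Google search for it in your browser, so you can eyeball the results the same way you would after copy-pasting by hand.

```python
# Rough sketch: fetch a page, pick a distinctive-looking sentence, and open
# a quoted (exact-match) Google search for it in the default browser.
import re
import webbrowser
from urllib.parse import quote_plus

import requests
from bs4 import BeautifulSoup


def pick_distinctive_sentence(url, min_words=10):
    """Fetch a page and return the first reasonably long sentence of visible text."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    # Split on sentence-ending punctuation and keep sentences long enough
    # to be uniquely worded rather than boilerplate.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    candidates = [s for s in sentences if len(s.split()) >= min_words]
    return candidates[0] if candidates else None


def search_exact_phrase(sentence):
    """Open a quoted Google search so only exact matches come back."""
    webbrowser.open("https://www.google.com/search?q=" + quote_plus(f'"{sentence}"'))


if __name__ == "__main__":
    # Placeholder URL -- point this at the page you're checking.
    sentence = pick_distinctive_sentence("https://example.com/about")
    if sentence:
        print("Searching for:", sentence)
        search_exact_phrase(sentence)
```

It won't catch rewritten content any more than the manual check will, but it saves the copy-paste drudgery when you're working through a pile of directory submissions.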