1.) The web is a public place, but is it legal for sites to scrape content from other sites and then reuse it? For example, Oodle scrapes content from other classifieds sites (some classifieds give them the go-ahead) and places the material on its own site, even though it links back to the original content creator as a good-faith gesture. 2.) How does one prevent other sites from scraping original content? Is there a list of scraper bots to block, or a list of scraper ISPs? The notion of scrapers is a concern.
"Law is very important thing and rules for live a life." How profound. Anyone else care to address the topic?
Well, I believe it's OK if you link back to the original source with the author's name in your scraped content. That's really not a bad thing for a webmaster, as all it would do is increase traffic and backlinks. If you're concerned about scraping from RSS, just make sure your feeds are small extracts; that way scrapers would have to link back to your site for the full article. There is also an add-on for RSS feeds that states your terms of use for anyone wanting to scrape your site; I believe this is legally binding. Hope this helps.
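The "small extracts" idea above can be sketched in code. This is a minimal illustration, not a real feed plugin: it assumes a standard RSS 2.0 `<item>`/`<description>` layout, and the function name and 120-character cutoff are just examples.

```python
import xml.etree.ElementTree as ET

def truncate_feed(rss_xml: str, max_chars: int = 120) -> str:
    """Replace each item's <description> with a short excerpt,
    so readers must follow the item link for the full article."""
    root = ET.fromstring(rss_xml)
    for item in root.iter("item"):
        desc = item.find("description")
        if desc is None or not desc.text:
            continue
        text = desc.text
        if len(text) > max_chars:
            # Cut at the last whole word inside the limit, then mark the cut.
            desc.text = text[:max_chars].rsplit(" ", 1)[0] + " ..."
    return ET.tostring(root, encoding="unicode")
```

You would run your generated feed through something like this before serving it, leaving the `<link>` elements intact so the excerpt still points home.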
It is never legal unless you have explicit permission; without that permission, you are breaking copyright law. I don't know whether Oodle shows the entire scraped content, but snippets (2-3 lines) with a link back to the original article are OK.
Re: #2 — well, I scraped this from ostatic.com, if that helps. The 1x1-pixel trick actually sounds pretty nifty to me.
I guess you can check AWStats or something like it and see which hosts are accessing your site a lot. Once you find which site is doing it, you simply block it via .htaccess.
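If you'd rather script the log check than eyeball AWStats, here's a rough sketch. It assumes Apache combined-log-format lines where the remote host is the first field; the function name and the hit threshold are arbitrary choices for illustration.

```python
from collections import Counter

def noisy_hosts(log_lines, threshold=100):
    """Count requests per remote host (first field of each access-log line)
    and return Apache deny rules for hosts at or above the threshold."""
    hits = Counter(
        line.split(" ", 1)[0] for line in log_lines if line.strip()
    )
    return [
        f"Deny from {host}"
        for host, count in hits.most_common()
        if count >= threshold
    ]
```

You could paste the resulting `Deny from` lines into .htaccess (Apache 2.2 syntax; Apache 2.4 uses `Require not ip` instead).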
Thanks for all the great responses! I will look into some of the solutions offered and hope things go as planned.