Can taking relevant articles from many different article websites (that allow reprints) help ranking, or will it harm it because of the duplicate content?
Duplicate content is fine if it's done tastefully. Most importantly, you'll need to keep the author's bio intact so the search engines can compare it with the original source. And since it's not the original work, the impact on your SERP ranking is minimal.
Isn't duplicate content harmful to webpages? Don't search engines drop one of the pages when they see a duplicate?
It is harmful to a point, though people get too paranoid about SEO. Google doesn't really care if you quote a whole article, as long as your entire site isn't simply duplicate content that you've stolen. But yes, leave the author bio intact; otherwise it's copyright infringement, and that's a good way to get permanently banned from Google.
Some of my pages are articles from article directories. Some rank high in the SERPs and receive traffic.
Search engines do not penalize duplicate content. If a search engine finds any, it only reduces the importance of that page by lowering the site's PageRank.
If you write articles yourself, you can submit the same articles to hundreds of article directories. Google and other search engines will NOT penalize you. Don't worry!!
Duplicate things are never good in the long term; copycats never achieve lasting success. Take YouTube and Facebook as examples: they are unique. Plenty of people built similar sites using similar scripts after seeing them, but those sites are not as popular as Facebook or YouTube.
Google keeps a dynamic profile of each website that it spiders. Basically, it compiles all of your webmaster activity on a website and its webpages. The program doing all of this work is called Googlebot. Think of Googlebot as a bee collecting information about a particular flower or group of flowers: when the bee returns to the hive, what it learned is shared among the other bees.

Since no two webpages are exactly identical, Googlebot compares one or more documents to determine which one it spidered first. The original document or webpage is always used as the reference point for deciding whether the others are duplicates. Organically, the original webpage will always carry more weight than the duplicates, all else being equal. That said, a duplicate webpage can potentially outrank the original if more text links point to it. When this happens, a webpage holding the #1 spot in the SERP can lose one or more positions as a result of the influx of links to a duplicate.