Seems as though most of the duplicate content problems people experience with blogs (and, for that matter, forums) come from having the same content crawlable via multiple URLs, particularly via archives. On a blog with solid internal linking, are archives more of a hindrance than a benefit? How do you handle your archives?
If they're structured well, I think archives can be good for users who want to browse through your blog. To avoid the duplicate content issue, you can block the SEs from your archive pages with a combination of nofollow links and robots.txt.
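For anyone who wants to see what that looks like, here's a minimal robots.txt sketch. The /archives/ path is an assumption on my part, so adjust it to whatever your permalink structure actually uses:

```
# Keep all crawlers out of archive pages.
# Assumes archives live under /archives/ -- change to match your permalinks.
User-agent: *
Disallow: /archives/
```

The nofollow half is just adding rel="nofollow" to the archive links in your sidebar, e.g. `<a href="/archives/2009/01/" rel="nofollow">January 2009</a>`.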
I have monthly archives only, and according to my stats people do use them on my site. You don't have to nofollow them or block them in your robots.txt file if you have an XML sitemap registered with the big 3 engines. I just wrote the other day about increasing traffic with MSN, Google, and Yahoo by registering your XML sitemap with them. What I do is use the WordPress wpSEO plugin and tell it not to include my archive pages in my sitemap. So I have archives that people can use, but I don't have to worry about duplicate content problems at all.
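To show the general shape of the end result, here's a bare-bones sitemap that lists posts but leaves the archive URLs out. The example.com URL is a placeholder, and this isn't wpSEO's exact output, just an illustration of the idea:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- individual posts are listed... -->
  <url>
    <loc>http://www.example.com/my-first-post/</loc>
    <lastmod>2009-01-15</lastmod>
  </url>
  <!-- ...but archive URLs like /archives/2009/01/ are simply left out -->
</urlset>
```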
There's a WordPress plugin called "Duplicate Content Cure" that also keeps SEs from indexing the archived versions and treating them as duplicate content.
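If you'd rather not install a plugin, the usual trick behind plugins like this is a noindex meta tag on archive pages. Here's a hand-rolled sketch for a theme's header.php; is_archive() is a standard WordPress conditional tag, but I'm only guessing this is roughly how Duplicate Content Cure does it:

```php
<?php
// In the theme's header.php, inside <head>.
// is_archive() is true on date, category, and tag archive pages.
if ( is_archive() ) {
    // Tell engines not to index the page but still follow its links.
    echo '<meta name="robots" content="noindex,follow" />' . "\n";
}
?>
```

The noindex,follow combination means the archive pages themselves stay out of the index, but the links on them still get crawled, so your individual posts don't lose anything.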