Hey out there! Got a question for you fine people. I've built an article site using commonly available articles from places like ezinearticles.com. To give the pages some uniqueness in their content, I'm using a JavaScript to insert some homemade commentary above the articles. So far it's working great. This is the free script I'm using: http://www.web1marketing.com/resources/tools/random-quote.htm

Now the real question: if the googlebot (or any other bot) comes to visit, will it spider the unique text that the JavaScript is pulling in, along with the regular article text? On one hand, the text being pulled (from a .js file) is real, and it shows up on the page every time someone visits, making the page unique. On the other hand, I've read that bots can't spider JavaScript? If that's the case, my script's text will be irrelevant and the page will be spidered as just the article text and nothing else. Which is very bad: duplicate content filter, here I come...

Ideas about this? Thanks!

LC
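For reference, the setup boils down to something like this simplified inline sketch (not the actual web1marketing.com code; the quotes are made up):

```html
<!-- Simplified equivalent of the random-quote setup, placed in each
     article page just above the article body. -->
<script type="text/javascript">
  var quotes = [
    "My homemade commentary, take one...",
    "My homemade commentary, take two...",
    "My homemade commentary, take three..."
  ];
  // Pick a quote at random and write it into the page while it loads.
  // Because this runs in the browser, the text never shows up in the
  // raw HTML that a crawler downloads.
  document.write('<p>' + quotes[Math.floor(Math.random() * quotes.length)] + '</p>');
</script>
```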
Google doesn't scan JavaScript for content. They have been extracting complete URLs from JavaScript for about 2 years now, and at least one of their crawlers has been known to occasionally scan .js files for reasons we can only guess at, but that's the extent of things right now.
You need to add <noscript> tags to provide content that Google can read. Better still, change your JavaScript to a PHP script if you can: the quote gets picked on the server and written into the HTML itself, so it's sitting right there in the source that the bot downloads.
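Something along these lines (just a sketch; the file name and quotes are made up, so adapt it to your own setup):

```php
<?php
// random-quote.php -- sketch of a server-side replacement for the
// JavaScript quote rotator. The quotes below are placeholders.
$quotes = array(
    "My homemade commentary, take one...",
    "My homemade commentary, take two...",
    "My homemade commentary, take three..."
);

// array_rand() returns a random key, so this picks one quote per request.
$quote = $quotes[array_rand($quotes)];
?>
<!-- In the article template, just above the article body: -->
<p><?php echo htmlspecialchars($quote); ?></p>
```

Since the quote is chosen before the page ever leaves the server, whatever the bot fetches already contains it as plain text, and you don't need a <noscript> fallback for that part at all.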