Having too much JavaScript in your page header can sometimes cause partial crawling of a page. When the crawler requests the page, it may load slowly, and the spider may take only the initial lines and stop crawling that page, never reaching the real content you've optimised for SEO. Keeping the code in an external file reduces the file size of your content page, resulting in quicker spidering.
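As a rough sketch of what I mean (scripts.js is just an example filename), the bulky code sits in an external, cacheable file and the head only carries one short reference:

<head>
<title>Keyword-rich page title</title>
<script type="text/javascript" src="scripts.js"></script>
</head>

The spider then only has to get past a single script tag before it reaches the content of the page.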
The trick with SEO is to get the relevant content as close to the top of the source code as possible. This means:
* JS and CSS get put into external, referenced files
* The first bit of visible text on your page should be in an h1 tag (semantically important)
* CSS source ordering can get the bulk of your content right at the top, with nav and side bars (right and left) appearing later in the source (see the sketch below).
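To illustrate what I mean by CSS source ordering (the id names and widths here are just an example layout, not anything the engines require), the content block comes first in the HTML and the stylesheet floats the nav back to the left visually:

<body>
<div id="content">
<h1>Main keyword-rich heading</h1>
<p>The copy you actually want the spider to see first...</p>
</div>
<div id="nav"><!-- menu links sit later in the source --></div>
</body>

and in styles.css:

#content { float: right; width: 75%; }
#nav { float: left; width: 20%; }

Visitors still see the nav on the left as usual, but the spider hits the real content within the first few lines of the body.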
The following quote is getting a bashing today: http://searchenginewatch.com/showPage.html?page=3624222 "Some designers will use CSS to position content first in html thinking it helps with rankings (it doesn't)" This thread has got an astounding amount of misinformation in it.
Oh yes it does!!!! SEs tell us that content nearer the top of the source code is weighted more, so that's what I do, and I then use CSS to position other useful elements on the page. Having been in SEO for 4 years, I've seen the changes and moved with the times. I've tested both approaches and have found significant results. I also find elegant XHTML with CSS easier to code as well. What doesn't work is keyword stuffing and content spam - no matter what you do with CSS. Some bots have crawled external CSS files; speculation has arisen that this is an attempt to find hidden content. As for how to put your CSS in an external file, put this in the head section of your html document: <link rel='stylesheet' type='text/css' href='styles.css' /> where styles.css is the path and filename to your css file.
Where have the search engines documented this? Nowhere. Just out of interest, how did you test this? Not only is there no evidence that content placed near the top is weighted more, it doesn't make sense for it to be.
There is evidence - just try it and you'll see your rankings improve. True, SEs tell us that code:content ratio has no effect on rankings, nor does the compliance of the code used. It makes perfect sense to put the most relevant content towards the top of the page. It's called semantics!!!! And on a similar note, LSI is more than just alternative keywords - the word 'latent' describes a property that is 'hidden' and is only there by implication where other content is present. I 'tested' the principle by applying various concepts to over 500 sites across all industry sectors, running various technologies, hosted in various parts of the planet. Results showed that CSS source ordering can help improve rankings. Other observations:
- the average top position of sites with JS and CSS in external files is higher than the average top position of sites with JS and CSS on the page
- the average rank for pages hosted on a LAMP server was higher than on a Windows box
- hyphens in domains separating keywords used to outperform concatenated keywords until the Florida update; things were equal between the two for a while, then concatenated overtook hyphenated
- pages made from scraped search content do not perform as well as human-written articles, even when keyword density and overall word count are the same
- the average top position of sites using semantic HTML is higher than that of sites without semantic HTML (eg created in the design view of WYSIWYG editors with no thought for the content behind it - see the sketch after this list)
- a page where the content remains exactly the same will fall stagnant
- a page where the content constantly changes (generally) doesn't make it into the serps
- pages where the bulk of the content remains static, but a small % of content is refreshed with RSS, remain in the index
- (re: above point) pages where the refreshing content is below the static content tend to rank higher
- well constructed, valid code with semantic markup acquired a higher number of natural one-way inbound links (30% more than sites without such attention to detail)
The last point was a nice surprise! (and may suggest that the engines don't care, but the resulting effect of 3rd party appreciation is warmly welcomed!)
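For anyone unsure what I mean by semantic versus non-semantic markup, here's a made-up, minimal contrast - the visible text is identical, only the code around it differs:

<!-- semantic: the structure tells the engine what the page is about -->
<h1>Blue widget reviews</h1>
<p>Our independent tests of blue widgets...</p>

<!-- typical WYSIWYG design-view output: all presentation, no structure -->
<font size="5"><b>Blue widget reviews</b></font><br>
<span>Our independent tests of blue widgets...</span>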
So to summarise, you can't provide any proof that content closer to the top of the document gives better rankings, and you've confirmed that this harebrained idea is not documented by any search engine anywhere.
It's all about what works for you. CSS source ordering works for me. It works for others I know (even people you respect on this forum!). In fact I went back to the page I originally read it on (Google webmaster guidelines), but that gets updated quite often; this time one would have to read between the lines to draw that conclusion. What your posts show me is that you have an unwillingness to learn, that you're not open to new ideas, and that you will inevitably be overtaken by others, including your competition. Go back to first principles and work up from there in your own mind - that way you'll have a good idea where things are at, and where they're going. What or who do you believe? Your SEO guy? Your best mate? The SEs you're trying to get ranked on? Your web logs? Your conversion rates? Your profits? I am the SEO guy, working in house. I listen to my site's rankings, the SEs, the profits and my test site results. I've summarised my findings above. Take them or leave them - they work for me and many others. Harebrained? Start thinking outside the box. Stop pretending to think and actually put your mind to work. Be creative. Pretend to be a SE and have a look!
No. Something works for you, but because you can't isolate (or even identify) all the factors that influence your rankings you can't possibly say that content positioning works for you. What my posts show is that I have an unwillingness to accept yet another SEO theory without proof. I'm open to learning something useful, but there is nothing to learn here except an unsubstantiated theory.
Did you not read my summary? Do you know anything about testing? Experimenting? Split testing? Multivariate testing? With a degree in physics, 2 A-levels in maths (including Stats) and 4 years' SEO experience, I know what I'm doing when it comes to experiments and testing!
I read your summary, but you didn't explain how you tested. All you gave were your conclusions. I'd be interested to know how you tested "Content placed earlier in a document improves ranking"
Dupe content is an issue, so one can't test with identical content; testing was done with 'similar' content. I gave 4 ghost writers the same 20 article titles, developed 2 CSS source-ordered templates, and got a web designer to make 2 non-CSS templates. All four sites were hosted on the same Unix box, each with its own unique IP (all 4 in the same range, no cross linking). The process was repeated for a Windows box at another data centre (total 8 sites), and then repeated again for various industry sectors (all in all - health, finance, internet & travel - 8 sites each (32 sites), 80 keywords (20*4)). All 32 sites went live within 1 week. As I mentioned, we had a range of 500 sites (about 400 at the time of this test), so we used those to initiate the linking. The keywords were searched for daily on the big 3 SEs. Inclusion times ranged from 5 days (but very poor rank) to 33 days.

After 30 days, 10 of our 80 keywords had top 10 positions across the SEs, with no favouritism between CSS/non-CSS (but the Unix-hosted sites were above the Windows-hosted sites; until this point, this was just speculation and led to another test). It took 120 days for favouritism towards the CSS sites to kick in. After 180 days ~38 keywords had a top 10 spot in at least 1 of the big 3 (some serps showing more than just 1 site; out of a possible 640 pages, ~300 pages in top 10 spots). Of these 38 keywords, 33 were on one of the CSS templates. Of the ~300 pages in the top 10, 77% were on CSS-based templates.

So, what of the 42 keywords (the other half) not in the top 10? Positions 11-20: 19 (12 CSS, 7 non-CSS). Positions 21-30: 12 (7 CSS, 5 non-CSS). Remaining 11: either >30 or not listed. You can see from these results that the CSS/non-CSS factor had little effect below #20 - this got us thinking. As originally stated, we noticed more backlinks to the CSS sites than to the non-CSS sites.

To me (so this is subjective, and you have stated that you disagree with this) it has always made sense to put my most important content first in the source (10 years ago we were developing mainly plain-text sites, with little graphics/CSS - mainly because of poor download times and old browsers - so it's a semantic habit that has stuck). A conclusion that is speculation is this: although the big 3 SEs claim not to care about code, as they are clever enough to work out poorly coded sites, there are many home-brew SEs and smaller ones that do not have the technology. Elegant code, with the relevant stuff higher up, will lead to better inclusion in these smaller engines - resulting in the higher number of back links. So, although you want a listing in one of the big 3, there are hundreds of others looking at you - make it easy for them too, as they're the ones the big boys listen to!
As long as the page is content rich, JavaScript hardly affects the site. But yes, too many JavaScripts may affect the site. I think so.
That's some test Northie! That must have taken loads of prep. When were the sites available for the search engines' delectation? Some things that I think corrupt your "conclusion that is speculation" though are:
- the CSS pages had more backlinks
- all the articles were different
- the articles were hosted on different sites
Because of these differences I'm not sure you can validly conclude that content positioning does influence rankings. Interesting test though
Well, having too much JavaScript is not good since it's not spidered, especially if you do banner rotations. Long, big JavaScript blocks in the header are bad, since they take time to spider.