My website loads fine and at a good speed, but it has a ton of minor errors. Is this something I should worry about fixing?
The quick answer is: try running yahoo.com through the same validator. You'll probably find it has more errors than your site does! The longer answer - and probably the better one - is that it depends on the kind of errors and warnings. Don't sweat the small stuff, but sort out the more serious ones. If you want to know which ones to fix, maybe post the warning messages on a forum like this and let the community help you decide which ones...
Validation errors can mean there are problems with your HTML that will make it difficult for search engines to parse it. And, over time, they increase the chance that future versions of web browsers will be unable to display your site correctly. Better to fix them early. I don't launch a site unless it validates perfectly.
I think it's better to fix it fast; Google likes sites with valid XHTML/HTML templates. Cheers.
That might be a mistaken idea. How do you think a search bot is supposed to check a website for errors, and why would it do that? From my understanding, search bots are there to collect information, not to check for errors. It's a bot; it has no likes or dislikes. The real issue is more likely page display problems if you don't take proper care of the coding errors.
You're right, but agree with me that it's easier for a bot to collect information from a page without errors. Cheers.
Some HTML error types can limit how search engine spiders index a website, but I don't think an extra </tr> closing tag is a huge deal, even though the W3C validator will still count it as an error.
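To make that concrete, here's a rough sketch using Python's built-in html.parser purely as a stand-in for a forgiving HTML consumer (the markup and the TextCollector name are made up for the example). A validator would flag the stray </tr>, but the cell text still comes out fine:

Code (python):
# Illustration only: html.parser fires events and doesn't validate,
# so the extra </tr> below doesn't stop it from handing us the content.
from html.parser import HTMLParser

MARKUP = """
<table>
  <tr><td>Row one</td></tr></tr>
  <tr><td>Row two</td></tr>
</table>
"""

class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

collector = TextCollector()
collector.feed(MARKUP)
print(collector.chunks)   # ['Row one', 'Row two'] - the content survives the bad tag

That's roughly why a stray closing tag rarely hurts indexing, even though it's still worth cleaning up.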
Totally wrong. Search indexers are computer programmes. They have no feelings or personality. They follow rules which tell them how to extract content from pages. Those rules are based on the same specs that validators use. If your page does not validate, then the bot will have trouble reading it for the same reason that the validator didn't like it. This is kind of an inane question. If a validator can check for errors, why can't a "search bot"?
Here's another insane question, actually: ask yourself why a search bot would be set up to check for site errors instead of just gathering information. Why would it be sent to your site and eat your site's bandwidth doing tasks you haven't requested? I'm rather inclined to believe that spiders have been set up wisely to strip out most of the non-SEO tags and just index clean lines of content.
I also have some errors. They are crawl errors. I don't know how to remove these. Visit my site http://earningdiary.com
Here are some quick fixes:

Fix for both:
Code (markup):
<img src="http://revtwt.com/images/TwtAd_referral01.jpg" alt="" /></a>

Fix: remove the role="search" attribute from the <form> tag.

Fix for both:
Code (markup):
…Join earningdiary at MyBloglog!" /></a>

There you go, a few more "crawl errors" left to be fixed.
Think about it. Imagine how you would write a parser or structured lexical analyser. If the input doesn't follow the prescribed model then it becomes much more difficult to work with. A validator is basically just the part of a search indexer that processes a single page, except that it's set to output the errors back to you instead of quietly failing and moving on to the next page. P.S. "inane" != "insane".
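To put that analogy into toy code: here's a rough sketch (plain Python; everything named here is invented for illustration and is not how any real crawler works) of one parsing core used two ways, either reporting mismatched tags like a validator or quietly keeping whatever text it extracted like an indexer:

Code (python):
# Toy illustration: the same parsing core can either report problems
# (validator mode) or swallow them and keep the extracted text (indexer mode).
from html.parser import HTMLParser

class PageProcessor(HTMLParser):
    # Void elements never get a closing tag, so don't treat them as unbalanced.
    VOID = {"br", "img", "hr", "meta", "link", "input"}

    def __init__(self, report_errors=False):
        super().__init__()
        self.report_errors = report_errors
        self.stack = []    # open tags we expect to see closed
        self.text = []     # extracted content
        self.errors = []   # validator-style messages

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

    def result(self):
        if self.report_errors:
            return self.errors or ["no errors found"]
        return self.text   # indexer: hand back the content, errors or not

page = "<p>Hello <b>world</p></b>"   # mismatched nesting

validator = PageProcessor(report_errors=True)
validator.feed(page)
print(validator.result())   # ['unexpected </p>']

indexer = PageProcessor()
indexer.feed(page)
print(indexer.result())     # ['Hello', 'world']

Same machinery, different output, which is exactly the point above.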
You don't mean that Google Search cares about validation, do you? I actually believe I care more about it than Google does, but I was talking about something else in my post. Why would Google force websites to follow a certain standard in order to get indexed when they don't follow it themselves? Think about it.
It won't, but if it can't read your page because the machine doesn't understand what you have on it, then you have problems. I wouldn't worry about alt attributes, but I would worry about tags that are incomplete, missing, or mismatched.
True, Sarah. As funny as it may sound in this context, I am a coding perfectionist myself, and all my XHTML/CSS work validates as a matter of course. Customers don't even need to point that out to me; I am a huge fan of clean coding. What I don't like is the slightly paranoid approach to validation, with claims like "the Google bot will not index a page if it doesn't validate", which is a misleading concept. Anyone can do a simple search and then run the top-ranking results through the W3C validation tool. You'll be surprised how many fail the test, yet sit at the top. The conclusion? If browsers can read a page, then bots should be able to crawl it too.
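If anyone wants to try that experiment themselves, here's a quick sketch. It assumes the W3C Nu checker's web service still accepts the doc=...&out=json interface described in its documentation, and the URLs below are placeholders rather than actual search results:

Code (python):
# Quick-and-dirty spot check: count validator errors for a few pages.
import json
import urllib.parse
import urllib.request

CHECKER = "https://validator.w3.org/nu/"

def count_errors(page_url):
    query = urllib.parse.urlencode({"doc": page_url, "out": "json"})
    request = urllib.request.Request(
        CHECKER + "?" + query,
        headers={"User-Agent": "validation-spot-check/0.1"},  # identify yourself politely
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        messages = json.load(response).get("messages", [])
    return sum(1 for m in messages if m.get("type") == "error")

for url in ["https://example.com/", "https://example.org/"]:  # stand-ins for top results
    print(url, count_errors(url), "errors")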
I'm with you. Yesterday I was trying to suss out why a block of code looked good in all browsers but IE7. It turned out we had a div where an li was meant to be. It'll get indexed OK and viewed fine by most browsers, but because the code isn't right, we have a problem.
Valid code does not affect search engine rankings in my opinion; some people have made some pretty valid points about the reasons. You should write valid code as a normal standard if you are making a website for a paying client. They would not be happy with anything else.