Can anybody tell me how search engine spiders decide whether a webpage has quality, unique content or not?
I forget the exact name for it - but Google works as a language engine. In other words, if you talk about generals, it will look for things that relate to that term. It may look for "troops", "tanks", or the word "war" on the same page. Now, if you try to fool any kind of bot by putting gibberish around your keywords, Google should detect it. However, I'm working on a site right now (just for the fun of it) that uses no gibberish, only sentences that relate to the keywords I'm targeting, and when a bot looks at it, it looks fine to them. But a visitor reading the page will think it doesn't make sense. Again, it's just a test - that's all.
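To make the co-occurrence idea concrete, here is a rough sketch of what "looking for related terms on the same page" could mean in code. The related-term list is hand-picked and purely illustrative - a real search engine derives these relationships from large-scale language analysis, not a small dictionary like this.

```python
# Rough sketch of the co-occurrence idea: score a page by how many terms
# related to the main keyword actually appear in its text.
# RELATED_TERMS is a made-up, illustrative mapping, not anything Google uses.

import re

RELATED_TERMS = {
    "general": {"troops", "tanks", "war", "army", "battle", "command"},
}

def relatedness_score(keyword: str, page_text: str) -> float:
    """Fraction of known related terms that show up in the page text."""
    related = RELATED_TERMS.get(keyword, set())
    if not related:
        return 0.0
    words = set(re.findall(r"[a-z']+", page_text.lower()))
    return len(related & words) / len(related)

page = "The general ordered his troops forward while tanks rolled toward the war zone."
print(relatedness_score("general", page))                     # 0.5 -> topically consistent
print(relatedness_score("general", "buy cheap pills now"))    # 0.0 -> looks off-topic
```

A page stuffed with the keyword but none of its related vocabulary would score near zero here, which is roughly the signal a language-aware bot would use to spot text that doesn't genuinely cover the topic.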
Remember that most of the larger search engines also have manual quality reviewers. Even if a bot recognises your content as quality content, that does not mean you will rank first in the SERPs. Manual quality reviewers can and will ban your site if you are trying to fool the bots.
The spiders don't do that; they're just dumb bots that crawl the web and collect the data. It's the ranking algorithm that does a whole range of analysis to sort out the alleged quality.
Spiders just crawl; links (mainly), title elements, and some other minor factors determine position in the SERPs. A high position in the SERPs doesn't mean the page is high quality.
They mostly look at relevancy, like whether the keyword in the title also appears in the article body. If somebody overstuffs the keyword, it will raise a flag. By the way, these checks often make errors, which blackhat SEOs take advantage of - that's why Google updates its algorithm so frequently.
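For the curious, here is a minimal sketch of the kind of relevancy/stuffing check described above. The 5% density threshold and the exact signals are arbitrary illustrations of the idea, not Google's actual rules.

```python
# Minimal sketch of a relevancy / keyword-stuffing check.
# The 5% density threshold is arbitrary and only illustrative; real ranking
# algorithms weigh far more signals than this.

import re

def analyse_page(title: str, body: str, keyword: str) -> dict:
    words = re.findall(r"[a-z']+", body.lower())
    keyword = keyword.lower()
    occurrences = words.count(keyword)
    density = occurrences / len(words) if words else 0.0
    return {
        "keyword_in_title": keyword in title.lower(),
        "keyword_in_body": occurrences > 0,
        "density": round(density, 3),
        "stuffing_flag": density > 0.05,  # suspiciously high repetition
    }

print(analyse_page(
    title="Best hiking boots",
    body="These boots boots boots are the best boots for hiking boots boots.",
    keyword="boots",
))
# -> keyword is in title and body, but density is 0.5, so the stuffing flag trips
```

The point is that a simple density cutoff is easy to game and easy to get wrong in both directions, which is exactly the gap blackhat SEOs exploit and the reason the algorithms keep changing.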