Hi, I'm new to search engine concepts and would like to know how a search engine reads website content. 1. By reading the source code? If not, why add <meta> tags, since meta tags aren't visible without reading the source code? 2. By opening the site in a text browser like Lynx Viewer?
It reads the source code, but it does not execute JavaScript. Put any page into this site and it will show you what the search engine sees: http://www.xml-sitemaps.com/se-bot-simulator.html
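To make "reads the source code" concrete, here is a minimal sketch (Python, standard library only) of what a simple crawler effectively does: fetch the raw HTML as served and pull out the title, <meta> tags, and links. No JavaScript is executed, which is why meta tags matter even though visitors never see them. The URL is a placeholder, not a real target.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class CrawlerView(HTMLParser):
    """Collects the parts of raw HTML a simple crawler looks at."""

    def __init__(self):
        super().__init__()
        self.meta = []      # (name/property, content) pairs from <meta> tags
        self.links = []     # href values from <a> tags
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta":
            key = attrs.get("name") or attrs.get("property")
            if key:
                self.meta.append((key, attrs.get("content", "")))
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


if __name__ == "__main__":
    url = "https://example.com"  # placeholder URL for illustration
    # Fetch the raw HTML exactly as the server sends it -- no JavaScript runs.
    req = Request(url, headers={"User-Agent": "toy-crawler"})
    html = urlopen(req).read().decode("utf-8", "replace")

    parser = CrawlerView()
    parser.feed(html)
    print("Title:", parser.title.strip())
    print("Meta tags:", parser.meta)
    print("Links found:", parser.links)
```

Real crawlers add a lot on top of this (robots.txt handling, canonical URLs, rendering in some cases), but the starting point is the same: the raw markup, not the page as a browser displays it.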
That said, crawlers are gaining more understanding of JavaScript over time. For example, A1 Sitemap Generator can read some JavaScript constructs and extract links from them, and Google has announced that it crawls/understands JavaScript to some extent.
Read this page: en.wikipedia.org/wiki/Web_crawler. It will give you some idea of how search engines crawl sites.