If we allow all crawlers in the robots.txt file and list the page URLs that we want crawled quickly, then the bots will crawl them fast.
The original robots.txt standard has no 'Allow' directive, so if you list a page there it can only be disallowed. To make sure pages get crawled, use the tool made for exactly that: a sitemap file.
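For reference, a minimal sitemap.xml looks roughly like this (the domain and date are just placeholders, not your real values):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want crawled -->
  <url>
    <loc>http://www.example.com/important-page.html</loc>
    <lastmod>2013-06-01</lastmod>
  </url>
</urlset>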
Like Nishail says. You can see an example of using robots.txt with XML sitemaps below; using robots.txt this way is called "XML sitemap autodiscovery".
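For example, a robots.txt that allows everything and points crawlers at the sitemap could look like this (swap in your own domain and sitemap location):

User-agent: *
Disallow:

# sitemap autodiscovery line
Sitemap: http://www.example.com/sitemap.xml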
If you just want your site crawled, you don't have to create a robots.txt file at all; the default behaviour is to allow bots to crawl all your pages.
Good idea putting the sitemap in the robots.txt file; just make sure that you have updated everything in Webmaster Tools first, e.g. added the robots.txt file, submitted the sitemap, and occasionally regenerated and resubmitted the sitemap. Basically, just let Google know what you're doing.
If you don't want search engines to crawl something, add this to your robots.txt: Disallow: <path of the file or directory that you don't want search engines to crawl>
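As a quick sketch, assuming a hypothetical /private/ directory and a /temp/old-page.html file you want to keep crawlers out of, the entries would look like this:

User-agent: *
# block a whole directory
Disallow: /private/
# block a single file
Disallow: /temp/old-page.html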
Thanks to all for replying with your valuable suggestions, but I notice that my website is being crawled regularly while my robots.txt is not being crawled, whereas other websites' robots.txt files are crawled from time to time.