Google's Googlebot is now able to crawl forms, and they say it can now "scan" Flash and JavaScript. That's big news! All those links that were previously missed because of JavaScript could be crawlable now.
I guess that explains why I have been seeing search URLs showing up in Google's index for some of my sites. It looked like someone was posting URLs from the search results pages somewhere on the sites, but I could never find the source of the links. This is a double-edged sword: search results pages typically are not optimized, and I don't see any other type of form that Google would care to crawl. What is the benefit of this? I have not seen any of these pages show up in Google's search results, and I don't think users want to land on internal search results pages. Am I missing something here?
Google has been scanning JavaScript and Flash files for URLs for 3-4 years now. They look for complete URLs in the form of "http://www.example.com/". They added the ability to extract text from Flash files over 2 years ago. What is truly new is that they are experimenting with following <form>s to see if they resolve to crawlable pages, such as a site's custom search form. Note that the <form> has to use the "GET" method so that there is a URL to put in the index.
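To picture what that means in practice, here is a minimal sketch of the kind of search form Googlebot could follow (the action path and field name are just placeholders, not anything Google has documented):

    <!-- A simple site search form using the GET method -->
    <form action="/search" method="get">
      <input type="text" name="q">
      <input type="submit" value="Search">
    </form>

Submitting that form with the word "widgets" resolves to a plain URL, something like http://www.example.com/search?q=widgets, which can be crawled and put in the index. The same form with method="post" sends the query in the request body instead, so there is no unique URL for Google to record.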
That explains a few things about what I've been seeing for reported backlinks. Are you sure they've been scanning JavaScript for 3 or 4 years, though? I've been hearing that they could not, up until now. It seems to me it's a fairly new development, not as new as reading forms, but new just the same.
Why is this great news? I don't understand. Can somebody give me an example of where this will make a difference to a website?
More ways for Google to crawl and gather data means more ways for us to benefit. If they can find links in JavaScript and Flash, that gives us more avenues for picking up backlinks, as the sketch below shows. There may be some benefit to them crawling forms as well.
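For instance, a full URL written into a script would previously have been invisible to the crawler. A rough, made-up example (the page and function are hypothetical):

    <!-- Navigation handled entirely in JavaScript, no <a href> anywhere -->
    <script type="text/javascript">
      function goToProducts() {
        // The complete URL appears as a literal string, which is
        // the pattern Googlebot scans script code for.
        window.location = "http://www.example.com/products.html";
      }
    </script>

If Googlebot extracts that URL, the products page gets discovered even though no conventional HTML link points to it.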