Hi, I'm currently developing some stuff based on the Google SOAP Search API, which, as most of you know, requires one of those rare, near-extinct API keys, each limited to 1,000 requests per day. A friend of mine has another website up and running that also queries Google search result pages, but without the Search API: he just scrapes the SERPs with cURL and parses the HTML. His site sends roughly 500-3,000 queries to Google per day, and he says he hasn't run into any problems. So what's the way to go? What do you use, and which problems have you faced?
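For concreteness, here's a minimal sketch of the scrape-and-parse approach I'm describing, rewritten in Python with the `requests` and `beautifulsoup4` packages (he actually uses cURL; the libraries, parameters, and link filtering here are my assumptions, and scraping SERPs this way is almost certainly against Google's terms of service):

```python
import requests
from bs4 import BeautifulSoup

def scrape_serp(query):
    # Google tends to block the default python-requests User-Agent almost
    # immediately, so we pretend to be a browser. Even then, expect
    # CAPTCHAs and temporary bans at some request volume.
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Pull out every absolute link; narrowing this to organic results only
    # would need selectors tied to Google's current (ever-changing) markup.
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]

if __name__ == "__main__":
    for url in scrape_serp("google soap search api"):
        print(url)
```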
I use this method on my web diagnostic tool and on Ask Mie, both without SOAP. Sometimes my IP/site gets temporarily banned by Google's SERP service, which redirects the results to http://sorry.google.com/sorry. That's why I currently keep several SOAP API keys to back up my sites, especially the web diagnostic one.
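If it helps, here's roughly how you could detect that ban programmatically. This is a rough sketch: it assumes the block shows up either as a redirect to sorry.google.com (the symptom I see) or as an HTTP 429/503, which is my guess rather than documented behavior:

```python
import requests

SERP_URL = "https://www.google.com/search"

def is_temporarily_banned(query):
    resp = requests.get(
        SERP_URL,
        params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
        allow_redirects=False,  # keep the redirect visible instead of following it
    )
    # Reported symptom: a redirect to http://sorry.google.com/sorry.
    # 429/503 are plausible companions, but that part is an assumption.
    location = resp.headers.get("Location", "")
    return (resp.is_redirect and "sorry.google.com" in location) \
        or resp.status_code in (429, 503)
```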
Google tracks requests by IP, so you can't count on always getting results from one machine. But once you upload your script to a server with its own IP, it should keep working fine for your visitors.
Beware: Google often changes its CSS and HTML structure to ward off SERP scrapers. That said, I have a few non-paying apps on the web that rely entirely on Google search results.
If you look closely at their HTML page structure, you'll see that certain patterns always remain the same, even comments. Anchor your parser on those instead of on class names; see the sketch below.
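Here's a hedged illustration of that idea: match on the overall shape of a result block rather than on volatile class names. The pattern below (an h3 heading wrapping the result link) is purely hypothetical; inspect the live HTML yourself to find which patterns actually persist across redesigns:

```python
import re

# Hypothetical pattern: an <h3> result heading whose nested <a> carries the
# target URL. The exact shape changes over time; re-derive it from the live
# HTML before relying on it.
RESULT_LINK = re.compile(
    r'<h3[^>]*>\s*<a[^>]+href="(?P<url>https?://[^"]+)"',
    re.IGNORECASE,
)

def extract_result_urls(html):
    # Returns the href of every chunk of markup matching the result shape.
    return [m.group("url") for m in RESULT_LINK.finditer(html)]
```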