Just read a nice interview with Matt Cutts where he talks a bit about robots.txt. It might be helpful to some to know how Google treats pages in robots.txt. Even if you disallow a page from being crawled, it can still appear in the SERPs. This may be old hat, but there are always discussions on DP about how robots.txt is handled. Just thought I'd let everyone know, straight from "Google's mouth": make sure you use NOINDEX if you don't want a page to appear in the SERPs.
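To make the distinction concrete, here's a minimal sketch (the path is just a placeholder). A Disallow rule in robots.txt only blocks crawling, so the URL can still show up in the SERPs based on links pointing at it:

```
# robots.txt — blocks crawling of /private/, but the URLs
# can still be listed in the SERPs from external references
User-agent: *
Disallow: /private/
```

Whereas putting `<meta name="robots" content="noindex">` in the page's head keeps it out of the index entirely. Note the page must remain crawlable (not disallowed) for Google to actually see that tag.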
Just an example from my own experience; I'm sure exceptions happen sometimes, but... I ran an experiment on this theme by uploading a new domain with no robots restrictions. When the test pages appeared on Google the next day, I placed a robots.txt with "Disallow: /" and the site disappeared from the index in less than a week.
Did it completely disappear, or only for certain keywords? Based on Matt's statement, it would still appear in a "site:" search, but not for any keywords based on the page itself (only on references to that page). I'm just curious.