I've got a domain in the network which I block from Google using robots.txt (sounds insane, but I have legit reasons). Google sees links to pages on the domain, but cannot get into the domain to crawl the others (obviously). Can anyone think of a quick way to get Google to FIND the pages and list them? (Obviously it only lists the page URL, no title or description, in the SERPs.)
I don't know what you mean. Personally, I wouldn't want my ads wasted on your site if Google can't see it. If you're trying to get the site into the G index, just remove the block from the robots.txt.
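For reference, this is a minimal sketch of the kind of robots.txt rule being discussed (the path is hypothetical). Removing or narrowing the `Disallow` line is what lets Googlebot crawl the pages again:

```text
# Blocks all crawlers from the whole site (what the OP has now)
User-agent: *
Disallow: /

# To let Google back in, either delete the Disallow line above
# or restrict it to a hypothetical private section, e.g.:
# User-agent: *
# Disallow: /private/
```

Note that a disallowed URL can still appear in the SERPs as a bare link (no title or snippet) if other sites link to it, which is exactly the behavior described in this thread.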
It's not possible... that's why you check it by querying Google. Your weight (PageRank) will only grow if the search engines can actually see your content.
I have some weight, because Google knows the pages are there, but it won't index the page content. However, I think I've answered my own question: Google knows about the pages because links point to them. So if I create links on other sites to all my pages, Google will know about them...
Google will know about them, but it will not grow your weight unless those pages are actually indexed by Google.
A page is indexed if a cache:www.site.com/page.html query returns content. Also, Supplemental Google results do not count either.