I have a test site that has been indexed, but I don't want it to be, so I have used robots.txt to try to stop it from being indexed. After more than a week it is still indexed. I have placed this in the root:

    User-Agent: *
    Disallow: /

Say a bot enters my site at a URL other than the root; how does it know not to index the files?
robots.txt does not prevent robots from indexing your site; it only asks them not to crawl it. Well-behaved bots fetch /robots.txt from the domain root before crawling anything, no matter which URL they enter on, so your rules will be seen, but some bots simply ignore them. More importantly, a page that is already in the index can stay there even while crawling is blocked, because search engines can keep a URL listed based on links from other sites. For already-indexed pages you will probably have to wait well over a week, and they may not drop out at all.
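If the goal is to get the pages removed from the index rather than just to block crawling, the usual approach is a noindex directive. Note that bots can only see it if they are allowed to crawl the page, so the Disallow rule has to be lifted while they re-crawl. A minimal sketch, assuming you can edit the page templates or the server config:

In each page's <head>:

    <meta name="robots" content="noindex">

Or as an HTTP response header (here for Apache, assuming mod_headers is enabled):

    Header set X-Robots-Tag "noindex"

Once the pages have been re-crawled and dropped from the index, the robots.txt Disallow can be put back to keep them out going forward.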
I run WordPress sites and have tried to stop bots from crawling the store, but they keep coming. That's been my experience.