Let's say I have domain-name.com and I don't have access to a CMS (whatever that might be) or the server. It could be someone else's site, any site on the Internet. I want to get all the URLs of the website, including all subdomains and subdomain pages, things like that. Is it possible to do this if pages on domain-name.com are not linked to from anywhere else on the Internet, including internal links and search engines like Google and Bing? In other words, can people know that a page exists if nothing links to it, internally or externally? If so, how? Thanks.
No, it's not possible to magically know the pages of a site that isn't yours; something has to tell you about them (a search engine, a link, etc.).
Maybe you can download the full website, but you can't download restricted directories. Try this software: web2disk.
They aren't trying to download a website; they're trying to find pages that have no links to them and that they don't already know about. So I'm not sure what good downloading the part of a website you can already see would do.
You could run a scan, but if the URLs are completely random and not linked to, even internally from other pages of the site, it's going to be very hard to find them. You would basically have to brute-force the scan (much like trying to guess passwords), except far harder, since there is no real length limit on a URL, and unless the site is set up horribly, you won't have access to the file structure even if you're lucky enough to find a folder. Usually there is some structure, though, and if you can get the name of ONE file, or preferably a small subset of files, you could probably devise some sort of limit for the scan (say, when looking for image files, whose names usually follow certain patterns even when they look random). However, I wouldn't really bother.
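To make the brute-force idea concrete, here's a minimal sketch of that kind of scan in Python: generate candidate URLs from a guessed naming pattern and keep the ones the server answers for. The domain, path pattern, and number range are all hypothetical examples; in practice you'd derive them from whatever file names you've already found.

```python
import urllib.request
import urllib.error

BASE = "https://domain-name.com"           # hypothetical target domain
CANDIDATES = [f"/images/img_{n:04d}.jpg"   # guessed naming pattern
              for n in range(1000)]        # guessed upper bound on the range

def exists(url):
    """Return True if the server answers a HEAD request with a 2xx status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

# Probe every candidate and print the ones that actually exist.
for path in CANDIDATES:
    url = BASE + path
    if exists(url):
        print(url)
```

Even a narrow pattern like this is a thousand requests, which is why the approach only makes sense once you've constrained the name space; scanning arbitrary URLs of arbitrary length is hopeless.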