Getting all URLs of a domain

Discussion in 'Site & Server Administration' started by joliett89, Apr 15, 2015.

  1. joliett89

    #1
    Let's say I have a domain-name.com and I don't have access to its CMS (whatever that may be) or its server. It could be someone else's site, any site on the Internet. I want to get all URLs of the website, including all subdomains and subdomain pages, things like that.

    Is it possible to do this if pages on domain-name.com are not linked to from anywhere else on the Internet? That would include internal links, most importantly, and also search engines like Google and Bing.

    In other words: is it possible for people to know that a page exists if it is not linked to from anywhere, including internal links? If so, how?

    Thanks.
     
    joliett89, Apr 15, 2015
  2. digitalpoint

    #2
    No, it's not possible to magically know the pages on a site that isn't yours unless something tells you about them (a search engine, a link, etc.).
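    That said, if a site owner wants pages found, the usual way they "tell" crawlers about them is a sitemap. Here's a rough Python sketch that checks the common /sitemap.xml convention (example.com is just a placeholder, and plenty of sites don't publish a sitemap at all):

    Code:
    from urllib.request import urlopen
    from xml.etree import ElementTree

    def sitemap_urls(domain):
        """Return URLs listed in the site's sitemap, if it has one."""
        try:
            xml = urlopen(f"https://{domain}/sitemap.xml", timeout=10).read()
            tree = ElementTree.fromstring(xml)
        except Exception:
            return []  # no sitemap published, or it isn't valid XML
        # Sitemap entries live in <loc> tags; match regardless of XML namespace.
        return [el.text for el in tree.iter() if el.tag.endswith("loc")]

    print(sitemap_urls("example.com"))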
     
    digitalpoint, Apr 15, 2015
  3. DarkMatrix

    #3
    You can probably download the full website, but you can't download restricted directories. Try this software: web2disk.
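    A tool like that works by crawling: fetch a page, pull out its links, and follow the ones that stay on the same site. Here's a minimal sketch of the idea in Python (standard library only; example.com is a placeholder). Note it can only ever find pages that are linked from a page it has already fetched:

    Code:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collect href values from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=50):
        domain = urlparse(start_url).netloc
        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except Exception:
                continue  # unreachable or non-text page; skip it
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                # Resolve relative links and drop fragments, keep same-site only.
                absolute = urljoin(url, href).split("#")[0]
                if urlparse(absolute).netloc == domain:
                    queue.append(absolute)
        return seen

    print(crawl("https://example.com/"))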
     
    DarkMatrix, Apr 15, 2015
  4. digitalpoint

    #4
    They aren't trying to download a website; they're trying to find pages that have no links to them and that they don't already know about. So I'm not sure what good downloading the parts of a website you can already see would do.
     
    digitalpoint, Apr 15, 2015
  5. PoPSiCLe

    #5
    You could run a scan, but if the URLs are completely random and not linked, even internally from other pages of the site, it's going to be very hard to find them. You would basically have to brute-force the scan (much like trying to guess passwords), only a thousand times harder, since there is no real length limit on a URL, and unless the site is horribly set up you won't get a directory listing even if you're lucky enough to find a folder.
    Usually there is some structure, though, and if you can get the name of ONE file, or preferably a small subset of files, you could probably devise some limits for the scan (say, for finding image files: these are usually named within certain patterns, even if the names are random). See the sketch below. However, I wouldn't really bother.
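    To make that concrete, here's a toy sketch of wordlist-based probing in Python (the target URL and wordlist are placeholders; real tools use large curated wordlists, and scanning a site you don't own may be against its terms of service):

    Code:
    from urllib.error import HTTPError, URLError
    from urllib.request import urlopen

    BASE = "https://example.com"  # placeholder target
    WORDLIST = ["admin", "backup", "images", "old", "test"]  # toy wordlist

    def probe(base, words):
        """Request each candidate path and report non-404 responses."""
        found = []
        for word in words:
            url = f"{base}/{word}/"
            try:
                status = urlopen(url, timeout=5).status
            except HTTPError as e:
                status = e.code
            except URLError:
                continue  # host unreachable, bad TLS, etc.
            if status != 404:  # anything but "not found" is interesting
                found.append((url, status))
        return found

    print(probe(BASE, WORDLIST))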
     
    PoPSiCLe, Apr 17, 2015