This forum post discusses the use of the first Domain Parts API. First of all, I should begin this post by explaining the parts of a domain. Let's take the domain "forums.digitalpoint.com" -- this has three parts: TLD, SLD, and first subdomain.

The TLD (Top-Level Domain) is COM
The SLD (Second-Level Domain) is DIGITALPOINT
The first subdomain is FORUMS

One of the largest issues I have dealt with when working with any script is accurately parsing these parts (and lesser subdomains) of a domain, primarily recognizing with 100% accuracy the advanced TLDs, such as ccTLDs (Country-Code Top-Level Domains) and IDN ccTLDs (Internationalized Country-Code Top-Level Domains). There are, in total, over 6,000 TLDs, and this API supports all current and future pending TLDs.

<?php
$domain = 'forums.digitalpoint.com';
$json = file_get_contents('http://api.domainparts.us/?domain=' . $domain);
$parts = json_decode($json, true);
// print_r($parts); // print the full array

$tld = $parts['tld'];
$sld = $parts['sld'];

// Copy up to ten subdomain levels into $sub1 .. $sub10.
for ($i = 1; $i <= 10; $i++) {
    ${'sub' . $i} = isset($parts['sub'][$i]) ? $parts['sub'][$i] : null;
}

echo $sub1 . '.' . $sld . '.' . $tld; // echo the first subdomain, SLD, and TLD
?>

The Domain Parts API is a free API maintained by ACE Enterprises LLC of Wellesley, Massachusetts.
My question would be: why would I bother with the latency of an external API when I could use parse_url and strip out the subdomain and the TLD myself?
jestep, for the simple reason that parse_url (http://php.net/manual/en/function.parse-url.php) does not handle the majority of ccTLDs and doesn't even touch IDNs. If you only need/desire to support the main TLDs (such as com, info, net, org, us, biz, etc.), then the existing function works just fine. However, if you need or want support for advanced TLDs (such as asaminami.hiroshima.jp, ירושלים.museum, 个人.hk, philadelphiaarea.museum, x.se, etc.), then this is the only solution that will work for you.
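To make the limitation concrete, here is a hypothetical sketch (the `naive_parts()` helper is my own illustration, not part of any library): parse_url only gives you the whole host string, and the obvious "last two labels" split works for com but silently mislabels multi-level ccTLD suffixes.

```php
<?php
// Naive approach: treat the last dot-separated label as the TLD and
// the second-to-last as the SLD. This only works for simple TLDs.
function naive_parts($host) {
    $labels = explode('.', $host);
    $n = count($labels);
    return array(
        'tld' => $labels[$n - 1],
        'sld' => $labels[$n - 2],
    );
}

// Simple TLD: correct.
print_r(naive_parts('forums.digitalpoint.com'));
// tld: com, sld: digitalpoint

// Multi-label ccTLD: wrong. The registrable suffix here is really
// 'asaminami.hiroshima.jp', but the naive split reports 'jp' as the
// TLD and 'hiroshima' as the SLD.
print_r(naive_parts('example.asaminami.hiroshima.jp'));
?>
```

The second call shows the failure mode: the code runs fine and returns something that looks plausible, which is exactly why these bugs go unnoticed until a ccTLD user hits them.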
The site is now working again. We apologize for the downtime this weekend. Hope you enjoy. If you need help parsing out the results in your script let me know.
The code to decode an address is really trivial, so I have to ask jestep's question with a small modification: why put up with servers that go down (breaking my site) when I can just write an address-decoding function myself?
The answer to your modified question is the same as the one I gave jestep: parse_url (http://php.net/manual/en/function.parse-url.php) does not handle the majority of ccTLDs and doesn't even touch IDNs. If you only need to support the main TLDs (com, info, net, org, us, biz, etc.), the existing function works just fine; if you want support for advanced TLDs (asaminami.hiroshima.jp, ירושלים.museum, 个人.hk, philadelphiaarea.museum, x.se, etc.), then this is the only solution that will work for you. The choice between a non-comprehensive built-in function and a dynamically updated comprehensive service is the end developer's decision, and largely depends on whether you want to support only a handful of TLDs or keep up with all of the new TLDs and ccTLDs that ICANN is scheduled to introduce in the coming years.
We're going to go round and round on this one. Writing a full URL parser is trivial -- it's just string handling. Even not using parse_url at all is trivial. Writing an AD login for a large forest is a bit more work, but a string parser? That's Computer Science 101. Your routine is great for people who, while claiming to be "web developers", can't actually write a program (and I'm not talking about knowing a programming language), but for a programmer, decoding a URL isn't a big deal.