Measuring latency

Discussion in 'PHP' started by ApocalypseXL, Mar 23, 2014.

  1. #1
    I am building a media-heavy website that weighs ~5 MB after all possible pre-deployment optimizations. It works great for people with a decent connection, averaging a 5s load time. But if you live in a 3rd world country or your connection is crap, the load time skyrockets to 20s -- which is unacceptable.

    My idea is to measure the latency and serve far less initial content to those with slower connections™ (patent pending).

    My first attempt was to measure the latency via ping, like this:
    $latency = exec('ping -c 1 '.$_SERVER['REMOTE_ADDR']);
    PHP:
    Which works great for some ISPs. Unfortunately, most ISPs are paranoid and hide their users behind a random port, which prohibits ping. So I came up with this:
    $latency = exec('nping -c 1 '.$_SERVER['REMOTE_ADDR'].' '.$_SERVER['REMOTE_PORT']);
    PHP:
    Unfortunately, echoing the result doesn't produce any latency numbers -- possibly because PHP doesn't wait for nping to finish the transaction, or because the operation takes too long.
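
    For reference, part of the problem is that exec() only returns the LAST line of a command's output, so any RTT summary printed earlier is thrown away. A minimal sketch of capturing the full output instead -- the "Avg rtt: 12.345ms" summary format and the --tcp/-p flags are assumptions, check your nping build, and whether the probe actually gets through NAT is another matter:

    <?php
    $output = array();
    // pass an array to exec() to capture ALL output lines, not just the last one
    exec('nping -c 1 --tcp -p ' . escapeshellarg($_SERVER['REMOTE_PORT'])
        . ' ' . escapeshellarg($_SERVER['REMOTE_ADDR']), $output);

    $latency = null;
    foreach ($output as $line) {
        // look for a summary line such as "Avg rtt: 12.345ms" (format assumed)
        if (preg_match('/Avg rtt:\s*([\d.]+)\s*ms/i', $line, $m)) {
            $latency = (float)$m[1]; // milliseconds
            break;
        }
    }
    ?>
    PHP: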

    Sending a 512 KB payload and measuring the download time would normally be the solution, but since speed is the issue, that would take far too long. So here I am, wondering what's the best way to measure latency via PHP. Ping was perfect, but I can't use it. Any alternatives would be great.

    Ideas?
     
    ApocalypseXL, Mar 23, 2014 IP
  2. deathshadow

    deathshadow Acclaimed Member

    Messages:
    9,732
    Likes Received:
    1,998
    Best Answers:
    253
    Trophy Points:
    515
    #2
    Testing for latency is actually a bit pointless, as it's going to change from region to region. I'm in New Hampshire; I can get 30ms ping times to the other side of the country, while it takes 500ms to reach Chicago and 2 seconds to New York -- and vice versa.

    ... and careful with the 3rd world **** when it comes to latency, since high latency describes two thirds of North America; get more than 30 miles from a major city and it's ****. Particularly when 'normal people' don't throw money away on anything faster than 768kbps in the first place (like around 3/4ths of my neighbors -- probably why they run ads 24/7 for FIOS and other services that aren't even available in our area...)

    Really I'd say your problem is too much crap on the page... Is that 5mb actually all content, or is it a bunch of presentational images that add NOTHING to the page and a meg or two of "JS for nothing and your scripts for free"? Remember, not counting content, if your page is more than 70k of HTML+CSS+scripts+images, it's probably an inaccessible, steaming pile of slow-loading crap... NOTE I'm saying without content; you want a 5 meg video, go for it, but there's no legitimate reason for non-content garbage to piss all over your site -- no matter how 'flashy' or 'cool' all that crap is, it stops being cool the moment it hinders users' ability to get at the content! (A message scripttards and PSD jockeys often seem to answer by sticking their fingers in their ears and doing Vancome-lady impersonations. La-la-la-la, la-la-la-la.)

    If latency really is an issue, the solution is NOT to waste time measuring it, or finding that magical non-existent host that's fast to everywhere -- that's like looking for unicorns. Instead, reduce the number of files used to build the page. Get all your scripts into a single file, all your stylesheets into one file per media type, recombine presentational images, swing an axe at goofy pointless webfonts, and so forth. The fewer files you have, the less latency is an issue.
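
    For the scripts that can be as dumb as a one-off build step like this (a rough sketch -- the js/ paths and output name are just examples, adjust to taste):

    <?php
    // crude build step: glue every script into one file so the page makes
    // one request instead of many (directory and filenames are examples)
    $combined = '';
    foreach (glob('js/*.js') as $file) {
        $combined .= "/* " . basename($file) . " */\n"
                   . file_get_contents($file) . ";\n";
    }
    file_put_contents('js/combined.js', $combined);
    ?>
    PHP: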

    Care to share a sample page? There might be some on-page optimizations you've failed to do, or ways to do the exact same thing in fewer files and less total size without significantly changing the page... or, from the size you quoted alone, it could simply be that what you've done is NOT viable for web deployment.

    Remember, you have to work within the limitations of the medium. Alienating users by ignoring those limitations does NOT lead to a successful site unless your content really is that damned good.
     
    deathshadow, Mar 23, 2014 IP
  3. ThePHPMaster

    ThePHPMaster Well-Known Member

    Messages:
    737
    Likes Received:
    52
    Best Answers:
    33
    Trophy Points:
    150
    #3
    I doubt that the latency is that bad, even in third world countries. I think the issue here is bandwidth (third world countries tend to be on speeds < 125 kbps, if not 56 kbps). If you agree with that theory, maybe give users the option to select their mode: High Speed Connection / Low Speed Connection.
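
    Something along these lines, maybe -- a rough sketch, with the cookie and parameter names made up:

    <?php
    // remember the user's choice across visits (names are just examples)
    if (isset($_GET['mode']) && in_array($_GET['mode'], array('high', 'low'))) {
        setcookie('conn_mode', $_GET['mode'], time() + 30 * 86400, '/');
        $_COOKIE['conn_mode'] = $_GET['mode']; // make it visible this request too
    }
    // default to the full site until the user says otherwise
    $mode = isset($_COOKIE['conn_mode']) ? $_COOKIE['conn_mode'] : 'high';

    if ($mode === 'low') {
        // serve thumbnails and defer the heavy media
    } else {
        // serve the full media-heavy page
    }
    ?>
    PHP: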
     
    ThePHPMaster, Mar 23, 2014 IP
  4. sundaybrew

    sundaybrew Numerati

    Messages:
    7,294
    Likes Received:
    1,260
    Best Answers:
    0
    Trophy Points:
    560
    #4
    I have to agree with the others -- waste of time. Build the network first, then worry about this.
     
    sundaybrew, Mar 23, 2014 IP
  5. ApocalypseXL

    ApocalypseXL Notable Member

    Messages:
    6,095
    Likes Received:
    103
    Best Answers:
    5
    Trophy Points:
    240
    #5
    /facepalm Why does everyone assume other coders are morons?

    Deathshadow -- thanks for your time and for the concern, buddy, but my JS is ~20 kb, and that's before minifying and compression. CSS is ~15 kb and HTML is 4 kb. The truckload comes from the images, and that's what this website is all about. Normally I'd load just a few images and call in the rest via AJAX, but that comes with its own issues -- hence the PHP option. The page is well optimized... the client... not so much.

    Also, although latency != bandwidth, there is a strong correlation between the two. The system is designed to help people with a low mobile signal and a truly crap connection, not deathshadow's neighbor who gets the website in ~6 seconds (I ran tests on that). A ping of over 400 ms usually goes hand in hand with severely constricted bandwidth, and that's who I'm targeting. I will run extensive tests with the system, and if the results are not adequate I'll throw it away. But first I'd rather build such a system and give it a spin.

    So how can I measure the latency?
     
    Last edited: Mar 24, 2014
    ApocalypseXL, Mar 23, 2014 IP
  6. HackTactics

    HackTactics Member

    Messages:
    16
    Likes Received:
    9
    Best Answers:
    1
    Trophy Points:
    38
    #6
    Why not just use a CDN such as CloudFlare? That way your content will be served far faster.
    Furthermore, you cannot simply ping people visiting your website; most ISPs put their users behind NAT, so the chances of your users having a static, directly reachable IP are very slim. What you need to understand is that you cannot ping users accurately from your end -- you would need a client-side solution which pings your server. You may be able to achieve this in JavaScript (though probably not without WebSockets); otherwise you'll have to use Flash or Java. If you do keep pinging from your server based on the client's IP, you'll hit the machine/router where the NAT occurs and measure the latency to that machine/switch -- or, more likely, your ping request will simply be dropped by it.
    P.S. Not to be contemptuous, but as others have indicated, this is a matter of bandwidth, not latency.
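
    For what it's worth, the client-side timing can be as simple as one PHP page that answers its own probe -- a rough sketch (ping.php and the cookie name are made up, and the figure includes server think-time, so treat it as approximate):

    <?php
    // ping.php -- answer probe requests instantly so the round trip
    // approximates network latency (filename is just an example)
    if (isset($_GET['ping'])) {
        header('Cache-Control: no-store');
        echo 'pong';
        exit;
    }
    ?>
    <script>
    // browser-side: time a round trip back to this same script
    var start = Date.now();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'ping.php?ping=1&t=' + start, true);
    xhr.onload = function () {
        var rtt = Date.now() - start; // ms; crude, but measured from the client
        document.cookie = 'rtt=' + rtt + '; path=/'; // PHP sees it next request
    };
    xhr.send();
    </script>
    PHP: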
     
    HackTactics, Mar 23, 2014 IP
  7. ApocalypseXL

    ApocalypseXL Notable Member

    Messages:
    6,095
    Likes Received:
    103
    Best Answers:
    5
    Trophy Points:
    240
    #7
    I am a Cloudflare business partner. Cloudflare is great for the northern hemisphere, not so much for the south. You'd need Akamai for global coverage, and Akamai isn't in the budget. Also, this is exactly the problem I'm trying to solve: getting the local server/router/whatever to yell back. I've got ~20 connections I can run the test on; I know the speeds of those connections, the exact distance between them and the server, and the exact distance between them and the Cloudflare PoP.

    I know ping is blocked; that's why I tried nping -- so I could bounce a packet off its random port while it's open. There was the idea of using ICMP to measure packet response time, but I don't have any hands-on experience with that.

    Is it possible that in the age of SPDY, Node.js and clouds, we can't get any info about the client's connection without sending them crap to yell back at us?
     
    ApocalypseXL, Mar 24, 2014 IP
  8. HackTactics

    HackTactics Member

    Messages:
    16
    Likes Received:
    9
    Best Answers:
    1
    Trophy Points:
    38
    #8
    Try this:
    
    <?php
    function icmpChecksum($data){
        if (strlen($data) % 2){
            $data .= "\x00";
        }
        $bit = unpack('n*', $data);
        $sum = array_sum($bit);
        while ($sum >> 16){
            $sum = ($sum >> 16) + ($sum & 0xffff);
        }
        return pack('n*', ~$sum);
    }

    $type       = "\x08";
    $code       = "\x00";
    $checksum   = "\x00\x00";
    $identifier = "\x00\x00";
    $seqNumber  = "\x00\x00";
    $data       = "PingRequest";

    $package  = $type.$code.$checksum.$identifier.$seqNumber.$data;
    $checksum = icmpChecksum($package);
    $package  = $type.$code.$checksum.$identifier.$seqNumber.$data;

    $socket = socket_create(AF_INET, SOCK_RAW, 1);
    socket_connect($socket, $_SERVER["REMOTE_ADDR"], null);

    $startTime = microtime(true);
    socket_send($socket, $package, strlen($package), 0);

    if (socket_read($socket, 255)){
        echo round(microtime(true) - $startTime, 4) . ' seconds';
    } else {
        echo socket_strerror(socket_last_error());
    }
    socket_close($socket);
    ?>
    
    PHP:
     
    Last edited: Mar 24, 2014
    HackTactics, Mar 24, 2014 IP
    ApocalypseXL likes this.
  9. HackTactics

    HackTactics Member

    Messages:
    16
    Likes Received:
    9
    Best Answers:
    1
    Trophy Points:
    38
    #9
    That, btw, is an ICMP method.
     
    HackTactics, Mar 24, 2014 IP
    ApocalypseXL likes this.
  10. deathshadow

    deathshadow Acclaimed Member

    Messages:
    9,732
    Likes Received:
    1,998
    Best Answers:
    253
    Trophy Points:
    515
    #10
    It's what we're used to dealing with; you hear "5mb" and the kneejerk reaction kicks in. As I said, if that 5mb is content, there's not a lot you can do there.

    I still think measuring latency is pointless, as it's GOING to be different to EVERYWHERE -- there is no panacea for that. And to be frank, using CDNs just makes it WORSE in most cases, since CDNs spend so much time bouncing around where you get your content from that they introduce more latency. Also, I don't trust CDNs, as they've bitten me in the backside way too many times. To me, unless you've got a budget the size of Google's and can run your own, it's just not a viable option.

    I'm assuming then that all those images are content? Is there any way you could break it into smaller pages? Leverage thumbnails? Combine them into single files somehow?

    Really that's the key -- how many separate files is the page? That's the big question if latency is the worry. The only practical way to fight latency is to reduce the number of files used to build the page -- NOT to start trying to measure it (it's different from location to location), and NOT to throw a CDN or goofy server-side tricks at it. Even if you COULD measure it, it's not like you can do anything about it -- and if you DO find a location that's faster to one client, it WILL be slower to somewhere else. That's just the nature of the beast.

    If I were in Boston I'd have blazingly fast, low-latency access to NYC and painfully slow, high-latency access to the west coast -- go 50 miles north to Nashua and it's flipped the other way around. You don't fight that by measuring it, as it's different every time a new location tries to reach you. You fight it by reducing the number of requests. There will ALWAYS be users SOMEWHERE who are going to have ***** latency to your hosting.

    There's a reason futaba/wakaba boards limit a page to showing 48 images (16 'threads' per page, 4 images per thread; you want to see the rest, you have to reply to the thread) and force tiny little thumbnails, so that the images never break 192k total.

    How many separate files is your average page? That's the big question, as that's the ONLY thing latency impacts (well, unless you're running a game that's realtime to the server, like an MMO).

    Also, are you dicking with cache-control? Usually (despite Google's wild claims to the contrary) that does more harm than good for clients; there's this noodle-doodle BS idea right now that the browser defaults for caching are inadequate -- usually that's only true if your HTML/CSS/scripting is ****, and from the sizes you gave, that doesn't sound like the case.

    How well are you leveraging the cache? Are any of the images in question used on more than one page? Is there anything static you can move out of the markup and into scripting or CSS? Even though your sizes are small, how they load can be as important as the number of files.

    Looking at a waterfall of your page might also help -- load order can have a huge impact. In fact, a waterfall will show latency to a degree (though its accuracy sucks) and can help you spot the points at which element loads are 'hanging' things like the render.

    Part of why seeing the page in question might help -- speed boosts could be as simple as moving all your scripting to the end of the page (right before </body>) or not waiting for onload, since poor load order can make you think latency is the issue when in fact the user-agent is 'stuck' working on something instead of continuing to make requests.
     
    deathshadow, Mar 24, 2014 IP
    ApocalypseXL and ryan_uk like this.
  11. deathshadow

    deathshadow Acclaimed Member

    Messages:
    9,732
    Likes Received:
    1,998
    Best Answers:
    253
    Trophy Points:
    515
    #11
    Actually, that's a good question -- waterfalls like those provided by Firebug or Dragonfly: is there something that makes their measurement of latency insufficient?
     
    deathshadow, Mar 24, 2014 IP
    ApocalypseXL likes this.
  12. ApocalypseXL

    ApocalypseXL Notable Member

    Messages:
    6,095
    Likes Received:
    103
    Best Answers:
    5
    Trophy Points:
    240
    #12
    @HackTactics -- pretty much what I was thinking. Thanks for this one. Did you write it? If so, please add some comments; I'd love to understand the script better. I tried to implement it, but so far the server refuses to use sockets, even though php.ini shows them to be alive and well.

    @deathshadow The total number of files is under 20. It will probably be even less after I get the final feedback. Unfortunately I've also got a 400kb sprite that forces the average load time to ~4 seconds (DNS time included), so even in the best-case scenario the website will still average 4s on a 1MB connection and a disgusting 12s on a crappy 3G connection. Load order is good; if the script file were linked in the head, you'd get NaN in some functions, since they work with the sizes of some images.

    As for caching, I don't bother; most of my websites load in under 1 second without any of those modifications. I'm quite content with that :)
     
    ApocalypseXL, Mar 24, 2014 IP
  13. deathshadow

    deathshadow Acclaimed Member

    Messages:
    9,732
    Likes Received:
    1,998
    Best Answers:
    253
    Trophy Points:
    515
    #13
    If you're under 20 files, you shouldn't be sweating latency; it's a non-issue. The "real world" average for 20 files is around 2.4 seconds; a 'worst case' would be 10 to 12 seconds, best case a third of a second. If you had 60 to 100 files, THEN I'd be worried about it.

    Though 400k of the incorrectly named CSS sprites is worrisome -- since that type of thing is typically NOT content, it's decoration. If it's decoration, swing an axe at it or put your optimization efforts there... though if it is in fact content, well... I wonder how you'd end up with a sprite sheet filled with content. I mean, a 400k sprite sheet would have to be something noodle-doodle like 2560x2560 if you've got anything remotely resembling image optimization. Though that could be my view that alpha-transparent images have no business being used on websites in the first place talking... Usually they're something PSD jockeys who don't know enough about websites to be designing jack **** come up with.

    Or are you saving it as an uncompressed 32-bit .png with alpha? Again, back to file sizes: 400k for one of the incorrectly named CSS sprite sheets... well, it smells slightly of broken site-building methodology. It could be as simple as converting it to 8-bit with palette transparency -- or, as crazy as this sounds, breaking it back into smaller files. A good rule of thumb is one handshake = 4k, so if you can save 4k or more as smaller separate files, spend the extra handshake. It's a balancing act between file count and data throughput. Palette transparency with 'close enough AA' can often result in far, FAR smaller files, since you can leverage the palette.
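
    If GD is all you've got handy, the down-conversion is only a couple of calls -- a rough sketch (filenames are examples, and real alpha would need to be matted against your page background first, since a plain palette conversion throws it away):

    <?php
    // squash a 32-bit PNG down to an 8-bit palette with GD
    $im = imagecreatefrompng('sprites.png');  // source name is an example
    imagetruecolortopalette($im, true, 256);  // dither into a 256-color palette
    imagepng($im, 'sprites-8bit.png', 9);     // 9 = maximum zlib compression
    imagedestroy($im);
    ?>
    PHP: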

    Though without seeing what you are actually doing, that's a wild guess... but at only 20 files and five megs I'd be wondering if you went too far in the other direction. It's often a balancing act: below a certain number of files, image recombination can cost you speed if you aren't leveraging palette optimization or proper image compression... or if you've combined images with conflicting palettes or luma maps.

    But really, if you're 20 files total for images+CSS+scripts+HTML, latency is NOT something I'd be looking at -- unless your host is such total **** you're getting 2000ms to everywhere.
     
    Last edited: Mar 24, 2014
    deathshadow, Mar 24, 2014 IP
    ApocalypseXL likes this.
  14. #14
    I found it on some website, then modified it slightly; nonetheless, I know how it works.
    If this were to be implemented, there should be code to check for an error during the socket_connect() call and a timeout set on the socket; otherwise the script could block for a long time (see the sketch after the commented listing below).
    Commented:
    <?php
    /*
    * Generate 'internet checksum' (RFC1071 | https://tools.ietf.org/html/rfc1071)
    */
    function icmpChecksum($data){
       // If length of data in bytes is an odd number
       if (strlen($data)%2){
           // Append byte 0x00 to data
           $data .= "\x00";
       }
    
       // set bit to an array of 16 bit unsigned integers from data string
       $bit = unpack('n*', $data);
       // set sum to the summation of all values in bit
       $sum = array_sum($bit);
       // while the result of shifting sum 16bits to the right is true. see http://www.php.net/manual/en/language.operators.bitwise.php
       while ($sum >> 16){
          // set sum equal to the resultant shift + the result of AND operation on sum and 0xFFFF (65535)
          $sum = ($sum >> 16) + ($sum & 0xffff);
       }
       // return a string of unsigned 16bit integers from the NOT operation of sum
       return pack('n*', ~$sum);
    }
    // Type = byte 0x08 (8 in decimal)
    // States that the type of ICMP packet is an Echo Request - http://en.wikipedia.org/wiki/Internet_Control_Message_Protocol#Control_messages
    $type= "\x08";
    // Code = byte 0x00 (0 in decimal)
    $code= "\x00";
    // set checksum to two bytes of 0x00 - as per RFC1071
    $checksum= "\x00\x00";
    // do the same for identifier
    $identifier = "\x00\x00";
    // do the same for seqNumber
    $seqNumber = "\x00\x00";
    // set data to arbitrary string
    $data= "PingRequest";
    // Concatenate vars
    $package = $type.$code.$checksum.$identifier.$seqNumber.$data;
    // set checksum to the internet checksum of package
    $checksum = icmpChecksum($package);
    // reconstruct package with new checksum
    $package = $type.$code.$checksum.$identifier.$seqNumber.$data;
    // Create a socket -- not UDP/TCP but RAW; the protocol number 1 means ICMP
    $socket = socket_create(AF_INET, SOCK_RAW, 1);
    // 'connect' the raw socket to the user's IP address (there is no port here;
    // this just tells the socket layer where send/read should go)
    socket_connect($socket, $_SERVER["REMOTE_ADDR"], null);
    // get microTime, pretty standard function
    $startTime = microtime(true);
    // send the ICMP packet to the user
    socket_send($socket, $package, strlen($package), 0);
    // If there is a response
    if(socket_read($socket, 255)){
        // calculate microseconds since ICMP packet was sent and convert to seconds
        echo round(microtime(true) - $startTime, 4) .' seconds';
    }else{
       // else display socket error
        echo socket_strerror(socket_last_error());
    }
    // close socket
    socket_close($socket);
    ?>
    PHP:
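
    The error checking and socket timeout mentioned above would look something along these lines (a sketch only -- the 1-second timeout is an arbitrary pick):

    <?php
    // sketch of the missing guards: bail out if the raw socket can't be
    // created or connected, and cap how long socket_read() may block
    $socket = @socket_create(AF_INET, SOCK_RAW, 1);
    if ($socket === false) {
        die(socket_strerror(socket_last_error()));
    }
    // give up on the reply after 1 second instead of blocking indefinitely
    socket_set_option($socket, SOL_SOCKET, SO_RCVTIMEO, array('sec' => 1, 'usec' => 0));
    if (!@socket_connect($socket, $_SERVER['REMOTE_ADDR'], null)) {
        die(socket_strerror(socket_last_error($socket)));
    }
    ?>
    PHP: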
    As for your server refusing to create the socket: most *nix systems require root to create a raw socket. I'll assume your PHP runs under apache/httpd, in which case you can either upgrade the permissions of the apache user, or use sudo and execute the file from a PHP script using:
    <?php
    // assumes the apache user has been granted passwordless sudo for this script
    exec("sudo php -f /path/to/php-file.php");
    ?>
    PHP:
     
    Last edited: Mar 24, 2014
    HackTactics, Mar 24, 2014 IP
    ApocalypseXL likes this.
  15. deathshadow

    deathshadow Acclaimed Member

    Messages:
    9,732
    Likes Received:
    1,998
    Best Answers:
    253
    Trophy Points:
    515
    #15
    Just beware that, depending on OS and PHP build, microtime() usually does NOT have ms accuracy; its resolution can range anywhere from 54.95ms to 0.5ms... the high end of that making it completely useless for determining latency.

    After all, if the timer interrupt were updated much more often than that, you'd quickly run out of processing time to run actual code; that's the difference between a GPOS and an RTOS.
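
    Easy enough to check what your own box actually delivers -- a quick probe, nothing more:

    <?php
    // measure the smallest non-zero step microtime() reports on this system
    $min = INF;
    for ($i = 0; $i < 100000; $i++) {
        $a = microtime(true);
        $b = microtime(true);
        if ($b > $a && ($b - $a) < $min) {
            $min = $b - $a;
        }
    }
    printf("smallest observed tick: %.6f ms\n", $min * 1000);
    ?>
    PHP: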

    -- edit -- Oh, the laugh of that 54.95ms? It's legacy from the original IBM 5150, which derived its clocks from the NTSC color-burst crystal that also drove the video card: the PC's 1.193MHz timer clock divided by 65536 (0x00010000) gives you 18.2 ticks a second, or one every 54.95ms. AMAZINGLY it can still be found in modern systems in one form or another.
     
    Last edited: Mar 24, 2014
    deathshadow, Mar 24, 2014 IP
    ryan_uk, ApocalypseXL and HackTactics like this.