
Iptables total bandwidth limiting issues

Discussion in 'Site & Server Administration' started by pig2cat, Dec 12, 2010.

  1. #1
    Hi,
    I want to achieve the following on a 100 Mbit dedicated server:
    traffic coming from Nginx on port 81: max 50 Mbit usage
    all other traffic can use as much as it likes.

    I created three different classes with tc qdisc, using the following commands over SSH:
    tc qdisc del dev eth0 root
    
    tc qdisc add dev eth0 root handle 1: htb default 1 r2q 160
    
    tc class add dev eth0 parent 1: classid 1:1 htb rate 105000kbit burst 1k
    tc class add dev eth0 parent 1:1 classid 1:2 htb rate 105000kbit ceil 105000kbit burst 1k
    tc class add dev eth0 parent 1:1 classid 1:3 htb rate 50000kbit ceil 50000kbit burst 1k
    
    tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
    tc qdisc add dev eth0 parent 1:3 handle 3: sfq perturb 10
    Code (markup):
    This runs without problems. I then try to apply it with iptables, which is where it goes wrong:
    iptables -t mangle -A OUTPUT -o eth0 -j CLASSIFY --set-class 1:2
    iptables -t mangle -A OUTPUT -o eth0 -m tcp -p tcp --dport 81 -j CLASSIFY --set-class 1:3
    Code (markup):
    When I enter these commands, nothing happens; port 81 still uses the full 100 Mbit of the server. The output of iptables -L is:
    serveur:~# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
               all  --  anywhere             anywhere
    
    Code (markup):
    I have hardly any understanding of tc and iptables, so I have no idea what is going on. I hope someone can figure out what's wrong.

    Additional info:
    fast:~# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP qlen 100
        link/ether 00:1c:c0:* brd ff:ff:ff:*
        inet 76.73.*/29 brd 76.73.* scope global eth0
        inet 76.73.*/29 brd 76.73.* scope global secondary eth0:0
        inet 76.73.*/29 brd 76.73.* scope global secondary eth0:1
        inet 76.73.*/29 brd 76.73.* scope global secondary eth0:2
        inet 76.73.*/29 brd 76.73.* scope global secondary eth0:3
        inet6 fe80::21c:*/64 scope link
           valid_lft forever preferred_lft forever
    Code (markup):
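    One thing worth checking, since plain iptables -L only lists the filter table: the CLASSIFY rules above were added to the mangle table, so they only show up when that table is named explicitly. A quick check (a sketch, assuming the rules from this post):
    # CLASSIFY rules live in the mangle table, which plain "iptables -L" never shows
    iptables -t mangle -L OUTPUT -v -n
    Code (markup):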

     
    pig2cat, Dec 12, 2010
  2. jonasl
    #2
    It seems that your iptables rules don't show up in the iptables -L listing. Do you get any error message?
     
    jonasl, Dec 12, 2010
  3. pig2cat
    #3
    fast:~# tc qdisc del dev eth0 root
    fast:~# tc qdisc add dev eth0 root handle 1: htb default 1 r2q 160
    fast:~# tc class add dev eth0 parent 1: classid 1:1 htb rate 105000kbit burst 1k
    fast:~# tc class add dev eth0 parent 1:1 classid 1:2 htb rate 105000kbit ceil 105000kbit burst 1k
    fast:~# tc class add dev eth0 parent 1:1 classid 1:3 htb rate 50000kbit ceil 50000kbit burst 1k
    fast:~# tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
    fast:~# tc qdisc add dev eth0 parent 1:3 handle 3: sfq perturb 10
    fast:~# iptables -t mangle -A OUTPUT -o eth0 -j CLASSIFY --set-class 1:2
    fast:~# iptables -t mangle -A OUTPUT -o eth0 -m tcp -p tcp --dport 81 -j CLASSIFY --set-class 1:3
    fast:~# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
               all  --  anywhere             anywhere
    
    fast:~#
    Code (markup):
    No error message.

    iptables v1.4.2


    When I try to create a new chain, I get this:
    fast:~# iptables -N NGINX
    fast:~# iptables -L NGINX
    Chain NGINX (0 references)
    target     prot opt source               destination
    fast:~# iptables -t mangle -A NGINX -o eth0 -j CLASSIFY --set-class 1:2
    iptables: No chain/target/match by that name
    fast:~# iptables -P NGINX ACCEPT
    iptables: Bad built-in chain name
    
    Code (markup):
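    Both errors have a straightforward explanation: iptables -N NGINX without -t mangle creates the chain in the default filter table, so appending to a chain of that name in the mangle table fails, and -P can only set a policy on built-in chains, never on user-defined ones. A minimal sketch of creating and wiring up the chain in the right table:
    # Create the user-defined chain in the mangle table, where CLASSIFY is valid
    iptables -t mangle -N NGINX
    # Append rules to it in that same table
    iptables -t mangle -A NGINX -o eth0 -j CLASSIFY --set-class 1:2
    # Jump to it from the built-in OUTPUT chain so packets actually traverse it
    iptables -t mangle -A OUTPUT -o eth0 -j NGINX
    Code (markup):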
     
    pig2cat, Dec 12, 2010
  4. pig2cat
    #4
    Hi, the error was that it should have been --sport 81 (I think).
    I used a script I found here:
    http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html

    and modified it to my needs; it works on my server. The corrected rules and the full modified script follow below.
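    The --sport fix makes sense: for packets leaving the server from a service listening on port 81, 81 is the source port, so the original --dport 81 rule could never match in the OUTPUT chain. A minimal corrected version of the original two-rule approach (a sketch; CLASSIFY is non-terminating, so the second rule overrides the first for port-81 traffic):
    # Default: everything leaving eth0 goes to the unrestricted class 1:2
    iptables -t mangle -A OUTPUT -o eth0 -j CLASSIFY --set-class 1:2
    # Replies from the service on port 81 carry 81 as their SOURCE port on
    # the way out, so match --sport rather than --dport
    iptables -t mangle -A OUTPUT -o eth0 -p tcp --sport 81 -j CLASSIFY --set-class 1:3
    Code (markup):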

    #!/bin/bash
    #
    # myshaper - DSL/Cable modem outbound traffic shaper and prioritizer.
    #            Based on the ADSL/Cable wondershaper (www.lartc.org)
    #
    # Written by Dan Singletary (8/7/02)
    #
    # NOTE!! - This script assumes your kernel has been patched with the
    #          appropriate HTB queue and IMQ patches available here:
    #          (subnote: future kernels may not require patching)
    #
    #       http://luxik.cdi.cz/~devik/qos/htb/
    #       http://luxik.cdi.cz/~patrick/imq/
    #
    # Configuration options for myshaper:
    #  DEV    - set to ethX that connects to DSL/Cable Modem
    #  RATEUP - set this to slightly lower than your
    #           outbound bandwidth on the DSL/Cable Modem.
    #           I have a 1500/128 DSL line and setting
    #           RATEUP=90 works well for my 128kbps upstream.
    #           However, your mileage may vary.
    #  RATEDN - set this to slightly lower than your
    #           inbound bandwidth on the DSL/Cable Modem.
    #
    #
    #  Theory on using imq to "shape" inbound traffic:
    #
    #     It's impossible to directly limit the rate of data that will
    #  be sent to you by other hosts on the internet.  In order to shape
    #  the inbound traffic rate, we have to rely on the congestion avoidance
    #  algorithms in TCP.  Because of this, WE CAN ONLY ATTEMPT TO SHAPE
    #  INBOUND TRAFFIC ON TCP CONNECTIONS.  This means that any traffic that
    #  is not tcp should be placed in the high-prio class, since dropping
    #  a non-tcp packet will most likely result in a retransmit which will
    #  do nothing but unnecessarily consume bandwidth.  
    #     We attempt to shape inbound TCP traffic by dropping tcp packets
    #  when they overflow the HTB queue which will only pass them on at
    #  a certain rate (RATEDN) which is slightly lower than the actual
    #  capability of the inbound device.  By dropping TCP packets that
    #  are over-rate, we are simulating the same packets getting dropped
    #  due to a queue-overflow on our ISP's side.  The advantage of this
    #  is that our ISP's queue will never fill because TCP will slow its
    #  transmission rate in response to the dropped packets in the assumption
    #  that it has filled the ISP's queue, when in reality it has not.
    #     The advantage of using a priority-based queuing discipline is
    #  that we can specifically choose NOT to drop certain types of packets
    #  that we place in the higher priority buckets (ssh, telnet, etc).  This
    #  is because packets will always be dequeued from the lowest priority class
    #  with the stipulation that packets will still be dequeued from every
    #  class fairly at a minimum rate (in this script, each bucket will deliver
    #  at least its fair share of 1/7 of the bandwidth).
    #
    #  Reiterating main points:
    #   * Dropping a tcp packet on a connection will lead to a slower rate
    #     of reception for that connection due to the congestion avoidance algorithm.
    #   * We gain nothing from dropping non-TCP packets.  In fact, if they
    #     were important they would probably be retransmitted anyways so we want to
    #     try to never drop these packets.  This means that saturated TCP connections
    #     will not negatively affect protocols that don't have a built-in retransmit like TCP.
    #   * Slowing down incoming TCP connections such that the total inbound rate is less
    #     than the true capability of the device (ADSL/Cable Modem) SHOULD result in little
    #     to no packets being queued on the ISP's side (DSLAM, cable concentrator, etc).  Since
    #     these ISP queues have been observed to queue 4 seconds of data at 1500Kbps or 6 megabits
    #     of data, having no packets queued there will mean lower latency.
    #
    #  Caveats (questions posed before testing):
    #   * Will limiting inbound traffic in this fashion result in poor bulk TCP performance?
    #     - Preliminary answer is no!  Seems that by prioritizing ACK packets (small <64b)
    #       we maximize throughput by not wasting bandwidth on retransmitted packets
    #       that we already have.
    #   
    
    # NOTE: The following configuration works well for my 
    # setup: 100/100M FDC box and 100/100M ovh box
    
    DEV=eth0
    RATEUP=101000
    LIMITUP=75000
    
    # 
    # End Configuration Options
    #
    
    if [ "$1" = "status" ]
    then
            echo "[qdisc]"
            tc -s qdisc show dev $DEV
    #        tc -s qdisc show dev imq0
            echo "[class]"
            tc -s class show dev $DEV
    #        tc -s class show dev imq0
            echo "[filter]"
            tc -s filter show dev $DEV
    #        tc -s filter show dev imq0
            echo "[iptables]"
            iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
    #        iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
            exit
    fi
    
    # Reset everything to a known state (cleared)
    tc qdisc del dev $DEV root    2> /dev/null > /dev/null
    #tc qdisc del dev imq0 root 2> /dev/null > /dev/null
    iptables -t mangle -D POSTROUTING -o $DEV -j MYSHAPER-OUT 2> /dev/null > /dev/null
    iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
    iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
    #iptables -t mangle -D PREROUTING -i $DEV -j MYSHAPER-IN 2> /dev/null > /dev/null
    #iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
    #iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
    #ip link set imq0 down 2> /dev/null > /dev/null
    #rmmod imq 2> /dev/null > /dev/null
    
    if [ "$1" = "stop" ] 
    then 
            echo "Shaping removed on $DEV."
            exit
    fi
    
    ###########################################################
    #
    # Outbound Shaping (limits total bandwidth to RATEUP)
    
    # set queue size to give latency of about 2 seconds on low-prio packets
    #ip link set dev $DEV qlen 30
    
    # changes mtu on the outbound device.  Lowering the mtu will result
    # in lower latency but will also cause slightly lower throughput due 
    # to IP and TCP protocol overhead.
    #ip link set dev $DEV mtu 1000
    
    # add HTB root qdisc
    tc qdisc add dev $DEV root handle 1: htb default 26
    
    # add main rate limit classes
    tc class add dev $DEV parent 1: classid 1:1 htb rate ${RATEUP}kbit
    
    # add leaf classes - We grant each class at LEAST its "fair share" of bandwidth.
    #                    this way no class will ever be starved by another class.  Each
    #                    class is also permitted to consume all of the available bandwidth
    #                    if no other classes are in use.
    tc class add dev $DEV parent 1:1 classid 1:19 htb rate 50kbit ceil ${RATEUP}kbit prio 0
    tc class add dev $DEV parent 1:1 classid 1:20 htb rate 15000kbit ceil ${RATEUP}kbit prio 1
    tc class add dev $DEV parent 1:1 classid 1:21 htb rate 15000kbit ceil ${LIMITUP}kbit prio 2
    tc class add dev $DEV parent 1:1 classid 1:26 htb rate 50kbit ceil ${RATEUP}kbit prio 3
    
    
    # attach qdisc to leaf classes - here we attach SFQ to each priority class.  SFQ ensures that
    #                                within each class connections will be treated (almost) fairly.
    tc qdisc add dev $DEV parent 1:19 handle 19: sfq perturb 10
    tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
    tc qdisc add dev $DEV parent 1:21 handle 21: sfq perturb 10
    tc qdisc add dev $DEV parent 1:26 handle 26: sfq perturb 10
    
    # filter traffic into classes by fwmark - here we direct traffic into priority class according to
    #                                         the fwmark set on the packet (we set fwmark with iptables
    #                                         later).  Note that above we've set the default priority
    #                                         class to 1:26 so unmarked packets (or packets marked with
    #                                         unfamiliar IDs) will be defaulted to the lowest priority
    #                                         class.
    tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 19 fw flowid 1:19
    tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
    tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
    tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 26 fw flowid 1:26
    
    # add MYSHAPER-OUT chain to the mangle table in iptables - this sets up the table we'll use
    #                                                      to filter and mark packets.
    iptables -t mangle -N MYSHAPER-OUT
    iptables -t mangle -I POSTROUTING -o $DEV -j MYSHAPER-OUT
    
    # add fwmark entries to classify different types of traffic - Set fwmark from 19-26 according to
    #                                                             desired class. 19 is highest prio.
    iptables -t mangle -A MYSHAPER-OUT -p tcp --dport ssh -j MARK --set-mark 19    # secure shell
    iptables -t mangle -A MYSHAPER-OUT -p tcp --sport ssh -j MARK --set-mark 19    # secure shell
    iptables -t mangle -A MYSHAPER-OUT -p tcp --sport http -j MARK --set-mark 20   # Local web server
    iptables -t mangle -A MYSHAPER-OUT -p tcp --dport http -j MARK --set-mark 20   # Local web server
    iptables -t mangle -A MYSHAPER-OUT -p tcp --sport 81 -j MARK --set-mark 21   # Nginx on port 81
    iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 81 -j MARK --set-mark 21   # Nginx on port 81
    iptables -t mangle -A MYSHAPER-OUT -m mark --mark 0 -j MARK --set-mark 26      # redundant- mark any unmarked packets as 26 (low prio)
    
    # Done with outbound shaping
    #
    ####################################################
    
    echo "Outbound shaping added to $DEV.  Rate: ${RATEUP}Kbit/sec."
    
    Code (markup):
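    The script handles three invocations: run with no argument it installs the shaping, "status" dumps the qdisc/class/filter counters, and "stop" clears everything. A quick usage sketch, assuming it is saved as myshaper:
    chmod +x myshaper
    ./myshaper           # install qdiscs, classes, filters and iptables marks
    ./myshaper status    # per-class counters, to verify traffic is being classified
    ./myshaper stop      # remove shaping from eth0
    Code (markup):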
     
    pig2cat, Dec 21, 2010
  5. pig2cat
    #5
    Bumping this thread with a new question.

    How do I do this if a server runs with 'pfifo_fast' instead of 'htb'? I wanted to apply this to another server from OVH, but it errors at the qdisc add.

    I tried googling and found out that this is not possible with pfifo_fast, so I want to ask if anyone knows whether I can at least put port 80 on a higher priority, or has another solution for this.
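    For what it's worth, pfifo_fast is just the kernel's default root qdisc, and it is classless, so classes and filters cannot be attached to it; normally it can simply be replaced. If the tc qdisc add fails on that box, the HTB scheduler may be missing from its kernel (it lives in the sch_htb module). A sketch of both checks, assuming the interface is eth0:
    # Show the current root qdisc (pfifo_fast is the kernel default)
    tc qdisc show dev eth0
    # HTB is provided by the sch_htb module; load it if it isn't built in
    modprobe sch_htb
    # Replace the default root qdisc with HTB; classes and filters can then be attached
    tc qdisc add dev eth0 root handle 1: htb default 26
    Code (markup):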
     
    pig2cat, Jan 5, 2011