[jbd2/dm-0-8] using a lot of I/O

Discussion in 'Site & Server Administration' started by postcd, Jun 17, 2015.

  1. #1
    Hello,

    I see the server's %wa value getting high, and when running "iotop -ao" the top process is "[jbd2/dm-0-8]"; its I/O percentage is higher than any other process: 30-55%.

    2.6.32-042stab093.5
    CentOS release 6.6 (Final)

    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/vg-root    904G  579G  280G  68% /
    tmpfs                   12G     0   12G   0% /dev/shm
    /dev/sda1              243M   85M  145M  37% /boot
    /dev/mapper/vg-tmp    1008M   34M  924M   4% /tmp

    # tune2fs -l /dev/mapper/vg-root | grep has_journal
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

    # mount -l
    /dev/mapper/vg-root on / type ext4 (rw,noatime,discard)

    Any idea how to fix this safely, without a reboot?

    PS: there is an idea and there too
     
    Last edited: Jun 17, 2015
    postcd, Jun 17, 2015 IP
  2. samirj09

    samirj09 Well-Known Member

    #2
    [jbd2/dm-0-8]

    I came across this process using quite a bit of I/O on my server a while back. I tracked it down to a debug log that was being written to like crazy, because I had previously enabled debug logging for another issue.

    That is probably not what is causing it in your case, though. jbd2 is very generic: it is the journaling thread for the ext4 filesystem (the journaling block device), so the writes it commits are really generated by whatever else is writing to that filesystem. Further details would be needed to pinpoint the actual source of the I/O.
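
    If it helps, one rough way to see which processes are actually generating those writes on a kernel this old (the vm.block_dump sysctl is still there on 2.6.32) is to turn on block-level I/O logging for a short window and read it back out of dmesg. Just a sketch, and it needs root; it is also noisy, so ideally stop syslog for the few seconds it runs so its own log writes don't feed back into the output:

    # echo 1 > /proc/sys/vm/block_dump                 # start logging block I/O to the kernel log
    # sleep 30; echo 0 > /proc/sys/vm/block_dump       # collect for half a minute, then switch it back off
    # dmesg | grep -E 'WRITE|dirtied' | tail -n 200    # each line names the process doing the write

    Whatever process names dominate that output are what jbd2 is committing to the journal on behalf of.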
     
    samirj09, Aug 28, 2015 IP
  3. deathshadow

    deathshadow Acclaimed Member

    #3
    Actually @samirj09, you probably hit it on the head: it probably IS some form of error logging hogging things. I've seen this behavior countless times for various clients, and it's most often caused by things like old code no longer being valid after a PHP upgrade, or poorly written skins for forum or CMS software that hemorrhage errors like crazy, sometimes from something as simple as a skinner who has no business skinning forgetting to check isset before accessing a variable.

    The thing that actually kills you is usually NOT the error logging itself, but when the archive process kicks in. Suddenly IOWAIT shoots way up and the whole server bogs down.

    It's why poking your head into /var/log and looking for files that are WAY larger than they should be is on the "check it once a week" to-do list. PARTICULARLY if you upgrade interpreted languages like PHP frequently, or have artsy skins written by people who have no business designing websites. (see 99.99% of the scam artist nonsense at whorehouses like ThemeForest or TemplateMonster)
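
    For the lazy, something along these lines finds the worst offenders quickly. Just a sketch: /var/log is the usual CentOS location, the 100M threshold is arbitrary, and lsof may not be installed by default; adjust to taste:

    # du -sh /var/log/* 2>/dev/null | sort -rh | head    # biggest items under /var/log
    # find /var/log -type f -mtime -1 -size +100M        # large files written to within the last day
    # lsof +L1 | grep -i log                             # deleted log files still held open (still eating space and I/O)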
     
    deathshadow, Aug 30, 2015 IP
    postcd likes this.
  4. samirj09

    samirj09 Well-Known Member

    #4

    @deathshadow Interesting... Quick question though: which archive process are you referring to? Do you mean the process of rotating the logs (logrotate), or another process I'm not aware of?
     
    samirj09, Aug 31, 2015 IP
  5. deathshadow

    deathshadow Acclaimed Member

    #5
    logrotate is indeed it -- but on really heavy archiving operations the program itself sits there with its thumb up its backside thanks to IOWAIT, more so if there are other programs serving content at the same time. As such, the offending program itself will not show up in top as hogging time, because it's just waiting on the OS.

    That's what can make it so hard to diagnose: logrotate never reports itself as the thing hogging the drive.
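
    If anyone wants to confirm it or take the edge off, a couple of knobs worth knowing about. This is just a sketch assuming the stock cron-driven logrotate on CentOS 6; the paths are the defaults there:

    # /usr/sbin/logrotate -d /etc/logrotate.conf                      # dry run: shows what it would rotate without touching anything
    # ionice -c3 nice -n 19 /usr/sbin/logrotate /etc/logrotate.conf   # real run at idle I/O and CPU priority, so it yields to everything else

    Wrapping the call in /etc/cron.daily/logrotate with ionice the same way, and adding delaycompress to /etc/logrotate.conf so the freshly rotated file isn't gzipped in the same run, both help keep the IOWAIT spike from flattening everything else on the box.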
     
    deathshadow, Aug 31, 2015 IP