
Place TempDB on SSD or virtual RAM disc to increase temp-table performance

Discussion in 'Databases' started by headgrowe, Aug 4, 2016.

  1. #1
    On a monthly basis we run a lot of temp calculations that take 4 days in total. We can't realistically change that many scripts, so I am searching for a config/hardware solution.

    Background

    It's not an ETL process, because we are combining bits of information from at least 10 applications (so I would call it calculations, or lots of UPDATE statements). This combination of information is only one stage of a bigger process that finally saves the calculated data into a DWH via a normal ETL process (SSIS). If this calculation stage fails (server crash), we can easily restart it.

    Problem

    The MSSQL server where this calculation runs has only two HDD RAID 1 arrays: one RAID 1 for data, indices and tempdb, and one for the log. The calculation produces so much I/O that the whole process takes days. The server has 180 GB of RAM, and in theory all the input data and results of this calculation would fit in just 5 GB of memory. That means we are using the database/resources in the wrong way.
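    To confirm where the I/O actually goes before buying hardware, the file-level I/O statistics can be queried. A minimal sketch using standard DMVs available in SQL Server 2014 (nothing here is specific to our setup; the counters are cumulative since the last service restart):

        -- Rank all database files by accumulated I/O stall time.
        SELECT
            DB_NAME(vfs.database_id)    AS database_name,
            mf.physical_name,
            vfs.num_of_reads,
            vfs.num_of_writes,
            vfs.io_stall_read_ms,
            vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id     = vfs.file_id
        ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

    If the tempdb files dominate this list, moving tempdb is the right lever; if the user database files dominate, the data and log volumes are the bottleneck instead.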

    Possible Solution

    I think we have to reduce the I/O in the first place, and then, in a second step, handle the I/O that is needed anyway. BTW, the server is MSSQL 2014 Standard Edition.

    1. Move tempdb onto a virtual RAM disk (ca. 40 GB). That way we move the main I/O into memory, right?
    2. Use more temp tables to reduce server logging and recovery time, and to make use of the new RAM disk (is there really less logging for temp tables?)
    3. Put the logs, indices and important tables on a new SSD RAID 5.
    4. Put all other data on the old HDD drives.
    Is this a good solution, or would you suggest the standard approach, i.e. putting tempdb on the SSDs as well?
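    For reference, relocating tempdb is just one MODIFY FILE per file plus a service restart, since tempdb is recreated from scratch on every startup. A minimal sketch, assuming a hypothetical R:\ RAM-disk (or SSD) volume and the default logical file names tempdev/templog:

        -- Point tempdb's files at the fast volume; takes effect on the
        -- next SQL Server service restart (tempdb is rebuilt at startup).
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb\tempdb.mdf');

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'R:\tempdb\templog.ldf');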

    Any help would be appreciated.

    Best regards
     
    headgrowe, Aug 4, 2016 IP
  2. jestep

    #2
    I all but stopped using MSSQL about 8 years ago, so I can't comment on specifics, but here are my thoughts.

    1. isn't going to fix much, since you're still at the mercy of disc I/O speed, and what you're suggesting is basically a partition on the same disc. I can't see any gain here.

    Moving it to its own SSD, or better yet a memory-based table, would help a ton. In the case of a memory table, however, you lose the data if the server crashes, and there isn't a mechanism to restore it when the server comes back online. Since you have enough RAM, this is an option I would seriously consider.
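    If you go the memory table route, SQL Server 2014 added memory-optimized tables (In-Memory OLTP), though as far as I know that feature is Enterprise-only in 2014, so check whether your edition has it. A rough sketch with made-up names (CalcDb, dbo.CalcStage); DURABILITY = SCHEMA_ONLY means the data lives purely in RAM and is gone after a restart, which sounds fine here since you said the stage can simply be rerun:

        -- One-time setup: a memory-optimized filegroup
        -- (FILENAME here is a directory, not a file).
        ALTER DATABASE CalcDb
        ADD FILEGROUP calc_inmem CONTAINS MEMORY_OPTIMIZED_DATA;

        ALTER DATABASE CalcDb
        ADD FILE (NAME = 'calc_inmem_file', FILENAME = 'D:\data\calc_inmem')
        TO FILEGROUP calc_inmem;

        -- Non-durable staging table: no disk I/O and no logging for its rows.
        CREATE TABLE dbo.CalcStage
        (
            Id    INT NOT NULL
                  PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
            Value DECIMAL(18, 4) NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);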

    2. Sounds counterproductive. The more tables you have, the more transactions you end up with, which increases overhead. Some benefit can be gained through logical table design, especially if there are tables with many columns where only a small number of them are modified. In that case, segmenting the table so that the commonly accessed data lives in a smaller table can help. But unless the database is poorly designed or there is a truly extreme number of columns, there usually isn't a ton to gain. This also denormalizes your design, so it can create additional complexity at the application level.
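    To illustrate the segmenting idea with made-up names: keep the handful of hot, frequently updated columns in a narrow table, and park the wide, rarely touched rest in a second table sharing the same key:

        -- Narrow "hot" table: small rows, cheap to update and cache.
        CREATE TABLE dbo.CustomerHot
        (
            CustomerId   INT NOT NULL PRIMARY KEY,
            Balance      DECIMAL(18, 2) NOT NULL,
            LastActivity DATETIME2 NOT NULL
        );

        -- Wide "cold" table: rarely modified columns, joined 1:1 by key.
        CREATE TABLE dbo.CustomerCold
        (
            CustomerId INT NOT NULL PRIMARY KEY
                REFERENCES dbo.CustomerHot (CustomerId),
            Name       NVARCHAR(200) NOT NULL,
            Address    NVARCHAR(400) NULL
            -- ...plus whatever other rarely touched columns exist
        );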

    3. Would definitely be an improvement. However, which logs are you referring to? I would not put error logs or any non-transaction logs on the SSD. Ideally, if you can swing it, throw the whole thing on a RAID 5 or 10 SSD array. The read/write speed of an SSD array compared to a RAID 1 HDD array is probably enough to make up for poor database design all by itself.
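    If you do add the SSD array, moving an existing transaction log over is straightforward. A sketch with hypothetical names (CalcDb, S:\log\); note the database goes offline briefly while the .ldf file is copied at the OS level:

        -- Register the new path (takes effect when the database next starts).
        ALTER DATABASE CalcDb
        MODIFY FILE (NAME = CalcDb_log, FILENAME = 'S:\log\CalcDb_log.ldf');

        ALTER DATABASE CalcDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
        -- Copy the existing .ldf to S:\log\ at the OS level, then:
        ALTER DATABASE CalcDb SET ONLINE;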
     
    jestep, Aug 17, 2016 IP