MySQL server optimization for HUGE table queries

Discussion in 'MySQL' started by afactory, Mar 13, 2009.

  1. #1
    Hi guys,

I hope you can help me find a solution for my startup project. It has grown very large right from the beginning ...

    I have a huge table with about ~200M rows.
    Table structure:

    CREATE TABLE `table` (
      `id` int(11) NOT NULL auto_increment,
      `id1` int(11) default NULL,
      `id2` int(11) default NULL,
      `num1` smallint(5) NOT NULL,
      `num2` decimal(11,2) default NULL,
      `num3` decimal(7,2) default NULL,
      PRIMARY KEY  (`id`),
      UNIQUE KEY `id1_2_id2` (`id1`,`id2`),
      KEY `id1` (`id1`),
      KEY `id2` (`id2`)
    ) ENGINE=MyISAM;
    Code (markup):

The whole table is ~4G, indexes ~10G.

Even on quite simple queries (I have more complicated ones with GROUP BY and a join to a 20M-row table), server performance is very low. For example, this query:

    select count(tp.id), sum(tp.num1), sum(tp.num2)
    from `table` tp
    where id1 = 187085
    Code (markup):
    which aggregates over about 20K matching rows (yes, I need all of them) and runs for about 22 seconds. As you can see, that's absolutely unacceptable.

    Do you think it makes sense to tune my dedicated server, or do I need to redesign my database structure?

    My dedicated box:

    Win Server 2003
    Core2Duo 2.13Ghz
    1G of RAM

    This query is the most expensive one:

    select count(tp.id), sum(tp.num1), sum(tp.num2), date_format(from_unixtime(t.date), "%Y%m%d") as d
    from `table` tp
    inner join table2 t on t.id = tp.id2
    where id1 = 7366
    group by d;
    Code (markup):
    table2 contains about 20M rows (everything is indexed, and it is also quite large).
    As a result, this query returns ~400 rows and runs for about 160 seconds.

    Do you have any ideas how to optimize it?
    Do you need me to attach my SHOW GLOBAL STATUS output here?

    Thank you in advance.
     
    afactory, Mar 13, 2009 IP
  2. jestep

    #2
    In my experience, with the amount of RAM you have it may not be possible to get good performance given the size and complexity of your database. Even with 4GB, it's going to be tough to get a usable database, especially if the size is going to keep growing.
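
    With 1G of RAM, the MyISAM key cache is also far smaller than your ~10G of indexes, so most index lookups end up hitting disk. A quick way to confirm that (just a diagnostic, not a fix):

    -- How big the MyISAM index cache currently is.
    SHOW VARIABLES LIKE 'key_buffer_size';
    -- Key_reads vs. Key_read_requests gives a rough index-cache miss rate.
    SHOW GLOBAL STATUS LIKE 'Key_read%';
    Code (markup):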

    From your examples, the standalone index on `id1` is redundant, because the unique key on (`id1`,`id2`) already covers lookups on `id1`, and it's probably taking up a couple of GB by itself.
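
    Something like this should reclaim that space (just a sketch based on the CREATE TABLE you posted; double-check the actual index name with SHOW INDEX and take a backup first):

    -- The unique key on (`id1`,`id2`) already handles lookups on `id1` alone,
    -- so the single-column `id1` index is dead weight.
    ALTER TABLE `table` DROP INDEX `id1`;
    Code (markup):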

    I would also go with InnoDB rather than MyISAM tables.
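
    The conversion itself is a single ALTER, sketched below; expect it to run for a long time on 200M rows, and set innodb_buffer_pool_size in my.ini to most of your available RAM before you rely on it:

    -- Rebuilds the table in the new engine; do this in a maintenance window,
    -- it will take a while on a table this size.
    ALTER TABLE `table` ENGINE=InnoDB;
    Code (markup):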

    Take a look at this site for some ideas on how to optimize it: http://www.mysqlperformanceblog.com/
     
    jestep, Mar 14, 2009 IP
  3. afactory

    #3
    Yeah, I've read that site before as well. I've been trying covering indexes and lots of other index combinations, but without any success :(
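
    For reference, the kind of thing I've been experimenting with for the first query looks roughly like this (the index name is just one I made up):

    -- Covering index for the "where id1 = ?" aggregation over num1 and num2.
    ALTER TABLE `table` ADD INDEX `id1_num1_num2` (`id1`, `num1`, `num2`);

    -- Since `id` is NOT NULL, count(*) is equivalent to count(tp.id) and lets
    -- MyISAM answer the whole query from the index ("Using index" in EXPLAIN).
    select count(*), sum(tp.num1), sum(tp.num2)
    from `table` tp
    where id1 = 187085;
    Code (markup):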

    I'll try to increase my dedicated server's RAM to 8G this week.
     
    afactory, Mar 14, 2009 IP
  4. timarcher52

    #4
    Perhaps build a summary table that is updated every time the table you posted above changes. You'll have to write some application logic to do this. For example, I have a table that tracks every time someone plays a game, storing their user id and a game id. The table grows large over time, but my application logic updates a summary table with the total number of times each game has been played whenever the detailed game-play tracking table gets updated.
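
    Applied to your date-grouped query, it could look roughly like this (table and column names are made up for illustration, and the update assumes your application inserts one detail row at a time):

    -- One row per (id1, day); the application keeps it in sync.
    CREATE TABLE `daily_summary` (
      `id1` int(11) NOT NULL,
      `d` date NOT NULL,
      `row_count` int(11) NOT NULL default 0,
      `sum_num1` bigint(20) NOT NULL default 0,
      `sum_num2` decimal(15,2) NOT NULL default 0.00,
      PRIMARY KEY (`id1`, `d`)
    ) ENGINE=MyISAM;

    -- Run by the application after each insert into the detail table;
    -- parameters: id1, the row's date, num1, num2.
    INSERT INTO `daily_summary` (id1, d, row_count, sum_num1, sum_num2)
    VALUES (?, ?, 1, ?, ?)
    ON DUPLICATE KEY UPDATE
      row_count = row_count + 1,
      sum_num1  = sum_num1 + VALUES(sum_num1),
      sum_num2  = sum_num2 + VALUES(sum_num2);

    -- The 160-second report then becomes a small primary-key range read:
    -- SELECT d, row_count, sum_num1, sum_num2 FROM `daily_summary` WHERE id1 = 7366;
    Code (markup):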
     
    timarcher52, Mar 15, 2009 IP