how to make large MySQL databases faster

Discussion in 'Databases' started by w47w47, Oct 2, 2009.

  1. #1
    hi,

    any ideas on how to make a large MySQL database faster, so that you can read from it quickly in PHP without getting any duplicates?

    1. I tried SELECT DISTINCT..., but that automatically creates a temporary table, and that really slows down the server...

    2. I also tried UNION, but when I want to limit the SELECT to 30 results, it takes 30 from each table, so 60 in total when there are 2 tables... (see the sketch just below)
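
    (A minimal sketch of the UNION fix, assuming two hypothetical tables items_a and items_b with identical columns: parenthesise each SELECT and put ORDER BY/LIMIT after the whole UNION so the limit applies to the combined result. Note that plain UNION already removes duplicate rows, so no DISTINCT is needed on top.)

    Code:
    (SELECT id, title FROM items_a)
    UNION                              -- UNION removes duplicates; UNION ALL keeps them
    (SELECT id, title FROM items_b)
    ORDER BY id
    LIMIT 30;                          -- 30 rows total, not 30 per table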

    Also, is there some other solution? There's no need to split the big table, it can stay as one... just how to speed it up... :S

    best regards, w47
     
    w47w47, Oct 2, 2009 IP
  2. w47w47

    w47w47 Peon

    Messages:
    255
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #2
    but there shouldn't be any duplicates... DISTINCT slows it all down... and when I try to split the DB in 2, then when I SELECT from MySQL and limit it to 30 results, it returns something like 24 from the first table and 30 from the second, because it can't find more than 24 in the first one... maybe I have written it wrong in PHP? (see the sketch below)

    How do I speed it up? :S
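
    (A sketch of paging across two split tables with a single query, assuming hypothetical tables items_1 and items_2: UNION ALL combines both tables, and the one LIMIT/OFFSET at the end pages over the combined result, so it no longer matters how many rows each table can contribute.)

    Code:
    (SELECT id, title FROM items_1)
    UNION ALL
    (SELECT id, title FROM items_2)
    ORDER BY id
    LIMIT 30 OFFSET 0;   -- page 2 would use OFFSET 30, and so on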
     
    w47w47, Oct 2, 2009 IP
  3. w47w47

    w47w47 Peon

    Messages:
    255
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #3
    Or is it better to try some other database than MySQL? I have more than 2 million rows/results in my MySQL DB.
     
    w47w47, Oct 2, 2009 IP
  4. theapparatus

    theapparatus Peon

    Messages:
    2,925
    Likes Received:
    119
    Best Answers:
    0
    Trophy Points:
    0
    #4
    Depends on what you're doing and the server environment you're in.

    Is the server yours or is this shared hosting?

    What are the stats of the server?

    What program(s) are you using?

    What specific versions of PHP and MySQL are you running?
     
    theapparatus, Oct 2, 2009 IP
  5. w47w47

    w47w47 Peon

    Messages:
    255
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #5
    PHP and MySQL are the latest versions.
    It's shared hosting, with only MySQL on it.

    Dual Intel Xeon (Harpertown) 2.0 GHz quad-core processors, CentOS Enterprise Linux, 8 GB of RAM, and a connection speed of 100 Mbps (megabits per second).

    Programs:

    Program            Version
    Apache             2.2.11
    CentOS             .
    cPanel             11.24.5-RELEASE
    Curl               7.12.1
    MySQL              5.1.30
    phpMyAdmin         2.11.9.5
    Python             2.4.3
    Perl               5.8.8
    PHP                5.2.4
    ionCube Loader     3.1.34
    Zend Optimizer     3.3.3
    Ruby               1.8.7
    Rails              2.3.3
    OpenSSL            0.9.8e-fips-rhel5
     
    w47w47, Oct 2, 2009 IP
  6. w47w47

    w47w47 Peon

    Messages:
    255
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #6
    OK... I fixed this mess: added some indexes and split the one large table into several smaller tables, and now it works/loads like a rocket. :>
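
    (For anyone landing here later, a minimal sketch of the kind of index that gives this speed-up; the table and column names here are made up for illustration.)

    Code:
    -- Index the columns the slow queries filter and sort by:
    ALTER TABLE items ADD INDEX idx_category_created (category_id, created_at);

    -- Verify the query now uses the index instead of a full table scan:
    EXPLAIN SELECT id, title
    FROM items
    WHERE category_id = 7
    ORDER BY created_at DESC
    LIMIT 30;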

    thread closed.
     
    w47w47, Oct 5, 2009 IP
  7. rayqsl

    rayqsl Active Member

    Messages:
    91
    Likes Received:
    0
    Best Answers:
    1
    Trophy Points:
    53
    #7
    You'll get to love and hate indexes, w47w47.

    Glad you resolved your current issue.
     
    rayqsl, Oct 5, 2009 IP
  8. phones2me

    phones2me Peon

    Messages:
    35
    Likes Received:
    0
    Best Answers:
    0
    Trophy Points:
    0
    #8
    Generally, using MySQL alternatives tends to get rather expensive lol

    Glad you got it working.

    Indexing and data normalisation should always be considered first...
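
    (A quick illustration of the normalisation point, with made-up table names: move repeated values into their own table and reference them by id, which keeps the big table's rows small and its indexes effective.)

    Code:
    CREATE TABLE categories (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    );

    CREATE TABLE items (
        id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        category_id INT UNSIGNED NOT NULL,   -- references categories.id
        title       VARCHAR(255) NOT NULL,
        INDEX idx_category (category_id)
    );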
     
    phones2me, Oct 14, 2009 IP