hi, any ideas on how to make a large mysql database faster, so you can read it faster from php without any duplicates?

1. i tried SELECT DISTINCT... but that automatically creates a temporary table, and that really slows down the server...
2. i also tried UNION, but when i want to limit the SELECT to 30 results it pulls 30 from each table, so 60 if there are 2 tables...

is there some other solution? no need to split the big table, it can stay as one... just how to speed it up.... :S

best regards, w47
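(for context, a minimal sketch of the DISTINCT problem described above; the table and column names here are made up, not from the actual schema:)

    -- without a suitable index, MySQL resolves DISTINCT through a
    -- temporary table, which EXPLAIN reports as "Using temporary":
    EXPLAIN SELECT DISTINCT title FROM posts;

    -- an index whose leading column is the DISTINCT column lets MySQL
    -- read the values straight off the index instead:
    CREATE INDEX idx_posts_title ON posts (title);
    EXPLAIN SELECT DISTINCT title FROM posts;
    -- the Extra column should now show "Using index" instead of
    -- "Using temporary"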
but there shouldn't be any duplicates... DISTINCT slows it all down... and when i try to split the table in 2 and then select from mysql with a limit of 30 results, it returns like 24 from the first one and 30 from the second one, because it can't find more than 24 in the first table... maybe i have written it wrong in php? how to speed it up? :S
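on the limit problem: if the LIMIT is written inside each SELECT of the UNION, each half is limited separately; parenthesising the SELECTs and putting a single LIMIT at the end applies it to the combined result instead. a sketch with made-up table names:

    -- limits each branch separately: can return up to 60 rows total
    (SELECT id, title FROM posts_a LIMIT 30)
    UNION
    (SELECT id, title FROM posts_b LIMIT 30);

    -- one LIMIT after the whole UNION caps the combined result at 30
    -- rows; plain UNION (without ALL) also removes duplicates
    (SELECT id, title FROM posts_a)
    UNION
    (SELECT id, title FROM posts_b)
    ORDER BY id
    LIMIT 30;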
or is it better to try some other database than mysql? i have more than 2 million rows in my mysql DB.
Depends on what you're doing and the server environment you're in. Is the server yours, or is this shared hosting? What are the server's specs? What software is it running? Which specific versions of PHP and MySQL are you on?
php and mysql are the latest versions. it's shared hosting, with only mysql on it. dual Intel Xeon (Harpertown) 2.0 GHz quad-core processors, CentOS Enterprise Linux, 8 GB of RAM, and a 100 mbps (megabits per second) connection. installed software:

Apache: 2.2.11
CentOS:
cPanel: 11.24.5-RELEASE
Curl: 7.12.1
MySQL: 5.1.30
phpMyAdmin: 2.11.9.5
Python: 2.4.3
Perl: 5.8.8
PHP: 5.2.4
ionCube Loader: 3.1.34
Zend Optimizer: 3.3.3
Ruby: 1.8.7
Rails: 2.3.3
OpenSSL: 0.9.8e-fips-rhel5
ok... i fixed this mess. added some indexes and split the one large table into several smaller tables, and now it loads like a rocket. :> thread closed.
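for anyone finding this thread later, the fix described above probably looked something like the following; the table and column names are placeholders, and native partitioning (available from MySQL 5.1) is one way to get the "several smaller tables" effect without changing any queries:

    -- index the columns used in WHERE / ORDER BY so lookups stop
    -- scanning all 2 million rows:
    CREATE INDEX idx_posts_created ON posts (created_at);

    -- MySQL 5.1 can also split a table into physical pieces itself,
    -- assuming the partitioning column (here the primary key) is part
    -- of every unique key on the table:
    ALTER TABLE posts PARTITION BY HASH(id) PARTITIONS 8;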
generally, using mysql alternatives tends to get rather expensive lol. glad you got it working. indexing and data normalisation should always be considered first...
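a tiny sketch of what normalisation means here, with invented names: repeated text values move into their own lookup table, and the big table keeps only a small indexed key, which also removes one common source of duplicate rows:

    -- before: posts repeats the full category name on every row
    -- after: the name lives once in a lookup table
    CREATE TABLE categories (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL,
        UNIQUE KEY uq_categories_name (name)
    );

    ALTER TABLE posts
        ADD COLUMN category_id INT UNSIGNED,
        ADD INDEX idx_posts_category (category_id);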