My query looks like this:

$query = "SELECT * FROM `$state` WHERE category = '$tcategory' AND city = '$tcity'";

Looking at 'top' through SSH shows mysqld at 3988 MB VIRT / 280 MB RES, while at the same time Apache is using between 1000 MB and 2000 MB. The table has 10 million rows and is 1.2 GB. Any idea how to make this use less memory? The server at the moment is using 4 GB of RAM and 5 GB of swap, so all RAM is used up (swap is set to 10 GB), with no traffic to the site other than bots.
Don't fetch all the data at the same time; try using LIMIT. For example:

$query = "SELECT * FROM `$state` WHERE category = '$tcategory' AND city = '$tcity' LIMIT 0,100";

to show the first 100 rows.
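The LIMIT advice above amounts to paging through the result set instead of pulling every matching row into memory. A minimal sketch, using Python with an in-memory SQLite table standing in for the MySQL state table (the table and column names here are made up for the demo):

```python
import sqlite3

# Hypothetical stand-in for the real state table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER PRIMARY KEY, category TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO listings (category, city) VALUES (?, ?)",
    [("plumber", "Austin")] * 250,
)

def fetch_page(page, page_size=100):
    # Fetch one page of 100 rows at a time instead of the whole result set.
    offset = page * page_size
    return conn.execute(
        "SELECT * FROM listings WHERE category = ? AND city = ? LIMIT ? OFFSET ?",
        ("plumber", "Austin", page_size, offset),
    ).fetchall()

print(len(fetch_page(0)))  # 100
print(len(fetch_page(2)))  # 50 -- the last, partial page
```

Note the `?` placeholders: besides paging, binding `$tcategory` and `$tcity` as parameters instead of interpolating them into the string also closes the SQL-injection hole in the original query.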
Explicitly name the columns in the SELECT statement, limit the rows, and add appropriate indexes to the database as needed. Are category and city text columns? If so, comparing against them without an index means expensive scans through the whole table.
I'd imagine that if your result set is small, adding indexes to the appropriate columns would significantly improve the efficiency of the query. Add a composite index on (category, city). Note: building it on 10 million rows will take a while.
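In MySQL that composite index would be created with something like `ALTER TABLE your_table ADD INDEX (category, city)`. A self-contained sketch in Python/SQLite (hypothetical `listings` table and index name) showing that the planner then answers the two-column WHERE via the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER PRIMARY KEY, category TEXT, city TEXT)")

# Composite index covering both columns used in the WHERE clause.
conn.execute("CREATE INDEX idx_category_city ON listings (category, city)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM listings WHERE category = ? AND city = ?",
    ("plumber", "Austin"),
).fetchall()
print(plan)  # the plan detail should mention idx_category_city, not a table scan
```

Column order matters for a composite index: it can also serve queries filtering on category alone, but not on city alone.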
Well, I use SELECT * because I need all the information. As far as indexing goes, I'm not sure what to index, as there are millions of possible distinct queries: I have 700 categories and about 50k cities, so that would be 35 million different combinations. My site does load instantly, though, in milliseconds, even going through those 10 million records. I have MySQL at very high settings, so maybe that's why it uses as much RAM as possible:

key_buffer = 1500M
table_cache = 4000
sort_buffer_size = 5M
max_connections = 400
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 1500M
innodb_flush_log_at_trx_commit = 0
innodb_buffer_pool_size = 700M
innodb_additional_mem_pool_size = 50M
innodb_log_file_size = 175
innodb_log_buffer_size = 8M

Also, as far as...

$query = "SELECT * FROM `$state` WHERE category = '$tcategory' AND city = '$tcity'";

$state is just a view of all that data; all 50 states are in one table. I'm pretty sure that if all 50 states were separate, everything would use less, so is this something I can index instead of using a view, which I don't think helps?
From your answer, it seems that you don't understand database indexing. In an RDBMS, columns that are used a lot as search terms are usually indexed so that they are faster to search: when you search on that column, the DB doesn't have to scan the whole table, it only scans the index. The index is usually stored in a B-tree format, so searches are much faster, especially if you have a large dataset. The index does incur some overhead, but if you index the columns you use a lot in searches, the speed you gain back is usually worth it. Also note that on a frequently updated table the performance hit is bigger, because the index has to be updated on every write to the table.

From the looks of it, the table itself is not really designed for speed. Can you post the table names and columns? It would help a lot in deciding how to optimize the query.
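The scan-vs-index distinction described above can be seen directly in the query plan. A small sketch in Python/SQLite (hypothetical table and index names; MySQL's `EXPLAIN` shows the analogous change):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER PRIMARY KEY, category TEXT, city TEXT)")

q = "SELECT * FROM listings WHERE category = 'plumber'"

# Without an index the planner has no choice but a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + q).fetchall()[0][3]
print(before)  # e.g. "SCAN listings"

conn.execute("CREATE INDEX idx_category ON listings (category)")

# With the index, the same query walks the B-tree instead of the table.
after = conn.execute("EXPLAIN QUERY PLAN " + q).fetchall()[0][3]
print(after)  # e.g. "SEARCH listings USING INDEX idx_category (category=?)"
```

On a 10-million-row table, that difference between touching every row and descending a B-tree is what turns a heavyweight query into a millisecond lookup.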