My story is: I have one big table, about 2 GB in size. It contains a column called "description" that stores a lot of text, which is basically what makes the table so big. Because of its size, the table works very slowly, so I want to make it smaller somehow. I have two ideas: 1) compress the text somehow before storing it and decompress it every time it is used; 2) move the column out of the main database table into a separate table, so I end up with a couple of small tables instead of one big one. I need your advice on what else I could do, or what the best approach would be. Thank you very much.
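To make idea (1) concrete, here is a minimal sketch of the compress-before-store / decompress-on-read round trip using Python's `zlib`. The function names are illustrative, not from any real schema; in MySQL itself you could achieve the same with the built-in `COMPRESS()`/`UNCOMPRESS()` functions and a BLOB column.

```python
import zlib

def compress_description(text: str) -> bytes:
    # Compress the UTF-8 text before storing it in a BLOB column.
    return zlib.compress(text.encode("utf-8"))

def decompress_description(blob: bytes) -> str:
    # Reverse the compression whenever the description is read back.
    return zlib.decompress(blob).decode("utf-8")

original = "some long description " * 200
stored = compress_description(original)

# The round trip is lossless, and repetitive text shrinks dramatically.
assert decompress_description(stored) == original
print(len(original), "->", len(stored))
```

The trade-off is CPU time on every read and write, and the compressed column can no longer be searched with `LIKE` or full-text indexes.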
It depends on your data access requirements, but the first thing to consider is a separate table linked in with an integer id (primary key on the new table, foreign key on the existing table).
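A quick sketch of that split, using SQLite in memory as a stand-in for MySQL (the table and column names `article`, `article_description`, etc. are made up for illustration): the big text lives in its own table, and the main table carries only an integer reference to it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The wide text goes into its own table...
    CREATE TABLE article_description (
        id   INTEGER PRIMARY KEY,
        body TEXT
    );
    -- ...and the main table keeps only an integer foreign key to it.
    CREATE TABLE article (
        id             INTEGER PRIMARY KEY,
        title          TEXT,
        description_id INTEGER REFERENCES article_description(id)
    );
""")
conn.execute("INSERT INTO article_description (id, body) VALUES (1, ?)",
             ("a very long description...",))
conn.execute("INSERT INTO article (id, title, description_id) "
             "VALUES (1, 'hello', 1)")

# Listing queries touch only the narrow table; the big text is joined in
# only when it is actually needed.
row = conn.execute("""
    SELECT a.title, d.body
    FROM article a
    JOIN article_description d ON d.id = a.description_id
    WHERE a.id = 1
""").fetchone()
print(row)  # ('hello', 'a very long description...')
```

Queries that never mention the description table scan far fewer bytes per row, which is usually where the win comes from.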
Or edit your my.cnf: if you have enough memory, tune the buffer settings and enable innodb_file_per_table. That should improve the speed.
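For reference, that tuning would look something like the fragment below. The option names are real MySQL/InnoDB settings, but the values are placeholders that must be sized to your server's actual RAM, not copied as-is.

```ini
# Illustrative my.cnf fragment -- tune values to your hardware.
[mysqld]
innodb_file_per_table  = 1     # one .ibd file per InnoDB table
innodb_buffer_pool_size = 2G   # cache as much of the working set as RAM allows
```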
Web servers typically have around 4 GB of RAM. These are only a few examples of what you can do; there are more things you can try to improve performance.
If I understand you correctly, you might not have to do anything... almost. First of all, you're saying the table is slow because each record is big, not because there are a lot of records, right? In any case, a well-indexed table with millions of records should perform fine.

Let's say you have a table with 10 fields (one of them being "description") and you need to retrieve 1000 records. You would have a query like this:

SELECT * FROM table WHERE field1=value

Because the "description" field is 4KB in size (for example), the result would return over 4000KB (4MB) of data. If a user's internet connection runs at 100KB/sec, it would take about 40 seconds to load the page.

So, instead of moving the "description" field to a different place, why don't you just run a query like this:

SELECT field1, field2, field3, field4 FROM table WHERE field1=value

If those fields are small (unlike "description"), it will take much less time to display 1000 results. Problem solved.

(If you need to display the "description" field, moving it to a different table will not help: you will still need to retrieve this data, and it's big, according to you. In this case I would suggest limiting the number of records that can be retrieved at the same time by adding something like "LIMIT 100" to the end of your query.)

Let me know if this helps you.
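The advice above, selecting only the narrow columns and capping the result set, can be demonstrated end to end with SQLite in memory (the table `t` and its field names are hypothetical, mirroring the example in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 TEXT, field2 TEXT, description TEXT)")

# 1000 rows, each dragging a ~4KB description along.
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [("value", f"row{i}", "x" * 4096) for i in range(1000)],
)

# Narrow query: project only the small columns and cap the row count,
# so none of the 4KB payloads travel to the client.
rows = conn.execute(
    "SELECT field1, field2 FROM t WHERE field1 = ? LIMIT 100", ("value",)
).fetchall()
print(len(rows))  # 100
```

A `SELECT *` over the same table would have shipped roughly 4 MB for the full 1000 rows; the narrow, limited query moves a few kilobytes.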
First, define "slow" for us. What is slow, and which specific queries are slow? What have you done to make them faster? What is your DB schema, and which indexes do you have? Are your web and DB servers on different machines?