Has anyone had any experience with mysqld getting slower over time until you finally restart the process? The database server process for this forum ran for about 120 days (under moderate load: Queries per second avg: 109.454), getting slower and slower, until this morning I couldn't stand it anymore. After trying everything else, I restarted the mysqld process, and instantly everything is fast again. That really sucks IMO... I'm hoping it's a bug in the version I'm running (4.1.10) or something, so I'm off to dig through MySQL's bug database to see if it's been fixed. Just wondering if anyone else has experienced the same thing?
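For reference, that "Queries per second avg" figure comes straight out of mysqladmin status, so a quick way to watch for this kind of degradation without restarting is to snapshot the counters while it's slow (login details here are just placeholders):

mysqladmin -u root -p status
mysqladmin -u root -p processlist
mysqladmin -u root -p extended-status | grep -Ei 'slow_queries|threads_|qcache'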
The only thing I saw in the bug DB for that version was this: http://bugs.mysql.com/bug.php?id=9594

I'm running 4.0.22 and haven't had any issues with a pretty large DB for 220 days and counting.
Perhaps you just explained an issue on my shared server that we haven't pinpointed. Apache seems up and FTP seems up, but the forum doesn't respond and throws MySQL errors. I'll check the details. Thanks for the heads up.
I had a similar issue with my provider when I was running on a shared service. It turned out the mount point for the tmp directory was getting corrupted because it wasn't large enough for all the people sharing the database server. You might have them check that as well.
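If it helps, something like this shows where mysqld's tmp directory lives and whether that mount is running out of space (the /tmp path is just an example; use whatever the first command reports):

mysql -u root -p -e "SHOW VARIABLES LIKE 'tmpdir'"
df -h /tmp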
Using Ver 14.7 Distrib 4.1.11, for pc-linux-gnu (i386) here.

mysqlbox:~# uptime
11:44:17 up 84 days, 14:46, 3 users, load average: 3.52, 4.24, 3.79

Queries per second avg: 175.233

This may be totally irrelevant though, Shawn, because I do a backup every night that stops the server, tars up the DB dir and then restarts it. While that happens it fails over to one of the slave DB servers for the 1-2 mins of downtime.
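In case it's useful, a quick sanity check that a slave is caught up before letting it take over could look something like this (slave-db is a placeholder hostname):

mysql -h slave-db -u root -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'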
I don't see this on a version a few releases back from 4.1.10; I've run it for months with only fractional load on CPU and memory.
Why are you taking down the DB server? You can do a hot backup just fine... Anyway, it happened again, so I just upgraded to MySQL 4.1.14. Hopefully whatever was causing it has been fixed since 4.1.10.
Well DP, I like to do a full raw backup... tar of /var/lib/mysql nightly. It rolls over to the slave for the ~1 min of downtime and I get a good backup. I had a really bad experience with the mysqldump tools...
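Roughly speaking the nightly script is nothing fancier than this (the init script name and backup path are examples, adjust for your box):

#!/bin/sh
# stop mysqld, tar up the raw data dir, start it again
/etc/init.d/mysql stop
tar czf /backups/mysql-`date +%Y%m%d`.tar.gz /var/lib/mysql
/etc/init.d/mysql start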
I run a couple of websites powered by MySQL databases. One of them I've had no problems with, apart from when lots of users log in at once and the database won't accept any more connections. The other one had a similar problem to the one mentioned above: the site ran slower and slower and then stopped accepting connections altogether; even after "optimizing" the database and clearing the cache, the site still refused to run. So I moved hosts, and now it's all OK and the database is busier than ever. Drew.
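On the "won't accept any more connections" part, it's worth comparing these two numbers; if Max_used_connections keeps hitting the max_connections limit, the limit can be raised in my.cnf (the 200 below is just an example value):

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'"
mysql -u root -p -e "SHOW STATUS LIKE 'Max_used_connections'"

# in my.cnf under [mysqld]:
# max_connections = 200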
I know this is an old thread, but since we have two MySQL gurus here... (well, at least people who have experience with it)... I'm running mysqlhotcopy every night, and since the tar.gz of the DB is ~5 GB it takes a while to back up (~30 min to copy, tar.gz and send to a remote machine). As you know, mysqlhotcopy locks all the tables in the DB and copies all the physical files in the /data/ directory. I'm wondering if there's a better way... trying to avoid downtime without installing a second server. I haven't tried mysqldump, but I suspect a dump backup would take longer and be much bigger. And a question for Shomoney: when you fail everything over to the slave server for 2-5 minutes, are you able to insert/update records on the slave, and if so, how do you propagate those updates back to the primary when it comes back up?
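For reference, the nightly hotcopy job described above amounts to something like this (the user, password, DB name, paths and remote host are all made up for illustration):

mysqlhotcopy --user=backup --password=XXXX forumdb /backups/hotcopy/
tar czf /backups/forumdb-`date +%Y%m%d`.tar.gz /backups/hotcopy/forumdb
scp /backups/forumdb-`date +%Y%m%d`.tar.gz backuphost:/mysql-backups/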