One of my old hosts basically wooed me back into moving my database-driven sites to them with a lot of new features. The only problem is that one of my sites has a database that's around 65 MB. They won't dump it because they say it might crash the server. They want me to do it through phpMyAdmin, which means splitting it into 2 MB pieces. Is this reasonable? If so, what's the best way for me to do it? I kind of feel like if they wanted me to move my database sites to them, they should figure it out. I'm curious what others think.
If it's just made up of standard MyISAM tables, you can just copy the database files under the data folder from one computer to another. No dumping required.
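Something like this, assuming MyISAM tables and that you can stop the server (or at least lock the tables) first so nothing writes to the files mid-copy; the paths are just illustrative:

    # stop mysqld so the files aren't changing underneath the copy
    /etc/init.d/mysql stop
    # copy the whole database directory, permissions and all
    cp -a /var/lib/mysql/mydb /backup/mydb
    /etc/init.d/mysql start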
Well, my last move about two months ago took about half a day to a day. I had to move a database around 1.3 GB in size. I used the phpMyAdmin control panel to gzip the database, downloaded it, and then moved it to my new host. I asked them to install all the files again and they did, without any extra cost. I'm currently using asmallorange as a host. Uptime is great and so is their support.
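For anyone doing the import side themselves instead of asking the host: assuming you have shell access on the new server, it's a one-liner (myuser and mydb are placeholder names):

    gunzip < mydb.sql.gz | mysql -u myuser -pPASSWORD mydb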
I don't think it's reasonable. The claim that dumping 65 MB will crash their server seems odd to me. It's pretty easy: I dump all my MySQL DBs nightly and bzip2 them for backup. It isn't 65 MB, but it only takes a few seconds.

mopacfan, I'm not sure that always works well, especially if MySQL is running at the time or the DB is in use.

This is what I use to dump the whole thing:

    /usr/local/bin/mysqldump -uroot -pPASSWORD --opt --all-databases | bzip2 -c > /dir/mysql-backup/database.sql.bz2

It seems to work so far. It's 12 MB at the moment, and it takes about 10 seconds. All they have to do is use the same thing, just specifying your databases instead of --all-databases, zip it, and let you download it. I don't understand why reading from their database and writing out a file would crash the server. Unless it's like a 486... lol. Maybe they have a heavy DB load, but even then they could do it at off-peak time. They should have a backup anyway... one would think.
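To spell that out, dumping just your database instead of all of them would look something like this; myuser and mydb are placeholder names:

    mysqldump -u myuser -pPASSWORD --opt --databases mydb | gzip > mydb.sql.gz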
I've never had a problem, even when the DB is in use. I just make copies on the hard drive, then move the copies wherever they need to go, or keep them as backups. I just don't like messing with the command-line stuff. But that does look pretty easy.
http://dev.mysql.com/doc/mysql/en/mysqldump.html

Some info there; there's a lot in the MySQL docs. I haven't tried it myself, but I've been told that if you copy the DB files while the server is in the middle of a write, some of the DB may end up corrupt. I don't know for sure, that's just what I was told. =) I've restored with mysqldump before; it's kind of a pain. mysqlhotcopy looks better, perhaps.
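Restoring that kind of dump is basically the same pipe in reverse; this assumes the dump was made with --all-databases like the command above:

    bunzip2 -c /dir/mysql-backup/database.sql.bz2 | mysql -uroot -pPASSWORD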
Just to clarify, the servers aren't physically located in the same place. I suspect my host isn't even physically located next to the servers. As far as database corruption goes, the site they are being transferred to isn't live yet, so there would be no issue with submissions coming in at the same time as the transfer. I once saw a link for a program that sent the commands at predetermined intervals. It was meant to overcome timeouts, although it might be useful in this case if anyone is familiar with it. I split the dump into 6 zipped pieces as a compromise. I will be amazed if they come back and say it's still too big.
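For anyone doing the same thing, here's roughly how a dump can be split without chopping a line in half; this is a sketch assuming GNU split, and mydb.sql is a placeholder name:

    # -C packs whole lines into ~2 MB pieces, matching what the host asked for
    split -C 2M mydb.sql mydb_part_
    # caveat: multi-line statements like CREATE TABLE can still straddle two pieces,
    # so check the seams before feeding each piece to phpMyAdmin
    gzip mydb_part_*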
You can write a PHP script to do it for you as well. Just loop through all the tables (see the sketch below). Are all your articles served dynamically from the DB on every request? If so, have you considered publishing them to static files every now and then?
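If shell access is available, the same loop idea works without PHP; this is a sketch, with myuser, PASSWORD, and mydb as placeholders:

    # dump each table to its own gzipped file; -N drops the column-name header
    for t in $(mysql -u myuser -pPASSWORD -N -e 'SHOW TABLES' mydb); do
        mysqldump -u myuser -pPASSWORD mydb "$t" | gzip > "$t.sql.gz"
    done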
Yup. Lol, another project! I just finished the mod_rewrite work last night, and Weir has been fixing a few bugs as part of the wedding fund. I may do that in the future if speed becomes an issue. We're at around 15,000 submissions, so it's still pretty fast.
They decided to do it with the 6 split-up files. I still don't understand why they couldn't do it all in one go, but at least the problem is solved. Thanks for the feedback.