What is the best method to transfer/clone large MySQL databases to another server?


#1

I am new to MySQL database administration. Recently I have been facing a frustrating problem: I am running a web application in a shared hosting environment which uses a MySQL database that is about 3 GB in size.

For testing purposes, I have set up a XAMPP environment on my local macOS machine. To copy the online DB to my local machine, I used mysqldump on the server and then imported the dump file directly into MySQL:

# Server
$ mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u dbUser -p'dbPass' --extended-insert dbName > dbDump.sql

# Local machine
$ mysql -h 127.0.0.3 -u dbUser -p'dbPass' dbName < dbDump.sql

The only optimization here is the use of --extended-insert. However, the import takes about 10 hours!
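For what it's worth, this is the slightly more tuned pipeline I had in mind as a next step; the extra flags and the gzip compression are guesses based on the documentation, and I would not expect them to fix a 10-hour import on their own:

# Server: dump a consistent snapshot and compress it
$ mysqldump -h127.0.0.3 --default-character-set=utf8 --single-transaction --quick --extended-insert -u dbUser -p'dbPass' dbName | gzip > dbDump.sql.gz

# Local machine: decompress and import in one pipe
$ gunzip < dbDump.sql.gz | mysql -h 127.0.0.3 -u dbUser -p'dbPass' dbName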

The dump file already includes commands to disable unique and foreign key checks to speed up the import:

/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;

I am not an SQL expert and I am not sure whether these /*!40014 … */ commands are actually executed, so I added SET FOREIGN_KEY_CHECKS=0; to the file manually to make sure the checks are disabled. However, this does not make any difference.
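For reference, this is roughly what the manual override looks like when I drive the import from the mysql client instead; the autocommit wrapper is an extra idea taken from the bulk-loading tips in the MySQL manual, not something the dump file contains:

-- disable the expensive checks before loading
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;

-- load the dump from within the mysql client
SOURCE dbDump.sql;

-- commit everything at once and restore the defaults
COMMIT;
SET foreign_key_checks=1;
SET unique_checks=1;
SET autocommit=1;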

Why does the import take so much time? Is there a better or faster way to do this?

The server is not the fastest, but it takes only about 2 minutes to export/dump the data. That exporting is faster than importing is not surprising, but 300 times faster? That is a huge difference…

Isn't there another solution that would be faster? Copying the binary database files instead, for example? Anything would be better than using a text file as the transfer medium.
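Something like this is what I have in mind, assuming matching MySQL versions and real shell access on both machines (which shared hosting probably won't give me; the paths and service commands are just examples):

# Stop MySQL on both machines first so the files are in a consistent state
$ mysql.server stop

# Copy the whole data directory, not a single schema folder, because InnoDB
# keeps shared files (ibdata1, redo logs) outside the per-database directories
$ rsync -av user@server:/var/lib/mysql/ /usr/local/var/mysql/

$ mysql.server start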

This is not just about transferring the data to another machine for testing purposes. I also create daily backups of the database. If it ever became necessary to restore the DB, it would be pretty bad if the site were down for 10 hours…
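The backup itself is essentially the same mysqldump call, run nightly from cron and compressed, roughly like this (the schedule and paths are placeholders):

# crontab entry: nightly compressed dump (the % in date must be escaped in crontab)
0 3 * * * mysqldump -h127.0.0.3 -u dbUser -p'dbPass' dbName | gzip > /backups/dbName-$(date +\%F).sql.gz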

Thanks & Regards
Camillelola