I find myself needing to shuffle MySQL data around more often than I would expect. Usually this is about dumping one table to another database. For example, I sometimes want to get some production data into a test or development database to reproduce an issue. On my local box I often need to recreate an existing database structure for development purposes.

The task came up often enough that I took the time to write a little helper script for it. I wanted to keep it as general as possible, so I came up with the following command line interface.

$ migrate.pl --from srchost:srcuser:srcpass:srcdb \
    --to dsthost:dstuser:dstpass:dstdb \
    --dump-table table1 --dump-table table2 \
    --rename-table tableold:tablenew

The --from and --to arguments are mandatory and define the source and destination database. Since the databases could be anywhere (read: on different hosts), one specifies the database’s host and credentials, too.
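For illustration, the same colon-separated spec can be pulled apart in shell. The hostnames and credentials below are made-up placeholders; also note that a password containing a ':' would break this naive split (the script shares the same limitation):

$ cat parse-spec.sh
# Split a host:user:pass:db spec on colons:
spec="srchost:srcuser:srcpass:srcdb"
IFS=: read -r host user pass db <<< "$spec"
echo "$user@$host -> database $db"   # prints "srcuser@srchost -> database srcdb"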

The actual table(s) to be copied are set via --dump-table, which can be provided multiple times. Likewise, --rename-table will rename a table in the target database. The script does not dump & rename within one step.
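Under the hood, the rename option just wraps a RENAME TABLE statement for the mysql client. Sketched in shell, with hypothetical host and credentials:

$ cat rename-sketch.sh
# Split the old:new pair and build the statement the script would run
# (host, user, and password are made-up placeholders):
OLDNEW="tableold:tablenew"
old="${OLDNEW%%:*}"
new="${OLDNEW#*:}"
echo "mysql -h dsthost -u dstuser -pdstpass dstdb -e \"RENAME TABLE $old TO $new;\""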

Using a script with a nice command line interface has several advantages.

  • It’s usually faster than any GUI MySQL client.
  • Rerunning the same command, or a slightly modified one (e.g. dumping another table), is simple and efficient.
  • It’s less error-prone than manually typing the MySQL commands and/or clicking around in a GUI.

Here is the script’s code:

#!/usr/bin/perl
# Shuffle some MySQL data around.

use strict;
use warnings;
use Getopt::Long;

sub main {
    my %opts = (dump_tables => [], rename_tables => []);
    GetOptions(
        "from=s"          => \$opts{from},
        "to=s"            => \$opts{to},
        "dump-table=s@"   => $opts{dump_tables},
        "rename-table=s@" => $opts{rename_tables},
        "no-data"         => \$opts{no_data},
        "help"            => sub { usage(0) },
    ) or die("Error in command line arguments!");
    my $from = $opts{from};
    my $to = $opts{to};
    if (!defined($from) || !defined($to)) {
        usage(1);
    }

    # Getting the access credentials:
    my ($from_host, $from_user, $from_pass, $from_db) = split(/:/, $from);
    my ($to_host, $to_user, $to_pass, $to_db) = split(/:/, $to);

    # Do some sanity checking:
    print "Data from: $from_user\@$from_host.$from_db\n";
    print "Data to  : $to_user\@$to_host.$to_db\n";
    print "Continue? [y/N] ";
    my $input = <STDIN>;
    chomp($input);
    if ($input ne "y" && $input ne "Y") {
        exit 0;
    }

    # Export using the least intrusive database access:
    my $mysql_cmd_export = "mysqldump --single-transaction --lock-tables=false --skip-add-locks";
    $mysql_cmd_export .= " -h $from_host -u $from_user -p$from_pass";
    if ($opts{no_data}) { # Only copy table structure
        $mysql_cmd_export .= " --no-data";
    }

    # The import is simply a mysql client reading from stdin:
    my $mysql_cmd_import = "mysql";
    $mysql_cmd_import .= " -h $to_host -u $to_user -p$to_pass";

    # Dump all specified tables:
    for my $table (@{$opts{dump_tables}}) {
        my $cmd = "$mysql_cmd_export $from_db $table | $mysql_cmd_import $to_db";
        print "Dumping $from_db.$table to $to_db.$table ...\n";
        system($cmd) == 0 or die("Dumping $table failed: $?");
    }

    # Rename all specified tables:
    for my $oldcolonnew (@{$opts{rename_tables}}) {
        my ($old, $new) = split(/:/, $oldcolonnew);
        my $cmd = "$mysql_cmd_import $to_db -e \"RENAME TABLE $old TO $new;\"";
        print "Renaming $to_db.$old to $to_db.$new ...\n";
        system($cmd) == 0 or die("Renaming $old failed: $?");
    }
}

sub usage {
    my ($ret) = @_;
    print "Mandatory:\n";
    print "$0 --from 'host:user:pass:db' --to 'host:user:pass:db'\n";
    print "\n";
    print "Optional:\n";
    print "--dump-table <tablename>\n";
    print "This will completely dump the named table from \$from to \$to.\n";
    print "\n";
    print "--rename-table <oldname>:<newname>\n";
    print "This will rename the named table from \$old to \$new in \$to.\n";
    print "\n";
    print "--no-data\n";
    print "Only copy the table structure, no rows.\n";
    exit($ret);
}

main();



The code is pretty straightforward, I believe. I’d like to highlight some points, though.

Using the password on the command line is insecure. Anyone with read access to the process list (or your shell history) can read the password. This script is for development purposes, so I think it is okay to leave it that way.

There is a --no-data switch which copies only a table’s structure, without any rows.
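In terms of the generated command line, the switch simply appends mysqldump’s --no-data flag to the export command, so only the CREATE TABLE statements get transferred. A quick shell sketch of that conditional append:

$ cat no-data-sketch.sh
# Append --no-data to the export command when structure-only copying is wanted:
EXPORT="mysqldump --single-transaction --lock-tables=false --skip-add-locks"
NO_DATA=1
if [ "$NO_DATA" = 1 ]; then
    EXPORT="$EXPORT --no-data"
fi
echo "$EXPORT"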

I am using mysqldump’s --single-transaction, --lock-tables=false & --skip-add-locks switches to make the dumping process faster. This makes sure the database is not blocked (for other applications) while the dump is going on, which is pretty important for shared databases. The downside is that the dumped data may not be consistent (when multiple tables are dumped), but I never experienced any problems with that.
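Put together, the script pipes one mysqldump invocation per table straight into a mysql client on the target, roughly like this (all hosts, users, and passwords below are placeholders):

$ cat pipeline-sketch.sh
# Assemble the same export | import pipeline the script builds for one table:
EXPORT="mysqldump --single-transaction --lock-tables=false --skip-add-locks -h srchost -u srcuser -psrcpass"
IMPORT="mysql -h dsthost -u dstuser -pdstpass"
CMD="$EXPORT srcdb table1 | $IMPORT dstdb"
echo "$CMD"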

Before actually doing anything, the script prints the source & target databases and asks the user to confirm. This is the last resort before overwriting any existing data. It’s kind of funny: the fewer steps you have to do manually, the faster you can do stupid things…
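The prompt logic amounts to a default-No confirmation; the same pattern in shell (the function name confirm is my own):

$ cat confirm-sketch.sh
# Default-No confirmation: anything but y/Y aborts.
confirm() {
    printf 'Continue? [y/N] '
    read -r answer
    case "$answer" in
        y|Y) return 0 ;;
        *)   return 1 ;;
    esac
}

# Non-interactive demo: pipe an answer in.
echo n | ( confirm && echo "continuing" || echo "aborted" )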