Best way to automatically backup mysql db's?

I was wondering what the recommended method is to automatically back up MySQL databases. Is it OK to just set up a cron job to tar.gz everything in the db folder?

Edit:

I just found mysqldump... I guess this would be the preferred method.
 
That may leave you with a corrupted database after restoring. I find it best to lock all tables for writing and take a snapshot of the underlying file system. ZFS is ideal for this, as its snapshotting is really fast.
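For illustration, a minimal sketch of that approach, assuming the MySQL datadir lives on a hypothetical ZFS dataset named tank/mysql. FLUSH TABLES WITH READ LOCK only holds while the client session stays open, so the snapshot has to be taken from inside that same session; the mysql client's built-in "system" command shells out for this:

```shell
# Sketch only -- the dataset name tank/mysql is an assumption.
SNAP="tank/mysql@backup-`date +%Y.%m.%d`"
cat > /tmp/snapbackup.sql <<EOF
FLUSH TABLES WITH READ LOCK;
system zfs snapshot $SNAP
UNLOCK TABLES;
EOF
# Run the whole session in one go so the lock is held across the snapshot:
# mysql -u root -p < /tmp/snapbackup.sql
# Afterwards the snapshot can be copied off-host at leisure:
# zfs send $SNAP | ssh user@backuphost "zfs receive backup/mysql"
```

The tables are only locked for the instant the snapshot takes, not for the whole copy.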
 
so far i have this:

Code:
#!/bin/sh
#
MyBackup="mysql.backup.`date +%Y.%m.%d.at.%H.%M.%S`.tgz"
/usr/local/bin/mysqldump -uxxxxx -pxxxxxxxx -hxxxxxxxx -c --add-drop-table forum > /var/backups/mysql/backup.sql
#
(cd /var/backups/mysql/ && tar cfz $MyBackup backup.sql)
rm /var/backups/mysql/backup.sql
scp /var/backups/mysql/$MyBackup user@host:/my/backup/dir

any tips would be appreciated.
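For what it's worth, the intermediate plain-text dump (and the separate tar/rm steps) can be skipped by piping mysqldump straight into gzip. A sketch using the same placeholder credentials and paths as above:

```shell
#!/bin/sh
# Same placeholder credentials/host as the script above.
DIR=/var/backups/mysql
BACKUP="$DIR/mysql.backup.`date +%Y.%m.%d.at.%H.%M.%S`.sql.gz"
mkdir -p "$DIR"
# The uncompressed dump never touches disk, and there is nothing to rm:
/usr/local/bin/mysqldump -uxxxxx -pxxxxxxxx -hxxxxxxxx -c --add-drop-table forum | gzip > "$BACKUP"
# Ship it off-host exactly as before:
# scp "$BACKUP" user@host:/my/backup/dir
```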
 
Here's a bit of a perl script I made for work:

Code:
## go through each database from the list and dump the database
script_logline( "Backup MySQL databases." );
$failed = "";
foreach my $database (@databases) {
        ## all databases with _test / _old are not backed up
        if ( $database =~ /_test/ || $database =~ /_old/ || $database =~ /lost\+found/ ) {
                next;
        }
        ## start mysqldump utility for this database
        script_logline( "Dumping $database." );
        # Keep track of errors
        my $error = 0;

        # We need to capture STDERR
        # Save the original STDERR
        my( $ORIGSTDERR );
        open( $ORIGSTDERR, ">&", \*STDERR );

        # Create a temporary filehandle to catch STDERR
        my( $CATCHERR);
        if( open( $CATCHERR, "+>", undef ) ) {
                open( STDERR, ">&", $CATCHERR );
                # From here STDERR is captured

                # Execute mysqldump
                if( open( MYSQLDUMP, "$mysqldump --defaults-extra-file=/var/mysql/my.pwd --skip-lock-tables -q $database |" ) ) {
                        # Execute gzip
                        if( open( GZIP, "| $gzip -c > $backupdir/$database-$date.sql.gz" ) ) {
                                while( <MYSQLDUMP> ) {
                                        # Pipe the data from mysqldump to gzip
                                        print GZIP;

                                }
                                close( GZIP );
                        } else {
                                script_logline( "gzip failed for $database" );
                                $failed .= "- gzip failed for $database\n";
                                $error++;
                        }
                        close( MYSQLDUMP );
                } else {
                        script_logline( "mysqldump failed for $database" );
                        $failed .= "- mysqldump failed for $database\n";
                        $error++;
                }

                # Restore STDERR
                open( STDERR, ">&", $ORIGSTDERR );

                # Lets see if it produced any errors
                seek( $CATCHERR, 0, SEEK_SET );   # SEEK_SET comes from "use Fcntl ':seek';"
                while( <$CATCHERR> ) {
                        my( $output ) = $_;
                        script_logline( "Error during mysqldump($database): $output" );
                        $failed .= "- Errors during mysqldump($database): $output";
                        $error++;
                }
                close( $CATCHERR );
        } else {
                my( $output ) = $!;
                script_logline( "Unable to open temp file: $output" );
                $failed .= "- Unable to open temp file: $output\n";
                $error++;
        }
        if( $error == 0 ) { # Only clean up old backups if the backup succeeds
                cleanupdirectory($backupdir, $database);
        } else {
                script_logline( "Backup of $database produced errors, not cleaning old backups" );
                $failed .= "- Backup of $database produced errors, not cleaning old backups\n";
        }
}

if ( $failed ne "" ) {
        $failed = "Failure in backing up the following databases:\n\n$failed\n";
        $failed .= "Check the log file /local/backup/mysqlbackup.log.\n";
        script_logline( "Sending email to SysMan mailbox" );
        mail_support($failed);
}

script_logline( "Done dumping databases" );

mail_support and script_logline are functions from a module we wrote ourselves, but they should be rather self-explanatory.
 
mysqldump is quick, easy and relatively painless for small databases, but it's next to useless for bigger setups (don't forget that it'll usually take many times longer to restore your data than to back it up!). For larger setups, the trick is to FLUSH TABLES WITH READ LOCK, take a filesystem snapshot, UNLOCK TABLES, mount the snapshot and copy/compress your data away before unmounting and removing the snapshot.

If you're doing this on a replicated slave server, you'll also need to make note of the Relay_Master_Log_File and Exec_Master_Log_Pos values from SHOW SLAVE STATUS output after you open the lock.

Incidentally, pbzip2 in ports is a superb tool for (de)compressing large amounts of data.
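Putting that together, a hedged sketch of the whole procedure on a slave (the dataset name tank/mysql, the backup paths, and the snapshot name are assumptions; substitute your own filesystem's snapshot command). The mysql client's tee/notee commands capture the session output, so Relay_Master_Log_File and Exec_Master_Log_Pos from SHOW SLAVE STATUS land in a file next to the backup:

```shell
# Sketch only: tank/mysql, /var/backups/mysql and the snapshot name are
# placeholders for whatever your setup uses.
cat > /tmp/slavesnap.sql <<'EOF'
FLUSH TABLES WITH READ LOCK;
tee /var/backups/mysql/slave-status.txt
SHOW SLAVE STATUS\G
notee
system zfs snapshot tank/mysql@nightly
UNLOCK TABLES;
EOF
# mysql -u root -p < /tmp/slavesnap.sql
# ZFS exposes the snapshot read-only under .zfs/snapshot; compress from
# there with pbzip2, then drop the snapshot:
# tar cf - /tank/mysql/.zfs/snapshot/nightly | pbzip2 > /var/backups/mysql/nightly.tar.bz2
# zfs destroy tank/mysql@nightly
```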
 
mjb said:
mysqldump is quick, easy and relatively painless for small databases, but it's next to useless for bigger setups (don't forget that it'll usually take many times longer to restore your data than to back it up!).
True, but mysqldump also makes it easier to restore just a single table, or even only parts of a table.

For larger setups, the trick is to FLUSH TABLES WITH READ LOCK, take a filesystem snapshot, UNLOCK TABLES, mount the snapshot and copy/compress your data away before unmounting and removing the snapshot.
I know it's not a problem for MyISAM tables but does this also work for InnoDB?
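As an aside, single-table restores from a full dump are easy to script, because mysqldump writes a "-- Table structure for table `name`" comment before every table. A sketch that carves one table out with awk (the table and file names here are made up for illustration):

```shell
# Build a toy two-table dump to demonstrate on:
cat > backup.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE users (id int);
-- Table structure for table `posts`
CREATE TABLE posts (id int);
EOF
# Toggle printing on at the wanted table's header, off at the next one:
awk '/^-- Table structure for table/ { p = /`users`/ } p' backup.sql > users-only.sql
# Restore just that table:
# mysql -u root -p forum < users-only.sql
```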
 
Thanks for all the help. This is just a small forum, so I'm guessing mysqldump will be fine for our needs.
 