
4.5.4 mysqldump — A Database Backup Program

The mysqldump client utility performs logical backups, producing a set of SQL statements that can be executed to reproduce the original database object definitions and table data. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used. Certain options might require other privileges as noted in the option descriptions.

To reload a dump file, you must have the privileges required to execute the statements that it contains, such as the appropriate CREATE privileges for objects created by those statements.

mysqldump output can include ALTER DATABASE statements that change the database collation. These may be used when dumping stored programs to preserve their character encodings. To reload a dump file containing such statements, the ALTER privilege for the affected database is required.

Note

A dump made using PowerShell on Windows with output redirection creates a file that has UTF-16 encoding:

shell> mysqldump [options] > dump.sql 

However, UTF-16 is not permitted as a connection character set (see Section 10.1.4, “Connection Character Sets and Collations”), so the dump file will not load correctly. To work around this issue, use the --result-file option, which creates the output in ASCII format:

shell> mysqldump [options] --result-file=dump.sql 

Performance and Scalability Considerations

mysqldump advantages include the convenience and flexibility of viewing or even editing the output before restoring. You can clone databases for development and DBA work, or produce slight variations of an existing database for testing. It is not intended as a fast or scalable solution for backing up substantial amounts of data. With large data sizes, even if the backup step takes a reasonable time, restoring the data can be very slow because replaying the SQL statements involves disk I/O for insertion, index creation, and so on.

For large-scale backup and restore, a physical backup is more appropriate; it copies the data files in their original format so they can be restored quickly:

  • If your tables are primarily InnoDB tables, or if you have a mix of InnoDB and MyISAM tables, consider using the mysqlbackup command of the MySQL Enterprise Backup product. (Available as part of the Enterprise subscription.) It provides the best performance for InnoDB backups with minimal disruption; it can also back up tables from MyISAM and other storage engines; and it provides a number of convenient options to accommodate different backup scenarios. See Section 25.2, “MySQL Enterprise Backup Overview”.

  • If your tables are primarily MyISAM tables, consider using mysqlhotcopy instead; it provides better performance than mysqldump for backup and restore operations. See Section 4.6.10, “mysqlhotcopy — A Database Backup Program”.

mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables. To dump tables row by row, use the --quick option (or --opt, which enables --quick). The --opt option (and hence --quick) is enabled by default, so to enable memory buffering, use --skip-quick.
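
For example, to dump a single, relatively small table with memory buffering enabled while leaving the rest of the default --opt behavior in place, an invocation along these lines can be used (db_name and tbl_name are placeholders); only the row-retrieval behavior changes:

shell> mysqldump --skip-quick db_name tbl_name > tbl_name.sql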

If you are using a recent version of mysqldump to generate a dump to be reloaded into a very old MySQL server, use the --skip-opt option instead of the --opt or --extended-insert option.

For additional information about mysqldump, see Section 7.4, “Using mysqldump for Backups”.

Invocation Syntax

In general, there are three ways to use mysqldump: to dump a set of one or more tables, a set of one or more complete databases, or an entire MySQL server, as shown here:

shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

To dump entire databases, do not name any tables following db_name, or use the --databases or --all-databases option.

To see a list of the options your version of mysqldump supports, issue the command mysqldump --help.

Option Syntax – Alphabetical Summary

mysqldump supports the following options, which can be specified on the command line or in the [mysqldump] and [client] groups of an option file. For information about option files used by MySQL programs, see Section 4.2.6, “Using Option Files”.

Table 4.11 mysqldump Options

Format Description Introduced
–add-drop-database Add DROP DATABASE statement before each CREATE DATABASE statement  
–add-drop-table Add DROP TABLE statement before each CREATE TABLE statement  
–add-drop-trigger Add DROP TRIGGER statement before each CREATE TRIGGER statement  
–add-locks Surround each table dump with LOCK TABLES and UNLOCK TABLES statements  
–all-databases Dump all tables in all databases  
–allow-keywords Allow creation of column names that are keywords  
–apply-slave-statements Include STOP SLAVE prior to CHANGE MASTER statement and START SLAVE at end of output  
–bind-address Use specified network interface to connect to MySQL Server 5.6.1
–character-sets-dir Directory where character sets are installed  
–comments Add comments to dump file  
–compact Produce more compact output  
–compatible Produce output that is more compatible with other database systems or with older MySQL servers  
–complete-insert Use complete INSERT statements that include column names  
–compress Compress all information sent between client and server  
–create-options Include all MySQL-specific table options in CREATE TABLE statements  
–databases Interpret all name arguments as database names  
–debug Write debugging log  
–debug-check Print debugging information when program exits  
–debug-info Print debugging information, memory, and CPU statistics when program exits  
–default-auth Authentication plugin to use  
–default-character-set Specify default character set  
–defaults-extra-file Read named option file in addition to usual option files  
–defaults-file Read only named option file  
–defaults-group-suffix Option group suffix value  
–delayed-insert Write INSERT DELAYED statements rather than INSERT statements  
–delete-master-logs On a master replication server, delete the binary logs after performing the dump operation  
–disable-keys For each table, surround INSERT statements with statements to disable and enable keys  
–dump-date Include dump date as “Dump completed on” comment if –comments is given  
–dump-slave Include CHANGE MASTER statement that lists binary log coordinates of slave’s master  
–enable-cleartext-plugin Enable cleartext authentication plugin 5.6.28
–events Dump events from dumped databases  
–extended-insert Use multiple-row INSERT syntax  
–fields-enclosed-by This option is used with the –tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE  
–fields-escaped-by This option is used with the –tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE  
–fields-optionally-enclosed-by This option is used with the –tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE  
–fields-terminated-by This option is used with the –tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE  
–flush-logs Flush MySQL server log files before starting dump  
–flush-privileges Emit a FLUSH PRIVILEGES statement after dumping mysql database  
–force Continue even if an SQL error occurs during a table dump  
–help Display help message and exit  
–hex-blob Dump binary columns using hexadecimal notation  
–host Host to connect to (IP address or hostname)  
–ignore-table Do not dump given table  
–include-master-host-port Include MASTER_HOST/MASTER_PORT options in CHANGE MASTER statement produced with –dump-slave  
–insert-ignore Write INSERT IGNORE rather than INSERT statements  
–lines-terminated-by This option is used with the –tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE  
–lock-all-tables Lock all tables across all databases  
–lock-tables Lock all tables before dumping them  
–log-error Append warnings and errors to named file  
–login-path Read login path options from .mylogin.cnf 5.6.6
–master-data Write the binary log file name and position to the output  
–max_allowed_packet Maximum packet length to send to or receive from server  
–net_buffer_length Buffer size for TCP/IP and socket communication  
–no-autocommit Enclose the INSERT statements for each dumped table within SET autocommit = 0 and COMMIT statements  
–no-create-db Do not write CREATE DATABASE statements  
–no-create-info Do not write CREATE TABLE statements that re-create each dumped table  
–no-data Do not dump table contents  
–no-defaults Read no option files  
–no-set-names Same as –skip-set-charset  
–no-tablespaces Do not write any CREATE LOGFILE GROUP or CREATE TABLESPACE statements in output  
–opt Shorthand for –add-drop-table –add-locks –create-options –disable-keys –extended-insert –lock-tables –quick –set-charset.  
–order-by-primary Dump each table’s rows sorted by its primary key, or by its first unique index  
–password Password to use when connecting to server  
–pipe On Windows, connect to server using named pipe  
–plugin-dir Directory where plugins are installed  
–port TCP/IP port number to use for connection  
–print-defaults Print default options  
–protocol Connection protocol to use  
–quick Retrieve rows for a table from the server a row at a time  
–quote-names Quote identifiers within backtick characters  
–replace Write REPLACE statements rather than INSERT statements  
–result-file Direct output to a given file  
–routines Dump stored routines (procedures and functions) from dumped databases  
–secure-auth Do not send passwords to server in old (pre-4.1) format 5.6.17
–set-charset Add SET NAMES default_character_set to output  
–set-gtid-purged Whether to add SET @@GLOBAL.GTID_PURGED to output 5.6.9
–shared-memory-base-name The name of shared memory to use for shared-memory connections  
–single-transaction Issue a BEGIN SQL statement before dumping data from server  
–skip-add-drop-table Do not add a DROP TABLE statement before each CREATE TABLE statement  
–skip-add-locks Do not add locks  
–skip-comments Do not add comments to dump file  
–skip-compact Do not produce more compact output  
–skip-disable-keys Do not disable keys  
–skip-extended-insert Turn off extended-insert  
–skip-opt Turn off options set by –opt  
–skip-quick Do not retrieve rows for a table from the server a row at a time  
–skip-quote-names Do not quote identifiers  
–skip-set-charset Do not write SET NAMES statement  
–skip-triggers Do not dump triggers  
–skip-tz-utc Turn off tz-utc  
–socket For connections to localhost, the Unix socket file to use  
–ssl Enable secure connection  
–ssl-ca Path of file that contains list of trusted SSL CAs  
–ssl-capath Path of directory that contains trusted SSL CA certificates in PEM format  
–ssl-cert Path of file that contains X509 certificate in PEM format  
–ssl-cipher List of permitted ciphers to use for connection encryption  
–ssl-crl Path of file that contains certificate revocation lists 5.6.3
–ssl-crlpath Path of directory that contains certificate revocation list files 5.6.3
–ssl-key Path of file that contains X509 key in PEM format  
–ssl-mode Security state of connection to server 5.6.30
–ssl-verify-server-cert Verify server certificate Common Name value against host name used when connecting to server  
–tab Produce tab-separated data files  
–tables Override –databases or -B option  
–triggers Dump triggers for each dumped table  
–tz-utc Add SET TIME_ZONE=’+00:00′ to dump file  
–user MySQL user name to use when connecting to server  
–verbose Verbose mode  
–version Display version information and exit  
–where Dump only rows selected by given WHERE condition  
–xml Produce XML output  


Connection Options

The mysqldump command logs into a MySQL server to extract information. The following options specify how to connect to the MySQL server, either on the same machine or a remote system.

  •  –bind-address=ip_address

    On a computer having multiple network interfaces, use this option to select which interface to use for connecting to the MySQL server.

    This option is supported beginning with MySQL 5.6.1.

  •  –compress, -C

    Compress all information sent between the client and the server if both support compression.

  •  –default-auth=plugin

    A hint about the client-side authentication plugin to use. See Section 6.3.7, “Pluggable Authentication”.

  •  –enable-cleartext-plugin

    Enable the mysql_clear_password cleartext authentication plugin. (See Section 6.5.1.7, “The Cleartext Client-Side Authentication Plugin”.)

    This option was added in MySQL 5.6.28.

  •  –host=host_name, -h host_name

    Dump data from the MySQL server on the given host. The default host is localhost.

  •  –login-path=name

    Read options from the named login path in the .mylogin.cnf login path file. A login path is an option group containing options that specify which MySQL server to connect to and which account to authenticate as. To create or modify a login path file, use the mysql_config_editor utility. See Section 4.6.6, “mysql_config_editor — MySQL Configuration Utility”. This option was added in MySQL 5.6.6.

  •  –password[=password], -p[password]

    The password to use when connecting to the server. If you use the short option form (-p), you cannot have a space between the option and the password. If you omit the password value following the --password or -p option on the command line, mysqldump prompts for one.

    Specifying a password on the command line should be considered insecure. See Section 6.1.2.1, “End-User Guidelines for Password Security”. You can use an option file to avoid giving the password on the command line.

  •  –pipe, -W

    On Windows, connect to the server using a named pipe. This option applies only if the server supports named-pipe connections.

  •  –plugin-dir=dir_name

    The directory in which to look for plugins. Specify this option if the –default-auth option is used to specify an authentication plugin but mysqldump does not find it. See Section 6.3.7, “Pluggable Authentication”.

  •  –port=port_num, -P port_num

    The TCP/IP port number to use for the connection.

  •  –protocol={TCP|SOCKET|PIPE|MEMORY}

    The connection protocol to use for connecting to the server. It is useful when the other connection parameters normally would cause a protocol to be used other than the one you want. For details on the permissible values, see Section 4.2.2, “Connecting to the MySQL Server”.

  •  –secure-auth

    Do not send passwords to the server in old (pre-4.1) format. This prevents connections except for servers that use the newer password format. This option is enabled by default; use –skip-secure-auth to disable it. This option was added in MySQL 5.6.17.

    Note

    Passwords that use the pre-4.1 hashing method are less secure than passwords that use the native password hashing method and should be avoided. Pre-4.1 passwords are deprecated and support for them will be removed in a future MySQL release. For account upgrade instructions, see Section 6.5.1.3, “Migrating Away from Pre-4.1 Password Hashing and the mysql_old_password Plugin”.

    Note

    This option is deprecated and will be removed in a future release. As of MySQL 5.7.5, it is always enabled and attempting to disable it produces an error.

  •  –socket=path, -S path

    For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use.

  •  –ssl*

    Options that begin with –ssl specify whether to connect to the server using SSL and indicate where to find SSL keys and certificates. See Section 6.4.5, “Command Options for Secure Connections”.

  •  –user=user_name, -u user_name

    The MySQL user name to use when connecting to the server.

You can also set the following variables by using --var_name=value syntax:

  •  max_allowed_packet

    The maximum size of the buffer for client/server communication. The default is 24MB; the maximum is 1GB.

  •  net_buffer_length

    The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to net_buffer_length bytes long. If you increase this variable, ensure that the MySQL server net_buffer_length system variable has a value at least this large.
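
As an illustration, both variables can be raised directly on the command line when dumping a remote server; the host, account, and sizes shown here are arbitrary examples, and as noted above the server's net_buffer_length system variable should be at least as large as the client value:

shell> mysqldump --host=remote_host --user=backup_user --password --max_allowed_packet=64M --net_buffer_length=1M db_name > db_name.sql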

Option-File Options

These options are used to control which option files to read.

  •  –defaults-extra-file=file_name

    Read this option file after the global option file but (on Unix) before the user option file. If the file does not exist or is otherwise inaccessible, an error occurs. file_name is interpreted relative to the current directory if given as a relative path name rather than a full path name.

  •  –defaults-file=file_name

    Use only the given option file. If the file does not exist or is otherwise inaccessible, an error occurs. file_name is interpreted relative to the current directory if given as a relative path name rather than a full path name.

    Exception: Even with –defaults-file, client programs read .mylogin.cnf.

  •  –defaults-group-suffix=str

    Read not only the usual option groups, but also groups with the usual names and a suffix of str. For example, mysqldump normally reads the [client] and [mysqldump] groups. If the –defaults-group-suffix=_other option is given, mysqldump also reads the [client_other] and [mysqldump_other] groups.

  •  –no-defaults

    Do not read any option files. If program startup fails due to reading unknown options from an option file, –no-defaults can be used to prevent them from being read.

    The exception is that the .mylogin.cnf file, if it exists, is read in all cases. This permits passwords to be specified in a safer way than on the command line even when --no-defaults is used. (.mylogin.cnf is created by the mysql_config_editor utility. See Section 4.6.6, “mysql_config_editor — MySQL Configuration Utility”.)

  •  –print-defaults

    Print the program name and all options that it gets from option files.
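
As an illustration of how these options fit together, the first command below merely reports which option-file settings would be applied, and the second runs a dump using only a dedicated, administrator-maintained option file (the path /etc/mysql-backup.cnf is hypothetical):

shell> mysqldump --print-defaults
shell> mysqldump --defaults-file=/etc/mysql-backup.cnf --all-databases > all_databases.sql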

DDL Options

Usage scenarios for mysqldump include setting up an entire new MySQL instance (including database tables), and replacing data inside an existing instance with existing databases and tables. The following options let you specify which things to tear down and set up when restoring a dump, by encoding various DDL statements within the dump file.

  •  –add-drop-database

    Write a DROP DATABASE statement before each CREATE DATABASE statement. This option is typically used in conjunction with the –all-databases or –databases option because no CREATE DATABASE statements are written unless one of those options is specified.

  •  –add-drop-table

    Write a DROP TABLE statement before each CREATE TABLE statement.

  •  –add-drop-trigger

    Write a DROP TRIGGER statement before each CREATE TRIGGER statement.

  •  –all-tablespaces, -Y

    Adds to a table dump all SQL statements needed to create any tablespaces used by an NDB table. This information is not otherwise included in the output from mysqldump. This option is currently relevant only to MySQL Cluster tables.

  •  –no-create-db, -n

    Suppress the CREATE DATABASE statements that are otherwise included in the output if the –databases or –all-databases option is given.

  •  –no-create-info, -t

    Do not write CREATE TABLE statements that create each dumped table.

    Note

    This option does not exclude statements creating log file groups or tablespaces from mysqldump output; however, you can use the –no-tablespaces option for this purpose.

  •  –no-tablespaces, -y

    This option suppresses all CREATE LOGFILE GROUP and CREATE TABLESPACE statements in the output of mysqldump.

  •  –replace

    Write REPLACE statements rather than INSERT statements.
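
For instance, a schema-only dump that can rebuild databases from scratch might combine options from this group with --no-data (described under the filtering options); db_name is a placeholder:

shell> mysqldump --no-data --add-drop-database --databases db_name > db_name_schema.sql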

Debug Options

The following options print debugging information, encode debugging information in the dump file, or let the dump operation proceed regardless of potential problems.

  •  –allow-keywords

    Permit creation of column names that are keywords. This works by prefixing each column name with the table name.

  •  –comments, -i

    Write additional information in the dump file such as program version, server version, and host. This option is enabled by default. To suppress this additional information, use –skip-comments.

  •  –debug[=debug_options], -# [debug_options]

    Write a debugging log. A typical debug_options string is d:t:o,file_name. The default value is d:t:o,/tmp/mysqldump.trace.

  •  –debug-check

    Print some debugging information when the program exits.

  •  –debug-info

    Print debugging information and memory and CPU usage statistics when the program exits.

  •  –dump-date

    If the –comments option is given, mysqldump produces a comment at the end of the dump of the following form:

    -- Dump completed on DATE 

    However, the date causes dump files taken at different times to appear to be different, even if the data are otherwise identical. --dump-date and --skip-dump-date control whether the date is added to the comment. The default is --dump-date (include the date in the comment). --skip-dump-date suppresses date printing.

  •  –force, -f

    Continue even if an SQL error occurs during a table dump.

    One use for this option is to cause mysqldump to continue executing even when it encounters a view that has become invalid because the definition refers to a table that has been dropped. Without –force, mysqldump exits with an error message. With –force, mysqldump prints the error message, but it also writes an SQL comment containing the view definition to the dump output and continues executing.

  •  –log-error=file_name

    Log warnings and errors by appending them to the named file. The default is to do no logging.

  •  –skip-comments

    See the description for the –comments option.

  •  –verbose, -v

    Verbose mode. Print more information about what the program does.
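
For example, to let a dump run to completion past problems such as invalid views while collecting the diagnostics in a separate file, --force can be combined with --log-error (the log file name here is arbitrary):

shell> mysqldump --force --log-error=dump_warnings.log --all-databases > all_databases.sql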

Help Options

The following options display information about the mysqldump command itself.

  •  –help, -?

    Display a help message and exit.

  •  –version, -V

    Display version information and exit.

Internationalization Options

The following options change how the mysqldump command represents character data with national language settings.

Replication Options

The mysqldump command is frequently used to create an empty instance, or an instance including data, on a slave server in a replication configuration. The following options apply to dumping and restoring data on replication master and slave servers.

  •  –apply-slave-statements

    For a slave dump produced with the –dump-slave option, add a STOP SLAVE statement before the CHANGE MASTER TO statement and a START SLAVE statement at the end of the output.

  •  –delete-master-logs

    On a master replication server, delete the binary logs by sending a PURGE BINARY LOGS statement to the server after performing the dump operation. This option automatically enables –master-data.

  •  –dump-slave[=value]

    This option is similar to –master-data except that it is used to dump a replication slave server to produce a dump file that can be used to set up another server as a slave that has the same master as the dumped server. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped slave’s master. These are the master server coordinates from which the slave should start replicating.

    --dump-slave causes the coordinates from the master to be used rather than those of the dumped server, as is done by the --master-data option. In addition, specifying this option causes the --master-data option to be overridden, if used, and effectively ignored.

    The option value is handled the same way as for –master-data (setting no value or 1 causes a CHANGE MASTER TO statement to be written to the dump, setting 2 causes the statement to be written but encased in SQL comments) and has the same effect as –master-data in terms of enabling or disabling other options and in how locking is handled.

    This option causes mysqldump to stop the slave SQL thread before the dump and restart it again after.

    In conjunction with –dump-slave, the –apply-slave-statements and –include-master-host-port options can also be used.

  •  –include-master-host-port

    For the CHANGE MASTER TO statement in a slave dump produced with the –dump-slave option, add MASTER_HOST and MASTER_PORT options for the host name and TCP/IP port number of the slave’s master.

  •  –master-data[=value]

    Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.

    If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1.

    This option requires the RELOAD privilege and the binary log must be enabled.

    The –master-data option automatically turns off –lock-tables. It also turns on –lock-all-tables, unless –single-transaction also is specified, in which case, a global read lock is acquired only for a short time at the beginning of the dump (see the description for –single-transaction). In all cases, any action on logs happens at the exact moment of the dump.

    It is also possible to set up a slave by dumping an existing slave of the master, using the –dump-slave option, which overrides –master-data and causes it to be ignored if both options are used.

    Prior to MySQL 5.6.4, this option was required for dumping the replication log tables (see Section 17.2.2, “Replication Relay and Status Logs”).

  •  –set-gtid-purged=value

    This option enables control over global transaction ID (GTID) information written to the dump file, by indicating whether to add a SET @@global.gtid_purged statement to the output. This option may also cause a statement to be written to the output that disables binary logging while the dump file is being reloaded.

    The following table shows the permitted option values. The default value is AUTO.

    Value Meaning
    OFF Add no SET statement to the output.
    ON Add a SET statement to the output. An error occurs if GTIDs are not enabled on the server.
    AUTO Add a SET statement to the output if GTIDs are enabled on the server.

    The –set-gtid-purged option has the following effect on binary logging when the dump file is reloaded:

    • –set-gtid-purged=OFF: SET @@SESSION.SQL_LOG_BIN=0; is not added to the output.

    • –set-gtid-purged=ON: SET @@SESSION.SQL_LOG_BIN=0; is added to the output.

    • –set-gtid-purged=AUTO: SET @@SESSION.SQL_LOG_BIN=0; is added to the output if GTIDs are enabled on the server you are backing up (that is, if AUTO evaluates to ON).

    This option was added in MySQL 5.6.9.
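
As a sketch of the master-snapshot scenario described above, the following command records the master's binary log coordinates as a comment while taking a consistent InnoDB dump; it assumes binary logging is enabled, the account has the RELOAD privilege, and GTIDs are not in use on the target (hence --set-gtid-purged=OFF):

shell> mysqldump --all-databases --single-transaction --master-data=2 --set-gtid-purged=OFF > master_snapshot.sql

The commented CHANGE MASTER TO statement near the top of the resulting file supplies the coordinates to use when configuring the new slave.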

Format Options

The following options specify how to represent the entire dump file or certain kinds of data in the dump file. They also control whether certain optional information is written to the dump file.

  •  –compact

    Produce more compact output. This option enables the --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset options.

  •  –compatible=name

    Produce output that is more compatible with other database systems or with older MySQL servers. The value of name can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, or no_field_options. To use several values, separate them by commas. These values have the same meaning as the corresponding options for setting the server SQL mode. See Section 5.1.8, “Server SQL Modes”.

    This option does not guarantee compatibility with other servers. It only enables those SQL mode values that are currently available for making dump output more compatible. For example, –compatible=oracle does not map data types to Oracle types or use Oracle comment syntax.

    This option requires a server version of 4.1.0 or higher. With older servers, it does nothing.

  •  –complete-insert, -c

    Use complete INSERT statements that include column names.

  •  –create-options

    Include all MySQL-specific table options in the CREATE TABLE statements.

  •  –fields-terminated-by=…, –fields-enclosed-by=…, –fields-optionally-enclosed-by=…, –fields-escaped-by=…

    These options are used with the –tab option and have the same meaning as the corresponding FIELDS clauses for LOAD DATA INFILE. See Section 13.2.6, “LOAD DATA INFILE Syntax”.

  •  –hex-blob

    Dump binary columns using hexadecimal notation (for example, 'abc' becomes 0x616263). The affected data types are BINARY, VARBINARY, the BLOB types, and BIT.

  •  –lines-terminated-by=…

    This option is used with the –tab option and has the same meaning as the corresponding LINES clause for LOAD DATA INFILE. See Section 13.2.6, “LOAD DATA INFILE Syntax”.

  •  –quote-names, -Q

    Quote identifiers (such as database, table, and column names) within ` characters. If the ANSI_QUOTES SQL mode is enabled, identifiers are quoted within " characters. This option is enabled by default. It can be disabled with --skip-quote-names, but this option should be given after any option such as --compatible that may enable --quote-names.

  •  –result-file=file_name, -r file_name

    Direct output to the named file. The result file is created and its previous contents overwritten, even if an error occurs while generating the dump.

    This option should be used on Windows to prevent newline \n characters from being converted to \r\n carriage return/newline sequences.

  •  –tab=dir_name, -T dir_name

    Produce tab-separated text-format data files. For each dumped table, mysqldump creates a tbl_name.sql file that contains the CREATE TABLE statement that creates the table, and the server writes a tbl_name.txt file that contains its data. The option value is the directory in which to write the files.

    Note

    This option should be used only when mysqldump is run on the same machine as the mysqld server. Because the server creates the *.txt files in the directory that you specify, the directory must be writable by the server and the MySQL account that you use must have the FILE privilege. Because mysqldump creates the *.sql files in the same directory, that directory must also be writable by your system login account.

    By default, the .txt data files are formatted using tab characters between column values and a newline at the end of each line. The format can be specified explicitly using the –fields-xxx and –lines-terminated-by options.

    Column values are converted to the character set specified by the –default-character-set option.

  •  –tz-utc

    This option enables TIMESTAMP columns to be dumped and reloaded between servers in different time zones. mysqldump sets its connection time zone to UTC and adds SET TIME_ZONE='+00:00' to the dump file. Without this option, TIMESTAMP columns are dumped and reloaded in the time zones local to the source and destination servers, which can cause the values to change if the servers are in different time zones. --tz-utc also protects against changes due to daylight saving time. --tz-utc is enabled by default. To disable it, use --skip-tz-utc.

  •  –xml, -X

    Write dump output as well-formed XML.

    NULL, 'NULL', and Empty Values: For a column named column_name, the NULL value, an empty string, and the string value 'NULL' are distinguished from one another in the output generated by this option as follows.

    Value: XML Representation:

    NULL (unknown value)

    <field name="column_name" xsi:nil="true" />

    '' (empty string)

    <field name="column_name"></field>

    'NULL' (string value)

    <field name="column_name">NULL</field>

    The output from the mysql client when run using the –xml option also follows the preceding rules. (See Section 4.5.1.1, “mysql Options”.)

    XML output from mysqldump includes the XML namespace, as shown here:

    shell> mysqldump --xml -u root world City
    <?xml version="1.0"?>
    <mysqldump xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <database name="world">
    <table_structure name="City">
    <field Field="ID" Type="int(11)" Null="NO" Key="PRI" Extra="auto_increment" />
    <field Field="Name" Type="char(35)" Null="NO" Key="" Default="" Extra="" />
    <field Field="CountryCode" Type="char(3)" Null="NO" Key="" Default="" Extra="" />
    <field Field="District" Type="char(20)" Null="NO" Key="" Default="" Extra="" />
    <field Field="Population" Type="int(11)" Null="NO" Key="" Default="0" Extra="" />
    <key Table="City" Non_unique="0" Key_name="PRIMARY" Seq_in_index="1" Column_name="ID"
    Collation="A" Cardinality="4079" Null="" Index_type="BTREE" Comment="" />
    <options Name="City" Engine="MyISAM" Version="10" Row_format="Fixed" Rows="4079"
    Avg_row_length="67" Data_length="273293" Max_data_length="18858823439613951"
    Index_length="43008" Data_free="0" Auto_increment="4080"
    Create_time="2007-03-31 01:47:01" Update_time="2007-03-31 01:47:02"
    Collation="latin1_swedish_ci" Create_options="" Comment="" />
    </table_structure>
    <table_data name="City">
    <row>
    <field name="ID">1</field>
    <field name="Name">Kabul</field>
    <field name="CountryCode">AFG</field>
    <field name="District">Kabol</field>
    <field name="Population">1780000</field>
    </row> ... <row>
    <field name="ID">4079</field>
    <field name="Name">Rafah</field>
    <field name="CountryCode">PSE</field>
    <field name="District">Rafah</field>
    <field name="Population">92020</field>
    </row>
    </table_data>
    </database>
    </mysqldump>
    

    Prior to MySQL 5.6.5, this option prevented the –routines option from working correctly—that is, no stored routines, triggers, or events could be dumped in XML format. (Bug #11760384, Bug #52792)

Filtering Options

The following options control which kinds of schema objects are written to the dump file: by category, such as triggers or events; by name, for example, choosing which databases and tables to dump; or even filtering rows from the table data using a WHERE clause.

  •  –all-databases, -A

    Dump all tables in all databases. This is the same as using the –databases option and naming all the databases on the command line.

    Prior to MySQL 5.6.4, the slave_master_info and slave_relay_log_info tables (see Section 17.2.2, “Replication Relay and Status Logs”) were not included by this option.

  •  –databases, -B

    Dump several databases. Normally, mysqldump treats the first name argument on the command line as a database name and following names as table names. With this option, it treats all name arguments as database names. CREATE DATABASE and USE statements are included in the output before each new database.

    This option may be used to dump the INFORMATION_SCHEMA and performance_schema databases, which normally are not dumped even with the --all-databases option. (Also use the --skip-lock-tables option.)

  •  –events, -E

    Include Event Scheduler events for the dumped databases in the output. This option requires the EVENT privileges for those databases.

    The output generated by using –events contains CREATE EVENT statements to create the events. However, these statements do not include attributes such as the event creation and modification timestamps, so when the events are reloaded, they are created with timestamps equal to the reload time.

    If you require events to be created with their original timestamp attributes, do not use –events. Instead, dump and reload the contents of the mysql.event table directly, using a MySQL account that has appropriate privileges for the mysql database.

  •  –ignore-table=db_name.tbl_name

    Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times. This option also can be used to ignore views.

  •  –no-data, -d

    Do not write any table row information (that is, do not dump table contents). This is useful if you want to dump only the CREATE TABLE statement for the table (for example, to create an empty copy of the table by loading the dump file).

  •  –routines, -R

    Include stored routines (procedures and functions) for the dumped databases in the output. This option requires the SELECT privilege for the mysql.proc table.

    The output generated by using –routines contains CREATE PROCEDURE and CREATE FUNCTION statements to create the routines. However, these statements do not include attributes such as the routine creation and modification timestamps, so when the routines are reloaded, they are created with timestamps equal to the reload time.

    If you require routines to be created with their original timestamp attributes, do not use --routines. Instead, dump and reload the contents of the mysql.proc table directly, using a MySQL account that has appropriate privileges for the mysql database.

    Prior to MySQL 5.6.5, this option had no effect when used together with the –xml option. (Bug #11760384, Bug #52792)

  •  –tables

    Override the –databases or -B option. mysqldump regards all name arguments following the option as table names.

  •  –triggers

    Include triggers for each dumped table in the output. This option is enabled by default; disable it with –skip-triggers.

    To be able to dump a table’s triggers, you must have the TRIGGER privilege for the table.

  •  --where='where_condition', -w 'where_condition'

    Dump only rows selected by the given WHERE condition. Quotes around the condition are mandatory if it contains spaces or other characters that are special to your command interpreter.

    Examples:

    --where="user='jimf'"
    -w"userid>1"
    -w"userid<1"
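
    Combining the filtering options above, the first command below dumps only recent rows from one table, and the second dumps a database while skipping a large log table; all object names and the WHERE condition are hypothetical:

    shell> mysqldump db_name orders --where="created_at >= '2006-01-01'" > recent_orders.sql
    shell> mysqldump --databases db_name --ignore-table=db_name.audit_log > db_name_no_log.sql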
    

Performance Options

The following options are the most relevant to performance, particularly of restore operations. For large data sets, the restore operation (processing the INSERT statements in the dump file) is the most time-consuming part. When it is urgent to restore data quickly, plan and test the performance of this stage in advance. For restore times measured in hours, you might prefer an alternative backup and restore solution, such as MySQL Enterprise Backup for InnoDB-only and mixed-use databases, or mysqlhotcopy for MyISAM-only databases.

Performance is also affected by the transactional options, primarily for the dump operation.

  •  –delayed-insert

    For those nontransactional tables that support the INSERT DELAYED syntax, use that statement rather than regular INSERT statements.

    As of MySQL 5.6.6, DELAYED inserts are deprecated, so this option will be removed in a future release.

  •  –disable-keys, -K

    For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.

  •  –extended-insert, -e

    Write INSERT statements using multiple-row syntax that includes several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.

  •  –insert-ignore

    Write INSERT IGNORE statements rather than INSERT statements.

  •  –opt

    This option, enabled by default, is shorthand for the combination of –add-drop-table –add-locks –create-options –disable-keys –extended-insert –lock-tables –quick –set-charset. It gives a fast dump operation and produces a dump file that can be reloaded into a MySQL server quickly.

    Because the --opt option is enabled by default, you specify only its converse, --skip-opt, to turn off several default settings. See the discussion of mysqldump option groups for information about selectively enabling or disabling a subset of the options affected by --opt.

  •  –quick, -q

    This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out.

  •  –skip-opt

    See the description for the –opt option.

Transactional Options

The following options trade off the performance of the dump operation, against the reliability and consistency of the exported data.

  •  –add-locks

    Surround each table dump with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is reloaded. See Section 8.2.4.1, “Optimizing INSERT Statements”.

  •  –flush-logs, -F

    Flush the MySQL server log files before starting the dump. This option requires the RELOAD privilege. If you use this option in combination with the --all-databases option, the logs are flushed for each database dumped. The exception is when using --lock-all-tables, --master-data, or --single-transaction: In this case, the logs are flushed only once, corresponding to the moment that all tables are locked by FLUSH TABLES WITH READ LOCK. If you want your dump and the log flush to happen at exactly the same moment, you should use --flush-logs together with --lock-all-tables, --master-data, or --single-transaction.

  •  –flush-privileges

    Add a FLUSH PRIVILEGES statement to the dump output after dumping the mysql database. This option should be used any time the dump contains the mysql database and any other database that depends on the data in the mysql database for proper restoration.

  •  –lock-all-tables, -x

    Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off –single-transaction and –lock-tables.

  •  –lock-tables, -l

    For each dumped database, lock all tables to be dumped before dumping them. The tables are locked with READ LOCAL to permit concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB, –single-transaction is a much better option than –lock-tables because it does not need to lock the tables at all.

    Because –lock-tables locks tables for each database separately, this option does not guarantee that the tables in the dump file are logically consistent between databases. Tables in different databases may be dumped in completely different states.

    Some options, such as –opt, automatically enable –lock-tables. If you want to override this, use –skip-lock-tables at the end of the option list.

  •  –no-autocommit

    Enclose the INSERT statements for each dumped table within SET autocommit = 0 and COMMIT statements.

  •  –order-by-primary

    Dump each table’s rows sorted by its primary key, or by its first unique index, if such an index exists. This is useful when dumping a MyISAM table to be loaded into an InnoDB table, but makes the dump operation take considerably longer.

  •  –shared-memory-base-name=name

    On Windows, the shared-memory name to use, for connections made using shared memory to a local server. The default value is MYSQL. The shared-memory name is case sensitive.

    The server must be started with the –shared-memory option to enable shared-memory connections.

  •  –single-transaction

    This option sets the transaction isolation mode to REPEATABLE READ and sends a START TRANSACTION SQL statement to the server before dumping data. It is useful only with transactional tables such as InnoDB, because then it dumps the consistent state of the database at the time when START TRANSACTION was issued without blocking any applications.

    When using this option, you should keep in mind that only InnoDB tables are dumped in a consistent state. For example, any MyISAM or MEMORY tables dumped while using this option may still change state.

    While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log coordinates), no other connection should use the following statements: ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent read is not isolated from those statements, so use of them on a table to be dumped can cause the SELECT that is performed by mysqldump to retrieve the table contents to obtain incorrect contents or fail.

    The –single-transaction option and the –lock-tables option are mutually exclusive because LOCK TABLES causes any pending transactions to be committed implicitly.

    To dump large tables, combine the --single-transaction option with the --quick option.
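
    A minimal sketch of that combination, using placeholder database and table names, is:

    shell> mysqldump --single-transaction --quick db_name huge_table > huge_table.sql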

Option Groups

  • The --opt option turns on several settings that work together to perform a fast dump operation. All of these settings are on by default, because --opt is on by default. Thus you rarely if ever specify --opt. Instead, you can turn these settings off as a group by specifying --skip-opt, then optionally re-enable certain settings by specifying the associated options later on the command line.

  • The –compact option turns off several settings that control whether optional statements and comments appear in the output. Again, you can follow this option with other options that re-enable certain settings, or turn all the settings on by using the –skip-compact form.

When you selectively enable or disable the effect of a group option, order is important because options are processed first to last. For example, --disable-keys --lock-tables --skip-opt would not have the intended effect; it is the same as --skip-opt by itself.
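
For example, to keep most of the --opt group disabled but re-enable multiple-row INSERT statements and row-at-a-time retrieval, the re-enabling options must come after --skip-opt (db_name is a placeholder):

shell> mysqldump --skip-opt --extended-insert --quick db_name > db_name.sql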

Examples

To make a backup of an entire database:

shell> mysqldump db_name > backup-file.sql 

To load the dump file back into the server:

shell> mysql db_name < backup-file.sql 

Another way to reload the dump file:

shell> mysql -e "source /path-to-backup/backup-file.sql" db_name 

mysqldump is also very useful for populating databases by copying data from one MySQL server to another:

shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name 

You can dump several databases with one command:

shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql 

To dump all databases, use the –all-databases option:

shell> mysqldump --all-databases > all_databases.sql 
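
To dump one database as tab-separated data files suitable for bulk reloading, for example with mysqlimport, the --tab option can be used. This sketch assumes mysqldump runs on the server host and that /tmp/dumpdir is an existing directory writable by the server:

shell> mysqldump --tab=/tmp/dumpdir db_name
shell> mysqlimport db_name /tmp/dumpdir/*.txt

The tbl_name.sql files written alongside the .txt files contain the CREATE TABLE statements; replay them first if the tables do not already exist on the target server.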

For InnoDB tables, mysqldump provides a way of making an online backup:

shell> mysqldump --all-databases --master-data --single-transaction > all_databases.sql 

This backup acquires a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.

For point-in-time recovery (also known as roll-forward, when you need to restore an old backup and replay the changes that happened since that backup), it is often useful to rotate the binary log (see Section 5.4.4, “The Binary Log”) or at least know the binary log coordinates to which the dump corresponds:

shell> mysqldump --all-databases --master-data=2 > all_databases.sql 

Or:

shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql 

The –master-data and –single-transaction options can be used simultaneously, which provides a convenient way to make an online backup suitable for use prior to point-in-time recovery if tables are stored using the InnoDB storage engine.

For more information on making backups, see Section 7.2, “Database Backup Methods”, and Section 7.3, “Example Backup and Recovery Strategy”.

Restrictions

mysqldump does not dump the INFORMATION_SCHEMA or performance_schema database by default. To dump either of these, name it explicitly on the command line. You can also name it with the --databases option. Also, use the --skip-lock-tables option.
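
For example, a dump of performance_schema must name the database explicitly and disable table locking; a typical invocation looks like this:

shell> mysqldump --skip-lock-tables --databases performance_schema > performance_schema.sql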

mysqldump does not dump the MySQL Cluster ndbinfo information database.

Before MySQL 5.6.6, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.6.6, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

If you encounter problems backing up views due to insufficient privileges, see Section C.5, “Restrictions on Views” for a workaround.

 User Comments
   Posted by Jason Dias on February 16, 2004
For a faster “mysqldump” innodb tables
1. mysqldump --opt --user=username --password database > filetosaveto.sql

2. open the dump file put this statement at the beginning of the sql dump text file:

SET FOREIGN_KEY_CHECKS=0; 

3. mysql --user=username --password database < dumpfile.sql

Very fast.

   Posted by Tim To on February 24, 2004
After adding "SET FOREIGN_KEY_CHECKS=0;" remember to append "SET FOREIGN_KEY_CHECKS=1;" at the end of the import file. The potential problem is that any data inconsistency that would have made a foreign key check fail during import will have made it into the database even after the foreign keys are turned back on. This is especially true if the foreign keys aren't turned back on for a long period of time, which can happen if "SET FOREIGN_KEY_CHECKS=1;" was not appended to the import file in the first place.

   Posted by Christopher Huhn on July 1, 2004
You can even do your mysqldump backups with logrotate. 
Simply put something like this into /etc/logrotate.conf:

/var/backups/mysql/dump.sql {
daily
rotate 14
missingok
compress
postrotate
/usr/bin/mysqldump --defaults-extra-file=/…/backup-credentials.cnf --opt --flush-logs --all-databases > /var/backups/mysql/dump.sql
endscript
}

   Posted by Vesa Kivistö on August 26, 2004
Following mysqldump import example for InnoDB tables is at least 100x faster than previous examples.

1. mysqldump --opt --user=username --password database > dumpfile.sql

2. Edit the dump file and put these lines at the beginning:

SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;

3. Put these lines at the end:

SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;

4. mysql --user=username --password database < dumpfile.sql

   Posted by Corey Tisdale on February 23, 2005
If you want to schedule a task on windows to backup and move your data somewhere, the lack of documentation and command-line tools in windows can make it a real beast. I hope this helps you keep your data safe.

First off, you will need a command line file compressor (or you should use one, anyway). I like GNU gzip. You can get it for Windows here http://gnuwin32.sourceforge.net/packages/gzip.htm

Secondly, you will need to use Windows FTP via command line. It took me all day to find documentation on this guy, so I hope this saves some time for somebody.

Anyway, you need two files — the batch file and a script for your ftp client. The Batch file should look like this guy (it uses random numbers in the file name so that multiple backups are not overwritten):

@ECHO OFF

@REM Set dir variables. Use ~1 format in win2k
SET basedir=C:\BACKUP~1
SET workdir=c:\TEMP
SET mysqldir=c:\mysql\bin
SET gzipdir=c:\PROGRA~1\GnuWin32\bin
SET mysqlpassword=mygoodpassword
SET mysqluser=myrootuser

@REM Change to mysqldir
CD %mysqldir%

@REM dump database. This is all one line
mysqldump -u %mysqluser% -p%mysqlpassword% --all-databases >%workdir%\backup.sql

@REM Change to workdir
CD %workdir%

@REM Zip up database
%gzipdir%\gzip.exe backup.sql

@REM Move to random file name
MOVE backup.sql.gz backup.%random%.gz

@REM FTP file to repository
FTP -n -s:%basedir%\ftp-commands.txt

@REM Remove old backup files
del backup.sql
del backup.*.gz

@REM Change back to base dir
CD %basedir%

And your ftp script should look like this guy (and be named ftp-commands.txt so the above script can find it)

open
ftp.mybackuplocation.com
user
myusername
mypassword
bin
put backup.*.gz
quit

Make sure both of the above files are in whatever directory you set up as %basedir% and test it out and make sure everything works for you. Then schedule it to run every day to protect your data!

   Posted by Lon B on March 1, 2005
Corey’s example is helpful, but I don’t care for the random file name. Here is the manual script I use on Windows for kicking off a MYSQL backup. 

You could easily add all the other bells and whistles of ZIP, FTP, and scheduling should you need it. Note that I didn’t use a password or many of the other args for mysqldump, you can add those if ya need ’em.

@ECHO OFF
for /f "tokens=1-4 delims=/ " %%a in ('date/t') do (
set dw=%%a
set mm=%%b
set dd=%%c
set yy=%%d
)

SET bkupdir=C:\path\to\where\you\want\backups
SET mysqldir=D:\path\to\mysql
SET dbname=this_is_the_name_of_my_database
SET dbuser=this_is_my_user_name

@ECHO Beginning backup of %dbname%…

%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql
@ECHO Done! New File: dbBkup_%dbname%_%yy%%mm%%dd%.sql
pause

   Posted by David Tonhofer on March 4, 2005
A little reformulation of the actions that occur during an online dump with log-point registration, i.e. a dump that does not unduly disturb clients using the database during the dump (N.B.: only from 4.1.8 on!) and that can be used to start a slave server from the correct point in the logs. 

Use these options:

--single-transaction
--flush-logs
--master-data=1
--delete-master-logs

If you have several databases that are binary-logged and you want to keep a consistent binary log you may have to include all the databases instead of just some (is that really so?):

--all-databases

Now, these are the actions performed by the master server:

1) Acquire global read lock using FLUSH TABLES WITH READ LOCK. This also flushes the query cache and the query result cache. Caused by option –single-transaction.
2) All running and outstanding transactions terminate. MySQL server stalls for further updates.
3) Read lock on all tables acquired.
4) All the logs are flushed, in particular the binary log is closed and a new generation binary log is opened. Caused by option –flush-logs
5) Binary lock coordinates are read and written out so that the slave can position correctly in the binary log. Caused by –master-data=1
6) Read lock is released, MySQL server can proceed with updates. These updates will also go to the binary log and can thus be replayed by the slave. Meanwhile, the InnoDB tables are dumped in a consistent state, which is the state they were in in step 5. (Not guaranteed for MyISAM tables)
7) Dump terminates after a possibly long time.
8) Any old binary log files are deleted. Caused by –delete-master-logs.

Additionally, there are performance-influencing options:

–extended-insert: use multiple-row insert statements
–quick: do not do buffering of row data, good if tables are large

And there are format-influencing options:

–hex-blob: dump binary columns in hex
–complete-insert: use complete insert statements that include column names works nicely with –extended-insert
–add-drop-table: add a DROP TABLE statement before each CREATE TABLE statement.

   Posted by Mike Ng on May 16, 2005
Following Lon B helpful post:

You can pipe it to gzip to compress in windows. I didn’t think it would work on windows, but apparently it does.

@ECHO Beginning backup of %dbname%…

%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% | gzip> %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql.gz

Of course, you need GNU gzip in your path or directory.

   Posted by Eric Martel on January 16, 2006
When using mysqldump on a replication master, if you want the slave(s) to follow, you may want to avoid the –delete-master-logs option, because it can delete binary logs before the “CHANGE MASTER” is read by the slaves, therefore breaking the replication (then you have to issue manually the “CHANGE MASTER” on the slave(s)). If you want to get rid of old and useless binary logs, it is better to issue a “PURGE MASTER” SQL command on the master after the mysqldump. 

   Posted by Wade Hedgren on January 22, 2006
I moved my MySQL installation from Linux to Windows 2003 and had to create a new backup script. I was using hotcopy, but on Windows it's not available.

So, inspired by Lon B and Corey Tisdale (above), I created a batch file that creates a gzipped mysqldump file for each database and puts them into separate folders. It also creates a log file. You will have to set the vars at the top to match your system.

You will also need GZip to do the compression…

It could still use some work (like no error trapping etc…) but it’s in production for me now.

I used a utility “commail.exe” to send the log file to me after the backup is complete.

//— Begin Batch File —//
@echo off

:: Set some variables
set bkupdir=E:\MySQL\backup
set mysqldir=E:\MySQL
set datadir=E:\MySQL\data
set logdir=E:\MySQL\logs
set dbuser=username
set dbpass=password
set zip=C:\GZip\bin\gzip.exe
set endtime=0

:GETTIME

:: get the date and then parse it into variables
for /F "tokens=2-4 delims=/ " %%i in ('date /t') do (
set mm=%%i
set dd=%%j
set yy=%%k
)

:: get the time and then parse it into variables
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do (
set hh=%%i
set ii=%%j
set ss=%%k
)

:: If this is the second time through then go to the end of the file
if "%endtime%"=="1" goto END

:: Create the filename suffix
set fn=_%yy%%mm%%dd%_%hh%%ii%%ss%

:: Switch to the data directory to enumerate the folders
pushd %datadir%

:: Write to the log file
echo Beginning MySQLDump Process > %logdir%\LOG%fn%.txt
echo Start Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo ————————— >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt

:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (

:: Create the backup sub-directory if it does not exist
if not exist %bkupdir%\%%f\ (
echo Making Directory %%f
echo Making Directory %%f >> %logdir%\LOG%fn%.txt
mkdir %bkupdir%\%%f
) else (
echo Directory %%f Exists
echo Directory %%f Exists >> %logdir%\LOG%fn%.txt
)

:: Run mysqldump on each database and compress the data by piping through gZip
echo Backing up database %%f%fn%.sql.gz
echo Backing up database %%f%fn%.sql.gz >> %logdir%\LOG%fn%.txt
%mysqldir%\bin\mysqldump --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert | %zip% > %bkupdir%\%%f\%%f%fn%.sql.gz
echo Done…
echo Done… >> %logdir%\LOG%fn%.txt
)

:: Go back and get the end time for the script
set endtime=1
goto :GETTIME

:END
:: Write to the log file
echo. >> %logdir%\LOG%fn%.txt
echo ————————— >> %logdir%\LOG%fn%.txt
echo MySQLDump Process Finished >> %logdir%\LOG%fn%.txt
echo End Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt

:: Return to the scripts dir
popd

:: Send the log file in an e-mail
c:\commail\commail -host=smtp.yourcompany.com -from="server <[email protected]>" [email protected] -subject="MySQL Backup" -msg=%logdir%\LOG%fn%.txt

//— End Batch File —//

   Posted by Philip Sbrogna on February 2, 2006
Here’s a bash wrapper for mysqldump I cron’d to run at night. It’s not the sexiest thing but it’s reliable.

It creates a folder for each day, a folder for each db & single bzip2’d files for each table. There are provisions for exclusions. See below where it skips the entire tmp & test db’s and in all db’s, tables tbl_session & tbl_parameter. It also cleans up files older than 5 days (by that time they’ve gone to tape).

Be sure to update <user> & <pwd>. Ideally these would be in constants but I couldn’t get the bash escaping to work.

# setup
suffix=`date +%Y%m%d`
dest=/mirror/mysqldumps
cmd='/usr/bin/mysqldump'

databases=(`echo 'show databases;' | mysql -u <user> --password='<pwd>' | grep -v ^Database$`)

for d in "${databases[@]}"; do
if [[ $d != 'tmp' && $d != 'test' ]]
then
echo "DATABASE ${d}"
s="use ${d}; show tables;"
tables=(`echo ${s} | mysql -u <user> --password='<pwd>' | grep -v '^Tables_in_'`)
for t in "${tables[@]}"; do
if [[ $t != 'tbl_parameter' && $t != 'tbl_session' ]]
then
echo " TABLE ${t}"
path="${dest}/${suffix}/${d}"
mkdir -p ${path}
${cmd} --user=<user> --password='<pwd>' --quick --add-drop-table --all ${d} ${t} | bzip2 -c > ${path}/${t}.sql.bz2
fi
done
fi
done

# delete old dumps (retain 5 days)
find ${dest} -mtime +5 -exec rm {} \;

   Posted by Julien GARINO on December 17, 2009
If you have always wanted to back up your most important database somewhere on your Linux system, and also to send the dump by email so that you can recover the entire content if the system crashes, you can use these 2 scripts.

First Step:
-Install the mutt client that will transfer emails on the command-line : “apt-get install mutt” or “yum install mutt”
-Create the backup directory : “mkdir /home/backups”

Second Step:
– Copy these 2 scripts on your root directory or your user directory :

#!/bin/sh
# Script name : auto_mysql_dump.sh
# Backup the dbname database
dir=`date +%Y-%m-%d`
dbname="mybase"
if [ -d /home/backups ]; then
mkdir /home/backups/$dir
mysqldump -B --user=user_of_my_base --password=pwd_of_my_base --host=host_of_my_base $dbname > /home/backups/$dir/$dbname.sql
if [ $? -eq 0 ]; then
#Bzip2 the dump.sql
bzip2 -z9v /home/backups/$dir/$dbname.sql
#Remove the dump.sql from disk
rm -f /home/backups/$dir/$dbname.sql
fi
fi
# End of script auto_mysql_dump.sh

#!/bin/sh
# Script Name : auto_mail_dump.sh
# Sends an email with the dump realized before
dir=`date +%Y-%m-%d`
dbname="mybase"
mutt -s "Today backup" -a /home/backups/$dir/$dbname.sql.bz2 [email protected] < /dev/null
# End of script auto_mail_dump.sh

-Don’t forget to change the access to make them executable:
“chmod 700 auto_mysql_dump.sh”
“chmod 700 auto_mail_dump.sh”

Third step:
-Edit the CronTab to schedule the execution of the two scripts.
“crontab -e” (you will use the vi editor)
We consider that the 2 scripts are in the /root directory
-I want the dump to be executed at 8.30 everyday
-I want the mail to be sent at 9.00 everyday
Thus I add these 2 rows after the existing lines :
Hit the “i” to insert new characters…

30 8 * * * /root/auto_mysql_dump.sh > /dev/null
00 9 * * * /root/auto_mail_dump.sh > /dev/null

Save the crontab by hitting : “Esc” + “:wq” (means Write and Quit)

What you should do now :
Once you've written the scripts, test them!

Enjoy the automatic backup from now on 🙂

   Posted by on March 22, 2006
When you need to import the data from a mysqldump file, instead of using "shell> mysql < dump.sql", using "mysql> source dump.sql" is much better.

This way of importing the data avoids problems with language-specific characters being turned into garbage.
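
In practice that means opening the client with the right character set and then sourcing the file (a sketch; the character set and the path are placeholders for whatever your dump actually uses):

shell> mysql --default-character-set=utf8 db_name
mysql> source /path/to/dump.sql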

   Posted by Mathieu van Loon on June 29, 2006
Here's a DOS script that will back up all your databases to a separate file in a new folder, zip the folder, encrypt the zip and email the encrypted zip to one or many addresses. If the backup is larger than a specified limit, only the logfile is emailed. The unencrypted zipfile is left on your local machine.

The script is also available at http://www.jijenik.com/projects/mysqlbackup/

Many thanks to Wade Hedgren whose script formed the basis for this version.

//— Begin Batch File —//
::
:: Creates a backup of all databases in MySQL.
:: Zip, encrypts and emails the backup file.
::
:: Each database is saved to a separate file in a new folder.
:: The folder is zipped and then deleted. 
:: the zipped backup is encrypted and then emailed, unless the file exceeds the maximum filesize
:: In all cases the logfile is emailed.
:: The encrypted backup is deleted, leaving the unencrypted zipfile on your local machine.
::
:: Version 1.1 
::
:: Changes in version 1.1 (released June 29th, 2006)
:: – backups are now sent to the address specified by the mailto variable
::
:: The initial version 1.0 was released on May 27th, 2006
:: 
::
:: This version of the script was written by Mathieu van Loon ([email protected])
:: It is based heavily on the script by Wade Hedgren (see comments at http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
::
:: This script requires several freeware libraries:
:: – zipgenius (a compression tool), www.zipgenius.it
:: – blat (an emailer tool), www.blat.net
:: – doff (extracts datetime, ignores regional formatting), www.jfitz.com/dos/index.html
::
:: Some areas where this script could be improved:
:: – include error trapping and handling
:: – make steps such as encryption and email optional 
:: – allow the user to specify a single database on the command line
::
@echo off

::
:: Configuration options
::

:: The threshold for emailing the backup file. If the backup is larger 
:: it will not be emailed (the logfile is always sent).
set maxmailsize=10000000

:: The passphrase used to encrypt the zipfile. Longer is more secure.
set passphrase=secret

:: Name of the database user
set dbuser=root

:: Password for the database user
set dbpass=password

:: Recipients of database backup, comma seperated, enclosed in quotes
set mailto="[email protected],[email protected]"

:: From address for email
set mailfrom="MySQL Backup Service <[email protected]>"

:: Email server
set mailsmtp=localhost

:: Email subject
set mailsubject="MySQL Backup"

:: directory where logfiles are stored
set logdir=C:\DatabaseBackups\logs

:: directory where backup files are stored
set bkupdir=C:\DatabaseBackups

:: Install folder of MySQL
set mysqldir=C:\Program Files (x86)\MySQL\MySQL Server 4.1

:: Data directory of MySQL (only used to enumerate databases, we use mysqldump for backup)
set datadir=C:\Program Files (x86)\MySQL\MySQL Server 4.1\data

:: Path of zipgenius compression tool
set zip=C:\Program Files (x86)\ZipGenius 6\zg.exe

:: Path of blat mail tool
set mail=C:\DatabaseBackups\Backupscript\libraries\Blat250\full\blat.exe

:: Path of doff date tool (specify only the folder not the exe)
set doff=C:\DatabaseBackups\Backupscript\libraries\doff10

::
::
:: NO NEED TO CHANGE ANYTHING BELOW
::
::

:: get the date and then parse it into variables
pushd %doff%
for /f %%i in ('doff.exe yyyymmdd_hhmiss') do set fn=%%i
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd

set logfile="%logdir%\%fn%_Backuplog.txt"

:: Switch to the data directory to enumerate the folders
pushd "%datadir%"

:: Write to the log file
echo Beginning MySQLDump Process > %logfile%
echo Start Time = %nicedate% >> %logfile%
echo ————————— >> %logfile%
echo. >> %logfile%

:: Create the backup folder
if not exist "%bkupdir%\%fn%\" (
echo Making Directory %fn%
echo Making Directory %fn% >> %logfile%

mkdir "%bkupdir%\%fn%"
)

:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (

:: Run mysqldump on each database (the output is zipped later by zipgenius)
echo Backing up database %fn%_%%f.sql
echo Backing up database %fn%_%%f.sql >> %logfile%
"%mysqldir%\bin\mysqldump" --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert > "%bkupdir%\%fn%\%fn%_%%f.sql"
echo Done… >> %logfile%
)

:: return from data dir
popd

pushd %bkupdir%

echo Zipping databases
echo Zipping databases >> %logfile%
REM C9 : maximum compression
REM AM : Delete source files
REM F1 : Store relative path
REM R1 : include subfolders
REM K0 : Do not display progress

"%zip%" -add "%fn%_MySQLBackup.zip" C9 AM F1 R1 K0 +"%bkupdir%\%fn%"

echo Crypting zipfile
echo Crypting zipfile >> %logfile%

REM C : Create non-executable zip
REM S : Do not delete after x tries
REM 3 : Use AES encryption
"%zip%" -encrypt "%fn%_MySQLBackup.zip" C S 3 "%passphrase%" %mailfrom%

echo Deleting directory %fn%
echo Deleting directory %fn% >> %logfile%

rmdir /s /q "%bkupdir%\%fn%"

:: Go back and get the end time for the script
set endtime=1

:: return from backup dir
popd

:: update the nicedate for the log
pushd %doff%
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd

:: Write to the log file
echo. >> %logfile%
echo ————————— >> %logfile%
echo MySQLDump Process Finished >> %logfile%
echo End Time = %nicedate% >> %logfile%
echo. >> %logfile%

:: Send the log file in an e-mail, include the backup file if it is not too large
:: We use the CALL Trick to enable determination of the filesize (type CALL /? at prompt for info)
:: note that you _must_ specify the full filename as the argument

pushd %bkupdir%
Call :MAILFILE "%bkupdir%\%fn%_MySQLBackup.czip"
echo Backup completed
goto :EOF

:MAILFILE

if /i %~z1 LSS %maxmailsize% (
echo Emailing backup file
"%mail%" %logfile% -q -attach %1 -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
) ELSE (
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B.
echo Emailing logfile only
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B. only emailing logfile. >> %logfile%

"%mail%" %logfile% -q -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
)

echo Deleting encrypted backup file
del %1

popd
//— End Batch File —//

   Posted by Don Gilman on July 25, 2006
RE: Mathieu van Loon 

Excellent, I had this installed and configured in about 10 minutes. I do have one minor fix however.

You aren’t getting the time portion of the DOFF command captured into your variable. It appears that the output formatting string MUST NOT CONTAIN ANY BLANKS so I changed mine to: 

for /f %%i in ('doff.exe dd-mm-yyyy_at_hh:mi:ss') do set nicedate=%%i

This is terrific, wish I had found it 10 hrs ago (darn mySQL Administrator Backup - such a waste!!!)
***
Now the problem is that my backups won't restore... I am backing up multiple instances of MediaWiki, Mantis, and Joomla. I'm playing around with
--max_allowed_packet=nnn and that should fix it, based on manual backups working. Now is that nnn bytes or an abbreviation? Hmmm.

   Posted by Enrico Modanese on July 13, 2006
I often get errors [MySQL 4.* and 5.*] on reloading a dump of databases having big blobs. I found the solution: disabling --extended-insert (which comes inside the composite --opt option, enabled by default) with --skip-extended-insert. I think this way is safer, but it is also much slower.
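
A dump command along those lines might look like this (a sketch; --hex-blob is an extra precaution for binary data that the poster does not mention, and db_name is a placeholder):

shell> mysqldump --opt --skip-extended-insert --hex-blob db_name > dump.sql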

   Posted by Roddi Walker on October 8, 2006
Here’s a python script that does rolling WinRAR’d backups on Windows. It should be trivial to change to Linux, or another compression program.
Please note:
1) this was a quick hack, so please test thoroughly before using in production. Still, I hope it will be a useful basis for your own script.
2) the --single-transaction switch is used as I am backing up InnoDB tables.
3) mysqldump is run as the root user. It would be A Good Thing to make this more secure, e.g. create a backup user with read-only permissions to the tables.
4) Indentation is significant in Python.

import glob
import os
import time

# configuration
baseBackupFileName = "backupName"
maxBackups = 3
mySqlDumpCommand = "d:\\programs\\mysql\\bin\\mysqldump --user=root --password=rootpass --single-transaction DBName Table1Name Table2Name Table3Name"
winRarPath = "\"c:\\Program Files\\WinRAR\\WinRAR.exe\"" # path is quoted as it contains spaces

print "--- START ---"

# create new backup
newBackupFileName = baseBackupFileName + time.strftime("_%Y%m%d_%H%M%S", time.localtime())
os.system(mySqlDumpCommand+" > "+newBackupFileName+".sql")

# compress new backup
os.system(winRarPath+" a "+newBackupFileName+" "+newBackupFileName+".sql")
os.remove(newBackupFileName+".sql")
print "Created new backup \""+newBackupFileName+".rar\""

# delete old backups
oldBackupFileNames = glob.glob(baseBackupFileName+"_*_*.rar")
oldBackupFileNames.sort()
if len(oldBackupFileNames) > maxBackups:
    for fileName in oldBackupFileNames[0:len(oldBackupFileNames)-maxBackups]:
        os.remove(fileName)
        print "Deleted old backup \""+fileName+"\""

print "--- END ---"

   Posted by Nick Littlestone on June 20, 2007
It seems one needs to be careful when using --skip-opt with databases containing non-ASCII latin1 characters, especially if one has not been paying much attention to character sets.

I am just using default character sets, normally latin1. However, the dump produced by mysqldump is, perhaps surprisingly, in utf8. This is fine in itself, but it leads to trouble with the --skip-opt option to mysqldump, which turns off --set-charset but leaves the dump in utf8.
This seems to lead to a dump that will be silently reloaded incorrectly if strings in the database contain non-ASCII latin1 characters.
(Is this a documentation flaw, a design flaw or a bug??)
Perhaps the fact that mysqldump uses utf8 by default, and the importance of the --set-charset option, should be more prominently documented (see the documentation for the --default-character-set option for the current mention of the use of utf8).
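
One defensive pattern (a sketch, not something the manual text above prescribes; db_name is a placeholder) is to name the character set explicitly on both sides so that nothing is left to the defaults:

shell> mysqldump --skip-opt --default-character-set=utf8 db_name > dump.sql
shell> mysql --default-character-set=utf8 db_name < dump.sql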

   Posted by Ryan Haynes on July 11, 2007
I am fairly new to bash scripting; however, I encountered the problem of all the databases going into one *.sql file. If you have a large number of databases to back up, it can take forever to restore your backup. This is a script I wrote to accomplish what I felt was needed. It grabs the name of each database and puts them in separate *.sql.bz2 files with a corresponding timestamp. Please let me know if this helps and perhaps if I can make it more elegant.

#!/bin/sh

TIME_1=`date +%s`

cd /backup/mysql

DBS="$(mysql --user=youruser --password=yourpass -Bse 'show databases')"

for db in ${DBS[@]}
do
echo ${db}-$(date +%m-%d-%y).sql.bz2 is being saved in /backup/mysql
# remember to add the options you need with your backups here.
mysqldump --user=youruser --password=yourpass $db --single-transaction -R | bzip2 -c > ${db}-$(date +%m-%d-%y).sql.bz2
done

TIME_2=`date +%s`

elapsed_time=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

## just a sanity check to make sure i am not running a dump for 4 hours

echo "This mysql dump ran for a total of $elapsed_time minutes." > mysql_dump_runtime.txt

# delete old databases. I have it setup on a daily cron so 
# anything older than 60 minutes is fine

for del in $(find /backup/mysql -name '*.sql.bz2' -mmin +60)
do
echo This directory is more than one day old and it is being removed: $del
rm $del
done

   Posted by Bill Hernandez on August 22, 2007
#!/bin/bash

# SEE : http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
# SEE : http://safari.oreilly.com/0596526784/date_and_time_string_formatting_with_strftime

# Improved by Bill Hernandez (Plano, Texas) on Tuesday, August 21, 2007 (12:55 AM)
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
# ( 5 ) Where I have mysql5, you may have to replace it with mysql
#
# Posted by Ryan Haynes on July 11 2007 6:29pm

# DO NOT DELETE AUTOMATICALLY FOR NOW, MAYBE LATER

DELETE_EXPIRED_AUTOMATICALLY="TRUE"

# DELETE EXPIRED BACKUPS THAT ARE MORE THAN
# expire_minutes=$(( 1 * 30 )) # 30 minutes old
# expire_minutes=$(( 60 * 24 )) # 1 day old
# expire_minutes=$(( 60 * 24 * 7 )) # 7 days old
# expire_minutes=$(( 60 * 24 * 30 )) # 30 days old

expire_minutes=$(( 60 * 24 * 7 )) # 7 days old

if [ $expire_minutes -gt 1440 ]; then
    expire_days=$(( $expire_minutes /1440 ))
else
    expire_days=0
fi

function pause(){ 
read -p "$*"
}

# pause "HIT RETURN, and then enter your sudo password..."
echo "Please enter your sudo password..."
sudo echo

mysql_username="your_mysql_username"
mysql_password="your_mysql_password"

current_dir=`pwd`
echo -n "Current working directory is : "
echo $current_dir
echo "------------------------------------------------------------------------"

TIME_1=`date +%s`
TS=$(date +%Y.%m.%d\-%I.%M.%p)

BASE_DIR=/backups/mysql
BACKUP_DIR=${BASE_DIR}/$TS
BACKUP_LOG_NAME=mysql_dump_runtime.log
BACKUP_LOG=${BASE_DIR}/${BACKUP_LOG_NAME}

sudo mkdir -p $BACKUP_DIR
sudo chown mysql:admin $BACKUP_DIR
sudo chmod 775 $BASE_DIR
sudo chmod -R 777 $BACKUP_DIR

cd $BACKUP_DIR
echo -n "Changed working directory to : "
pwd

echo "Saving the following backups..."
echo "------------------------------------------------------------------------"

DBS="$(mysql5 --user=${mysql_username} --password=${mysql_password} -Bse 'show databases')"
for db in ${DBS[@]}
do
    normal_output_filename=${db}.sql
    compressed_output_filename=${normal_output_filename}.bz2
    echo $compressed_output_filename
    # remember to add the options you need with your backups here.
    
    echo "-- $compressed_output_filename - $TS" > $normal_output_filename
    echo "-- Logname : `logname`" >> $normal_output_filename
    # mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R | bzip2 -c > $compressed_output_filename
    mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R >> $normal_output_filename
    bzip2 -c $normal_output_filename > $compressed_output_filename
    rm $normal_output_filename
done
echo "------------------------------------------------------------------------"

TIME_2=`date +%s`

elapsed_seconds=$(( ( $TIME_2 - $TIME_1 ) ))
elapsed_minutes=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

# just a sanity check to make sure i am not running a dump for 4 hours

cd $BASE_DIR
echo -n "Changed working directory to : "
pwd
echo "Making log entries..."

if [ ! -f $BACKUP_LOG ]; then
    echo "------------------------------------------------------------------------" > ${BACKUP_LOG_NAME}
    echo "THIS IS A LOG OF THE MYSQL DUMPS..." >> ${BACKUP_LOG_NAME}
    echo "DATE STARTED : [${TS}]" >> ${BACKUP_LOG_NAME}
    echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
    echo "[BACKUP DIRECTORY ] [ELAPSED TIME]" >> ${BACKUP_LOG_NAME}
    echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
fi
    echo "[${TS}] This mysql dump ran for a total of $elapsed_seconds seconds." >> ${BACKUP_LOG_NAME}
    echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}

# delete old databases. I have it setup on a daily cron so anything older than 60 minutes is fine
if [ $DELETE_EXPIRED_AUTOMATICALLY == "TRUE" ]; then
    counter=0
    for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
    do
        counter=$(( counter + 1 ))
        echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
    done
    echo "------------------------------------------------------------------------"
    if [ $counter -lt 1 ]; then
        if [ $expire_days -gt 0 ]; then
            echo There were no backup directories that were more than ${expire_days} days old:
        else
            echo There were no backup directories that were more than ${expire_minutes} minutes old:
        fi
    else
        echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
        if [ $expire_days -gt 0 ]; then
            echo These directories are more than ${expire_days} days old and they are being removed:
        else
            echo These directories are more than ${expire_minutes} minutes old and they are being removed:
        fi
        echo "------------------------------------------------------------------------"
        echo "\${expire_minutes} = ${expire_minutes} minutes"
        counter=0
        for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
        do
        counter=$(( counter + 1 ))
           echo $del
           rm -R $del
        done
    fi
fi
echo "------------------------------------------------------------------------"
cd `echo $current_dir`
echo -n "Restored working directory to : "
pwd

   Posted by Ludovico Caldara on October 29, 2007
If you want to check the exit status of mysqldump in a shell, you should pay attention: this will not work:

mysqldump --all-databases | gzip -c > dumpfile.sql.gz
if [ $? -eq 0 ]; then
    # ...
fi

since $? here is the exit status of gzip, the last command in the pipeline, which will almost always be 0 even if mysqldump failed.

Instead you should use a named pipe (FIFO):

### create a pipe named "pipe"
mkfifo pipe
### compress the pipe in background
gzip < pipe > dumpfile.sql.gz &
### write directly to the pipe
mysqldump --all-databases > pipe
### get the real return code of mysqldump
result=$?
### wait until the gzip completes
wait
### now it is safe to remove the pipe
rm pipe

kind regards….
— 
Ludovico
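
If you are using bash, a shorter alternative to the named pipe (a sketch; it relies on bash-specific behaviour, and the file name is a placeholder) is to let the shell report pipeline failures itself:

### make the pipeline's exit status reflect any failing stage (bash only)
set -o pipefail
mysqldump --all-databases | gzip -c > dumpfile.sql.gz
result=$?   ### non-zero if mysqldump (or gzip) failed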

   Posted by Lance Keay on November 13, 2007
Hi all,
And thanks for the great script examples – I’ve taken a bit of the batch files and made a perl script for backing up mysql databases – it’s pretty crude, but it’s what I’m using right now to back up the servers nightly.

It’s done on a win2003 server box that has perl and gzip (http://gnuwin32.sourceforge.net/packages.html)installed.

I chose perl ’cause if I need to port the script to ‘nix it’s trivial. (well, so’s the script, really) And it’s easy to bang out small scripts in!

Hope this helps someone!

=============

#!/usr/bin/perl 

## DB info struct
##{ host, db, username, password}
@DBsToBackup =(
##host 1
['192.168.0.1', 'database1', 'username', 'password'],
['192.168.0.1', 'database2', 'username', 'password'],
['192.168.0.1', 'database3', 'username', 'password'],
['192.168.0.1', 'database4', 'username', 'password'],

##host 2
['192.168.0.2', 'database5', 'username', 'password']
);

$backupdir = 'D:\\Database\\DBdumps';

print "Starting dump of databases...\n\n";

foreach $dbinfo (@DBsToBackup){
($Second, $Minute, $Hour, $Day, $Month, $Year, $WeekDay, $DayOfYear, $IsDST) = localtime(time) ;
$Year += 1900 ; $Month += 1;
$outputFilename = sprintf("%04d%02d%02d[%02d%02d%02d]", $Year, $Month, $Day, $Hour, $Minute, $Second ) . "_$$dbinfo[1]_.sql.gz";

print "dumping: $$dbinfo[0] / $$dbinfo[1]...\n";
$ex = "\"C:\\Program Files\\MySQL\\MySQL\ Server\ 5.0\\bin\\mysqldump\" --user=$$dbinfo[2] --password=$$dbinfo[3] --host=$$dbinfo[0] --add-drop-database --add-drop-table $$dbinfo[1] | gzip > ${backupdir}\\$outputFilename";

print $ex ."\n\n";
print `$ex`;
}

print "DONE dumping\n";

print "deleting dumps older than 14 days...\n";
print `forfiles /P $backupdir /D -14 /C "cmd /c del @path\@file" /S`;
print “script finished\n”;

==============

You can find this script and more at http://www.lancetek.com

Thanks,
Lance

   Posted by Dave Walker on December 18, 2007
Host on a shared server?
Then you will most probably not be able to create or retrieve a MySQL dump.

For a solution try this:
http://www.dwalker.co.uk/phpmysqlautobackup/
it will export your database and email to you. Its GPL open source.

   Posted by Bret Warren on March 2, 2008
This is a variation from the example above to deal with huge dump files that are too big to edit:

1. mysqldump --opt --user=username --password database > dumpfile.sql

2. Create file to execute prior to the dump file:

echo 'SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;
' > pre.sql

3. Create a file to execute after the dump file is imported:

echo 'SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;
' > post.sql

4. cat pre.sql dumpfile.sql post.sql | mysql --user=username --password database

   Posted by Steve Hand on March 7, 2008
In case this helps anyone. This backs up all databases and tables and keeps all files for a week, placing them in a directory structure of the format:

/backup_dir/db_name/day/table.sql.gz

Useful if you want to restore to a particular day's data.

It checks that new backups are different from the last before overwriting files. This helps if you are rsyncing your filesystems, as normally mysql writes a date into the dump so the files always appear to differ even if the data is the same.

Also saves a directory with just your schema in, checks and repairs tables where necessary and defrags tables on a Sunday.

 <?php
        // Change Me 

    $username = "root";
    $password = "password";
    $backup_dir = "/mnt/backup";
    $dump = "/usr/bin/mysqldump";
    $grep = "/bin/grep";
    $gzip = "/bin/gzip";

        // This should not need changing from here

    function sql_dict($sql){
        $x = mysql_query($sql);
        if ($x) return mysql_fetch_assoc($x);
    }
    
    function cleanup($dir){
        $d = dir($dir);
        echo "Path: " . $d->path . "\n";
        while (false !== ($entry = $d->read())) {
            if ($entry=="." or $entry=="..") continue;
            $e = str_replace(".sql.gz","",$entry);
            $x = sql_dict("describe $e");
            if (!$x) {
                print "Removing old backup file [$entry]\n";
                unlink("$dir/$entry");
            }
        }
        $d->close();
    }

    function crc32_file($filename)
    {
          global $gzip;
        $x = exec("$gzip --list --verbose $filename");
        $x = explode(" ",$x);
        return $x[1];
    }

    if (mysql_connect("localhost",$username,$password)) print "Connected.\n";
    else die("Failed to connect to database."); 
    $dbs = mysql_query("show databases");
    if ($dbs) while ($db = mysql_fetch_array($dbs, MYSQL_ASSOC)) {
        $db = $db['Database'];
        if ($db=="information_schema") continue;
        if (mysql_select_db($db)) print "Selected [$db]\n";
        else die("Failed to select db [$db]");
        foreach (array("schema","data") as $pass){
            $sql = mysql_query("show tables");
            $day = date("l");
            if ($pass=="schema") $dir = "/$backup_dir/$db/schema";
            else $dir =  "/$backup_dir/$db/$day";
            if (!file_exists($dir)) system("mkdir -p $dir");
            if (!file_exists($dir)) die("Couldn't Create $dir");
            if ($pass=="data"){
                $latest = "/$backup_dir/$db/latest";
                unlink($latest);
                system("/bin/ln -s \"$dir\" \"$latest\"");
            }            
            cleanup($dir);
            if ($sql) while ($s = mysql_fetch_assoc($sql)) {
                if (!isset($s["Tables_in_{$db}"])) {
                    print "no result";
                    print_r($sql);
                    die();
                }
                $t = $s["Tables_in_{$db}"];
                if ($pass=="schema") $data = "--no-data";
                else $data = "--lock-tables";
                $tab = $t;
                $lim = 30;
                if (strlen($tab)>$lim) $tab = substr($tab,0,$lim-3)."...";
                while (strlen($tab)<30) $tab .= " ";
                print "BACKUP: $pass : $day : $db : $tab : ";
                if ($pass=="data"){
                    print "Check : ";
                    $check = sql_dict("check table $t");
                    $check = $check['Msg_text'];
                    print "$check : ";
                    if ($check != "OK") {
                        print "Repair";
                        $repair = sql_dict("repair table $t");
                        $repair = $repair['Msg_text'];
                        print " : $repair : ";
                    }
                    if ($day=="Sunday"){
                        // optimize
                        print "Optimize : ";
                        $type = sql_dict("show table status like '$t'");
                        $type = $type['Engine'];
                        if ($type=="MyISAM") mysql_query("optimize table $t");
                        if ($type=="InnoDB") mysql_query("alter table $t engine='InnoDB'");
                    }
                }
                if (isset($argv[1])){
                    print "Skipping dump\n";
                } else {
                    $temp = "/tmp/backup.$t.sql.gz";
                    $out  = "$dir/$t.sql.gz";
                    print "Dump : ";
                    $cmd = "$dump -u$username -p$password $data --quick --add-drop-table $db $t | $grep -v 'Dump completed' | $gzip -n > $temp";
                    system($cmd);
                    print "CRC32 : ";
                    if (!file_exists($out)){
                        print "Saving  : ";
                        $cmd = "/bin/mv $temp $out";
                        system($cmd);
                    } else {
                        $md5  = crc32_file($temp);
                        $nmd5 = crc32_file($out);
                        if ($md5!=$nmd5) {
                            print "Saving  : ";
                            $cmd = "/bin/mv $temp $out";
                            system($cmd);
                        } else {
                            print "Skipped : ";
                            unlink($temp);
                        }
                    }
                    $size = filesize($out);
                    print "[$size]\n";
                }
            }
        }
    } ?>  

   Posted by Alexander Driantsov on June 16, 2008
Here is an example of using mysqldump with rdiff-backup on one server, with synchronization to a remote server using rsync:

#!/bin/sh
# Incremental backup script using rdiff
# Author: Driantsov Alexander
# Requirements: 
# * rdiff-backup – http://www.nongnu.org/rdiff-backup/
# * rsync
# * ssh 😉
####

BACKUP_ADMIN_EMAIL="[email protected]"
BACKUP_MAIL_SUBJECT="`hostname`: SQL Backup Synchronization Result"
BACKUP_LOG="/var/log/backup-sql"

BACKUP_PRIO="20" # Priority for the MySQL dump and rdiff-backup Min: 20 Max: -20
BACKUP_TMP_DIR="/var/backup/mysql_tmp" # New dumps will be stored here
BACKUP_DIFF_DIR="/var/backup/hosting/mysql" # Diffs of dumps will be stored there
SYNC_SRV="BAC.KUP.SER.VER" # Remote server for backup storage
SYNC_USER="backup_user" # User at remote storage
SYNC_SPEED="200" # Limit Synchronization Bandwidth to this number of KB/s
SYNC_DIR="/backup/hosting/mysql" # Directory on Remote server to synchronize backups in
MYSQL_USER="admin" # MySQL user
MYSQL_PASSWD=`cat /etc/psa/.psa.shadow` # Password for MySQL. You may obtain the password from /etc/psa/.psa.shadow if you are using Plesk on your server.

# Implementation
RSCONSTR="$SYNC_USER@$SYNC_SRV"
# Dump
echo "Backup Started at `date`" > $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "Load average at start: $load_average" >> $BACKUP_LOG
echo "\nBackingUP MySQL:" >> $BACKUP_LOG
for i in `mysql -u$MYSQL_USER -p$MYSQL_PASSWD -Bse "show databases"`; do echo "Backing up database $i..." >> $BACKUP_LOG ; nice -n $BACKUP_PRIO mysqldump --single-transaction --quick --skip-extended-insert -u$MYSQL_USER -p$MYSQL_PASSWD $i > $BACKUP_TMP_DIR/$i.dump; done

echo "MySQL dump completed at `date`" >> $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "\nLoad average after MySQL dumps: $load_average\n" >> $BACKUP_LOG
# Diff
echo "Looking for difference in database" >> $BACKUP_LOG
nice -n $BACKUP_PRIO rdiff-backup $BACKUP_TMP_DIR $BACKUP_DIFF_DIR/$HOSTNAME 2>&1 >> $BACKUP_LOG
echo "Done" >> $BACKUP_LOG

echo "Rdiff-backup completed at `date`" >> $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "\nLoad average after rdiff-backup: $load_average\n" >> $BACKUP_LOG

# Synchronize

sql_result=`rsync -avz --bwlimit $SYNC_SPEED $BACKUP_DIFF_DIR $RSCONSTR:$SYNC_DIR|tail -n 2`
free_space_info=`ssh $RSCONSTR df -h --sync -t ext3`

echo -en " MySQL backup synchronization:\n $sql_result \nSynchronization completed at `date` \n\nInformation on free space on remote backup server: \n $free_space_info \n\n Backup Log: \n\n`cat $BACKUP_LOG`"| mail -s "$BACKUP_MAIL_SUBJECT" $BACKUP_ADMIN_EMAIL

   Posted by Bradford Mitchell on July 18, 2008
This is an example of a Windows batch script that implements a rotating archive of backups on a daily, weekly, and monthly basis. It also provides an installation option for the creation of the backup directories and an option to add a scheduled task to the system to run the batch file.

I started with what Lon B posted and many editions/revisions later this was produced. I hope you find it as useful as we have.

~~~ BEGIN FILE ~~~

@ECHO OFF
SET VERSIONMAJOR=10
SET VERSIONMINOR=6

FOR /f "tokens=1-4 delims=/ " %%a IN ('date/t') DO (
SET dw=%%a
SET mm=%%b
SET dd=%%c
SET yy=%%d
)

REM *** VERIFY AND UPDATE THESE SETTINGS BEFORE INITIAL RUN ***
REM *** mysqldir must point to the \bin directory! ***
SET bkupdir=C:\MySQL-Backups
SET mysqldir=C:\wamp\bin\mysql\mysql5.0.51b\bin
SET dbhost=localhost
SET dbuser=
SET dbpass=
REM *** END USER CONFIGURABLE SETTINGS ***

IF /i "%1" == "--INSTALL" GOTO INSTALLER
IF /i "%1" == "--CREATEDIRS" GOTO CREATEDIRS
IF /i "%1" == "--ADDSCHEDULEDTASK" GOTO TASKSCHED
IF ""%1"" == """" GOTO ALLDB
IF /i "%1" == "--ALL" GOTO ALLDB
IF /i "%1:~0,2%" == "--" GOTO PARAMERROR

SET ALLDBS=0
SET dbnames=%1
SET dbnamesf=%1
SHIFT
:setArgs
IF ""%1""=="""" GOTO BKUP
SET dbnames=%dbnames% %1
SET dbnamesf=%dbnamesf%_%1
SHIFT
GOTO setArgs

:ALLDB
SET ALLDBS=1
SET dbnames=ALL DATABASES
SET dbnamesf=ALL_DATABASES

:BKUP
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR% > %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR%
@ECHO.

IF NOT EXIST %bkupdir%\INSTALLED.OK (
@ECHO DIRECTORY STRUCTURE NOT IN PLACE. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO FAILED TO BACKUP DATABASES. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO DIRECTORY STRUCTURE NOT IN PLACE.
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS
@ECHO FAILED TO BACKUP DATABASES.
GOTO BOTTOM
)

@ECHO Beginning backup of %dbnames%… >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Beginning backup of %dbnames%…

IF %ALLDBS% == 1 (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -A -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
) ELSE (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql --databases %dbnames%
)

%mysqldir%\mysqldump %dumpparams% >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log

@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql

COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Daily\dbBkup_%dbnamesf%_%dw%.sql /A > NUL
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql

REM Check to see if it’s time for the Weekend backup 
IF /i "%dw%" NEQ "Sat" GOTO SKIPWKBK
IF EXIST %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql GOTO WKPRE
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql GOTO WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_3.sql > NUL
:WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql > NUL
:WKPRE
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql > NUL
:WKCUR
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql /A > NUL
@ECHO. > %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql

:SKIPWKBK
REM if (day >= 28) write EoM backup
IF %dd% GEQ 28 (
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Monthly\dbBkup_%dbnamesf%_%mm%.sql /A > NUL
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql
)

DEL /q /f %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql

@ECHO Backup stored in rotating archives. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO End MySQLdump Script >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Backup stored in rotating archives.
@ECHO.
@ECHO End MySQLdump Script
GOTO BOTTOM

:INSTALLER
@ECHO VERIFY: Path to mysqldump: %mysqldir%
@ECHO VERIFY: Path to backups: %bkupdir%
@ECHO VERIFY: MySQL User: %dbuser%
@ECHO VERIFY: MySQL Pass: %dbpass%
@ECHO VERIFY: MySQL Host: %dbhost%
IF NOT EXIST %bkupdir%\INSTALLED.OK (
@ECHO ALERT: Backup directory does not exist. Create base directory and subdirectories?
SET /p domkdir=[Y/N]:
IF /i "%domkdir%" == "N" (
@ECHO ALERT: CANNOT CONTINUE WITHOUT DIRECTORIES IN PLACE.
GOTO BOTTOM
)
)
:CREATEDIRS
IF NOT EXIST %bkupdir%\INSTALLED.OK (
MD "%bkupdir%" > NUL
MD "%bkupdir%\Daily" > NUL
MD "%bkupdir%\Weekly" > NUL
MD "%bkupdir%\Monthly" > NUL
@ECHO INSTALLED CORRECTLY > %bkupdir%\INSTALLED.OK
)
GOTO BOTTOM

:TASKSCHED
@ECHO Preparing add Scheduled Task…
:STUPIDUSER1
SET /p taskuser=Domain\User to run task:
IF /i ""%taskuser%"" == """" GOTO STUPIDUSER1
:STUPIDUSER2
SET /p taskpwd1=Password:
SET /p taskpwd2=Confirm Password:
IF %taskpwd1% NEQ %taskpwd2% GOTO STUPIDUSER2
:STUPIDUSER3
SET /p taskname=Task name:
IF /i ""%taskname%"" == """" GOTO STUPIDUSER3
SET /p taskparam=Parameters to pass to batch file:
SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "%~f0 %taskparam%" /RU "%taskuser%" /RP %taskpwd1%
GOTO BOTTOM

:PARAMERROR
@ECHO ERROR: Unknown Parameter Passed.
@ECHO Current supported parameters:
@ECHO --ALL - Backup all databases, same as passing nothing to batch file
@ECHO --ADDSCHEDULEDTASK - Adds a scheduled task for this process
@ECHO --CREATEDIRS - Creates Directory Structure
@ECHO --INSTALL - Creates directory structure and outputs configuration settings that need verification

:BOTTOM

   Posted by Agusti Sanchez on October 28, 2008
To export a limited amount of records, you may use the following:

mysqldump -u [username] -p [dbname] {--opt} --where="true limit 100" > dump.sql

This adds the clause "where true limit 100" when creating the SELECT statement. (Adding "true" is a workaround: I do not know whether there is a way to add "limit 100" without also adding the WHERE keyword.)

Original post: http://www.geeknewz.com/board/index.php?automodule=blog&blogid=76&showentry=434

   Posted by Chris Salch on February 22, 2009
I wrote a perl script that uses mysqldump to back up and compress an arbitrary list of databases. You can find the script and a rough explanation here:

http://kc5vzm.com/cms/wiki/view/MySQL+-+Simple+Backup+Script

   Posted by Jonathan Watt on May 28, 2009
To drop ALL tables in your database (fill out the first line as appropriate):

MYSQL="mysql -h HOST -u USERNAME -pPASSWORD -D DB_NAME"
$MYSQL -BNe "show tables" | awk '{print "set foreign_key_checks=0; drop table `" $1 "`;"}' | $MYSQL
unset MYSQL

This can be useful if you need to empty a database in order to restore a backup made by mysqldump, but you couldn't use --add-drop-database because you don't have CREATE DATABASE privileges on the command line (e.g. you're on shared hosting). mysqldump adds DROP TABLE by default, but if tables may have been added or renamed since the time of your backup (e.g. by some sort of update process that you're trying to revert from), failing to drop those tables will likely cause serious headaches later on.

Of course this raises the question of why MySQL doesn’t support “DROP TABLE *;” (in which case mysqldump could just insert that)?

   Posted by Meg Liebenstein on August 21, 2009
We can also write SQL that generates the SQL to drop the tables:

select concat('drop table ',TABLE_SCHEMA,'.',TABLE_NAME,';')
from information_schema.tables
where TABLE_SCHEMA = 'db_name'  -- or whatever condition you want
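
To actually execute the generated statements, you can feed them straight back into the client (a sketch; mydb is a placeholder, and note that foreign key checks are not disabled here):

shell> mysql -N -B -e "select concat('drop table ',TABLE_SCHEMA,'.',TABLE_NAME,';') from information_schema.tables where TABLE_SCHEMA='mydb'" | mysql mydb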

   Posted by Joseph Edmonds on February 11, 2010
If you need to split a dumpfile back into its component parts, for example to retrieve a particular table, you might find this little PHP script handy:

http://www.edmondscommerce.co.uk/blog/mysql/updated-mysql-dump-splitter-and-cleaner/

   Posted by kedar vaijanapurkar on March 11, 2010
Here is a shell script for a mysqldump extractor: "mydumpsplitter".
This shell script will grab the tables you want and write each one to tablename.sql.
It's capable of understanding regular expressions, as I've added the sed -r option.
MyDumpSplitter can also split the dump into individual table dumps.

http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script/

   Posted by Boyd Hemphill on March 15, 2010
There is an irony of SQL injection here but this is handy when I am building small files to represent tables for unit tests:

mysqldump --skip-opt exp --tables foo --where "foo_parent_id = 72 limit 100" > ~/foo.dmp

(note the injection of the limit clause in the where)

I now have 100 insert statements. Foo itself has several million rows, so this is quite handy.

   Posted by Vlatko Šurlan on July 4, 2010
There is an interesting script sample here: http://www.docplanet.org/linux/backing-up-linux-web-server-live-via-ssh/
showing a way to dump mysql databases directly into gzip and then into an ssh connection, thus creating a gzipped dump archive that never resides on the server's hard drive. This can be a handy way to ensure that the backup does not fill up the server's hard drive.
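
The core of that approach is roughly this pipeline (a sketch; the credentials, remote host and remote path are placeholders):

shell> mysqldump --all-databases --single-transaction | gzip -c | ssh backup@remote.example.com "cat > /backups/$(hostname)-$(date +%Y%m%d).sql.gz"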

   Posted by Alasdair Barclay on September 6, 2010
The post by Bradford Mitchell on July 18 2008 7:11pm gave a nice script with, among other functions, the ability to automatically create a Windows Scheduled Task. However, if the name of the script contains spaces, the creation of the scheduled task will fail.

To get it to work, amend the following line

SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "%~f0 %taskparam%" /RU "%taskuser%" /RP %taskpwd1%

as follows:

SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "\"%~f0\" %taskparam%" /RU "%taskuser%" /RP %taskpwd1%

The escaped quote characters around the script filename (%~f0) will now allow this to work.

   Posted by David Bennett on September 15, 2010
Extract a single table from mysqldump output.

This Perl script will extract a single table definition and its data from a large mysqldump file. Very nice tool for quick restorations from backups.

#!/usr/bin/perl -w
use strict;

# Usage: perl extract_table_from_mysqldump.pl {table name} < {full mysqldump} > {single table dump}

my $regex='^\-\- Table structure for table `'.$ARGV[0].'`';
my $output=0;
while (<STDIN>) {
if ($output && m/^\-\- Table structure/) { exit; }
if (!$output && m/$regex/) { $output=1; }
if ($output) { print; }
}

–Dave Bennett (dbennett at bensoft com)

   Posted by Siva Kranthi Kumar on October 29, 2010
Bat file commands to take SQL DB backup with compression 

With MKS Toolkit (http://www.mkssoftware.com/products/tk/)
———————————————————–

set DateTime=%date:~7,2%-%date:~4,2%-%date:~-4%
set WorkingDirectory=D:\Backup Directory\
set MySQLDirectory=D:\mysqldump.exe Directory\

"%MySQLDirectory%mysqldump" --host="127.0.0.1" --user="root" --password="" --routines --triggers --events dbname > "%WorkingDirectory%DBName %DateTime%.sql"

zip -m "%WorkingDirectory%DBName %DateTime%" "DBName %DateTime%.sql"

With 7-zip (http://www.7-zip.org/download.html)
———————————————–

set DateTime=%date:~7,2%-%date:~4,2%-%date:~-4%
set WorkingDirectory=D:\Backup Directory\
set MySQLDirectory=D:\mysqldump.exe Directory\

"%MySQLDirectory%mysqldump" --host="127.0.0.1" --user="root" --password="" --routines --triggers --events dbname > "%WorkingDirectory%DBName %DateTime%.sql"

7z a -tzip "%WorkingDirectory%DBName %DateTime%" "DBName %DateTime%.sql"

if ERRORLEVEL 1 goto :EOF
del "%WorkingDirectory%DBName %DateTime%.sql"
:EOF 

   Posted by Milo de Vries on December 1, 2010
Creating a backup scheduled task on Windows with a oneliner:

schtasks /create /sc daily /st 04:30 /ru SYSTEM /tn MySQL_backup /tr "\"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe\" -B <DB_NAME> -u <USER_NAME> -p<PASSWORD> -r C:\MySQL_backup\<DB_NAME>_%date:~0,2%.sql"

This creates a daily task to run at 04:30, creating a separate backup file each day (<DB_NAME>_ma.sql, <DB_NAME>_di.sql, <DB_NAME>_wo.sql etc.)

   Posted by Ilan Hazan on April 6, 2011
Restoring a dumped table into the MySQL master server can lead to serious replication delay.
To overcome the replication delay caused by restoring the dumped table on the master, the massive inserts need to be spread out over time. This can be done with the MySQL SLEEP() function.
See http://www.mysqldiary.com/as-restoring-a-dump-table-into-the-mysql-master-you-better-get-some-sleep/
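
The idea can be approximated by inserting a pause every N INSERT statements before feeding the file to the master (a sketch; the interval, the sleep time and db_name are arbitrary placeholders):

shell> awk '/^INSERT/ && ++n % 1000 == 0 { print; print "SELECT SLEEP(1);"; next } { print }' dump.sql | mysql db_name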

   Posted by Alejandro Arauz on October 10, 2011
The mysqldump command is very helpful, but sometimes you don't have the necessary permissions on the server to run this command (like in shared environments).

I had this problem before, and while searching for backup tools I found MySqlBackupFTP (http://mysqlbackupftp.com). It is easy to use and it has a free version that allows you to connect to a remote phpMyAdmin instance.

I hope it helps others with the same problem.

   Posted by DBA Lead on June 29, 2012
Fast, parallel restore from SQL dumps (mysqldump) for MySQL. The technique is very simple and straightforward.

http://www.geeksww.com/tutorials/database_management_systems/mysql/tips_and_tricks/fast_parallel_restore_from_sql_dumps_mysqldump_for_mysql.php

   Posted by Bill Plimpton on March 14, 2014
To get the return code of mysqldump, use:

mysqldump $MYSQL_OPTS $database $tables |gzip -1 >$path
if [ ${PIPESTATUS[0]} != 0 ]; then
echo " ## Backup of $db tables $tables FAILED! ##"
else
echo "Backup of $db $tables successful!"
fi

Copyright notice: This article is licensed under a Creative Commons Attribution 4.0 International licence [BY-NC-SA].
Reprinting: Permission must be obtained before reprinting, and any reprint must state that the information comes from this site.
Article title: "4.5.4 mysqldump — A Database Backup Program"
Article link: https://www.thefreesky.com/blog/24667.html
The resources on this site are for personal study and exchange only. Please delete them within 24 hours of downloading; commercial use is not permitted, and you bear sole responsibility for any legal issues arising from such use.
