1. Check whether log_bin is enabled

Incremental backup requires log_bin to be enabled.

If the status is ON, log_bin is already enabled:
mysql > show variables like '%log_bin%';
+---------------------------------+-----------------------------+
| Variable_name                   | Value                       |
+---------------------------------+-----------------------------+
| log_bin                         | ON                          |
| log_bin_basename                | /var/lib/mysql/binlog       |
| log_bin_index                   | /var/lib/mysql/binlog.index |
| log_bin_trust_function_creators | OFF                         |
| log_bin_use_v1_row_events       | OFF                         |
| sql_log_bin                     | ON                          |
+---------------------------------+-----------------------------+
If log_bin is not enabled, modify the MySQL configuration file my.cnf, add the following setting, and then restart MySQL for it to take effect:
log-bin=/var/lib/mysql/mysql-bin
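Note that MySQL 5.7 and later refuse to start with log-bin set unless server-id is also set, so a fuller my.cnf fragment (a sketch; the server-id value is an arbitrary example) looks like:

```
[mysqld]
log-bin=/var/lib/mysql/mysql-bin
server-id=1
```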
To view the log_bin file that is currently recording operations:
mysql > show master status;
+---------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------+----------+--------------+------------------+-------------------+
| binlog.000017 |      155 |              |                  |                   |
+---------------+----------+--------------+------------------+-------------------+
2. Full backup

A full backup can be taken by executing the following command:
mysqldump -uroot -p123456 --quick --events --all-databases --flush-logs
--delete-master-logs --single-transaction > data.sql
The full backup process is as follows:

* Create the required directories
* Take a full backup with mysqldump
* Compress the backup file
* Copy the backup file to another server with scp
* Clean up expired full backup files
* Schedule the full backup as a cron job
Full backup script

mysqlFullBack.sh
#!/bin/bash
# On first use, create the corresponding directory:
# mkdir -p /home/mysql/backup/daily
#
# Cron entries:
# Execute the full backup script every Sunday at 3:00 a.m.
# 0 3 * * 0 /bin/bash -x /root/mysqlFullBack.sh >/dev/null 2>&1
# Execute the incremental backup script Monday through Saturday at 3:00 a.m.
# 0 3 * * 1-6 /bin/bash -x /root/mysqlPartBack.sh >/dev/null 2>&1

BakDir=/home/mysql/backup
LogFile=/home/mysql/backup/bak.log
Date=`date +%Y%m%d`
Begin=`date +"%Y-%m-%d %H:%M:%S"`

cd $BakDir
DumpFile=$Date.sql
GZDumpFile=$Date.sql.tgz

mysqldump -uroot -p123456 --quick --events --all-databases --flush-logs --delete-master-logs --single-transaction > $DumpFile
/bin/tar -zvcf $GZDumpFile $DumpFile
/bin/rm $DumpFile

Last=`date +"%Y-%m-%d %H:%M:%S"`
echo [FullBack] start:$Begin end:$Last $GZDumpFile successful >> $LogFile

# Copy the full backup file to another server with scp
# scp $GZDumpFile root@xxxx:/usr/mysql/backup/$GZDumpFile

# Delete full backup files older than 30 days
find $BakDir -mtime +30 -type f -name "*.sql.tgz" | xargs rm -f

# Delete incremental backup files
cd $BakDir/daily
/bin/rm -f *
mysqldump parameter notes

* --quick : Useful when dumping large tables. It forces mysqldump to stream rows directly from the server query instead of fetching all records and buffering them in memory.
* --all-databases : Dump all databases.
* --flush-logs : Rotate to a new binary log file before starting the dump.
* --single-transaction : Sets the isolation level to REPEATABLE READ and issues a START TRANSACTION statement, so the whole dump sees a consistent snapshot of the data. This option is very useful for InnoDB tables and does not lock them, but it does not guarantee consistency for MyISAM or MEMORY tables.
  To keep a --single-transaction dump valid, no other session should run ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, or TRUNCATE TABLE while it is in progress, because a consistent read cannot be isolated from those statements; running them during the dump may leave the dump file inconsistent or unusable.
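As a minimal sketch of combining these flags for a single database (the user and database names below are hypothetical), the command can be assembled like this; the sketch only prints the command so it can be inspected before running against a real server:

```shell
#!/bin/sh
# Sketch: consistent dump of one InnoDB database (hypothetical user/db names).
USER=root
DB=mydb

# --single-transaction takes a consistent snapshot without locking InnoDB tables;
# --quick streams rows instead of buffering the whole table in memory.
CMD="mysqldump -u$USER -p --quick --single-transaction $DB"

# Print rather than execute, so the sketch works without a running server.
echo "$CMD > /tmp/$DB.sql"
```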
Restoring data from a full backup file

Simply execute the SQL in the full backup file:
mysql > source /home/data.sql;
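Equivalently, the dump can be replayed from the shell without opening the mysql client (a sketch, assuming the dump path from above; the command is printed rather than run so the sketch needs no server):

```shell
#!/bin/sh
# Sketch: replay a full dump from the shell instead of `source` inside mysql.
DUMP=/home/data.sql
RESTORE="mysql -uroot -p < $DUMP"
echo "$RESTORE"
```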
3. Incremental backup

Executing the following command rotates the binary log, producing the incremental backup logs; data can later be restored from those logs.

The incremental backup logs can be viewed under /var/lib/mysql, or as /var/lib/mysql/mysql-bin.* files, depending on where log-bin points.
mysqladmin -uroot -p123456 flush-logs
Incremental backup process:

* Flush (rotate) the logs
* Walk through the binlog index file
* Back up the new log files, skipping files that have already been backed up
Incremental backup script

mysqlPartBack.sh
#!/bin/bash
# Before use, create the following directories in advance.
backupDir=/home/mysql/backup/daily   # target directory for the copied mysql-bin.00000* files; create it manually in advance
mysqlDir=/var/lib/mysql              # MySQL data directory
logFile=/home/mysql/backup/bak.log
BinFile=/var/lib/mysql/binlog.index  # path of the MySQL binlog index file, under the data directory

# Rotate the logs so a new mysql-bin.00000* file is generated.
mysqladmin -uroot -p123456 flush-logs

# wc -l counts lines; awk reads the file line by line, splitting each line on
# whitespace by default, and here prints the first field (the line count).
Counter=`wc -l $BinFile | awk '{print $1}'`
NextNum=0

# This loop compares $NextNum against $Counter to decide whether each file has
# already been backed up or is the newest (still-active) one.
for file in `cat $BinFile`
do
    # basename strips the leading ./ from entries such as ./mysql-bin.000005
    base=`basename $file`
    echo $base
    NextNum=`expr $NextNum + 1`
    if [ $NextNum -eq $Counter ]
    then
        echo $base skip! >> $logFile
    else
        dest=$backupDir/$base
        # test -e checks whether the target file already exists; if so, log "exist!"
        if (test -e $dest)
        then
            echo $base exist! >> $logFile
        else
            cp $mysqlDir/$base $backupDir
            echo $base copying >> $logFile
        fi
    fi
done
echo [PartBack] `date +"%Y-%m-%d %H:%M:%S"` Backup successful! >> $logFile
Restore incremental backup data
mysqlbinlog /home/mysql/backup/daily/binlog.000008 | mysql -uroot -p123456;
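mysqlbinlog can also replay only part of a log using its standard --start-datetime/--stop-datetime (or --start-position/--stop-position) options, which is useful for point-in-time recovery. A sketch (the timestamps are hypothetical; the command is printed rather than piped into mysql, so the sketch needs no server):

```shell
#!/bin/sh
# Sketch: replay only the binlog events inside a time window.
BINLOG=/home/mysql/backup/daily/binlog.000008
CMD="mysqlbinlog --start-datetime='2020-05-03 03:00:00' --stop-datetime='2020-05-03 12:00:00' $BINLOG"

echo "$CMD | mysql -uroot -p"
```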
4. Scheduled tasks

Execute the full backup script every Sunday at 3:00 a.m.:
0 3 * * 0 /bin/bash -x /root/mysqlFullBack.sh >/dev/null 2>&1
Execute the incremental backup script Monday through Saturday at 3:00 a.m.:
0 3 * * 1-6 /bin/bash -x /root/mysqlPartBack.sh >/dev/null 2>&1
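The two entries above can be installed by writing them to a file and loading it with crontab (e.g. `crontab /tmp/mysql-backup.cron`); a minimal sketch:

```shell
#!/bin/sh
# Sketch: write both cron jobs to a file, ready to install with crontab.
cat > /tmp/mysql-backup.cron <<'EOF'
0 3 * * 0 /bin/bash -x /root/mysqlFullBack.sh >/dev/null 2>&1
0 3 * * 1-6 /bin/bash -x /root/mysqlPartBack.sh >/dev/null 2>&1
EOF
cat /tmp/mysql-backup.cron
```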
5. Common problems

When the backup script is executed by a Linux cron job (crontab), the full-backup SQL file comes out empty, even though running the same commands manually works fine.

Reason: the mysqldump command is not on the PATH in crontab's execution environment, so it must be invoked by its absolute path.

Find where mysqldump lives:
which mysqldump    # /usr/local/mysql/bin/mysqldump
Then replace mysqldump in the backup script with /usr/local/mysql/bin/mysqldump.
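An alternative to hard-coding the absolute path is to export a full PATH at the top of the backup script, so cron resolves plain mysqldump the same way a login shell does. A sketch (the directory below is the one reported by `which mysqldump` above; adjust it to your install):

```shell
#!/bin/bash
# Sketch: prepend the mysqldump directory to PATH at the top of the backup script.
export PATH=/usr/local/mysql/bin:$PATH
echo "$PATH" | grep -q '/usr/local/mysql/bin' && echo "PATH ok"
```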
