The classic Linux scheme for real-time data backup is inotify + rsync:
inotify monitors the file system at the Linux kernel level and records open/access/modify operations on files;
this combination has two shortcomings:
1. Every inotify event triggers a full rsync run over the whole directory, so rsync consumes too many resources and each synchronization takes too long;
2. In the directory monitored by inotify, when files are operated on inside a multi-level directory structure, there is a chance that operation records are randomly lost;
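The classic scheme can be sketched as a small watch loop (a minimal sketch, assuming the inotify-tools package is installed; SRC and DEST are placeholder values, not paths from this article):

```shell
#!/bin/bash
# Minimal sketch of the classic inotify+rsync scheme.
# SRC and DEST are placeholders for a watched directory and an rsync target.
SRC=/data/src/
DEST=backup::module

watch_and_sync() {
    # -m: monitor forever, -r: recursive, -q: quiet.
    # Note that every single event triggers a full rsync of SRC,
    # which is exactly the resource/time problem described above.
    inotifywait -mrq -e close_write,create,delete,move "$SRC" |
    while read -r _dir _event _file; do
        rsync -az --delete "$SRC" "$DEST"
    done
}

# watch_and_sync   # uncomment to start watching
```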
To keep the inotify + rsync idea while fixing these shortcomings, Zhou Yang of Kingsoft (Jinshan) developed the sersync tool in C++. It retains the advantages of inotify, optimizes away the problems above, simplifies configuration, and together with rsync forms a complete data synchronization scheme.
The advantages of sersync are:
1. sersync is written in C++ and filters out the temporary files and duplicate file operations generated by the Linux file system, so when combined with rsync it saves running time and network resources, making synchronization faster.
2. sersync is very simple to configure: the bin directory contains a statically compiled binary that can be used directly together with the XML configuration file in the same directory.
3. sersync uses multiple threads for synchronization; especially when synchronizing large files, it can keep multiple servers synchronized in real time.
4. sersync has an error-handling mechanism: failed files are resynchronized from a failure queue, and files that still fail are resynchronized again after a configurable interval.
5. sersync has a built-in crontab-like function: enable it in the XML configuration file and the whole directory is resynchronized at the interval you set, with no need to configure a separate crontab entry.
6. sersync supports secondary development (it can be extended).
Recommendations:
(1) When the synchronized directory holds a small amount of data, rsync + inotify is recommended.
(2) When the synchronized directory holds a large amount of data (hundreds of GB or even more than 1 TB) and contains many files, rsync + sersync is recommended.
| Host | Hostname | IP address | Installed software |
|------|----------|------------|--------------------|
| Confluence primary | confluence | 172.16.10.10 | Confluence, MySQL, sersync |
| Confluence backup | backup | 172.16.20.10 | Confluence, MySQL, rsync |
1. Install and configure the rsync server on the backup machine:
Install rsync:
[root@backup ~]# yum -y install rsync
Configure rsync's configuration file /etc/rsyncd.conf:
# /etc/rsyncd.conf: configuration file for rsync daemon mode
# See rsyncd.conf man page for more options.

# configuration example:
port = 8787                  # listening port
uid = confluence5            # user rsync runs as; owner of synced files, matching the Confluence service user
gid = confluence5            # group rsync runs as; group of synced files
# use chroot = yes
max connections = 0          # unlimited connections
# pid file = /var/run/rsyncd.pid
# exclude = lost+found/
# transfer logging = yes
# timeout = 900
# ignore nonreadable = yes
# dont compress = *.gz *.tgz *.zip *.z *.Z *.rpm *.deb *.bz2

# [ftp]
# path = /home/ftp
# comment = ftp export area

log format = %h %o %f %l %b
log file = /var/log/rsync.log

[confluence]                 # module name; path points to the Confluence attachment directory
path = /var/atlassian/application-data/confluence/attachments/ver003
comment = ver003             # description
read only = false
auth users = rsync_backup    # authenticated user for this module
secrets file = /etc/rsyncd.passwd   # authentication secrets file
Create rsync authentication file:
[root@backup ~]# echo 'rsync_backup:rsyncpasswd' > /etc/rsyncd.passwd
[root@backup ~]# chmod 600 /etc/rsyncd.passwd
[root@backup ~]# chattr +i /etc/rsyncd.passwd
Open the rsync port in the firewall:
[root@backup ~]# firewall-cmd --add-port=8787/tcp
Start rsync:
[root@backup ~]# rsync --daemon --config=/etc/rsyncd.conf
Check that port 8787 is listening and that an rsync process exists, to confirm the daemon started successfully.
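For example (assuming the standard iproute2 and procps tools are available; the port number is the one set in rsyncd.conf above):

```shell
# Quick checks that the rsync daemon came up.
if command -v ss >/dev/null; then
    ss -lnt | grep ':8787' || echo 'port 8787 is not listening'
fi
# The [r] bracket trick stops grep from matching its own command line.
ps -ef | grep '[r]sync --daemon' || echo 'no rsync daemon process found'
```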
2. Install sersync on the confluence host:
[root@confluence ~]# wget http://down.whsir.com/downloads/sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@confluence ~]# tar zxvf sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@confluence ~]# mv GNU-Linux-x86 /usr/local/sersync
[root@confluence ~]# echo 'export PATH=$PATH:/usr/local/sersync' >> ~/.bash_profile
[root@confluence ~]# source ~/.bash_profile
## The sersync directory /usr/local/sersync contains only two files: a binary program and an XML configuration file.
[root@confluence ~]# ls /usr/local/sersync/
confxml.xml sersync2
## confxml.xml is the configuration of sersync. An example is as follows:
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>        <!-- enable debug mode -->
    <fileSystem xfs="false"/>     <!-- whether the monitored file system is xfs -->
    <filter start="false">        <!-- file-type filtering; when enabled, the types below are not monitored -->
        <exclude expression="(.*)\.svn"></exclude>
        <exclude expression="(.*)\.gz"></exclude>
        <exclude expression="^info/*"></exclude>
        <exclude expression="^static/*"></exclude>
    </filter>
    <inotify>                     <!-- events monitored by default: delete/close_write/moved_from/moved_to/create folder -->
        <delete start="true"/>
        <createFolder start="true"/>
        <createFile start="false"/>
        <closeWrite start="true"/>
        <moveFrom start="true"/>
        <moveTo start="true"/>
        <attrib start="false"/>
        <modify start="false"/>
    </inotify>
    <sersync>                     <!-- rsync command configuration section -->
        <localpath watch="/var/atlassian/application-data/confluence/attachments/ver003">
            <!-- watch: the monitored data directory; here, the Confluence attachment directory -->
            <remote ip="172.16.20.10" name="confluence"/>
            <!-- IP address of the backup machine and the rsync daemon module name;
                 rsync therefore needs to run in daemon mode on the backup machine -->
            <!--<remote ip="192.168.8.39" name="tongbu"/>-->
            <!--<remote ip="192.168.8.40" name="tongbu"/>-->
        </localpath>
        <rsync>
            <commonParams params="-rtDuz"/>  <!-- rsync options -->
            <auth start="true" users="rsync_backup" passwordfile="/etc/rsync.pas"/>  <!-- enable rsync authentication -->
            <userDefinedPort start="true" port="8787"/>  <!-- port the rsync daemon listens on, on the backup machine -->
            <timeout start="true" time="100"/>           <!-- enable timeout -->
            <ssh start="false"/>                         <!-- ssh mode is not recommended here, for security reasons -->
        </rsync>
        <failLog path="/tmp/rsync_fail_log.sh" timeToExecute="60"/><!--default every 60mins execute once-->
        <!-- failed transfers are retried; files that fail again are written to rsync_fail_log.sh,
             which is re-executed every timeToExecute minutes -->
        <crontab start="true" schedule="600"><!--600mins-->
            <!-- full synchronization of the monitored directory to the target server at a fixed
                 interval, 600 minutes by default; enable as needed -->
            <crontabfilter start="false">  <!-- if file filtering is enabled above, set it here as well -->
                <exclude expression="*.php"></exclude>
                <exclude expression="info/*"></exclude>
            </crontabfilter>
        </crontab>
        <plugin start="false" name="command"/>
    </sersync>
    <plugin name="command">  <!-- plugin settings below (not described further) -->
        <param prefix="/bin/sh" suffix="" ignoreError="true"/>
        <!--prefix /opt/tongbu/mmm.sh suffix-->
        <filter start="false">
            <include expression="(.*)\.php"/>
            <include expression="(.*)\.sh"/>
        </filter>
    </plugin>
    <plugin name="socket">
        <localpath watch="/opt/tongbu">
            <deshost ip="192.168.138.20" port="8009"/>
        </localpath>
    </plugin>
    <plugin name="refreshCDN">
        <localpath watch="/data0/htdocs/cms.xoyo.com/site/">
            <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
            <sendurl base="http://pic.xoyo.com/cms"/>
            <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
        </localpath>
    </plugin>
</head>
Create the rsync password file on the confluence host (it must contain only the password, and that password must match rsync_backup's entry in /etc/rsyncd.passwd on the backup machine):
[root@confluence ~]# echo 'rsyncpasswd' > /etc/rsync.pas
[root@confluence ~]# chmod 600 /etc/rsync.pas
[root@confluence ~]# chattr +i /etc/rsync.pas
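Before starting sersync, a quick way to confirm that the daemon, port, user, and password all line up is to list the module from the confluence host (module name, IP, and port are the ones configured above):

```shell
# List the confluence module on the backup machine through the rsync daemon.
# A successful file listing means authentication and connectivity both work.
rsync --list-only --port=8787 --password-file=/etc/rsync.pas \
      rsync_backup@172.16.20.10::confluence \
  || echo 'listing failed: check the daemon, firewall, and password file'
```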
Start sersync:
[root@confluence ~]# sersync2 -n 10 -d -o /usr/local/sersync/confxml.xml
-n  number of threads to start
-d  run as a daemon
-o  path to the configuration file
Check that a sersync process exists, to confirm it started successfully.
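For example (assuming procps is available; since sersync was started with -n 10, listing threads with ps -T should show several sersync2 threads for the one process):

```shell
# Check the sersync2 process.
ps -ef | grep '[s]ersync2' || echo 'sersync2 is not running'
# Count its threads (expect more than one when started with -n 10):
ps -efT 2>/dev/null | grep -c '[s]ersync2' || true
```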
Test:
On the confluence host, create and modify files and directories under /var/atlassian/application-data/confluence/attachments/ver003, then check whether the changes appear in the same directory on the backup host; they synchronize successfully.
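The test can be scripted roughly like this (a sketch: WATCH_DIR defaults to a temporary directory here so the snippet runs anywhere; in production set it to the ver003 attachment path):

```shell
# On the confluence host: drop a marker file into the watched directory.
# Production value: /var/atlassian/application-data/confluence/attachments/ver003
WATCH_DIR="${WATCH_DIR:-$(mktemp -d)}"
touch "$WATCH_DIR/sync_test_$$.txt"
ls -l "$WATCH_DIR"

# On the backup host a moment later, the same file should appear:
#   ls /var/atlassian/application-data/confluence/attachments/ver003/
```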
Supplement:
[root@confluence ~]# sersync2 -r    # -r performs a full synchronization of the monitored directory before real-time monitoring starts
----------------The above implements real-time backup of the Confluence attachment directory. For real-time synchronization of the database, you can use keepalived; see my other article on MySQL + keepalived for MySQL high availability.---------------------------