Building MySQL master-slave, semi-synchronous, and master-master replication architectures

Example topology:

The ultimate goal of replication is to keep the data on one server synchronized with the data on other servers, so as to achieve data redundancy or load balancing of services. One master server can be connected to multiple slave servers, and a slave server can in turn act as a master for further slaves. Master and slave servers can sit in different network topologies. Thanks to MySQL's powerful replication functionality, the replication target can be all databases, selected databases, or even selected tables within a database.

(Figure 1: example topology)

MySQL supports two replication schemes: statement-based replication and row-based replication.
Both statement-based and row-based replication work by recording, in the master's binary log, every SQL statement that may change data in the database, shipping those events to the slave's relay log, and then executing the contents of the relay log on the slave, which keeps the slave's data synchronized with the master's. The difference lies in how value-dependent statements are handled: when a statement on the master writes a value that depends on a variable, such as the now() function, statement-based replication records the full text of that SQL statement, while row-based replication records the actual value that now() produced and that was written to the database.
For example, the following statement is executed on the master:
mysql>update user set createtime=now() where sid=16;
Suppose now() returns 2013-04-16 20:46:35 at that moment.
Statement-based replication records it as: update user set createtime=now() where sid=16;
Row-based replication records it as: update user set createtime='2013-04-16 20:46:35' where sid=16;

DR1 and DR2 run keepalived and LVS in an active/standby or dual-master architecture; RS1 and RS2 run nginx to serve the web site.

The threads involved in master-slave replication:
Binlog dump thread: sends the contents of the binary log to the slave server
I/O thread (on the slave): writes the received data into the relay log
SQL thread (on the slave): reads SQL statements from the relay log one at a time and executes them on the slave
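Once replication is running, these threads can be observed directly; the following is a minimal sketch (thread ids and states will differ in your environment):

Master:
mysql> show processlist;      # the Binlog Dump thread appears here, one per connected slave
Slave:
mysql> show processlist;      # the slave I/O thread ("Waiting for master to send event") and the SQL thread appear here
mysql> show slave status\G    # Slave_IO_Running / Slave_SQL_Running report whether the two slave threads are running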

Note: the clocks on all nodes must be synchronized (ntpdate ntp1.aliyun.com); firewalld must be stopped (systemctl stop firewalld.service, systemctl disable firewalld.service); SELinux must be set to permissive (setenforce 0); and every NIC must support MULTICAST communication.
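For reference, these preparation steps can be run on every node roughly as follows (a sketch; adjust the NTP server and interface names to your environment):

# run on every node (Master, Slave, DR1, DR2, RS1, RS2)
ntpdate ntp1.aliyun.com                # synchronize the clock
systemctl stop firewalld.service       # stop the firewall
systemctl disable firewalld.service    # keep it from starting on boot
setenforce 0                           # switch SELinux to permissive for the current boot
ifconfig | grep -i multicast           # MULTICAST should appear in each NIC's flags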

I. Master-slave replication:
Preparatory work:
1. Modify the configuration file (server_id must be changed)
2. Create a replication user
3. Start the slave threads on the slave server

Whether MULTICAST is enabled can be checked with the ifconfig command.

Plan:
Master: IP address: 172.16.4.11    version: mysql-5.5.20
Slave:  IP address: 172.16.4.12    version: mysql-5.5.20
Note that MySQL replication is largely backward compatible, so the slave's version must be greater than or equal to the master's version.
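If in doubt, the version on each server can be checked directly (a trivial sanity check, not part of the original procedure):

mysql> select version();    # run on both Master and Slave; the Slave's version must not be lower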
1. Master
Modify the configuration file to make it the MySQL master server
#vim /etc/my.cnf
server_id=11                #set server_id to 11
log_bin=mysql-bin            #enable binary logging
sync_binlog=1               #flush the binary log to disk as soon as each transaction commits
innodb_flush_log_at_trx_commit=1      #flush the InnoDB log to disk as soon as each transaction commits

     

Save and exit
#service mysql restart             #restart mysql so the new settings take effect

Keepalived active/standby architecture

2. Create a user on the Master and grant it replication privileges
mysql>grant replication client,replication slave on *.* to repl@172.16.4.12 identified by '135246';
mysql>flush privileges;
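As an optional sanity check (not part of the original steps), the new account can be inspected:

mysql> show grants for 'repl'@'172.16.4.12';    # should list REPLICATION SLAVE, REPLICATION CLIENT ON *.*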

Setting up RS1:

[root@RS1 ~]# yum -y install nginx   # install nginx
[root@RS1 ~]# vim /usr/share/nginx/html/index.html   # edit the home page
    <h1> 192.168.4.118 RS1 server </h1>
[root@RS1 ~]# systemctl start nginx.service   # start the nginx service
[root@RS1 ~]# vim RS.sh   # script that configures the real server for lvs-dr
    #!/bin/bash
    #
    vip=192.168.4.120
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:0 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:0
        ;;
    stop)
        ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *) 
        echo "Usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@RS1 ~]# bash RS.sh start

3. Slave
Modify the configuration file to make it a MySQL slave server
#vim /etc/my.cnf
server_id=12                #set server_id to 12
#log-bin               
#comment out log-bin; the slave does not need the binary log, so it is disabled
relay-log=mysql-relay               
#define the relay log name and enable the relay log on the slave
relay-log-index=mysql-relay.index    
#define the relay log index name and enable the relay log index on the slave
read_only=1                   
#make the slave accept only read operations, not writes

Set up RS2 following the same configuration as RS1.

Save and exit
#service mysql restart             #restart mysql so the new settings take effect

Setting up DR1:

[root@DR1 ~]# yum -y install ipvsadm keepalived   # install ipvsadm and keepalived
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit the keepalived.conf configuration file
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from keepalived@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id 192.168.4.116
       vrrp_skip_check_adv_addr
       vrrp_mcast_group4 224.0.0.10
    }

    vrrp_instance VIP_1 {
        state MASTER
        interface eno16777736
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99
        }
        virtual_ipaddress {
          192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }

    virtual_server 192.168.4.120 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.168.4.118 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        real_server 192.168.4.119 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
         }
    }
[root@DR1 ~]# systemctl start keepalived
[root@DR1 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 14604  bytes 1376647 (1.3 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 6722  bytes 653961 (638.6 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0

4. Verify that the relay log and server_id have taken effect on the Slave
mysql>show variables like 'relay%';
+-----------------------+-----------------+
| Variable_name         | Value           |
+-----------------------+-----------------+
| relay_log             | relay-bin       |
| relay_log_index       | relay-bin.index |
| relay_log_info_file   | relay-log.info  |
| relay_log_purge       | ON              |
| relay_log_recovery    | OFF             |
| relay_log_space_limit | 0               |
+-----------------------+-----------------+
mysql>show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 12    |
+---------------+-------+

DR2 is set up in basically the same way as DR1; the main changes are the state and priority in /etc/keepalived/keepalived.conf: state BACKUP, priority 90 (a sketch of the changed block follows the figure below). We can also see that DR2, acting as the backup, has not brought up the eno16777736:0 interface:

(Figure 3: ifconfig output on DR2, with no eno16777736:0 label present)
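A sketch of the vrrp_instance block as it would look on DR2 (everything else copied from DR1's keepalived.conf) is shown here for reference:

    vrrp_instance VIP_1 {
        state BACKUP                 # MASTER on DR1
        interface eno16777736
        virtual_router_id 1
        priority 90                  # lower than DR1's 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99       # must match DR1
        }
        virtual_ipaddress {
          192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }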

5. Start the slave threads on the slave server
Scenario 1: if both the master and the slave are newly installed and no data has been added yet, run the following:
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246';
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State:
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 107
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 25520
Relay_Log_Space: 2565465
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
mysql>start slave;
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 107
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 300
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 11

Testing from the client:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client can access the service normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>

[root@DR1 ~]# systemctl stop keepalived.service   # stop the keepalived service on DR1

[root@DR2 ~]# systemctl status keepalived.service   # check DR2: it has transitioned to the MASTER state
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
  Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 12985 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─12985 /usr/sbin/keepalived -D
           ├─12988 /usr/sbin/keepalived -D
           └─12989 /usr/sbin/keepalived -D

Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client can still access the service normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>

Scenario 2: if the master has already been running for some time and the slave is newly added, the master's existing data must first be imported into the slave:
Master:
#mysqldump -uroot -hlocalhost -p123456 --all-databases --lock-all-tables --flush-logs --master-data=2 > /backup/alldatabase.sql
mysql>flush tables with read lock;
mysql>show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      360 |              |                  |
+------------------+----------+--------------+------------------+
mysql>unlock tables;
#scp /backup/alldatabase.sql 172.16.4.12:/tmp

Keepalived dual-master architecture

Slave:
#mysql -uroot -p123456 < /tmp/alldatabase.sql
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000004',
master_log_pos=360;
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State:
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 360
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 107
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
mysql>start slave;

Modify RS1 and RS2 to add the new VIP:

[root@RS1 ~]# cp RS.sh RS_bak.sh
[root@RS1 ~]# vim RS_bak.sh   # add the new VIP
    #!/bin/bash
    #
    vip=192.168.4.121
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:1 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:1
        ;;
    stop)
        ifconfig lo:1 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *)
        echo "Usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@RS1 ~]# bash RS_bak.sh start
[root@RS1 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback) 
[root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
root@192.168.4.119's password: 
RS_bak.sh                100%  693     0.7KB/s   00:00

[root@RS2 ~]# bash RS_bak.sh start   # run the script to add the new VIP
[root@RS2 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 360
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 300
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 11

Modify DR1 and DR2:

[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit DR1's configuration file: add a new VRRP instance and define a virtual server group
    ...
    vrrp_instance VIP_2 {
        state BACKUP
        interface eno16777736
        virtual_router_id 2
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass UU**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }

    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@DR1 ~]# systemctl restart keepalived.service   # restart the service
[root@DR1 ~]# ifconfig   # eno16777736:1 is visible here because DR2 has not been configured yet
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 54318  bytes 5480463 (5.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 38301  bytes 3274990 (3.1 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0         
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0

[root@DR2 ~]# vim /etc/keepalived/keepalived.conf   # edit DR2's configuration file: add the new instance and define the server group
    ...
    vrrp_instance VIP_2 {
        state MASTER
        interface eno16777736
        virtual_router_id 2
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass UU**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }

    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@DR2 ~]# systemctl restart keepalived.service   # restart the service
[root@DR2 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.117  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe3d:a31b  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
            RX packets 67943  bytes 6314537 (6.0 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 23250  bytes 2153847 (2.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
[root@DR2 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0         
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0 

This shows that the MySQL master-slave replication architecture has been built successfully.

Client test:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>

 

Note 1: MySQL replication can be limited to certain databases, or to certain tables within a database. To do this, add the following to the configuration file:
Master:
binlog-do-db=db_name        replicate only the db_name database
binlog-ignore-db=db_name    do not replicate the db_name database

Note 2: defining filter rules on the Master means that any write operation unrelated to the listed database is never recorded in the binary log at all. It is therefore not recommended to define filter rules on the Master, nor to define binlog-do-db and binlog-ignore-db at the same time.

Slave:
replicate_do_db=db_name            replicate only the db_name database
replicate_ignore_db=db_name        do not replicate the db_name database
replicate_do_table=tb_name         replicate only the tb_name table
replicate_ignore_table=tb_name     do not replicate the tb_name table
replicate_wild_do_table=test%      replicate only tables whose names begin with "test" followed by any characters
replicate_wild_ignore_table=test_  do not replicate tables whose names begin with "test" followed by any single character

Note 3: if several databases or tables need to be specified, simply repeat the corresponding directive once for each of them, as in the sketch below.
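For example, to replicate only two databases on the slave, the directive would appear twice in my.cnf (a sketch using the hypothetical database names db1 and db2):

[mysqld]
replicate_do_db=db1
replicate_do_db=db2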

II. Semi-synchronous replication

Because MySQL replication is asynchronous, under certain circumstances it cannot guarantee that data is successfully replicated. Starting with MySQL 5.5, a patch contributed by Google makes it possible to run replication in semi-synchronous mode, which requires loading the corresponding plugins on the master and the slave. The plugins, semisync_master.so and semisync_slave.so, are found in the lib/plugin/ directory under the MySQL installation directory.
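If you are unsure where the server looks for plugins, the plugin directory can be checked first (just a sanity check, not required by the procedure):

mysql> show variables like 'plugin_dir';    # semisync_master.so / semisync_slave.so should be in this directory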

Run the following commands at the mysql command line on the Master and the Slave:

Master:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_master_timeout = 1000;
mysql> show variables like '%semi%';
+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| rpl_semi_sync_master_enabled       | ON    |
| rpl_semi_sync_master_timeout       | 1000  |
| rpl_semi_sync_master_trace_level   | 32    |
| rpl_semi_sync_master_wait_no_slave | ON    |
+------------------------------------+-------+

Slave:
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave;
mysql> start slave;
mysql> show variables like '%semi%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled     | ON    |
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+

Check whether semi-synchronous replication has taken effect:
Master:
mysql> show global status like 'rpl_semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 1     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
+--------------------------------------------+-------+
Rpl_semi_sync_master_clients is 1 and Rpl_semi_sync_master_status is ON, which shows that semi-synchronous replication is working.

To have semi-synchronous replication take effect automatically every time MySQL starts, edit my.cnf on the Master and the Slave:
Master:
[mysqld]
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000     # 1 second

Slave:
[mysqld]
rpl_semi_sync_slave_enabled=1

The semi-synchronous plugins can also be enabled or disabled by setting global variables:
Master:
mysql> set global rpl_semi_sync_master_enabled=1
To unload the plugin:
mysql> uninstall plugin rpl_semi_sync_master;

Slave:
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> uninstall plugin rpl_semi_sync_slave;

III. Master-master replication architecture
1. Create a user with replication privileges on each of the two servers:
Master:
mysql>grant replication client,replication slave on *.* to repl@172.16.4.12 identified by '135246';
mysql>flush privileges;

Slave:
mysql>grant replication client,replication slave on *.* to repl@172.16.4.11 identified by '135246';
mysql>flush privileges;

2. Modify the configuration files:
Master:
[mysqld]
server-id = 11
log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 1
relay-log=mysql-relay
relay-log-index=mysql-relay.index

Slave:
[mysqld]
server-id = 12
log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 2
relay-log=mysql-relay
relay-log-index=mysql-relay.index
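The auto-increment settings keep the two servers from handing out colliding AUTO_INCREMENT values when both accept writes: with auto-increment-increment = 2, the server whose offset is 1 generates odd ids and the server whose offset is 2 generates even ids. A small illustration (the table t is hypothetical):

mysql> create table t (id int auto_increment primary key, v varchar(10));
# rows inserted on the Master (offset 1) get id 1, 3, 5, ...
# rows inserted on the Slave  (offset 2) get id 2, 4, 6, ...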

3. If at this point both servers are newly set up and no write operations have taken place, each server only needs to note its own current binary log file and position, which the other server will use as its replication starting point:
Master:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      360 |              |                  |
+------------------+----------+--------------+------------------+

Slave:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000005 |      107 |              |                  |
+------------------+----------+--------------+------------------+

4. Each server then points to the other as its own master:
Master:
mysql>change master to \
master_host='172.16.4.12',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000005',
master_log_pos=107;

Slave:
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000004',
master_log_pos=360;

5. Start the slave threads on both servers:
Master:
mysql>start slave;

Slave:
mysql>start slave;

At this point the master-master architecture is complete!
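As a quick bidirectional check (not part of the original text; testdb is a hypothetical database name), create something on one server and confirm it appears on the other, then repeat in the opposite direction:

On the Master:
mysql> create database testdb;
On the Slave:
mysql> show databases like 'testdb';    # should appear once the slave's SQL thread has applied the event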
