1. Introduction

This article describes how to build a high-availability database architecture with mysql-mmm.
2. Environment

| Server  | Hostname | IP            | server_id | MySQL version | OS         |
|---------|----------|---------------|-----------|---------------|------------|
| Master1 | master1  | 192.168.4.10  | 10        | 5.6.15        | CentOS 6.9 |
| Master2 | master2  | 192.168.4.11  | 11        | 5.6.15        |            |
| Slave1  | slave1   | 192.168.4.12  | 12        | 5.6.15        |            |
| Slave2  | slave2   | 192.168.4.13  | 13        | 5.6.15        |            |
| Monitor | monitor  | 192.168.4.100 | none      | none          |            |
| Client  | client   | 192.168.4.120 | none      | 5.6.15        |            |
Virtual IPs

| Virtual IP    | Role  | Description                     |
|---------------|-------|---------------------------------|
| 192.168.4.200 | write | Write VIP for the active master |
| 192.168.4.201 | read  | Read VIP for the read servers   |
| 192.168.4.202 | read  | Read VIP for the read servers   |
Topology diagram (see figure)
3. The MMM architecture

Server roles

| Type            | Daemon      | Purpose |
|-----------------|-------------|---------|
| Management node | mmm-monitor | The monitoring daemon that performs all health checks and decides when a failed node is removed or restored. |
| Database node   | mmm-agent   | The agent daemon running on each MySQL server; exposes a simple set of remote services to the monitor node (used to toggle read-only mode, change the replication master, and so on). |

Core packages

| Package                | Purpose |
|------------------------|---------|
| Net-ARP-1.0.8.tgz      | Assigns the virtual IPs. |
| mysql-mmm-2.2.1.tar.gz | The core MySQL-MMM daemons; once installed, both the management (monitor) process and the client (agent) process can be started. |
4. Deploying the basic cluster structure

The cluster deployment is split into two parts; the first part is building the base cluster environment. It uses four RHEL 6 servers, as shown in the figure below: 192.168.4.10 and 192.168.4.11 act as the MySQL dual masters, while 192.168.4.12 and 192.168.4.13 act as their slaves.

When installing the servers, it is recommended to configure (or disable) the firewall and SELinux.
4.1 Installing the MySQL servers

Below is how MySQL is installed. This article uses the 64-bit RHEL 6 operating system with MySQL 5.6.15.

Go to http://dev.mysql.com/downloads/mysql/, find the MySQL Community Server download page, pick "Red Hat Enterprise Linux 6 / Oracle Linux 6" as the platform, and download the 64-bit bundle package, as shown below.

Note: downloading MySQL requires an Oracle account; if you do not have one, register first as the page prompts (it is free).
4.1.1 Remove the distribution's own mysql-server and mysql packages (if present)

yum -y remove mysql-server mysql

4.1.2 Unpack the MySQL bundle

[root@master1 ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
MySQL-shared-5.6.15-1.el6.x86_64.rpm          // shared libraries
MySQL-shared-compat-5.6.15-1.el6.x86_64.rpm   // compatibility package
MySQL-server-5.6.15-1.el6.x86_64.rpm          // server
MySQL-client-5.6.15-1.el6.x86_64.rpm          // client
MySQL-devel-5.6.15-1.el6.x86_64.rpm           // libraries and headers
MySQL-embedded-5.6.15-1.el6.x86_64.rpm        // embedded version
MySQL-test-5.6.15-1.el6.x86_64.rpm            // test package
4.1.3 Install MySQL

[root@master1]# rpm -Uvh MySQL-*.rpm

4.1.4 Start MySQL

[root@master1 ~]# service mysql start && chkconfig --list mysql
Starting MySQL SUCCESS!
mysql  0:off  1:off  2:on  3:on  4:on  5:on  6:off
4.1.5 The MySQL password

After installation a random password is automatically generated in the .mysql_secret file under root's home directory; read it and use it to log in to MySQL.

[root@master1 ~]# cat .mysql_secret        // password file
# The random password set for the root user at Mon Jan 1 16:48:31 2001 (local time): kZ5j71cyZiKKhSeX
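If you script the installation, the generated password can be pulled out of .mysql_secret instead of being copied by hand. A minimal sketch, assuming (as in the transcript above) that the password is the last whitespace-separated token of the last non-empty line; a sample file stands in for the real /root/.mysql_secret here:

```shell
# Sketch: extract the generated root password from .mysql_secret.
# Assumes the password is the last token of the last non-empty line,
# matching the file shown above. On a real host, point secret_file
# at /root/.mysql_secret instead of the sample created here.
secret_file=./mysql_secret.sample
printf '%s\n' \
  '# The random password set for the root user at Mon Jan 1 16:48:31 2001 (local time): kZ5j71cyZiKKhSeX' \
  > "$secret_file"

password=$(awk 'NF { last = $NF } END { print last }' "$secret_file")
echo "$password"
```

The extracted value can then be passed straight to `mysql -u root -p"$password"` for the first login.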
4.1.6 Log in to MySQL and change the password

Log in with the password you just read:

[root@master1 ~]# mysql -u root -p
Enter password:
mysql> SET PASSWORD FOR 'root'@'localhost'=PASSWORD('123456');

After the change you can log in with the new password.

Install MySQL on all four servers the same way.
4.2 Deploying the dual-master, multi-slave structure

1. Grant database privileges (run on all four database hosts: master1, master2, slave1, slave2)

Plain master-slave replication only needs a single replication user, but since we are deploying the MySQL-MMM architecture, we grant the MMM users at the same time, plus one test user for verifying the finished setup.

mysql> grant replication slave on *.* to slaveuser@"%" identified by "pwd123";            // replication user
Query OK, 0 rows affected (0.01 sec)
mysql> grant replication client on *.* to monitor@"%" identified by "monitor";            // MMM monitor user
Query OK, 0 rows affected (0.00 sec)
mysql> grant replication client,process,super on *.* to agent@"%" identified by "agent";  // MMM agent user
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on *.* to root@"%" identified by "123456";                               // test user
Query OK, 0 rows affected (0.00 sec)
2. Enable the binlog and set server_id on the masters (master1, master2)

master1:

[root@master1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=10           // server ID
log_bin                // enable the binlog
log_slave_updates=1    // enable chained replication
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master1 ~]# service mysql restart          // restart MySQL
Shutting down MySQL.. [ OK ]
Starting MySQL.. [ OK ]
[root@master1 ~]# ls /var/lib/mysql/master1-bin*  // check that the binlog files were created
/var/lib/mysql/master1-bin.000001  /var/lib/mysql/master1-bin.index
master2:

[root@master2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=11
log_slave_updates=1
log-bin
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master2 mysql]# /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS!
Starting MySQL. SUCCESS!
[root@master2 mysql]# ls /var/lib/mysql/master2-bin.*
/var/lib/mysql/master2-bin.000001  /var/lib/mysql/master2-bin.000002  /var/lib/mysql/master2-bin.index
Set server_id on the slaves.

slave1:

[root@slave1 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=12
…
[root@slave1 ~]# service mysql restart

slave2:

[root@slave2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=13
…
[root@slave2 ~]# service mysql restart
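Before wiring up replication it is worth confirming that no two hosts share a server_id, since duplicate IDs quietly break replication. A sketch of such a check, run against local copies of the four my.cnf files (sample files stand in for configs fetched from the hosts; the paths are hypothetical):

```shell
# Sketch: verify that the server_id values collected from the four my.cnf
# files are all distinct. Sample files simulate configs copied from the hosts.
mkdir -p cnf
for pair in master1:10 master2:11 slave1:12 slave2:13; do
  host=${pair%%:*}
  id=${pair##*:}
  printf '[mysqld]\nserver_id=%s\n' "$id" > "cnf/my.cnf.$host"
done

ids=$(grep -h '^server_id' cnf/my.cnf.* | awk -F= '{ gsub(/ /, "", $2); print $2 }')
dupes=$(printf '%s\n' "$ids" | sort | uniq -d)
if [ -z "$dupes" ]; then
  result="server_id values are unique"
else
  result="duplicate server_id: $dupes"
fi
echo "$result"
```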
3. Configure the slaves: make master2, slave1 and slave2 replicate from master1

Check master1's master status:

mysql> show master status\G
*************************** 1. row ***************************
             File: master1-bin.000002
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Using those values, configure master2 as a slave of master1:

mysql> change master to
    -> master_host="192.168.4.10",
    -> master_user="slaveuser",
    -> master_password="pwd123",
    -> master_log_file="master1-bin.000002",
    -> master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
             Slave_IO_Running: Yes     // IO thread OK
            Slave_SQL_Running: Yes     // SQL thread OK

Set up slave1 and slave2 as slaves of master1 the same way.
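Retyping the binlog file and position from `show master status` is error-prone, so the CHANGE MASTER TO statement can be generated instead. A sketch under the assumption that the replication user and password are the ones granted earlier; the helper name `build_change_master` is my own:

```shell
# Sketch: turn `show master status\G` output into the matching
# CHANGE MASTER TO statement, so file/position are never copied by hand.
build_change_master() {
  # $1 = master host IP
  awk -v host="$1" '
    $1 == "File:"     { file = $2 }
    $1 == "Position:" { pos  = $2 }
    END {
      printf "change master to master_host=\"%s\", master_user=\"slaveuser\", master_password=\"pwd123\", master_log_file=\"%s\", master_log_pos=%s;\n",
             host, file, pos
    }'
}

# Sample input matching the transcript above:
stmt=$(printf '             File: master1-bin.000002\n         Position: 120\n' \
       | build_change_master 192.168.4.10)
echo "$stmt"
```

On a real slave this would be `mysql -e 'show master status\G'` (run against the master) piped into the function, and its output piped into `mysql`.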
4. Configure the master-master relationship: make master1 a slave of master2

Check master2's master status:

mysql> show master status\G
*************************** 1. row ***************************
             File: master2-bin.000002
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Configure master1 as a slave of master2:

mysql> change master to
    -> master_host="192.168.4.11",
    -> master_user="slaveuser",
    -> master_password="pwd123",
    -> master_log_file="master2-bin.000002",
    -> master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_Running: Yes     // IO thread OK
            Slave_SQL_Running: Yes     // SQL thread OK
5. Test the replication topology

Create a database on master1 and check the other hosts. If every host can see the newly created database db1 locally, replication is working.

mysql> create database db1;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

At this point the base environment is complete.
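For day-to-day checks, the two thread flags in `show slave status` are what matter, and they can be parsed automatically. A sketch of a health probe; the function name `check_slave` is my own, and a sample of healthy output is fed in here in place of a live `mysql -e 'show slave status\G'` call:

```shell
# Sketch: report replication health from `show slave status\G` output.
check_slave() {
  awk '
    $1 == "Slave_IO_Running:"  { io  = $2 }
    $1 == "Slave_SQL_Running:" { sql = $2 }
    END {
      if (io == "Yes" && sql == "Yes") print "replication OK"
      else print "replication BROKEN (IO=" io ", SQL=" sql ")"
    }'
}

# Sample input matching a healthy slave:
health=$(printf '  Slave_IO_Running: Yes\n  Slave_SQL_Running: Yes\n' | check_slave)
echo "$health"
```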
5. MySQL-MMM deployment

5.1 The MMM cluster plan

Building on the architecture from chapter 4 (192.168.4.10 and 192.168.4.11 as the MySQL dual masters, 192.168.4.12 and 192.168.4.13 as their slaves), we add 192.168.4.100 as the MySQL-MMM management and monitoring server. It watches the working state of the MySQL masters and slaves and decides when failed nodes are removed or restored. Once the architecture is complete, the client 192.168.4.120 is used for access; the client needs the MySQL-client package. See figure 2 for the topology.

5.2 Steps

Follow the steps below.

Step 1: install MySQL-MMM

1. Install the dependencies (needed on all five cluster servers: master1, master2, slave1, slave2, monitor)

[root@master2 mysql]# yum -y install gcc* perl-Date-Manip perl-XML-DOM-XPath perl-XML-Parser perl-XML-RegExp rrdtool perl-Class-Singleton perl perl-DBD-MySQL perl-Params-Validate perl-MailTools perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

2. Install the MySQL-MMM software dependency packages (again on all five servers: master1, master2, slave1, slave2, monitor).
- Install the Log::Log4perl module

[root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm
warning: perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
error: Failed dependencies:
        perl(Test::More) >= 0.45 is needed by perl-Log-Log4perl-1.26-1.el6.rf.noarch
[root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm --force --nodeps

During installation I got a NOKEY warning, caused by older GPG keys being installed; adding the --force --nodeps options forces the install and skips it.
- Install the Algorithm::Diff module

[root@master1 mysql-mmm]# tar -zxvf Algorithm-Diff-1.1902.tar.gz
Algorithm-Diff-1.1902/
Algorithm-Diff-1.1902/diffnew.pl
Algorithm-Diff-1.1902/t/
Algorithm-Diff-1.1902/t/oo.t
Algorithm-Diff-1.1902/t/base.t
Algorithm-Diff-1.1902/htmldiff.pl
Algorithm-Diff-1.1902/lib/
Algorithm-Diff-1.1902/lib/Algorithm/
Algorithm-Diff-1.1902/lib/Algorithm/Diff.pm
Algorithm-Diff-1.1902/lib/Algorithm/DiffOld.pm
Algorithm-Diff-1.1902/META.yml
Algorithm-Diff-1.1902/Changes
Algorithm-Diff-1.1902/cdiff.pl
Algorithm-Diff-1.1902/MANIFEST
Algorithm-Diff-1.1902/diff.pl
Algorithm-Diff-1.1902/Makefile.PL
Algorithm-Diff-1.1902/README
[root@master1 mysql-mmm]# cd Algorithm-Diff-1.1902
[root@master1 Algorithm-Diff-1.1902]# perl Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for Algorithm::Diff
[root@master1 Algorithm-Diff-1.1902]# make && make install
3. Install the Proc::Daemon module

[root@master1 mysql-mmm]# tar -zxvf Proc-Daemon-0.03.tar.gz
Proc-Daemon-0.03/
Proc-Daemon-0.03/t/
Proc-Daemon-0.03/t/00modload.t
Proc-Daemon-0.03/t/01filecreate.t
Proc-Daemon-0.03/README
Proc-Daemon-0.03/Makefile.PL
Proc-Daemon-0.03/Daemon.pm
Proc-Daemon-0.03/Changes
Proc-Daemon-0.03/MANIFEST
[root@master1 mysql-mmm]# cd Proc-Daemon-0.03
[root@master1 Proc-Daemon-0.03]# perl Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for Proc::Daemon
[root@master1 Proc-Daemon-0.03]# make && make install
cp Daemon.pm blib/lib/Proc/Daemon.pm
Manifying blib/man3/Proc::Daemon.3pm
Installing /usr/local/share/perl5/Proc/Daemon.pm
Installing /usr/local/share/man/man3/Proc::Daemon.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
4. Install the Net-ARP virtual IP assignment tool:

[root@mysql-master1 ~]# gunzip Net-ARP-1.0.8.tgz
[root@mysql-master1 ~]# tar xvf Net-ARP-1.0.8.tar
.. ..
[root@mysql-master1 ~]# cd Net-ARP-1.0.8
[root@mysql-master1 Net-ARP-1.0.8]# perl Makefile.PL
Module Net::Pcap is required for make test!
Checking if your kit is complete...
Looks good
Writing Makefile for Net::ARP
[root@mysql-master1 Net-ARP-1.0.8]# make && make install
.. ..
5. Install the MySQL-MMM package:

[root@mysql-master1 ~]# tar xvf mysql-mmm-2.2.1.tar.gz
.. ..
[root@mysql-master1 ~]# cd mysql-mmm-2.2.1
[root@mysql-master1 mysql-mmm-2.2.1]# make && make install
.. ..
Step 2: edit the configuration files

- Edit the common configuration file

All five cluster servers (master1, master2, slave1, slave2, monitor) need this file; you can configure one host and copy the file to the others with scp.

[root@master1 ~]# vim /etc/mysql-mmm/mmm_common.conf
active_master_role writer
<host default>
    cluster_interface eth0              // network interface
    pid_path /var/run/mmm_agentd.pid
    bin_path /usr/lib/mysql-mmm/
    replication_user slaveuser          // replication user
    replication_password pwd123         // replication user password
    agent_user agent                    // database user for mmm-agent
    agent_password agent                // password for the mmm-agent user
</host>
<host master1>                          // the first master
    ip 192.168.4.10                     // master1 IP address
    mode master
    peer master2                        // the other master
</host>
<host master2>                          // the other master
    ip 192.168.4.11
    mode master
    peer master1
</host>
<host slave1>                           // the first slave
    ip 192.168.4.12                     // slave1 IP address
    mode slave                          // this block configures a slave
</host>
<host slave2>
    ip 192.168.4.13
    mode slave
</host>
<role writer>                           // writer role
    hosts master1,master2               // masters that can take writes
    ips 192.168.4.200                   // the write VIP
    mode exclusive                      // exclusive mode
</role>
<role reader>                           // reader role
    hosts slave1,slave2                 // servers that serve reads
    ips 192.168.4.201,192.168.4.202     // multiple read VIPs
    mode balanced                       // balanced mode
</role>
2. Edit the management host configuration (on the monitor host)

[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf
<monitor>
    ip 192.168.4.100                    // management host IP address
    pid_path /var/run/mmm_mond.pid
    bin_path /usr/lib/mysql-mmm/
    status_path /var/lib/misc/mmm_mond.status
    ping_ips 192.168.4.10,192.168.4.11,192.168.4.12,192.168.4.13   // the monitored databases
</monitor>
<host default>
    monitor_user monitor                // MySQL user for monitoring
    monitor_password monitor            // password for the monitoring user
</host>
debug 0
3. Edit the agent configuration files

master1, master2, slave1 and slave2 each need their own host name configured:

[root@master1 /]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master1
[root@master2 /]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master2
[root@slave1 /]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this slave1
[root@slave2 /]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this slave2
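Since the agent file differs only in the host name, it can be generated once per host instead of being edited four times. A sketch that writes the files into a local directory for distribution (on the real hosts the destination path is /etc/mysql-mmm/mmm_agent.conf):

```shell
# Sketch: generate the per-host mmm_agent.conf files in ./agent-conf,
# ready to be copied out with scp.
mkdir -p agent-conf
for host in master1 master2 slave1 slave2; do
  printf 'include mmm_common.conf\nthis %s\n' "$host" > "agent-conf/mmm_agent.conf.$host"
done
cat agent-conf/mmm_agent.conf.slave1
```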
6. Using the MySQL-MMM architecture

6.1 Starting MySQL-MMM

1. Start mmm-agent

Run the following on master1, master2, slave1 and slave2:

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
2. Start mmm-monitor

[root@monitor ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
6.2 Bringing the cluster servers online

Control commands can only be run on the management host (Monitor). Use the show command to check the current state of each server. By default all servers are in the waiting state; if anything looks wrong, check SELinux and iptables on each server.

[root@localhost ~]# mmm_control show
  master1(192.168.4.10) master/AWAITING_RECOVERY. Roles:
  master2(192.168.4.11) master/AWAITING_RECOVERY. Roles:
  slave1(192.168.4.12) slave/AWAITING_RECOVERY. Roles:
  slave2(192.168.4.13) slave/AWAITING_RECOVERY. Roles:
Bring the four database hosts online with the set_online command:

[root@monitor ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online master2
OK: State of 'master2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave1
OK: State of 'slave1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave2
OK: State of 'slave2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]#
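The four set_online calls above can be scripted on the monitor host. Shown here as a dry run that only prints the commands; drop the echo once the output looks right:

```shell
# Dry-run sketch: emit one mmm_control set_online command per node.
cmds=$(for host in master1 master2 slave1 slave2; do
  echo mmm_control set_online "$host"
done)
printf '%s\n' "$cmds"
```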
Check the state of the cluster servers again:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles:
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

All four hosts are now in the ONLINE state; the write server is master1 behind virtual IP 192.168.4.200, and the read servers are slave1 and slave2.
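When scripting around MMM it helps to know which host currently holds the writer VIP, and that can be read off the `mmm_control show` output. A sketch; the function name `current_writer` is my own, and sample output matching the status above is used in place of a live call:

```shell
# Sketch: extract the host that holds the writer role from
# `mmm_control show` output piped in on stdin.
current_writer() {
  awk '/writer\(/ { sub(/\(.*/, "", $1); print $1 }'
}

# Sample input matching the status above:
writer=$(printf '%s\n' \
  '  master1(192.168.4.10) master/ONLINE. Roles: writer(192.168.4.200)' \
  '  master2(192.168.4.11) master/ONLINE. Roles:' \
  '  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)' \
  | current_writer)
echo "$writer"
```

On the monitor host this would be `mmm_control show | current_writer`.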
6.3 Testing the MySQL-MMM architecture

Install MySQL-client on the client machine:

[root@client ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
.. ..
[root@client ~]# rpm -ivh MySQL-client-5.6.15-1.el6.x86_64.rpm

Test access through the MySQL-MMM virtual IP; you can also test creating, inserting and querying data at the same time.

[root@client /]# mysql -h192.168.4.200 -uroot -p123456 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@client /]#
6.4 Master failure test

We can stop the master database by hand to test the cluster.

[root@master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS!
[root@master1 ~]#

The monitor log now shows the detection and failover in detail:

2017/10/24 01:37:07 WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:07 WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:15 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:16 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/10/24 01:37:16 INFO Removing all roles from host 'master1':
2017/10/24 01:37:16 INFO Removed role 'writer(192.168.4.200)' from host 'master1'
2017/10/24 01:37:16 INFO Orphaned role 'writer(192.168.4.200)' has been assigned to 'master2'
Checking the database server states on the monitor again, master1 is now offline, and the writer role with virtual IP 192.168.4.200 has moved to master2:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles:
  master2(192.168.4.11) master/ONLINE. Roles: writer(192.168.4.200)
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)
Looking at the replication status of slave1 and slave2, their master has changed to master2:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.4.11
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master2-bin.000002
          Read_Master_Log_Pos: 211
               Relay_Log_File: slave1-relay-bin.000002
                Relay_Log_Pos: 285
        Relay_Master_Log_File: master2-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
Note that after a failed server recovers it moves from the offline state to the waiting state; it does not return to ONLINE automatically and must be brought online by hand.

At this point our MySQL high-availability cluster deployment is complete.
7. A simplified cluster

We have now deployed a MySQL cluster built from five servers, but depending on a company's actual traffic, a cluster may not need that many machines, while a plain master-slave setup cannot provide hot standby between the active and standby masters. Below, the example above is reworked to build the cluster from three servers: fewer machines, yet the database still gets a hot standby.

All we need to change is the monitor configuration. First adjust the monitored server IPs in the MMM monitor file:
[root@monitor ~]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf
<monitor>
    ip 192.168.4.100
    pid_path /var/run/mmm_mond.pid
    bin_path /usr/lib/mysql-mmm/
    status_path /var/lib/misc/mmm_mond.status
    ping_ips 192.168.4.10,192.168.4.11      // the monitored server IPs
</monitor>
<host default>
    monitor_user monitor
    monitor_password monitor
</host>
debug 0
[root@monitor ~]#

Then edit the common configuration file; note that master1, master2 and monitor must stay consistent, so all of them need this change.

[root@monitor ~]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role writer
<host default>
    cluster_interface eth0
    pid_path /var/run/mmm_agentd.pid
    bin_path /usr/lib/mysql-mmm/
    replication_user slaveuser
    replication_password pwd123
    agent_user agent
    agent_password agent
</host>
<host master1>
    ip 192.168.4.10
    mode master
    peer master2
</host>
<host master2>
    ip 192.168.4.11
    mode master
    peer master1
</host>
<role writer>
    hosts master1,master2
    ips 192.168.4.200
    mode exclusive
</role>
<role reader>
    hosts master1,master2
    ips 192.168.4.201,192.168.4.202
    mode balanced
</role>
[root@monitor ~]#
Once configured, everything else matches the five-server topology: start the mmm-agent processes on master1 and master2, bring them online from the monitor server, and check the MMM state.

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: reader(192.168.4.201), writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.202)

Both master1 and master2 now serve reads, and master1 additionally carries the writes. Now stop master1's database and observe the result:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles:
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.201), reader(192.168.4.202), writer(192.168.4.200)