PXC three-node installation:
node1: 10.157.26.132
node2: 10.157.26.133
node3: 10.157.26.134

Set up passwordless SSH login between the servers
Use ssh-keygen so the three hosts can log in to each other without a password, and make sure the three hosts can ping each other.
1) Run on all hosts:
# ssh-keygen -t rsa
2) Collect the public keys (id_rsa.pub) of all hosts into /root/.ssh/authorized_keys on one host, then copy authorized_keys back to every host:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 10.157.26.133 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 10.157.26.134 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 10.157.26.133:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 10.157.26.134:/root/.ssh/authorized_keys
Test: ssh 10.157.26.133 / ssh 10.157.26.134

Install the dependency packages:
yum install -y git scons gcc gcc-c++ openssl check cmake bison boost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel socat

If socat cannot be installed with yum, build it from source:
wget http://www.dest-unreach.org/socat/download/socat-1.7.3.2.tar.gz
tar zxvf socat-1.7.3.2.tar.gz
cd socat-1.7.3.2
./configure
make && make install

1. Unpack the binary tarball, add the mysql account, and create the symlink (do this on all three nodes):
wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-5.7.17-29.20/binary/tarball/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101.tar.gz
mkdir /opt/mysql
cd /opt/mysql
tar zxvf /data/src/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101.tar.gz
cd /usr/local
ln -s /opt/mysql/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101/ mysql
groupadd mysql
useradd -M -g mysql -s /sbin/nologin -d /usr/local/mysql mysql

2. Create the directories and set ownership (do this on all three nodes):
mkdir -p /data/mysql/mysql_3306/{data,logs,tmp}
mkdir -p /data/mysql/mysql_3306/logs/binlog
chown -R mysql:mysql /data/mysql/
chown -R mysql:mysql /usr/local/mysql

3. Configuration file my.cnf
Configuration for 132:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so    #location of the Galera library
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134    #IPs of all nodes in the cluster
wsrep_node_name = node132    #name of this node
wsrep_node_address = 10.157.26.132    #IP of this node
wsrep_cluster_name = pxc_sampson    #cluster name
wsrep_sst_auth = sst:sampson    #username and password used for SST
wsrep_sst_method = xtrabackup-v2    #how data is copied; mysqldump and rsync are also supported
wsrep_slave_threads = 2    #number of applier threads; CPU cores * 2 is recommended to keep apply_cb from falling behind
pxc_strict_mode = ENFORCING    #PXC strict mode; DISABLED, PERMISSIVE and MASTER are also available
innodb_autoinc_lock_mode = 2    #auto-increment lock optimization
wsrep_provider_options = "debug=1;gcache.size=1G"    #enable debug mode

Configuration for 133:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134
wsrep_node_name = node133
wsrep_node_address = 10.157.26.133
wsrep_cluster_name = pxc_sampson
wsrep_sst_auth = sst:sampson
wsrep_sst_method = rsync
wsrep_slave_threads = 2
pxc_strict_mode = ENFORCING
innodb_autoinc_lock_mode = 2
wsrep_provider_options = "debug=1;gcache.size=1G"

Configuration for 134:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134
wsrep_node_name = node134
wsrep_node_address = 10.157.26.134
wsrep_cluster_name = pxc_sampson
wsrep_sst_auth = sst:sampson
wsrep_sst_method = rsync
wsrep_slave_threads = 2
pxc_strict_mode = ENFORCING
innodb_autoinc_lock_mode = 2
wsrep_provider_options = "debug=1;gcache.size=1G"

[Note: wsrep_sst_method was originally set to xtrabackup-v2 on all nodes, but adding the second node failed with: WSREP_SST: [ERROR] Error while getting data from donor node: exit codes: 137 0. After switching to rsync the problem went away. Whether this is an xtrabackup version issue remains to be verified; the version used here was 2.4.7.]
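For reference, below is a minimal sketch of what the complete /data/mysql/mysql_3306/my_3306.cnf used in the next steps might look like (shown for node 132). Only the default_storage_engine, wsrep_*, pxc_strict_mode and innodb_autoinc_lock_mode lines come from this walkthrough; the remaining [mysqld] basics (basedir, datadir, socket, port, log paths, binlog_format, server-id) are assumed values chosen to match the directory layout from step 2, so adjust them to your own environment:

# Sketch only: the base settings below are assumptions, not part of the original post.
cat > /data/mysql/mysql_3306/my_3306.cnf <<'EOF'
[mysqld]
user                     = mysql
port                     = 3306
basedir                  = /usr/local/mysql                 # symlink created in step 1
datadir                  = /data/mysql/mysql_3306/data      # directories created in step 2
tmpdir                   = /data/mysql/mysql_3306/tmp
socket                   = /tmp/mysql_3306.sock             # socket used to log in later
log-error                = /data/mysql/mysql_3306/logs/error.log
log-bin                  = /data/mysql/mysql_3306/logs/binlog/mysql-bin
binlog_format            = ROW                              # Galera requires row-based binary logging
server-id                = 132                              # any value unique per node
default_storage_engine   = InnoDB

# PXC/Galera settings taken from this walkthrough (adjust node name/address per node)
wsrep_provider           = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_address    = gcomm://10.157.26.132,10.157.26.133,10.157.26.134
wsrep_node_name          = node132
wsrep_node_address       = 10.157.26.132
wsrep_cluster_name       = pxc_sampson
wsrep_sst_auth           = sst:sampson
wsrep_sst_method         = xtrabackup-v2
wsrep_slave_threads      = 2
pxc_strict_mode          = ENFORCING
innodb_autoinc_lock_mode = 2
wsrep_provider_options   = "debug=1;gcache.size=1G"
EOF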
4. Start node 1 (run on 132):
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf --wsrep-new-cluster

[Note: when node1 starts it first tries to join an existing cluster, but no cluster exists yet and PXC has to start from scratch, so node1 must be started with --wsrep-new-cluster to bootstrap a new cluster. Once node1 is running, the other nodes can be started the normal way and will connect to the primary node automatically.]

If error.log shows:
[Note] WSREP: Shifting JOINED -> SYNCED (TO: 7)
[Note] WSREP: Waiting for SST/IST to complete.
[Note] WSREP: New cluster view: global state: f71affa6-2b55-11e7-b8db-6afbe908670d:7, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
the startup succeeded. Log in with mysql -S /tmp/mysql_3306.sock -p
The temporary password for node 1 is in error.log:
[root@dpstcmsweb00 ~]# cat /data/mysql/mysql_3306/logs/error.log |grep password
2017-05-09T02:46:25.724852Z 1 [Note] A temporary password is generated for root@localhost: worQi;aYF9eQ
After logging in, change the root password:
mysql>set password=password('mysql');
Create the accounts on the primary node:
grant usage on *.* to 'pxc-monitor'@'%' identified by 'pxc-monitor';
grant all privileges on *.* to 'sst'@'%' identified by 'sampson';

5. Start the remaining two nodes (run on 133/134):
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf &
Check the corresponding error.log; if it shows:
[Note] WSREP: Shifting JOINER -> JOINED (TO: 7)
[Note] WSREP: Member 1.0 (node3307) synced with group.
[Note] WSREP: Shifting JOINED -> SYNCED (TO: 7)
[Note] WSREP: Synchronized with group, ready for connections
the node has started and joined the cluster successfully.
Once a node is up, log in with the account and password from node 1, here mysql -uroot -pmysql

6. Check the number of nodes:
root@localhost:mysql_3306.sock [(none)]>show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
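As a quick sanity check that writes really do replicate across the cluster (not part of the original steps; the pxc_test schema and t1 table are arbitrary names made up for this example), you can create a table on one node and read it back from another:

# On node 132: create a test table (a primary key is required under pxc_strict_mode=ENFORCING) and insert a row
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "
  CREATE DATABASE IF NOT EXISTS pxc_test;
  CREATE TABLE pxc_test.t1 (id INT PRIMARY KEY, note VARCHAR(32));
  INSERT INTO pxc_test.t1 VALUES (1, 'written on 132');"

# On node 133 or 134: the database, table and row should already be there
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "SELECT * FROM pxc_test.t1;"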
7. Check the cluster status:
root@localhost:mysql_3306.sock [(none)]>show global status like 'wsrep%';
+------------------------------+----------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid | bed19806-3465-11e7-85af-731d83552ec6 |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 4 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 732 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 93 |
| wsrep_repl_data_bytes | 447 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 36 |
| wsrep_received_bytes | 3494 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.027778 |
| wsrep_local_cached_downto | 2 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 173, 173 ] |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 1 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2932 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_ist_receive_status | |
| wsrep_incoming_addresses | 10.157.26.134:3306,10.157.26.132:3306,10.157.26.133:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 5665b42e-3467-11e7-94ff-9e77d0294b5e |
| wsrep_cluster_conf_id | 17 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | bed19806-3465-11e7-85af-731d83552ec6 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <[email protected]> |
| wsrep_provider_version | 3.20(r) |
| wsrep_ready | ON |
+------------------------------+----------------------------------------------------------+
62 rows in set (0.00 sec)

Note: during startup and synchronization a node goes through the following states (reported in wsrep_local_state / wsrep_local_state_comment):
1. OPEN: the node started successfully and tries to connect to the cluster; if that fails it either exits or creates a new cluster, depending on the configuration.
2. PRIMARY: the node is part of the cluster's primary component and tries to pick a donor for data synchronization.
3. JOINER: the node is waiting for / receiving the data files and loads the data locally once the transfer completes.
4. JOINED: the node has finished the data synchronization and is catching up with the cluster.
5. SYNCED: the node serves traffic normally: reads and writes, cluster data synchronization, and SST requests from newly joining nodes.
6. DONOR: the node is preparing or transferring a full copy of the cluster data for a new node and is not available to clients.

PXC-related documentation will continue to be updated...
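Finally, a minimal monitoring sketch for checking the states listed above from the command line. It uses the pxc-monitor account created in step 4 of this walkthrough; looping over the three node IPs on TCP port 3306 is an assumption of this sketch, not something the original post does:

# Every healthy node should report wsrep_local_state_comment = Synced,
# wsrep_cluster_status = Primary and wsrep_cluster_size = 3.
for host in 10.157.26.132 10.157.26.133 10.157.26.134; do
    echo "== $host =="
    mysql -upxc-monitor -ppxc-monitor -h "$host" -P 3306 -N -e "
        SHOW GLOBAL STATUS WHERE Variable_name IN
        ('wsrep_local_state_comment','wsrep_cluster_status','wsrep_cluster_size');"
done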