Objective: deploy a highly available httpd service (plus NFS) with corosync v1 + pacemaker.
Environment: CentOS 6.8; a Filesystem (NFS) server; node 1 NA1; node 2 NA2; VIP 192.168.94.222.
Contents:
1. corosync v1 + pacemaker base installation
2. Installing the pacemaker management tool crmsh
3. Resource management configuration
4. Resource creation basics
5. Creating a VIP resource
6. Creating an httpd resource
7. Resource constraints
8. Simulating failover
9. httpd high-availability test
10. Creating an NFS filesystem resource
11. Full cluster high-availability test
1. corosync v1 + pacemaker base installation
Reference:
corosync v1 + pacemaker high-availability cluster deployment (part 1): base installation
2. Installing the pacemaker management tool crmsh
Starting with pacemaker 1.1.8, crmsh became an independent project and is no longer shipped with pacemaker, so installing pacemaker alone does not give you the crm command-line resource manager. Several other management tools exist; this experiment uses crmsh.
Here we use crmsh-3.0.0-6.1.noarch.rpm.
Install crmsh on both NA1 & NA2.
On the first install attempt, rpm reports the following missing dependencies; find the corresponding rpm packages online and install them.
[root@na1 ~]# rpm -ivh crmsh-3.0.0-6.1.noarch.rpm
warning: crmsh-3.0.0-6.1.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 17280ddf: NOKEY
error: Failed dependencies:
        crmsh-scripts >= 3.0.0-6.1 is needed by crmsh-3.0.0-6.1.noarch
        python-dateutil is needed by crmsh-3.0.0-6.1.noarch
        python-parallax is needed by crmsh-3.0.0-6.1.noarch
        redhat-rpm-config is needed by crmsh-3.0.0-6.1.noarch
[root@na1 ~]#
Install these two from rpm packages:
rpm -ivh crmsh-scripts-3.0.0-6.1.noarch.rpm
rpm -ivh python-parallax-1.0.1-28.1.noarch.rpm
Install these two with yum:
yum install python-dateutil* -y
yum install redhat-rpm-config* -y
Once those are in place, install crmsh itself:
rpm -ivh crmsh-3.0.0-6.1.noarch.rpm
3. Resource management configuration
crm automatically syncs resource configuration across the cluster nodes, so configuration is done on a single node; no manual copying is needed.
On NA1.
Configuration modes
crm offers two configuration modes: batch mode and interactive mode.
Batch mode (enter commands directly on the shell command line):
[root@na1 ~]# crm ls
cibstatus        help             site
cd               cluster          quit
end              script           verify
exit             ra               maintenance
bye              ?                ls
node             configure        back
report           cib              resource
up               status           corosync
options          history
Interactive mode (enters the crm(live)# prompt, where you run commands and can use basic navigation such as ls, cd, cd ..):
[root@na1 ~]# crm
crm(live)# ls
cibstatus        help             site
cd               cluster          quit
end              script           verify
exit             ra               maintenance
bye              ?                ls
node             configure        back
report           cib              resource
up               status           corosync
options          history
crm(live)#
Initial configuration check
After configuring resources in crm interactive mode, always verify the configuration for errors before committing.
Before configuring anything, run a check first:
[root@na1 ~]# crm
crm(live)# configure
crm(live)configure# verify
ERROR: error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
crm(live)configure#
This error occurs because no STONITH device is configured yet, so verify complains; disable STONITH for now.
property stonith-enabled=false
Commit the change:
crm(live)configure# commit
crm(live)configure#
View the current configuration:
crm(live)configure# show
node na1.server.com
node na2.server.com
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.18-3.el6-bfe4e80420 \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false
crm(live)configure#
View resource status:
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 21:38:15 2020
Last change: Sun May 24 21:37:10 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
0 resources configured

Online: [ na1.server.com na2.server.com ]

No resources
crm(live)#
4. Resource creation basics
The basic resource type is primitive; use the help command for usage details.
Syntax
primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
        [description=<description>]
        [[params] attr_list]
        [meta attr_list]
        [utilization attr_list]
        [operations id_spec]
        [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] [<score>:] [rule...] <attr>=<val> [<attr>=<val>...]] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor
Brief overview
primitive <resource name> <class>:<provider>:<agent name>
Resource agent classes: lsb, ocf, stonith, service
Resource agent providers: e.g. heartbeat, pacemaker
Resource agent name: the resource agent (RA) itself, e.g. IPaddr2, httpd, mysql
params -- instance attributes: the actual arguments passed to the agent
meta -- meta attributes: options that can be added to a resource, telling the CRM how to handle it
The remaining options will be explained briefly as they come up.
To create a resource, first find out which class it belongs to:
# List the supported classes
crm(live)ra# classes
lsb
ocf / .isolation heartbeat pacemaker
service
stonith
# Show which provider supplies a given agent
crm(live)ra# providers IPaddr
heartbeat
# List all resource agents in a class
crm(live)ra# list service
# Show the IPaddr agent's info (its help), i.e. how to create this resource
crm(live)ra# info ocf:heartbeat:IPaddr
5. Creating a VIP resource
Check the IPaddr help first; parameters marked * are required, and the rest can be set as needed.
info ocf:heartbeat:IPaddr
Parameters (*: required, []: default):

ip* (string): IPv4 or IPv6 address
    The IPv4 (dotted quad notation) or IPv6 address (colon hexadecimal notation)
    example IPv4 "192.168.1.1".
    example IPv6 "2001:db8:DC28:0:0:FC57:D4C8:1FFF".

nic (string): Network interface
    The base network interface on which the IP address will be brought online.
    If left empty, the script will try and determine this from the routing table.
Create the VIP resource, under configure:
crm(live)configure# primitive VIP ocf:heartbeat:IPaddr params ip=192.168.94.222
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
View resource status (the VIP started on na1):
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:12:05 2020
Last change: Sun May 24 22:11:15 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
1 resource configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com

crm(live)#
6. Creating an httpd resource
Configure the httpd service (disabled at boot):
NA1:
[root@na1 ~]# chkconfig httpd off
[root@na1 ~]# service httpd stop
Stopping httpd:                       [  OK  ]
[root@na1 ~]# echo "na1.server.com" >> /var/www/html/index.html
[root@na1 ~]#
NA2:
[root@na2 ~]# chkconfig httpd off
[root@na2 ~]# service httpd stop
Stopping httpd:                       [  OK  ]
[root@na2 ~]# echo "na2.server.com" >> /var/www/html/index.html
[root@na2 ~]#
Create the httpd resource:
crm(live)configure# primitive httpd service:httpd httpd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
View resource status:
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:22:47 2020
Last change: Sun May 24 22:17:35 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na2.server.com

crm(live)#
The VIP is on na1 while httpd is on na2, which is no good: the cluster balances resources evenly across nodes by default, so we need to constrain them.
7. Resource constraints
Resource constraints specify which cluster nodes a resource may run on, in what order resources are brought up, and which other resources a given resource depends on.
pacemaker provides three kinds of resource constraints:
1) Resource Location: defines on which nodes a resource may, may not, or preferably run;
2) Resource Colocation: defines which resources may or may not run together on the same node;
3) Resource Order: defines the order in which resources start on a node.
In short: 1. which node a resource prefers; 2. whether two resources must or must not run together; 3. which starts first and which starts next.
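As a sketch of the three constraint types (using hypothetical resource names rscA/rscB and node name node1.server.com; the actual constraints for this cluster are created step by step in the sections that follow):

```shell
# Location: rscA prefers node1 with a score of 100
crm configure location rscA_prefer_node1 rscA 100: node1.server.com
# Colocation: rscB must run on the same node as rscA (inf: = mandatory)
crm configure colocation rscB_with_rscA inf: rscB rscA
# Order: start rscA before rscB
crm configure order rscA_before_rscB Mandatory: rscA rscB
```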
Colocation constraint
Here we use a colocation constraint to bind the VIP and httpd resources together, so that they must run on the same node.
# Check the help; the examples are enough to go on
crm(live)configure# help collocation
Example:
colocation never_put_apache_with_dummy -inf: apache dummy
colocation c1 inf: A ( B C )
-inf: means the resources must never run together; inf: means they must. The never_put_apache_with_dummy in the middle is just a name, chosen to be easy to remember.
Apply the constraint:
crm(live)configure# collocation vip_with_httpd inf: VIP httpd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
View resource status (both resources moved to na2; next we will constrain them to stay on na1):
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:38:55 2020
Last change: Sun May 24 22:37:44 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na2.server.com
 httpd  (service:httpd):        Started na2.server.com

crm(live)#
Location constraint
Pin the resources to na1 with a Location constraint.
Check the help. Notice the number in the examples: it is a score, and the higher the score, the stronger the preference. The cluster runs the resource on the node with the highest score; the default score is 0.
Examples:
location conn_1 internal_www 100: node1
location conn_1 internal_www \
    rule 50: #uname eq node1 \
    rule pingd: defined pingd
location conn_2 dummy_float \
    rule -inf: not_defined pingd or pingd number:lte 0
# never probe for rsc1 on node1
location no-probe rsc1 resource-discovery=never -inf: node1
Apply the constraint (constraining VIP alone is enough, because VIP and httpd were already colocated and must stay together):
crm(live)configure# location vip_httpd_prefer_na1 VIP 100: na1.server.com
crm(live)configure# verify
crm(live)configure# commit
View resource status:
Everything now runs on the NA1 node.
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:48:22 2020
Last change: Sun May 24 22:48:10 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na1.server.com

crm(live)#
8. Simulating failover
Manual switchover
crm node holds the node-level operations: standby puts the current node into standby, and online brings it back online.
crm(live)node# --help
bye              exit             maintenance      show             utilization
-h               cd               fence            online           standby
?                clearstate       help             quit             status
attribute        delete           list             ready            status-attr
back             end              ls               server           up
crm(live)node#
Take na1 offline and then bring it back online:
# Take na1 offline
crm(live)# node standby
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:56:18 2020
Last change: Sun May 24 22:56:15 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Node na1.server.com: standby
Online: [ na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na2.server.com
 httpd  (service:httpd):        Started na2.server.com

# Bring na1 back online
crm(live)# node online
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:56:29 2020
Last change: Sun May 24 22:56:27 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na1.server.com

crm(live)#
The resources automatically switch back to na1: the location constraint defined earlier gives na1 a score of 100, so they fail back.
Hardware failure
Stop the corosync service on na1 and check the status on na2:
[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Sun May 24 23:20:29 2020
Last change: Sun May 24 22:56:27 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Stopped
 httpd  (service:httpd):        Stopped

[root@na2 ~]#
na2 is online, yet the services are stopped.
About Quorum
A cluster can operate normally only while the number of live votes is greater than or equal to the quorum.
With an odd number of votes, quorum = (votes + 1) / 2.
With an even number of votes, quorum = (votes / 2) + 1.
We have 2 nodes, so the quorum is 2: at least 2 nodes must be alive. With one node down, the cluster can no longer operate.
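The quorum arithmetic above can be checked with shell integer arithmetic (`votes` being the total number of cluster votes):

```shell
# Even vote count: quorum = votes/2 + 1
votes=2
echo $(( votes / 2 + 1 ))    # 2 -> both nodes must stay alive

# Odd vote count: quorum = (votes + 1)/2
votes=3
echo $(( (votes + 1) / 2 ))  # 2 -> one node may fail
```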
That is unfriendly for a 2-node cluster. We can run
crm configure property no-quorum-policy=ignore
to make the cluster ignore the loss of quorum.
View resource status (the resources now start normally):
[root@na2 ~]# crm configure property no-quorum-policy=ignore
[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Sun May 24 23:27:02 2020
Last change: Sun May 24 23:26:58 2020 by root via cibadmin on na2.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na2.server.com
 httpd  (service:httpd):        Started na2.server.com

[root@na2 ~]#
9. httpd high-availability test
na2 is now online and running the resources while NA1 is down; browse to the VIP to verify.
Then start the service on na1 again:
[root@na1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@na1 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 23:32:36 2020
Last change: Sun May 24 23:26:58 2020 by root via cibadmin on na2.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na1.server.com

[root@na1 ~]#
Browse to the VIP again:
The httpd service is now highly available.
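The "browse to the VIP" checks in this walkthrough can also be run from any client on the network; a minimal check might look like the following (network-dependent, so it only works against the live cluster, using the VIP configured above):

```shell
# Fetch the page through the floating VIP; the body reveals which node answered
curl http://192.168.94.222/
```

After a failover, the same command should return the other node's index.html (or, once the nfs resource is added, the NFS-served page).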
10. Creating an NFS filesystem resource
NFS service test
NFS server configuration is omitted here.
Disable SELinux on the NA1 and NA2 nodes.
NFS resource: 192.168.94.131
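Although the server-side setup is omitted, a rough sketch of what it might look like on the filesystem host follows (the export path and network match the exportfs output shown below, and the page content matches the later mount test; the exact export options are an assumption):

```shell
# Hypothetical server-side setup on 192.168.94.131 (CentOS 6)
mkdir -p /file/web
echo '<h1>this is nfs server</h1>' > /file/web/index.html
# Export the directory to the cluster network (rw,sync are assumed options)
echo '/file/web 192.168.0.0/255.255.0.0(rw,sync)' >> /etc/exports
service nfs start
exportfs -r    # re-read /etc/exports
```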
[root@filesystem ~]# exportfs
/file/web       192.168.0.0/255.255.0.0
[root@filesystem ~]#
Test-mount it:
[root@na1 ~]# mkdir /mnt/web
[root@na1 ~]# mount -t nfs 192.168.94.131:/file/web /mnt/web
[root@na1 ~]# cat /mnt/web/index.html
<h1>this is nfs server</h1>
[root@na1 ~]# umount /mnt/web/
[root@na1 ~]#
Create the NFS filesystem resource
Check the help (Filesystem has three required parameters):
crm(live)ra# info ocf:heartbeat:Filesystem
Parameters (*: required, []: default):

device* (string): block device
    The name of block device for the filesystem, or -U, -L options for mount, or NFS mount specification.
directory* (string): mount point
    The mount point for the filesystem.
fstype* (string): filesystem type
    The type of filesystem to be mounted.
Create the nfs resource:
crm(live)configure# primitive nfs ocf:heartbeat:Filesystem params device=192.168.94.131:/file/web directory=/var/www/html fstype=nfs
crm(live)configure# verify
crm(live)configure# commit
View the status (nfs started on na2, but it should run with httpd; we add constraints next):
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 23:48:09 2020
Last change: Sun May 24 23:44:24 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na1.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na2.server.com

crm(live)#
Colocation and order constraints
Of the three constraint types introduced earlier, one is order.
For httpd and nfs, nfs should start first, then httpd.
Add colocation and order constraints for httpd and nfs:
crm(live)configure# colocation httpd_with_nfs inf: httpd nfs
# Start nfs first, then httpd
crm(live)configure# order nfs_first Mandatory: nfs httpd
# Start httpd first, then VIP
crm(live)configure# order httpd_first Mandatory: httpd VIP
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
11. Full cluster high-availability test
View the status:
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Mon May 25 00:00:23 2020
Last change: Sun May 24 23:58:47 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na1.server.com
 httpd  (service:httpd):        Started na1.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na1.server.com

crm(live)#
Browse to the VIP.
Stop the service on NA1 and check the status on NA2:
[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Mon May 25 00:01:42 2020
Last change: Sun May 24 23:58:47 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):        Started na2.server.com
 httpd  (service:httpd):        Started na2.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na2.server.com

[root@na2 ~]#
Browse to the VIP again.
View the configuration:
[root@na2 ~]# crm configure show
node na1.server.com \
        attributes standby=off
node na2.server.com
primitive VIP IPaddr \
        params ip=192.168.94.222
primitive httpd service:httpd \
        params httpd
primitive nfs Filesystem \
        params device="192.168.94.131:/file/web" directory="/var/www/html" fstype=nfs
order httpd_first Mandatory: httpd VIP
colocation httpd_with_nfs inf: httpd nfs
order nfs_first Mandatory: nfs httpd
location vip_httpd_prefer_na1 VIP 100: na1.server.com
colocation vip_with_httpd inf: VIP httpd
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.18-3.el6-bfe4e80420 \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        no-quorum-policy=ignore
[root@na2 ~]#
View the XML form of the configuration:
crm configure show xml
One more thing: what if you make a mistake in the configuration along the way?
[root@na2 ~]# crm
crm(live)# configure
# You can edit the configuration file directly
crm(live)configure# edit
node na1.server.com \
        attributes standby=off
node na2.server.com
primitive VIP IPaddr \
        params ip=192.168.94.222
primitive httpd service:httpd \
Of reading and working out, one is always underway.