Building a PCS-HA High Availability Cluster
I. Replace the yum repositories
[root@master ~]# rm -rf /etc/yum.repos.d/CentOS-*
Almost none of my existing repositories are usable anymore, so remove them all.
[root@master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-8-reg.repo
Download the CentOS-8 Vault repository configuration (the official CentOS repositories are no longer available).
The downloaded file does not include a HighAvailability repository, so add one:
[root@master ~]# vim /etc/yum.repos.d/CentOS-Base.repo
Append the following at the end of the file:
[HighAvailability]
name=CentOS-$releasever - pcs - repo.huaweicloud.com
baseurl=https://repo.huaweicloud.com/centos-vault/8.5.2111/HighAvailability/$basearch/os/
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/centos/RPM-GPG-KEY-CentOS-Official
Clear the existing yum cache.
[root@master ~]# yum clean all
Rebuild the cache.
[root@master ~]# yum makecache
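To confirm the new repository is visible and that pcs can be resolved from it, an optional check (not part of the original steps):
[root@master ~]# yum repolist enabled | grep -i HighAvailability
[root@master ~]# yum info pcs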
II. Install the required packages
Install nginx
[root@master ~]# yum install nginx
Install MySQL
[root@master ~]# yum install mysql-server.x86_64
Install PHP
[root@master ~]# yum install php-*
Install pcs
[root@master ~]# yum install pcs
Install the time synchronization service
[root@master ~]# yum install -y chrony
III. Firewall and SELinux settings
Note: there are two approaches to the firewall below; choose either one.
1. Disable the firewall
[root@master ~]# systemctl stop firewalld.service
Prevent the firewall from starting at boot
[root@master ~]# systemctl disable firewalld
2. Open the required ports
[root@master ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
[root@master ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent
[root@master ~]# firewall-cmd --zone=public --add-port=8080/tcp --permanent
[root@master ~]# firewall-cmd --zone=public --add-port=22/tcp --permanent
After opening the ports, restart the firewall so the permanent rules take effect.
[root@master ~]# systemctl restart firewalld.service
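An optional check that the permanent rules are now active:
[root@master ~]# firewall-cmd --zone=public --list-ports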
Disable SELinux
[root@master ~]# setenforce 0
[root@master ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
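An optional check; setenforce 0 only makes the current boot permissive, while SELINUX=disabled takes effect after a reboot:
[root@master ~]# getenforce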
IV. Network configuration
This step must be performed separately on each of master, backup1, backup2, backup3, and backup4!
External network: 192.168.2.0/24 # NIC: ens160
Internal network: 192.168.5.0/24 # NIC: ens224
Dedicated (cluster) network: 192.168.10.0/24 # NIC: ens256
fip = 192.168.10.222 # NIC: ens256
master
192.168.2.100
192.168.5.100
192.168.10.100
fip = 192.168.10.222
backup1
192.168.2.110
192.168.5.110
192.168.10.110
fip = 192.168.10.222
backup2
192.168.2.120
192.168.5.120
192.168.10.120
fip = 192.168.10.222
backup3
192.168.2.130
192.168.5.130
192.168.10.130
fip = 192.168.10.222
backup4
192.168.2.140
192.168.5.140
192.168.10.140
fip = 192.168.10.222
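As a sketch of how these addresses could be applied with NetworkManager, the commands below configure the master node; the other nodes follow the same pattern with their own addresses, and the connection names are assumed to match the device names:
[root@master ~]# nmcli connection modify ens160 ipv4.method manual ipv4.addresses 192.168.2.100/24
[root@master ~]# nmcli connection modify ens224 ipv4.method manual ipv4.addresses 192.168.5.100/24
[root@master ~]# nmcli connection modify ens256 ipv4.method manual ipv4.addresses 192.168.10.100/24
[root@master ~]# nmcli connection up ens160; nmcli connection up ens224; nmcli connection up ens256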
V. Set the hostname and configure hosts
Run the corresponding command on each of master, backup1, backup2, backup3, and backup4!
[root@master ~]# hostnamectl set-hostname master
[root@backup1 ~]# hostnamectl set-hostname backup1
[root@backup2 ~]# hostnamectl set-hostname backup2
[root@backup3 ~]# hostnamectl set-hostname backup3
[root@backup4 ~]# hostnamectl set-hostname backup4
[root@master ~]# vim /etc/hosts
# This step must be performed on each of master, backup1, backup2, backup3, and backup4!
Add the following entries on every host:
192.168.10.100 master
192.168.10.110 backup1
192.168.10.120 backup2
192.168.10.130 backup3
192.168.10.140 backup4
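A quick sanity check that every hostname resolves from each node (optional):
[root@master ~]# for h in master backup1 backup2 backup3 backup4; do ping -c 1 $h; done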
VI. Synchronize time
This step must be performed on each of master, backup1, backup2, backup3, and backup4!
Start the service
[root@master ~]# systemctl start chronyd
Enable it at boot
[root@master ~]# systemctl enable chronyd
Edit the configuration file
[root@master ~]# vim /etc/chrony.conf
#pool 2.centos.pool.ntp.org iburst
# Comment out the line "pool 2.centos.pool.ntp.org iburst" as above
# Add the following two lines, then save and exit
server ntp.aliyun.com iburst
server cn.ntp.org.cn iburst
Reload the configuration
[root@master ~]# systemctl restart chronyd.service
Verify
[root@master ~]# chronyc sources -v
VII. Stop the managed services
Stop the services that the cluster will manage (services placed under HA must be controlled by pcs/Pacemaker rather than started automatically by the system, so every monitored service must be stopped and its autostart disabled).
This step must be performed on each of master, backup1, backup2, backup3, and backup4!
[root@master ~]# systemctl stop nginx.service
[root@master ~]# systemctl stop php-fpm.service
[root@master ~]# systemctl stop mysqld.service
[root@master ~]# systemctl disable nginx.service
[root@master ~]# systemctl disable php-fpm.service
[root@master ~]# systemctl disable mysqld.service
Start the pcsd service and enable it at boot
This step must be performed on each of master, backup1, backup2, backup3, and backup4!
[root@master ~]# systemctl enable pcsd
[root@master ~]# systemctl start pcsd
Set the password for the hacluster account used by HA
This step must be performed on each of master, backup1, backup2, backup3, and backup4!
[root@master ~]# passwd hacluster
>>998877
>>998877
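If this is being scripted across all five nodes, the password can also be set non-interactively; --stdin is specific to the RHEL/CentOS passwd implementation:
[root@master ~]# echo "998877" | passwd --stdin hacluster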
VIII. Create the cluster authentication
Cluster configuration only needs to be run on one node. # In this example the master server is used.
[root@master ~]# pcs host auth master backup1 backup2 backup3 backup4
# All servers must be included (master + backup1 + backup2 + backup3 + backup4)
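The command above prompts for the hacluster username and the password set earlier; pcs can also take them on the command line, for example:
[root@master ~]# pcs host auth master backup1 backup2 backup3 backup4 -u hacluster -p 998877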
IX. Create the cluster
[root@master ~]# pcs cluster setup hacluser01 master addr=192.168.10.100 backup1 addr=192.168.10.110 backup2 addr=192.168.10.120 backup3 addr=192.168.10.130 backup4 addr=192.168.10.140
Destroying cluster on hosts: 'backup1', 'backup2', 'backup3', 'backup4', 'master'...
master: Successfully destroyed cluster
backup3: Successfully destroyed cluster
backup1: Successfully destroyed cluster
backup4: Successfully destroyed cluster
backup2: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'backup1', 'backup2', 'backup3', 'backup4', 'master'
master: successful removal of the file 'pcsd settings'
backup1: successful removal of the file 'pcsd settings'
backup2: successful removal of the file 'pcsd settings'
backup3: successful removal of the file 'pcsd settings'
backup4: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'backup1', 'backup2', 'backup3', 'backup4', 'master'
backup2: successful distribution of the file 'corosync authkey'
backup2: successful distribution of the file 'pacemaker authkey'
backup1: successful distribution of the file 'corosync authkey'
backup1: successful distribution of the file 'pacemaker authkey'
master: successful distribution of the file 'corosync authkey'
master: successful distribution of the file 'pacemaker authkey'
backup3: successful distribution of the file 'corosync authkey'
backup3: successful distribution of the file 'pacemaker authkey'
backup4: successful distribution of the file 'corosync authkey'
backup4: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'backup1', 'backup2', 'backup3', 'backup4', 'master'
master: successful distribution of the file 'corosync.conf'
backup2: successful distribution of the file 'corosync.conf'
backup1: successful distribution of the file 'corosync.conf'
backup3: successful distribution of the file 'corosync.conf'
backup4: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
X. Start the cluster nodes and enable autostart
[root@master ~]# pcs cluster start --all
backup3: Starting Cluster...
backup4: Starting Cluster...
backup1: Starting Cluster...
backup2: Starting Cluster...
master: Starting Cluster...
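The section title also calls for autostart; the corresponding command (not shown in the original output) enables the cluster services at boot on every node:
[root@master ~]# pcs cluster enable --all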
XI. Disable the STONITH device
[root@master ~]# pcs property set stonith-enabled=false
XII. Add the fip (floating IP) # mind the IP address and the NIC name
[root@master ~]# pcs resource create fip ocf:heartbeat:IPaddr ip=192.168.10.222 cidr_netmask=24 nic=ens256 iflabel=ens256 op monitor interval=30s
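Once the resource is running, the floating address should be visible on the node that currently holds fip, an optional check:
[root@master ~]# ip addr show ens256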
XIII. Add the application resources
[root@master ~]# pcs resource create Nginx service:nginx
[root@master ~]# pcs resource create MySQL service:mysqld
[root@master ~]# pcs resource create PHP-FPM service:php-fpm
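An optional check that the resources were created and where they are currently running:
[root@master ~]# pcs status resources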
XIV. Configure resource constraints
[root@master ~]# pcs constraint location fip prefers master
[root@master ~]# pcs constraint location Nginx prefers master
[root@master ~]# pcs constraint location PHP-FPM prefers master
[root@master ~]# pcs constraint location MySQL prefers master
[root@master ~]# pcs constraint location fip prefers backup1=100
[root@master ~]# pcs constraint location Nginx prefers backup1=100
[root@master ~]# pcs constraint location PHP-FPM prefers backup1=100
[root@master ~]# pcs constraint location MySQL prefers backup1=100
[root@master ~]# pcs constraint location fip prefers backup2=80
[root@master ~]# pcs constraint location Nginx prefers backup2=80
[root@master ~]# pcs constraint location PHP-FPM prefers backup2=80
[root@master ~]# pcs constraint location MySQL prefers backup2=80
[root@master ~]# pcs constraint location fip prefers backup3=60
[root@master ~]# pcs constraint location Nginx prefers backup3=60
[root@master ~]# pcs constraint location PHP-FPM prefers backup3=60
[root@master ~]# pcs constraint location MySQL prefers backup3=60
[root@master ~]# pcs constraint location fip prefers backup4=40
[root@master ~]# pcs constraint location Nginx prefers backup4=40
[root@master ~]# pcs constraint location PHP-FPM prefers backup4=40
[root@master ~]# pcs constraint location MySQL prefers backup4=40
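Note that location constraints alone do not force the four resources to stay on the same node. If they are meant to fail over together with the fip, colocation constraints could be added as well; this is an optional sketch, not part of the original configuration:
[root@master ~]# pcs constraint colocation add Nginx with fip INFINITY
[root@master ~]# pcs constraint colocation add PHP-FPM with fip INFINITY
[root@master ~]# pcs constraint colocation add MySQL with fip INFINITY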
# Verify the constraint scores
[root@master ~]# crm_simulate -sL
[ backup1 backup2 backup3 backup4 master ]
Nginx (service:nginx): Started master
MySQL (service:mysqld): Started master
PHP-FPM (service:php-fpm): Started master
fip (ocf::heartbeat:IPaddr): Started master
pcmk__native_allocate: Nginx allocation score on backup1: 100
pcmk__native_allocate: Nginx allocation score on backup2: 80
pcmk__native_allocate: Nginx allocation score on backup3: 60
pcmk__native_allocate: Nginx allocation score on backup4: 40
pcmk__native_allocate: Nginx allocation score on master: INFINITY
pcmk__native_allocate: MySQL allocation score on backup1: 100
pcmk__native_allocate: MySQL allocation score on backup2: 80
pcmk__native_allocate: MySQL allocation score on backup3: 60
pcmk__native_allocate: MySQL allocation score on backup4: 40
pcmk__native_allocate: MySQL allocation score on master: INFINITY
pcmk__native_allocate: PHP-FPM allocation score on backup1: 100
pcmk__native_allocate: PHP-FPM allocation score on backup2: 80
pcmk__native_allocate: PHP-FPM allocation score on backup3: 60
pcmk__native_allocate: PHP-FPM allocation score on backup4: 40
pcmk__native_allocate: PHP-FPM allocation score on master: INFINITY
pcmk__native_allocate: fip allocation score on backup1: 100
pcmk__native_allocate: fip allocation score on backup2: 80
pcmk__native_allocate: fip allocation score on backup3: 60
pcmk__native_allocate: fip allocation score on backup4: 40
pcmk__native_allocate: fip allocation score on master: INFINITY
XV. Check the cluster status
[root@master ~]# pcs status
Cluster name: hacluser01
Cluster Summary:
* Stack: corosync
* Current DC: backup1 (version 2.1.0-8.el8-7c3f660707) - partition with quorum
* Last updated: Sat Mar 5 00:33:04 2022
* Last change: Sat Mar 5 00:29:47 2022 by root via cibadmin on master
* 5 nodes configured
* 4 resource instances configured
Node List:
* Online: [ backup1 backup2 backup3 backup4 master ]
Full List of Resources:
* Nginx (service:nginx): Started master
* MySQL (service:mysqld): Started master
* PHP-FPM (service:php-fpm): Started master
* fip (ocf::heartbeat:IPaddr): Started master
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
XVI. Validation # verify whether the services fail over automatically after a node goes down
[root@master ~]# pcs node standby master
[root@master ~]# pcs node unstandby master
[root@master ~]# pcs node standby backup1
[root@master ~]# pcs node unstandby backup1
[root@master ~]# pcs node standby backup2
[root@master ~]# pcs node unstandby backup2
[root@master ~]# pcs node standby backup3
[root@master ~]# pcs node unstandby backup3
[root@master ~]# pcs node standby backup4
[root@master ~]# pcs node unstandby backup4
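Between each standby/unstandby pair it is worth confirming where the resources land and that the service still answers on the floating IP; a minimal check sequence, assuming nginx listens on port 80:
[root@master ~]# pcs node standby master
[root@master ~]# pcs status resources
[root@master ~]# curl -I http://192.168.10.222/
[root@master ~]# pcs node unstandby master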
XVII. Cluster management
1. Check the cluster status
[root@master ~]# pcs status
[root@master ~]# pcs status cluster
[root@master ~]# pcs status corosync
2. Start the cluster
[root@master ~]# pcs cluster start --all
3. Stop the cluster
[root@master ~]# pcs cluster stop [master]
4. Put a node into standby (and back)
[root@master ~]# pcs node standby master
[root@master ~]# pcs node unstandby master
5. Bring an application back up (clear its failed state so it can restart)
[root@master ~]# pcs resource cleanup MySQL
[root@master ~]# pcs resource cleanup Nginx
[root@master ~]# pcs resource cleanup PHP-FPM
[root@master ~]# pcs resource cleanup fip
6. Check the resource score values
[root@master ~]# crm_simulate -sL
7. Destroy the cluster; with [--all] it is destroyed on every node and the corosync.conf file is reset
[root@master ~]# pcs cluster destroy [--all]
8. Clear the state and failure count of a given resource
[root@master ~]# pcs resource cleanup fip
9. Clear the state and failure count of a fence resource
[root@master ~]# pcs stonith cleanup vmware-fencing fip