0155.K Installing a Highly Available K8S Cluster with kubeadm (1/2)
(1/2) Deploy a multi-master, multi-worker Kubernetes cluster with kubeadm via a kubeadm-config.yaml file, and use a VIP provided by haproxy and keepalived to make the masters highly available.
0. ENV
0.1 Software versions
CentOS 7.6;
docker 20.10.14;
kubernetes 1.23.2 (kubelet, kubeadm, kubectl).
0.2 High availability architecture diagram
(Diagram omitted here: a VIP, 192.168.80.134, managed by keepalived and fronted by HAProxy on the three master nodes, load-balances the kube-apiservers on port 16443; workers and clients reach the control plane through this VIP.)
0.3 Cluster configuration
Official minimum requirements:
A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for some distributions without a package manager;
2 GB or more of RAM per machine (less will leave little room for your applications);
2 or more CPU cores;
Full network connectivity between all machines in the cluster (public or private network is fine);
Certain ports open on the machines, e.g. 6443;
Swap disabled. You must disable swap for kubelet to work properly.
Per the official reference, a minimum of 2 machines (2C/2G/50G) is recommended for building a Kubernetes cluster.
This deployment uses 5 hosts to build a highly available cluster: 3 control plane nodes (masters) and 2 worker nodes, each sized 16C/32G/500G.
No. | IP | Hostname | Notes |
1 | 192.168.80.135 | rh-master01 | master node 1 |
2 | 192.168.80.136 | rh-master02 | master node 2 |
3 | 192.168.80.137 | rh-master03 | master node 3 |
4 | 192.168.80.138 | rh-node01 | worker node 1 |
5 | 192.168.80.139 | rh-node02 | worker node 2 |
6 | 192.168.80.134 | rh-master-lb | VIP, load balancing across the master nodes |
Unless otherwise noted, the operations below are performed on all nodes.
1. Environment initialization
1) Check the operating system version
Installing a Kubernetes cluster this way requires CentOS 7.5 or later:
[root@rh-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
2) Hostname resolution
Set the hostname on each host, e.g.:
hostnamectl set-hostname rh-master01
To make it easy for the cluster nodes to reach each other later, configure hostname resolution here; in enterprise environments a DNS server is recommended. Append the following entries to /etc/hosts:
tee -a /etc/hosts <<- EOF
192.168.80.134 rh-master-lb.rundba.com rh-master-lb
192.168.80.135 rh-master01.rundba.com rh-master01
192.168.80.136 rh-master02.rundba.com rh-master02
192.168.80.137 rh-master03.rundba.com rh-master03
192.168.80.138 rh-node01.rundba.com rh-node01
192.168.80.139 rh-node02.rundba.com rh-node02
EOF
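As an optional sanity check (not part of the original steps), you can ping each name once to confirm the new /etc/hosts entries resolve; rh-master-lb (the VIP) will only answer after keepalived is set up later:
for h in rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do
    ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h unreachable"
done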
3) Time synchronization
Kubernetes requires the clocks of all nodes in the cluster to be consistent. chronyd is used here for synchronization; in production, running your own time server is recommended.
Check the time zone:
[root@rh-master01 ~]# timedatectl | grep "Time zone"
Time zone: Asia/Shanghai (CST, +0800)
If the time zone is wrong, set it to UTC+8:
timedatectl set-timezone Asia/Shanghai
Install chrony:
yum -y install chrony
Edit /etc/chrony.conf, replace the default pool servers with the Aliyun NTP servers, and confirm:
[root@rh-master01 ~]# grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
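If, as suggested above, you run your own time server in production, the clients only need to point at it instead of the Aliyun servers. A minimal sketch, with ntp.internal.example.com as a placeholder hostname:
# /etc/chrony.conf on every cluster node
server ntp.internal.example.com iburst
# /etc/chrony.conf on the internal time server, allowing the cluster subnet to sync from it
allow 192.168.80.0/24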
Start and enable the service:
systemctl enable chronyd --now
Write the system time to the hardware clock:
hwclock -w
Check the current system time and the hardware clock:
date && hwclock -r
Check time synchronization; a ^* in front of a source means it is the NTP server currently being synced from:
[root@rh-master01 ~]# chronyc sources -v
210 Number of sources = 2
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.aliyun.com 3 6 37 14 +1144us[+1669us] +/- 44ms
^+ ntp2.aliyun.com 3 6 37 14 -1946us[-1946us] +/- 44ms
4) Disable the iptables and firewalld services
Kubernetes and docker generate large numbers of iptables rules at runtime. To keep the system's own firewall rules from getting mixed up with them, disable the system firewall outright.
systemctl disable firewalld --now
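If your environment does not allow turning the firewall off, an alternative (not used in this guide) is to keep firewalld and open the ports Kubernetes needs; the list below follows the upstream port requirements, plus the 16443 haproxy frontend used later in this setup:
# control plane nodes
firewall-cmd --permanent --add-port=6443/tcp          # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
firewall-cmd --permanent --add-port=10250/tcp         # kubelet
firewall-cmd --permanent --add-port=10257/tcp         # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp         # kube-scheduler
firewall-cmd --permanent --add-port=16443/tcp         # haproxy frontend (this HA setup)
# worker nodes
firewall-cmd --permanent --add-port=10250/tcp         # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
firewall-cmd --reload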
5) Disable SELinux
SELinux is a Linux security subsystem; disabling it is recommended here.
Temporarily disable SELinux:
[root@rh-master01 ~]# getenforce
Enforcing
[root@rh-master01 ~]# setenforce 0
[root@rh-master01 ~]# getenforce
Permissive
Permanently disable SELinux by editing the config file (takes effect after a reboot):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Check the config file to confirm the change:
[root@rh-master01 ~]# grep SELINUX= /etc/selinux/config
# SELINUX= can take one of these three values:
SELINUX=disabled
6) Disable the swap partition
Turn off swap for the current boot (it will come back after a reboot unless the permanent change below is also made):
[root@rh-master01 ~]# swapoff -a
[root@rh-master01 ~]# swapon -s
[root@rh-master01 ~]# free -h
total used free shared buff/cache available
Mem: 31G 281M 30G 11M 235M 30G
Swap: 0B 0B 0B
Permanently disable swap:
Also comment out the swap line in /etc/fstab:
#/dev/mapper/centos-swap swap swap defaults 0 0
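If you prefer not to edit the file by hand, a commonly used one-liner that comments out every line mentioning swap (double-check /etc/fstab afterwards) is:
sed -i 's/.*swap.*/#&/' /etc/fstab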
Disable it at the kernel level (once set in sysctl.conf, it does not need to be repeated in k8s.conf):
echo vm.swappiness=0 >> /etc/sysctl.conf
sysctl -p
7) Pass bridged IPv4 traffic to iptables chains
On every node, enable passing bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the br_netfilter module
modprobe br_netfilter
# Verify that it is loaded
lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
# Apply the settings
sysctl --system
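modprobe only loads the module for the current boot. To have br_netfilter loaded automatically after every reboot (via systemd's modules-load mechanism) and to confirm the sysctl values took effect, something like this can be used:
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward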
8) Enable ipvs
Kubernetes services have two proxy modes, one based on iptables and one based on ipvs. ipvs performs better than iptables, but using it requires loading the ipvs kernel modules by hand.
Install ipset and ipvsadm on every node:
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make the file executable, run it, and check that the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Check again that the modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
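Note: nf_conntrack_ipv4 only exists on older kernels. On kernel 4.19 and newer (for example after the optional kernel upgrade in step 12), the module is simply nf_conntrack, so the module file would need to look like this instead:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF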
9) Configure limits on all nodes
For the current session:
ulimit -SHn 65536
Permanently (append to /etc/security/limits.conf):
cat >> /etc/security/limits.conf << EOF
# k8s add
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
EOF
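limits.conf is applied at login, so the values can be verified from a new session (note that files under /etc/security/limits.d/ on CentOS 7 may still override nproc):
ulimit -Sn    # soft nofile, should now be 65536
ulimit -Hn    # hard nofile, should now be 65536
ulimit -Su    # soft nproc
ulimit -Hu    # hard nproc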
10) Set up passwordless SSH from rh-master01 to the other nodes
Generate the key pair on rh-master01 and distribute the public key to the other nodes.
# Just press Enter at every prompt
ssh-keygen -t rsa
Copy the public key to the other hosts:
for i in rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Verify that passwordless SSH to the other hosts works:
[root@rh-master01 ~]# for i in rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do ssh $i hostname;done
rh-master01
rh-master02
rh-master03
rh-node01
rh-node02
11) Upgrade the system on all nodes and reboot (optional)
Upgrade the system on all nodes and reboot; the kernel is excluded here:
yum -y --exclude=kernel* update && reboot
12) Kernel configuration (optional)
12.1) Check the current kernel
Check the current kernel:
[root@rh-master01 ~]# uname -r
3.10.0-957.el7.x86_64
12.2) Upgrade the kernel
On CentOS 7 the kernel should be upgraded to 4.18+.
a) Enable the ELRepo repository on CentOS 7
[root@rh-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install elrepo-release:
[root@rh-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:elrepo-release-7.0-4.el7.elrepo ################################# [100%]
b) With the repository enabled, list the available kernel-related packages:
[root@rh-master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
elrepo-kernel | 3.0 kB 00:00:00
elrepo-kernel/primary_db | 2.1 MB 00:00:01
Available Packages
elrepo-release.noarch 7.0-5.el7.elrepo elrepo-kernel
kernel-lt.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-devel.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-doc.noarch 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-headers.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-tools.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs-devel.x86_64 5.4.188-1.el7.elrepo elrepo-kernel
kernel-ml.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-devel.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-doc.noarch 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-headers.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs-devel.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
perf.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
python-perf.x86_64 5.17.1-1.el7.elrepo elrepo-kernel
c) Install the latest mainline kernel:
[root@rh-master01 ~]# yum -y --enablerepo=elrepo-kernel install kernel-ml
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.ustc.edu.cn
* elrepo: mirrors.tuna.tsinghua.edu.cn
* elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
elrepo | 3.0 kB 00:00:00
elrepo/primary_db | 435 kB 00:00:01
Resolving Dependencies
--> Running transaction check
---> Package kernel-ml.x86_64 0:5.17.1-1.el7.elrepo will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Installing:
kernel-ml x86_64 5.17.1-1.el7.elrepo elrepo-kernel 56 M
Transaction Summary
=======================================================================================================================================================================
Install 1 Package
Total download size: 56 M
Installed size: 255 M
Downloading packages:
kernel-ml-5.17.1-1.el7.elrepo.x86_64.rpm | 56 MB 00:00:20
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : kernel-ml-5.17.1-1.el7.elrepo.x86_64 1/1
Verifying : kernel-ml-5.17.1-1.el7.elrepo.x86_64 1/1
Installed:
kernel-ml.x86_64 0:5.17.1-1.el7.elrepo
Complete!
d) Set the default kernel in GRUB
vim /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0 # change GRUB_DEFAULT=saved to 0 so the first kernel in the GRUB menu becomes the default
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
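An equivalent approach, if you prefer to leave GRUB_DEFAULT=saved, is to pick the default entry with grub2-set-default and confirm it with grub2-editenv:
grub2-set-default 0    # make the first menu entry (the new kernel) the default
grub2-editenv list     # show the saved default entry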
e) Regenerate the GRUB configuration
[root@rh-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.17.1-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.17.1-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-798f08d79bbe4b428bfb56ea5272a098
Found initrd image: /boot/initramfs-0-rescue-798f08d79bbe4b428bfb56ea5272a098.img
done
f) Reboot to apply the new kernel
reboot
g) Check the new kernel version
[root@rh-master01 ~]# uname -sr
Linux 5.17.1-1.el7.elrepo.x86_64
2. Install Docker, kubeadm, kubelet and kubectl on every node
2.1 Install Docker
1) Install Docker
Use the Aliyun docker repository:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
List the available docker versions:
yum list docker-ce --showduplicates
Install the latest version:
yum -y install docker-ce
If you need a specific version instead, install it with the version number:
yum install -y docker-ce-20.10.13-3.el7 docker-ce-cli-20.10.13-3.el7 containerd.io
Enable docker to start at boot and start it now:
systemctl enable docker && systemctl restart docker
Check the version:
docker version
Make sure the service is running and inspect the daemon configuration:
systemctl start docker
docker info
2) Configure a Docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://du3ia00u.mirror.aliyuncs.com"],
"live-restore": true,
"log-driver":"json-file",
"log-opts": {"max-size":"500m", "max-file":"3"},
"storage-driver": "overlay2"
}
EOF
The mirror URL can also be replaced with:
https://b9pmyelo.mirror.aliyuncs.com
Reload the service configuration:
sudo systemctl daemon-reload
Restart the docker service:
sudo systemctl restart docker
2.2 Add the Aliyun YUM repository
The upstream Kubernetes repositories are hosted abroad and are very slow, so switch to the domestic Aliyun mirror:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.3 Install kubeadm, kubelet and kubectl
1) Check which versions are available:
yum list kubelet --showduplicates | sort -r
2) Install kubelet, kubeadm and kubectl
The latest version (1.23.5 as of 2022-03-30) could be installed like this (not done in this guide):
yum install -y kubelet kubeadm kubectl
Here a specific version is installed, to be upgraded later:
[root@rh-master01 ~]# yum -y install kubelet-1.23.2 kubeadm-1.23.2 kubectl-1.23.2
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.ustc.edu.cn
* elrepo: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.23.2-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.23.2-0.x86_64
---> Package kubectl.x86_64 0:1.23.2-0 will be installed
---> Package kubelet.x86_64 0:1.23.2-0 will be installed
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Installing:
kubeadm x86_64 1.23.2-0 kubernetes 9.0 M
kubectl x86_64 1.23.2-0 kubernetes 9.5 M
kubelet x86_64 1.23.2-0 kubernetes 21 M
Installing for dependencies:
kubernetes-cni x86_64 0.8.7-0 kubernetes 19 M
Transaction Summary
=======================================================================================================================================================================
Install 3 Packages (+1 Dependent package)
Total download size: 58 M
Installed size: 261 M
Downloading packages:
(1/4): 467629e304b29edc810caf1284ed9e0f7f32066b99aa08e1f8438e3814d45b1a-kubeadm-1.23.2-0.x86_64.rpm | 9.0 MB 00:00:16
(2/4): 3573b1aa29bf52185d789ec7ba9835307211b3d4b70c92c0ad96423c0ce1aa4f-kubectl-1.23.2-0.x86_64.rpm | 9.5 MB 00:00:16
(3/4): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm | 19 MB 00:00:31
(4/4): 0714477a6941499ce3d594cd8e0c440493770bd25b36efdd4ec88eadff25c2ea-kubelet-1.23.2-0.x86_64.rpm | 21 MB 00:00:34
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.1 MB/s | 58 MB 00:00:50
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : kubernetes-cni-0.8.7-0.x86_64 1/4
Installing : kubelet-1.23.2-0.x86_64 2/4
Installing : kubectl-1.23.2-0.x86_64 3/4
Installing : kubeadm-1.23.2-0.x86_64 4/4
Verifying : kubectl-1.23.2-0.x86_64 1/4
Verifying : kubelet-1.23.2-0.x86_64 2/4
Verifying : kubeadm-1.23.2-0.x86_64 3/4
Verifying : kubernetes-cni-0.8.7-0.x86_64 4/4
Installed:
kubeadm.x86_64 0:1.23.2-0 kubectl.x86_64 0:1.23.2-0 kubelet.x86_64 0:1.23.2-0
Dependency Installed:
kubernetes-cni.x86_64 0:0.8.7-0
Complete!
3) Set the docker and kubelet cgroup driver
Kubernetes recommends systemd instead of cgroupfs. To make the cgroup driver used by kubelet match the one used by Docker, edit /etc/sysconfig/kubelet as follows:
vim /etc/sysconfig/kubelet # edit the file so that it contains:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
Just enable kubelet at boot for now; since its configuration has not been generated yet, it will start automatically once the cluster is initialized:
systemctl enable kubelet
Check which cgroup driver docker is currently using:
docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
2.4 Install the high availability components
Note: if this is not a highly available cluster, haproxy and keepalived do not need to be installed.
1) Install HAProxy and keepalived with yum on rh-master01, rh-master02 and rh-master03
yum -y install keepalived haproxy
2) Configure HAProxy on rh-master01, rh-master02 and rh-master03
Back up the config file:
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
Edit the config file:
vim /etc/haproxy/haproxy.cfg # replace the original contents with the following
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s
defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s
frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin
frontend rh-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend rh-master
backend rh-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    # adjust the server entries below to match your environment
    server rh-master01 192.168.80.135:6443 check
    server rh-master02 192.168.80.136:6443 check
    server rh-master03 192.168.80.137:6443 check
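Before starting the service, the configuration can be validated; haproxy's -c flag only checks the file and exits:
haproxy -c -f /etc/haproxy/haproxy.cfg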
3) Configure keepalived on rh-master01
Back up the config file:
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
Edit the config file:
vim /etc/keepalived/keepalived.conf # replace the original contents with the following
! Configuration File for keepalived
global_defs {
    ## string identifying this node, usually the hostname
    router_id rh-master01
    script_user root
    enable_script_security
}
## Health-check script
## keepalived runs the script periodically and adjusts the vrrp_instance priority based on the result:
## if the script exits 0 and weight is > 0, the priority is increased; if it exits non-zero and weight is < 0,
## the priority is decreased; in all other cases the priority stays at the value configured with "priority".
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    # check every 2 seconds
#    interval 2
#    # if the check fails, lower the priority by 5
#    weight -5
#    fall 3
#    rise 2
#}
## Define the virtual router; VI_1 is its identifier, name it as you like
vrrp_instance VI_1 {
    ## the primary node is MASTER, the backup nodes are BACKUP
    state MASTER
    ## network interface the virtual IP is bound to, the same interface as this host's own IP
    interface ens33
    # this host's IP address
    mcast_src_ip 192.168.80.135
    # virtual router id
    virtual_router_id 100
    ## node priority, range 0-254; MASTER must be higher than BACKUP
    priority 150
    ## nopreempt on the higher-priority node keeps it from grabbing the VIP back after recovering from a failure
    nopreempt
    ## advertisement interval; must be identical on all nodes, default 1s
    advert_int 2
    ## authentication; must be identical on all nodes
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    ## virtual IP pool; must be identical on all nodes
    virtual_ipaddress {
        ## virtual IP, more than one may be defined
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}
4) Configure keepalived on rh-master02
Back up the config file:
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
Edit the config file:
vim /etc/keepalived/keepalived.conf # replace the original contents with the following
! Configuration File for keepalived
global_defs {
    router_id rh-master02
    script_user root
    enable_script_security
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3
#    rise 2
#}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.80.136
    virtual_router_id 100
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}
5) Configure keepalived on rh-master03
Back up the config file:
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
Edit the config file:
vim /etc/keepalived/keepalived.conf # replace the original contents with the following
! Configuration File for keepalived
global_defs {
    router_id rh-master03
    script_user root
    enable_script_security
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3
#    rise 2
#}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.80.137
    virtual_router_id 100
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}
6) Create the health-check script on rh-master01, rh-master02 and rh-master03 and make it executable (if vrrp_script chk_apiserver is left commented out in keepalived.conf, this script is not needed; in that case the track_script block that references chk_apiserver should also be commented out, otherwise keepalived will complain about an unknown script).
Create the script:
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Make it executable:
chmod u+x /etc/keepalived/check_apiserver.sh
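A quick way to check the script for shell syntax errors without actually running it (running it before the cluster exists would stop keepalived) is:
bash -n /etc/keepalived/check_apiserver.sh && echo "syntax OK"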
7) Start haproxy and keepalived on rh-master01, rh-master02 and rh-master03
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
Test the VIP (virtual IP):
[root@rh-master01 ~]# ping -c 2 192.168.80.134
PING 192.168.80.134 (192.168.80.134) 56(84) bytes of data.
64 bytes from 192.168.80.134: icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from 192.168.80.134: icmp_seq=2 ttl=64 time=0.053 ms
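To see which master currently holds the VIP and to confirm haproxy is listening on 16443 (assuming the ens33 interface name from the keepalived configs):
ip addr show ens33 | grep 192.168.80.134
ss -lntp | grep 16443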
This post is meant for discussion; corrections and suggestions are welcome.