
0155.K Installing a Highly Available K8S Cluster with kubeadm (1/2)

(1/2) Deploy a multi-master, multi-worker Kubernetes cluster with kubeadm via a kubeadm-config.yaml file, using a VIP provided by haproxy and keepalived for master high availability.



0. ENV




0.1 Software Versions

CentOS 7.6;

docker 20.10.14;

kubernetes 1.23.2 (kubelet, kubeadm, kubectl).


0.2 High-Availability Architecture Diagram



[Figure: high-availability architecture diagram]


0.3 Cluster Configuration

Official minimum requirements:

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as some distributions that do not provide a package manager;

  • 2 GB or more RAM per machine (any less leaves little room for your applications);

  • 2 or more CPU cores;

  • Full network connectivity between all machines in the cluster (a public or a private network is fine);

  • Certain open ports on each machine, such as 6443 (see the port check sketch after this list);

  • Swap disabled. You MUST disable swap for the kubelet to work properly.
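As a quick illustration of the port requirement, here is a minimal sketch (not part of the original steps) that spot-checks typical control-plane ports from another node; it uses bash's built-in /dev/tcp so no extra tools are needed, and the target IP and port list are examples:

# Spot-check reachability of common control-plane ports on rh-master01
# 6443 = kube-apiserver, 2379/2380 = etcd, 10250 = kubelet
for port in 6443 2379 2380 10250; do
    if timeout 2 bash -c "echo > /dev/tcp/192.168.80.135/$port" 2>/dev/null; then
        echo "port $port open"
    else
        echo "port $port closed"
    fi
done

These ports only answer once the corresponding services are running, so before cluster initialization "closed" is the expected result.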


Per the official reference, a minimum of 2 machines (2C/2G/50G) is recommended for building a Kubernetes cluster.


This deployment uses 5 hosts to build a highly available cluster: 3 control plane nodes (masters) and 2 worker nodes, each sized at 16C/32G/500G.

No.  IP              Hostname      Notes
1    192.168.80.135  rh-master01   master node 1
2    192.168.80.136  rh-master02   master node 2
3    192.168.80.137  rh-master03   master node 3
4    192.168.80.138  rh-node01     worker node 1
5    192.168.80.139  rh-node02     worker node 2
6    192.168.80.134  rh-master-lb  VIP, load-balances across the master nodes


Unless otherwise noted, perform the steps on all nodes.




1. Environment Initialization




1) Check the OS version

Installing a Kubernetes cluster this way requires CentOS 7.5 or later.

[root@rh-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)


2) Hostname resolution

Set the hostname on each host, e.g.:

hostnamectl set-hostname rh-master01


To simplify direct access between cluster nodes later, configure hostname resolution here; in enterprise environments a DNS server is recommended.


tee -a /etc/hosts <<- EOF
192.168.80.134 rh-master-lb.rundba.com rh-master-lb
192.168.80.135 rh-master01.rundba.com rh-master01
192.168.80.136 rh-master02.rundba.com rh-master02
192.168.80.137 rh-master03.rundba.com rh-master03
192.168.80.138 rh-node01.rundba.com rh-node01
192.168.80.139 rh-node02.rundba.com rh-node02
EOF
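To verify the entries, a quick sketch (getent consults /etc/hosts through the normal resolver path):

# Every hostname should print its IP from /etc/hosts
for h in rh-master-lb rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do
    getent hosts $h
done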


3) Time synchronization

Kubernetes requires the time on all cluster nodes to be consistent. chronyd is used for synchronization here; in production, using your own time server is recommended.

Check the time zone:

[root@rh-master01 ~]# timedatectl | grep "Time zone"
       Time zone: Asia/Shanghai (CST, +0800)


If the time zone is wrong, set it to UTC+8:

timedatectl set-timezone Asia/Shanghai


Install the chrony package:

yum -y install chrony


Edit /etc/chrony.conf to use the Aliyun NTP servers, then confirm:

[root@rh-master01 ~]# grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst


Start the service and enable it at boot:

systemctl enable chronyd --now


Write the system time to the hardware clock:

hwclock -w


Check the current system time and the hardware time:

date && hwclock -r


Check synchronization status; the source prefixed with ^* is the NTP server currently being synced from:

[root@rh-master01 ~]# chronyc sources -v
210 Number of sources = 2

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.aliyun.com               3   6    37    14  +1144us[+1669us] +/-   44ms
^+ ntp2.aliyun.com               3   6    37    14  -1946us[-1946us] +/-   44ms
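For a more compact view of the sync state, chronyc tracking can also be used (a supplementary check, not part of the original steps):

chronyc tracking | grep -E "Reference ID|System time|Leap status"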


4) Disable the iptables and firewalld services

Kubernetes and docker generate a large number of iptables rules at runtime. To keep the system's own rules from getting mixed up with them, disable the system firewall outright:

systemctl disable firewalld --now


5) Disable SELinux

SELinux is a security service in Linux; disabling it is recommended here.

Temporarily disable SELinux:

[root@rh-master01 ~]# getenforce
Enforcing
[root@rh-master01 ~]# setenforce 0
[root@rh-master01 ~]# getenforce
Permissive


Permanently disable SELinux by editing its config file (takes effect after a reboot):

[root@rh-master01 ~]# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config


Check the config file to confirm the change:

[root@rh-master01 ~]# grep SELINUX= /etc/selinux/config
# SELINUX= can take one of these three values:
SELINUX=disabled


6) Disable the swap partition

Temporarily disable swap (effective immediately, but lost on reboot):

[root@rh-master01 ~]# swapoff -a
[root@rh-master01 ~]# swapon -s
[root@rh-master01 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            31G        281M         30G         11M        235M         30G
Swap:            0B          0B          0B


Permanently disable swap:

Also comment out the swap line in /etc/fstab (vi /etc/fstab):

#/dev/mapper/centos-swap swap swap defaults 0 0


Disable in the kernel (when this is set in sysctl.conf, it does not need to be repeated in k8s.conf):

echo vm.swappiness=0 >> /etc/sysctl.conf
sysctl -p
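To confirm the setting took effect:

sysctl vm.swappiness    # should print: vm.swappiness = 0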


7) Pass bridged IPv4 traffic to iptables chains

On every node, configure bridged IPv4 traffic to be passed to iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF


# Load the br_netfilter module

[root@rh-master01 ~]# modprobe br_netfilter


# Check that it is loaded

[root@rh-master01 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter


# Apply the settings

sysctl --system
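Note that modprobe does not persist across reboots. As an optional extra step (an addition to the original guide), br_netfilter can be loaded at every boot via systemd-modules-load, which is standard on CentOS 7:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf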


8) Enable IPVS

In Kubernetes, services support two proxy modes: one based on iptables and one based on IPVS. IPVS performs better than iptables, but using it requires loading the IPVS kernel modules manually.
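Once the cluster is up, the mode kube-proxy actually selected can be confirmed through its metrics endpoint (a check for later; it assumes kube-proxy's default metrics bind address of 127.0.0.1:10249):

curl 127.0.0.1:10249/proxyMode    # prints "ipvs" or "iptables"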


Install ipset and ipvsadm on every node, and create the module-loading script:

yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


Set permissions, run the script, and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


Check again that the modules are loaded (note the module names are ip_vs, not ipvs):

lsmod | grep -e ip_vs -e nf_conntrack_ipv4


9) Configure limits on all nodes

Temporary (current session only):

ulimit -SHn 65536


Permanent:

cat >> /etc/security/limits.conf << EOF
# k8s add
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
EOF
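The permanent limits apply to new login sessions; to verify after logging in again:

ulimit -n    # expect 65536
ulimit -u    # expect 4096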


10) Set up passwordless SSH from rh-master01 to the other nodes

Generate a key pair on rh-master01 and distribute the public key to the other nodes.

# Just press Enter at every prompt

ssh-keygen -t rsa


Copy the public key to the other hosts:

for i in rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do ssh-copy-id -i .ssh/id_rsa.pub $i;done


Verify that passwordless SSH to the other hosts works:

[root@rh-master01 ~]# for i in rh-master01 rh-master02 rh-master03 rh-node01 rh-node02; do ssh $i hostname; done
rh-master01
rh-master02
rh-master03
rh-node01
rh-node02


11) Upgrade the system on all nodes and reboot (optional)

Upgrade the system on all nodes and reboot; the kernel is excluded here:

yum -y --exclude=kernel* update && reboot


12) Kernel configuration (optional)

12.1) Check the default kernel

Check the default kernel:

[root@rh-master01 ~]# uname -r
3.10.0-957.el7.x86_64


12.2) Upgrade the kernel

CentOS 7 needs its kernel upgraded to 4.18+.

a) Enable the ELRepo repository on CentOS 7

[root@rh-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org


Install elrepo-release:

[root@rh-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-4.el7.elrepo  ################################# [100%]


b) Once the repository is enabled, list the available kernel packages with:

[root@rh-master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
elrepo-kernel                                        | 3.0 kB  00:00:00
elrepo-kernel/primary_db                             | 2.1 MB  00:00:01
Available Packages
elrepo-release.noarch                 7.0-5.el7.elrepo       elrepo-kernel
kernel-lt.x86_64                      5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-devel.x86_64                5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-doc.noarch                  5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-headers.x86_64              5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-tools.x86_64                5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-tools-libs.x86_64           5.4.188-1.el7.elrepo   elrepo-kernel
kernel-lt-tools-libs-devel.x86_64     5.4.188-1.el7.elrepo   elrepo-kernel
kernel-ml.x86_64                      5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-devel.x86_64                5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-doc.noarch                  5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-headers.x86_64              5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-tools.x86_64                5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-tools-libs.x86_64           5.17.1-1.el7.elrepo    elrepo-kernel
kernel-ml-tools-libs-devel.x86_64     5.17.1-1.el7.elrepo    elrepo-kernel
perf.x86_64                           5.17.1-1.el7.elrepo    elrepo-kernel
python-perf.x86_64                    5.17.1-1.el7.elrepo    elrepo-kernel


c) Install the latest mainline stable kernel:

[root@rh-master01 ~]# yum -y --enablerepo=elrepo-kernel install kernel-ml
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
elrepo                                               | 3.0 kB  00:00:00
elrepo/primary_db                                    |  435 kB  00:00:01
Resolving Dependencies
--> Running transaction check
---> Package kernel-ml.x86_64 0:5.17.1-1.el7.elrepo will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package       Arch      Version                Repository          Size
================================================================================
Installing:
 kernel-ml     x86_64    5.17.1-1.el7.elrepo    elrepo-kernel       56 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 56 M
Installed size: 255 M
Downloading packages:
kernel-ml-5.17.1-1.el7.elrepo.x86_64.rpm             |  56 MB  00:00:20
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : kernel-ml-5.17.1-1.el7.elrepo.x86_64                         1/1
  Verifying  : kernel-ml-5.17.1-1.el7.elrepo.x86_64                         1/1

Installed:
  kernel-ml.x86_64 0:5.17.1-1.el7.elrepo

Complete!


d) Set the default kernel in GRUB

vim /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0   # change GRUB_DEFAULT=saved to 0; the first kernel on the GRUB menu becomes the default
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"


e) Regenerate the GRUB configuration:

[root@rh-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.17.1-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.17.1-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-798f08d79bbe4b428bfb56ea5272a098
Found initrd image: /boot/initramfs-0-rescue-798f08d79bbe4b428bfb56ea5272a098.img
done
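As an optional cross-check before rebooting (grubby ships with CentOS 7), the default boot entry can be printed directly:

grubby --default-kernel    # expect /boot/vmlinuz-5.17.1-1.el7.elrepo.x86_64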


f) Reboot to apply the new kernel:

reboot


g) Check the new kernel version:

[root@rh-master01 ~]# uname -sr
Linux 5.17.1-1.el7.elrepo.x86_64




2. Install Docker, kubeadm, kubelet, and kubectl on Every Node



2.1 Install Docker

1) Install Docker

Use the Aliyun docker repository:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo


List the docker versions available in the repository:

yum list docker-ce --showduplicates


Install the latest version:

yum -y install docker-ce


-- If a specific version is needed instead, pin the version number:

yum install -y docker-ce-20.10.13-3.el7 docker-ce-cli-20.10.13-3.el7 containerd.io


Start the docker service and enable it at boot:

systemctl enable docker && systemctl restart docker


Check the version:

docker version
systemctl start docker
docker info


2) Configure a Docker registry mirror

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://du3ia00u.mirror.aliyuncs.com"],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"},
  "storage-driver": "overlay2"
}
EOF


The mirror URL can also be replaced with:

https://b9pmyelo.mirror.aliyuncs.com


Reload the service configuration:

sudo systemctl daemon-reload


Restart the docker service:

sudo systemctl restart docker


2.2 Add the Aliyun YUM Repository

The Kubernetes package sources are hosted abroad and are very slow, so switch to the domestic Aliyun mirror:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
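A quick sanity check that the new repository is visible (supplementary to the original steps):

yum repolist | grep kubernetes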


2.3 Install kubeadm, kubelet, and kubectl

1) List the supported versions:

yum list kubelet --showduplicates | sort -r


2) Install kubelet, kubeadm, and kubectl

To install the latest version (1.23.5 as of 2022-03-30; this deployment does not install the latest):

yum install -y kubelet kubeadm kubectl


This deployment pins a specific version, to be upgraded later:

[root@rh-master01 ~]# yum -y install kubelet-1.23.2 kubeadm-1.23.2 kubectl-1.23.2
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.23.2-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.23.2-0.x86_64
---> Package kubectl.x86_64 0:1.23.2-0 will be installed
---> Package kubelet.x86_64 0:1.23.2-0 will be installed
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package             Arch       Version       Repository       Size
================================================================================
Installing:
 kubeadm             x86_64     1.23.2-0      kubernetes       9.0 M
 kubectl             x86_64     1.23.2-0      kubernetes       9.5 M
 kubelet             x86_64     1.23.2-0      kubernetes        21 M
Installing for dependencies:
 kubernetes-cni      x86_64     0.8.7-0       kubernetes        19 M

Transaction Summary
================================================================================
Install  3 Packages (+1 Dependent package)

Total download size: 58 M
Installed size: 261 M
Downloading packages:
(1/4): 467629e304b29edc810caf1284ed9e0f7f32066b99aa08e1f8438e3814d45b1a-kubeadm-1.23.2-0.x86_64.rpm  | 9.0 MB  00:00:16
(2/4): 3573b1aa29bf52185d789ec7ba9835307211b3d4b70c92c0ad96423c0ce1aa4f-kubectl-1.23.2-0.x86_64.rpm  | 9.5 MB  00:00:16
(3/4): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm |  19 MB  00:00:31
(4/4): 0714477a6941499ce3d594cd8e0c440493770bd25b36efdd4ec88eadff25c2ea-kubelet-1.23.2-0.x86_64.rpm  |  21 MB  00:00:34
--------------------------------------------------------------------------------
Total                                                 1.1 MB/s |  58 MB  00:00:50
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubernetes-cni-0.8.7-0.x86_64                                1/4
  Installing : kubelet-1.23.2-0.x86_64                                      2/4
  Installing : kubectl-1.23.2-0.x86_64                                      3/4
  Installing : kubeadm-1.23.2-0.x86_64                                      4/4
  Verifying  : kubectl-1.23.2-0.x86_64                                      1/4
  Verifying  : kubelet-1.23.2-0.x86_64                                      2/4
  Verifying  : kubeadm-1.23.2-0.x86_64                                      3/4
  Verifying  : kubernetes-cni-0.8.7-0.x86_64                                4/4

Installed:
  kubeadm.x86_64 0:1.23.2-0    kubectl.x86_64 0:1.23.2-0    kubelet.x86_64 0:1.23.2-0

Dependency Installed:
  kubernetes-cni.x86_64 0:0.8.7-0

Complete!


3) docker and kubelet cgroup driver settings

Kubernetes recommends systemd over cgroupfs as the cgroup driver. To keep the cgroup driver used by Docker and the one used by kubelet consistent, edit the "/etc/sysconfig/kubelet" file:

vim /etc/sysconfig/kubelet
# Edit the file so it contains:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"


Just enable kubelet at boot; since no configuration file has been generated yet, it will start automatically after cluster initialization:

systemctl enable kubelet


Check the cgroup driver Docker is currently using:

[root@rh-master01 ~]# docker info | grep -i "Cgroup Driver"
 Cgroup Driver: systemd


2.4 Install the High-Availability Components

Note: if the cluster is not highly available, haproxy and keepalived do not need to be installed.

1) Install HAProxy and Keepalived via yum on rh-master01, rh-master02, and rh-master03

yum -y install keepalived haproxy


2) Configure HAProxy on rh-master01, rh-master02, and rh-master03

Back up the configuration file:

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig


Edit the configuration file:

vim /etc/haproxy/haproxy.cfg
# Replace the original contents with the following
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind *:8006
  mode http
  stats enable
  stats hide-version
  stats uri /stats
  stats refresh 30s
  stats realm Haproxy\ Statistics
  stats auth admin:admin

frontend rh-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend rh-master

backend rh-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  # Adjust the following to match your environment
  server rh-master01 192.168.80.135:6443 check
  server rh-master02 192.168.80.136:6443 check
  server rh-master03 192.168.80.137:6443 check
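Before starting the service, the file can be validated in check mode (a supplementary step; -c only parses the configuration and reports errors):

haproxy -c -f /etc/haproxy/haproxy.cfg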


3) Configure Keepalived on rh-master01

Back up the configuration file:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig


Edit the configuration file:

vim /etc/keepalived/keepalived.conf
# Replace the original contents with the following
! Configuration File for keepalived
global_defs {
    ## String identifying this node, usually the hostname
    router_id rh-master01
    script_user root
    enable_script_security
}

## Health-check script
## keepalived runs the script periodically and adjusts the priority of the
## vrrp_instance based on the result. If the script exits 0 and weight > 0,
## the priority is increased accordingly; if it exits non-zero and weight < 0,
## the priority is decreased accordingly. Otherwise the priority stays at the
## value set by "priority".
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    # check every 2 seconds
#    interval 2
#    # on failure, reduce the priority by 5
#    weight -5
#    fall 3
#    rise 2
#}

## Virtual router definition; VI_1 is the instance name, choose your own
vrrp_instance VI_1 {
    ## MASTER on the primary node, BACKUP on the backup nodes
    state MASTER
    ## Interface the virtual IP is bound to; same interface as the host IP
    interface ens33
    # Host IP address
    mcast_src_ip 192.168.80.135
    # Virtual router id
    virtual_router_id 100
    ## Node priority, 0-254; MASTER must be higher than BACKUP
    priority 150
    ## nopreempt on the higher-priority node stops it from grabbing the VIP back after recovering from a failure
    nopreempt
    ## Advertisement interval; must be identical on all nodes, default 1s
    advert_int 2
    ## Authentication; must be identical on all nodes
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    ## Virtual IP pool; must be identical on all nodes
    virtual_ipaddress {
        ## Virtual IPs; more than one can be defined
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}


4) Configure Keepalived on rh-master02

Back up the configuration file:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig


Edit the configuration file:

vim /etc/keepalived/keepalived.conf
# Replace the original contents with the following
! Configuration File for keepalived
global_defs {
    router_id rh-master02
    script_user root
    enable_script_security
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3
#    rise 2
#}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.80.136
    virtual_router_id 100
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}


5) Configure Keepalived on rh-master03

Back up the configuration file:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig


Edit the configuration file:

vim /etc/keepalived/keepalived.conf
# Replace the original contents with the following
! Configuration File for keepalived
global_defs {
    router_id rh-master03
    script_user root
    enable_script_security
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3
#    rise 2
#}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.80.137
    virtual_router_id 100
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.80.134/24
    }
    track_script {
        chk_apiserver
    }
}


6) Create the health-check script on rh-master01, rh-master02, and rh-master03 and set its permissions (not needed if vrrp_script chk_apiserver is left commented out in keepalived.conf)

Create the script:

vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi


Make it executable:

chmod u+x /etc/keepalived/check_apiserver.sh
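Keep in mind that the script stops keepalived whenever kube-apiserver is not found for roughly 25 seconds, which is exactly why vrrp_script chk_apiserver stays commented out for now. After the control plane is up, a manual dry run (an illustrative check, not part of the original steps) would look like:

bash /etc/keepalived/check_apiserver.sh; echo "exit code: $?"    # 0 means kube-apiserver is healthy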


7) Start haproxy and keepalived on rh-master01, rh-master02, and rh-master03

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived


Test the VIP (virtual IP):

[root@rh-master01 ~]# ping 192.168.80.134 -c 2    # this IP initially lands on master01
PING 192.168.80.134 (192.168.80.134) 56(84) bytes of data.
64 bytes from 192.168.80.134: icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from 192.168.80.134: icmp_seq=2 ttl=64 time=0.053 ms
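Besides ping, the node currently holding the VIP can be confirmed directly (a supplementary check; assumes interface ens33 as configured above):

ip -4 addr show ens33 | grep 192.168.80.134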



- To be continued -



Written in the spirit of exchange; corrections to any shortcomings are welcome.




 

