K8s Binary Installation (Kubernetes 1.17.4 cluster with keepalived + haproxy high availability)
Part 1: K8s Binary Installation and Deployment
1. Introduction to Kubernetes
Kubernetes (k8s for short) is a container cluster management system open-sourced by Google in June 2014. Written in Go, it manages containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides resource scheduling, deployment management, service discovery, scaling, monitoring, and maintenance as one integrated platform for automatically deploying, scaling, and running application containers across clusters of hosts. It supports a range of container tools, including Docker.
2. Environment Preparation and Parameters
1. Version
This document deploys the Kubernetes 1.17.4 binary release.
2. Cluster parameters
Item | Value |
---|---|
Kubernetes version | v1.17.4 |
Cluster Pod CIDR | 10.244.0.0/16 |
Cluster Service CIDR | 10.250.0.0/24 |
kubernetes service | 10.250.0.1 |
dns service | 10.250.0.10 |
VIP | 192.168.10.50 |
3. Hosts
Hostname | IP | Components | Spec |
---|---|---|---|
k8s-master1 | 192.168.10.51 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 CPU, 2 GB |
k8s-master2 | 192.168.10.52 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 CPU, 2 GB |
k8s-master3 | 192.168.10.53 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 CPU, 2 GB |
k8s-node1 | 192.168.10.60 | kube-proxy, kubelet, docker-ce | 2 CPU, 2 GB |
4. Environment variables
export VIP=192.168.10.50
export MASTER1_HOSTNAME=k8s-master1
export MASTER1_IP=192.168.10.51
export MASTER2_HOSTNAME=k8s-master2
export MASTER2_IP=192.168.10.52
export MASTER3_HOSTNAME=k8s-master3
export MASTER3_IP=192.168.10.53
export NODE1_HOSTNAME=k8s-node1
export NODE1_IP=192.168.10.60
5. /etc/hosts entries (run on all nodes)
cat >> /etc/hosts << EOF
192.168.10.51 k8s-master1
192.168.10.52 k8s-master2
192.168.10.53 k8s-master3
192.168.10.60 k8s-node1
EOF
6. Passwordless SSH (run on k8s-master1)
yum -y install expect
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in 192.168.10.51 192.168.10.52 192.168.10.53 192.168.10.60 k8s-master1 k8s-master2 k8s-master3 k8s-node1;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"Bscadmin@8037\r\"; exp_continue}
\"*Password*\" {send \"Bscadmin@8037\r\";}
} "
done
7. NTP configuration (all nodes)
Option 1:
#Install on all nodes: yum install chrony libibverbs -y
#On the k8s-master1 node
vim /etc/chrony.conf
#Comment out the default servers and add the two lines below
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server k8s-master1 iburst
allow 192.168.10.0/24
Save and exit.
Restart the service:
systemctl enable chronyd.service
systemctl restart chronyd.service
#On every other node
vim /etc/chrony.conf
#Comment out the default servers and add the line below
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server k8s-master1 iburst
Save and exit.
Restart the service:
systemctl enable chronyd.service
systemctl restart chronyd.service
#Verify on all nodes
chronyc sources
Option 2:
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u time1.aliyun.com
hwclock --systohc
timedatectl set-timezone Asia/Shanghai
8. Batch-rename the hosts (run on k8s-master1)
ssh 192.168.10.51 "hostnamectl set-hostname k8s-master1" &&
ssh 192.168.10.52 "hostnamectl set-hostname k8s-master2" &&
ssh 192.168.10.53 "hostnamectl set-hostname k8s-master3" &&
ssh 192.168.10.60 "hostnamectl set-hostname k8s-node1"
Verify connectivity (run on k8s-master1):
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 ; do ssh root@$i "hostname";done
9. Upgrade the kernel (all nodes)
The kernel package is included in the download bundle; upload it to each node and install it locally.
Install:
yum install -y kernel-ml-4.16.13-1.el7.elrepo.x86_64
Boot from the new kernel by default:
grub2-set-default 0
Regenerate grub.cfg so the machine boots the new kernel:
grub2-mkconfig -o /boot/grub2/grub.cfg
10. Tune kernel parameters (all nodes)
cat > /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables=1 #pass bridged traffic to iptables
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1 #enable IP forwarding
vm.swappiness=0 #avoid swapping
vm.overcommit_memory=1 #always allow memory overcommit
vm.panic_on_oom=0 #do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
Then run `sysctl -p` to apply the settings. It may report that several parameters do not exist; this is expected until the required software is installed and the system has been rebooted into the new kernel.
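A common reason `sysctl -p` reports missing parameters here is that the two `net.bridge.*` keys only exist once the br_netfilter kernel module is loaded. A sketch of persisting it across reboots (CentOS 7 convention; the file name is arbitrary):

```
# /etc/modules-load.d/br_netfilter.conf — modules listed here are loaded at boot
br_netfilter
```

Load it immediately with `modprobe br_netfilter`, then re-run `sysctl -p`.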
11. Disable the firewall, SELinux, swap, and NetworkManager (all nodes)
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop NetworkManager
systemctl disable NetworkManager
12. Raise resource limits (all nodes)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
13. Install common packages (all nodes)
yum -y install bridge-utils chrony ipvsadm ipset sysstat conntrack libseccomp wget tcpdump screen vim nfs-utils bind-utils socat telnet sshpass net-tools lrzsz yum-utils device-mapper-persistent-data lvm2 tree nc lsof strace nmon iptraf iftop rpcbind mlocate unzip
14. Load the IPVS kernel modules (all nodes)
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
15. Configure logging (all nodes)
Since CentOS 7 the system boots with systemd, which leaves two logging daemons running side by side. Keep just one; journald is recommended.
mkdir /var/log/journal
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
16. Packages
Many of the required packages need a proxy to download, so everything necessary is provided here:
Link: https://pan.baidu.com/s/1U1eo_322Npb33GzNYAjQAw
Extraction code: 0wyd
3. Install docker-ce
1. Add the yum repo (all nodes)
It is recommended to empty /etc/yum.repos.d/ first.
tee /etc/yum.repos.d/docker-ce.repo <<-'EOF'
[aliyun-docker-ce]
name=aliyun-docker-ce
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
2. Install docker-ce
yum -y install docker-ce
3. Restart docker and enable it at boot
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
4. Configure registry mirrors and the cgroup driver
cat > /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": [
"https://jzphj83k.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Notes:
registry-mirrors lists the registry mirror (accelerator) endpoints.
native.cgroupdriver=systemd selects the systemd cgroup driver, which is what Kubernetes expects; the default is cgroupfs. After editing daemon.json, run `systemctl restart docker` to apply the change.
Once all of the steps above are done, reboot the hosts and verify that the kernel upgrade succeeded.
4. Certificates
Either openssl or cfssl can be used; this guide uses cfssl.
Certificates are a complex field with dedicated infrastructure and operators; the background below is all you need to know here.
PKI basics
What is PKI?
Public Key Infrastructure (PKI) is a set of hardware, software, participants, management policies, and procedures whose purpose is to create, manage, distribute, use, store, and revoke digital certificates. (Adapted from Wikipedia.)
PKI relies on a CA (an authoritative certificate issuing/verification body) to bind a user's identity to a public key, ensuring that every identity is unique. The binding is established through a registration and issuance process, and depending on the assurance level it may be performed by the CA, by software, or under human supervision. The role that establishes this binding is the RA (Registration Authority); the RA guarantees the link between public key and identity, providing non-repudiation and tamper resistance. In Microsoft's PKI the RA is also called a CA, and today most people simply say CA.
PKI components:
From the above, the main components of PKI are: users (the people or organizations using PKI), the certification authority (the CA, which issues certificates), and the repository (the database that stores certificates).
Asymmetric encryption:
All keys in this document are asymmetric key pairs: a public key and a private key that always come in pairs. Data encrypted with one key can only be decrypted with the other, not even with the key that encrypted it; so data encrypted with the public key is decrypted with the private key, and data encrypted with the private key is decrypted with the public key.
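A quick way to see this pairing in action with openssl (file names here are arbitrary; this is only an illustration, not part of the cluster setup):

```shell
# Generate an RSA key pair, then encrypt with the public key and
# decrypt with the private key.
openssl genrsa -out demo.key 2048              # private key
openssl rsa -in demo.key -pubout -out demo.pub # matching public key
echo -n "hello" > msg.txt
openssl pkeyutl -encrypt -pubin -inkey demo.pub -in msg.txt -out msg.enc
openssl pkeyutl -decrypt -inkey demo.key -in msg.enc   # prints: hello
```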
Certificate signing request (CSR):
A CSR is the request file submitted to a CA when applying for a digital certificate. The CSR is not itself a certificate; it is the application sent to the authority for a signed certificate. When a certificate issued by the CA expires, you can reuse the same CSR to request a new one, keeping the same key.
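For illustration, this is what creating a CSR looks like with plain openssl (subject fields are made up; in this guide cfssl does the equivalent from JSON files):

```shell
# Create a key, a CSR for that key, and inspect the CSR's subject.
openssl genrsa -out demo.key 2048
openssl req -new -key demo.key -subj "/CN=etcd/O=example" -out demo.csr
openssl req -in demo.csr -noout -subject   # shows the requested subject
```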
Digital signature:
A digital signature is "asymmetric crypto + a digest algorithm". Its purpose is not confidentiality but non-repudiation and tamper detection. The idea: if A sends data to B, A first computes the data's fingerprint with a digest algorithm, then encrypts that fingerprint with A's private key; the encrypted fingerprint is A's signature. When B receives the data and the signature, B computes the fingerprint with the same digest algorithm, decrypts the signature with A's public key, and compares the two fingerprints. If they match, the data was not tampered with and really came from A. If C wants to alter the data A sent to B, the fingerprint changes, so C would have to forge the signature, which requires A's private key; and if C signs with C's own private key, B cannot open that signature with A's public key. (From the web.)
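The sign/verify flow described above, sketched with openssl (arbitrary file names, purely illustrative):

```shell
# A signs: digest the data, then encrypt the digest with A's private key.
openssl genrsa -out a.key 2048
openssl rsa -in a.key -pubout -out a.pub
echo "payload" > data.txt
openssl dgst -sha256 -sign a.key -out data.sig data.txt
# B verifies with A's public key; prints "Verified OK" if untampered.
openssl dgst -sha256 -verify a.pub -signature data.sig data.txt
```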
Certificate formats:
Certificates come in many formats, such as .pem, .cer, or .crt.
PKI workflow:
The CA issues certificates. A certificate subscriber first applies for one: they register ("this is who I am, this is my organization"), the registration authority forwards a CSR to the CA, and once the CA's checks pass it signs a certificate (the subscriber keeps the private key, and the CA keeps a copy of the public certificate). The subscriber then deploys the certificate on their server.
When a user visits the web server, the server presents its certificate (the public part) to the client, and the client verifies its validity. How does the client check? The CA publishes revoked and expired certificates on CRL servers; a CRL chains together every such certificate, so querying it performs poorly. OCSP was introduced to query the status of a single certificate, and browsers can query an OCSP responder directly, but that is still not very efficient. So web servers such as nginx offer an OCSP switch (OCSP stapling): with it enabled, the server itself queries the OCSP responder, and the many clients can learn the certificate's status directly from the web server.
1. About cfssl
cfssl is a PKI/TLS toolkit written in Go and open-sourced by CloudFlare. The main programs are cfssl, the command-line tool, and cfssljson, which takes the JSON output of cfssl and writes the certificates, keys, CSRs, and bundles to files.
2. Install cfssl (run on k8s-master1)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
cfssl version
3. Generate certificates (everything below runs on k8s-master1)
1. Generate the etcd certificates
1. Create the directories for generating and staging certificates:
mkdir /root/ssl/{etcd,kubernetes} -p
Enter the etcd directory:
cd /root/ssl/etcd/
2. Create the JSON config used to generate the CA
This CA is used only for the etcd certificates.
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
Note: "server auth" means clients can use this CA to verify certificates presented by servers; "client auth" means servers can use it to verify certificates presented by clients. (JSON allows no inline comments, so these notes cannot live inside the file.)
3. Create the JSON config for the CA certificate signing request (CSR)
cat << EOF | tee ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
4. Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Output: omitted.
Check the generated CA certificate and private key.
5. Create the etcd certificate request
cat << EOF | tee etcd-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.10.51",
"192.168.10.52",
"192.168.10.53"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
6. Generate the etcd certificate and private key
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=etcd etcd-csr.json | cfssljson -bare etcd
Output: omitted.
Check all of the generated etcd certificates.
2. Generate the Kubernetes component certificates
1. Switch to the directory where the Kubernetes component certificates are requested and stored:
cd /root/ssl/kubernetes/
2. Create a new CA config file for the Kubernetes cluster components and the admin role
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
3. Create the CA CSR file
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"hosts": [
"127.0.0.1",
"192.168.10.50",
"192.168.10.51",
"192.168.10.52",
"192.168.10.53"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Output: omitted.
Check the created certificate and private key.
4. Client and server certificates
1. Create the admin client CSR file
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:masters",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
2. Generate the admin client certificate and private key
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
Output: omitted.
Check the generated files.
5. Generate the kubelet client certificates
Kubernetes authorizes API requests from kubelets with a special-purpose authorization mode known as the Node Authorizer. To pass it, a kubelet must use a credential whose CN is system:node:<nodeName>, proving that it belongs to the system:nodes group. Create a certificate and private key for every node, masters included. (Reminder: all certificate steps run on k8s-master1, with the environment variables below exported beforehand.)
export VIP=192.168.10.50
export MASTER1_HOSTNAME=k8s-master1
export MASTER1_IP=192.168.10.51
export MASTER2_HOSTNAME=k8s-master2
export MASTER2_IP=192.168.10.52
export MASTER3_HOSTNAME=k8s-master3
export MASTER3_IP=192.168.10.53
export NODE1_HOSTNAME=k8s-node1
export NODE1_IP=192.168.10.60
1. Create the CSR file for the k8s-master1 node
cat << EOF | tee k8s-master1-csr.json
{
"CN": "system:node:${MASTER1_HOSTNAME}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:nodes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key for the k8s-master1 node:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER1_HOSTNAME},${MASTER1_IP} \
-profile=kubernetes \
k8s-master1-csr.json | cfssljson -bare k8s-master1
Output: omitted.
Generated files.
2. Create the CSR file for the k8s-master2 node
cat << EOF | tee k8s-master2-csr.json
{
"CN": "system:node:${MASTER2_HOSTNAME}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:nodes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key for the k8s-master2 node:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER2_HOSTNAME},${MASTER2_IP} \
-profile=kubernetes \
k8s-master2-csr.json | cfssljson -bare k8s-master2
Output: omitted.
Generated files.
3. Create the CSR file for the k8s-master3 node
cat << EOF | tee k8s-master3-csr.json
{
"CN": "system:node:${MASTER3_HOSTNAME}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:nodes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key for the k8s-master3 node:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER3_HOSTNAME},${MASTER3_IP} \
-profile=kubernetes \
k8s-master3-csr.json | cfssljson -bare k8s-master3
Output: omitted.
Generated files.
4. Create the CSR file for the k8s-node1 node
cat << EOF | tee k8s-node1-csr.json
{
"CN": "system:node:${NODE1_HOSTNAME}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:nodes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key for the k8s-node1 node:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${NODE1_HOSTNAME},${NODE1_IP} \
-profile=kubernetes \
k8s-node1-csr.json | cfssljson -bare k8s-node1
Output: omitted.
Generated files.
3. Create the certificates needed by the master components
1. Create the kube-controller-manager client certificate request
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Output: omitted.
Generated files.
2. Create the kube-proxy client certificate request
cat <<EOF |tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:node-proxier",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
Output: omitted.
Generated files.
3. Create the kube-scheduler certificate request
cat <<EOF | tee kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Output: omitted.
Generated files.
4. Create the kubernetes certificate
1. Create the kubernetes API Server CSR file
cat <<EOF | tee kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.10.50",
"192.168.10.51",
"192.168.10.52",
"192.168.10.53",
"10.250.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the kubernetes API Server certificate and private key:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
Output: omitted.
Generated files.
5. Service Account certificate
1. Create the CSR file
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "China",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
Output: omitted.
Generated files.
5. Distribute the etcd certificates
1. Copy the etcd certificates to the matching directory on each node
1. Create the etcd directories
for host in k8s-master1 k8s-master2 k8s-master3;do
echo "---$host---"
ssh root@$host "mkdir /usr/local/etcd/{bin,ssl,data,json,src} -p";
done
2. Copy the etcd certificates
# cd /root/ssl/etcd
Copy, option 1:
scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master1:/usr/local/etcd/ssl/
scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master2:/usr/local/etcd/ssl/
scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master3:/usr/local/etcd/ssl/
Copy, option 2:
for host in k8s-master1 k8s-master2 k8s-master3; \
do
echo "--- $host---"
scp -r *.pem $host:/usr/local/etcd/ssl/
done
6. Create and generate the Kubernetes config files
This chapter creates the kubeconfig files that let Kubernetes clients authenticate to the API Server and be authorized by it.
kubectl is the Kubernetes command-line client. Clusters normally have TLS enabled, so every interaction between kubectl (or any other client) and kube-apiserver must be authenticated. The two most common methods are certificates and tokens; this section briefly shows how a kubectl client accesses the cluster using certificate authentication.
With certificate authentication we usually create a kubeconfig file, which organizes information about clusters, users, namespaces, and authentication mechanisms. kubectl uses the kubeconfig to find the cluster it should talk to and to communicate with kube-apiserver. By default kubectl looks for the config file under ${HOME}/.kube, but you can also set the KUBECONFIG environment variable or pass --kubeconfig on the command line.
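For orientation, the kubeconfig files produced below have roughly this shape (values illustrative, certificate data elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-training
  cluster:
    server: https://192.168.10.50:8443
    certificate-authority-data: <base64 CA cert>
users:
- name: admin
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
contexts:
- name: default
  context:
    cluster: kubernetes-training
    user: admin
current-context: default
```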
1. Client authentication configs
This section creates the kubeconfig files for kube-proxy, kube-controller-manager, kube-scheduler, and kubelet.
2. kubelet config files
To satisfy the Node Authorizer, the client certificate in each kubelet config must match the node's name.
The node names still come from the environment variables set in the certificate section.
Create a kubeconfig for every node (all on k8s-master1).
Generate the files in the same directory used for the Kubernetes component certificates in the previous section.
3. Install kubectl
Download all of the needed binaries in one go:
wget --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
#Or download everything as one archive:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#downloads-for-v1174
#Extract:
tar -zxvf /root/kubernetes-server-linux-amd64-17.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
#Copy the binaries to the other two master nodes:
for host in k8s-master2 k8s-master3; do
echo "---$host---"
scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $host:/usr/local/bin/
done
#Copy the binaries to the worker node:
scp /usr/local/bin/kube{let,-proxy} k8s-node1:/usr/local/bin/
4. Create the config files
1. Create the config file for the k8s-master1 node
cd /root/ssl/kubernetes/
#Without environment variables:
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master1.kubeconfig
kubectl config set-credentials system:node:k8s-master1 \
--client-certificate=k8s-master1.pem \
--client-key=k8s-master1-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master1.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master1 \
--kubeconfig=k8s-master1.kubeconfig
kubectl config use-context default --kubeconfig=k8s-master1.kubeconfig
#Using environment variables:
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
kubectl config set-credentials system:node:${MASTER1_HOSTNAME} \
--client-certificate=${MASTER1_HOSTNAME}.pem \
--client-key=${MASTER1_HOSTNAME}-key.pem \
--embed-certs=true \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:${MASTER1_HOSTNAME} \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
kubectl config use-context default --kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
Generated file.
2. Create the config file for the k8s-master2 node
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master2.kubeconfig
kubectl config set-credentials system:node:k8s-master2 \
--client-certificate=k8s-master2.pem \
--client-key=k8s-master2-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master2.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master2 \
--kubeconfig=k8s-master2.kubeconfig
kubectl config use-context default --kubeconfig=k8s-master2.kubeconfig
Generated file.
3. Create the config file for the k8s-master3 node
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master3.kubeconfig
kubectl config set-credentials system:node:k8s-master3 \
--client-certificate=k8s-master3.pem \
--client-key=k8s-master3-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master3.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master3 \
--kubeconfig=k8s-master3.kubeconfig
kubectl config use-context default --kubeconfig=k8s-master3.kubeconfig
Generated file.
4. Config file for the k8s-node1 node
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-node1.kubeconfig
kubectl config set-credentials system:node:k8s-node1 \
--client-certificate=k8s-node1.pem \
--client-key=k8s-node1-key.pem \
--embed-certs=true \
--kubeconfig=k8s-node1.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-node1 \
--kubeconfig=k8s-node1.kubeconfig
kubectl config use-context default --kubeconfig=k8s-node1.kubeconfig
Generated file.
5. kube-proxy config file
1. Generate the kubeconfig file for the kube-proxy service
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Generated file.
6. kube-controller-manager config file
1. Generate the kubeconfig file for the kube-controller-manager service
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Generated file.
7. kube-scheduler config file
1. Generate the kubeconfig file for the kube-scheduler service
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Generated file.
8. Admin config file
1. Generate the kubeconfig file for admin
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-training \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
Generated file.
Extended notes: the commands annotated (the #comments are explanatory only; don't paste them):
kubectl config set-cluster kubernetes-training \ #set the cluster entry
--certificate-authority=ca.pem \ #path to the cluster root certificate
--embed-certs=true \ #embed the --certificate-authority root certificate into the kubeconfig
--server=https://${VIP}:8443 \ #the endpoint used to reach the cluster
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig #path of the kubeconfig file
kubectl config set-credentials system:node:${MASTER1_HOSTNAME} \ #set the client entry
--client-certificate=${MASTER1_HOSTNAME}.pem \ #certificate for kubectl to use
--client-key=${MASTER1_HOSTNAME}-key.pem \ #private key for kubectl to use
--embed-certs=true \ #embed the client certificate and key into the kubeconfig
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig #path of the kubeconfig file
kubectl config set-context default \ #set the context entry
--cluster=kubernetes-training \ #which cluster entry to use (defined by set-cluster)
--user=system:node:${MASTER1_HOSTNAME} \ #which client entry to use (defined by set-credentials)
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig #path of the kubeconfig file
kubectl config use-context default --kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
Step | Option | Description |
---|---|---|
1. Set the cluster entry | set-cluster | kubectl config subcommand that records cluster information |
--certificate-authority | path to the cluster root certificate | |
--embed-certs | embed the root certificate into the kubeconfig | |
--server | the endpoint used to reach the cluster | |
--kubeconfig | path of the kubeconfig file | |
2. Set the client entry | set-credentials | kubectl config subcommand that records client credentials |
--client-certificate | certificate for kubectl to use | |
--client-key | private key for kubectl to use | |
--embed-certs | embed the client certificate and key into the kubeconfig | |
--kubeconfig | path of the kubeconfig file | |
3. Set the context entry | set-context | kubectl config subcommand that records context parameters |
--cluster | which cluster entry to use (defined by set-cluster) | |
--user | which client entry to use (defined by set-credentials) | |
--kubeconfig | path of the kubeconfig file | |
4. Select the context | use-context | kubectl config subcommand that selects the active context |
7. Create the data encryption key and config
Kubernetes stores many kinds of data: cluster state, application configuration, and Secrets. Kubernetes supports encrypting this cluster data at rest.
This chapter creates the encryption key and an encryption config file used to encrypt Kubernetes Secrets.
All operations run on k8s-master1.
1. Encryption key
1. Create the encryption key:
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
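The aescbc provider used below requires a secret that decodes to exactly 32 bytes; a quick sanity check:

```shell
# Generate the key as above and confirm it decodes back to 32 random bytes.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c   # prints 32
```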
2. Encryption config file
1. Generate the encryption config file named encryption-config.yaml:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
3. Distribute the certificates and config files
Copy the kubelet and kube-proxy kubeconfig files to every worker node.
1. Create the config directories:
for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1 ; do ssh root@$host "mkdir -p \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes " ; done
2. Copy the admin, kube-controller-manager, and kube-scheduler kubeconfig files to every control-plane node
Part one:
Distribute, option 1:
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master1:/var/lib/kubernetes/
scp admin.kubeconfig k8s-master1:~/
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master2:/var/lib/kubernetes/
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master3:/var/lib/kubernetes/
Distribute, option 2:
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "-----$NODE------"
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig $NODE:/var/lib/kubernetes/;
done
Part two:
scp k8s-master1-key.pem k8s-master1.pem k8s-master1:/var/lib/kubelet/
scp k8s-master1.kubeconfig k8s-master1:/var/lib/kubelet/kubeconfig
scp kube-proxy.kubeconfig k8s-master1:/var/lib/kube-proxy/kubeconfig
scp kube-proxy.pem k8s-master1:/var/lib/kube-proxy/
scp kube-proxy-key.pem k8s-master1:/var/lib/kube-proxy/
scp kube-controller-manager-key.pem k8s-master1:/var/lib/kubernetes/kube-controller-manager-key.pem
scp kube-controller-manager.pem k8s-master1:/var/lib/kubernetes/
scp kube-scheduler.pem k8s-master1:/var/lib/kubernetes/kube-scheduler.pem
scp kube-scheduler-key.pem k8s-master1:/var/lib/kubernetes/
scp k8s-master2-key.pem k8s-master2.pem k8s-master2:/var/lib/kubelet/
scp k8s-master2.kubeconfig k8s-master2:/var/lib/kubelet/kubeconfig
scp kube-proxy.kubeconfig k8s-master2:/var/lib/kube-proxy/kubeconfig
scp kube-proxy-key.pem k8s-master2:/var/lib/kube-proxy/
scp kube-proxy.pem k8s-master2:/var/lib/kube-proxy/
scp kube-controller-manager-key.pem k8s-master2:/var/lib/kubernetes/kube-controller-manager-key.pem
scp kube-controller-manager.pem k8s-master2:/var/lib/kubernetes/
scp kube-scheduler.pem k8s-master2:/var/lib/kubernetes/kube-scheduler.pem
scp kube-scheduler-key.pem k8s-master2:/var/lib/kubernetes/
scp k8s-master3-key.pem k8s-master3.pem k8s-master3:/var/lib/kubelet/
scp k8s-master3.kubeconfig k8s-master3:/var/lib/kubelet/kubeconfig
scp kube-proxy.kubeconfig k8s-master3:/var/lib/kube-proxy/kubeconfig
scp kube-proxy-key.pem k8s-master3:/var/lib/kube-proxy/
scp kube-proxy.pem k8s-master3:/var/lib/kube-proxy/
scp kube-controller-manager-key.pem k8s-master3:/var/lib/kubernetes/kube-controller-manager-key.pem
scp kube-controller-manager.pem k8s-master3:/var/lib/kubernetes/
scp kube-scheduler.pem k8s-master3:/var/lib/kubernetes/kube-scheduler.pem
scp kube-scheduler-key.pem k8s-master3:/var/lib/kubernetes/
scp ca.pem k8s-node1:/var/lib/kubernetes/
scp k8s-node1-key.pem k8s-node1.pem k8s-node1:/var/lib/kubelet/
scp k8s-node1.kubeconfig k8s-node1:/var/lib/kubelet/kubeconfig
scp kube-proxy.kubeconfig k8s-node1:/var/lib/kube-proxy/kubeconfig
scp kube-proxy-key.pem k8s-node1:/var/lib/kube-proxy/
scp kube-proxy.pem k8s-node1:/var/lib/kube-proxy/
8. Deploy the etcd cluster
All Kubernetes components are stateless; the entire cluster state is stored in the etcd cluster.
etcd directory layout:
Unpacked packages: /usr/local/etcd/src/
SSL certificate request files: /usr/local/etcd/json/
SSL certificate files: /usr/local/etcd/ssl/
Executables: /usr/local/etcd/bin/
Working (data) directory: /usr/local/etcd/data/
All of these directories were created at the end of the certificate section, and the certificate files have already been copied in.
1. Download and extract the etcd binary package
cd
Option 1:
wget https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
tar -zxvf etcd-v3.4.3-linux-amd64.tar.gz -C /usr/local/etcd/src/
Option 2:
https://github.com/etcd-io/etcd/releases?after=v3.2.28
Upload the downloaded archive, then extract it:
tar -zxvf etcd-v3.4.3-linux-amd64.tar.gz -C /usr/local/etcd/src/
2. Copy the executables to all masters
Copy, option 1:
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcd k8s-master1:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcdctl k8s-master1:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcd k8s-master2:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcdctl k8s-master2:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcd k8s-master3:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcdctl k8s-master3:/usr/local/etcd/bin/
Copy, option 2:
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "----$host----"
scp -p /usr/local/etcd/src/etcd-v3.4.3-linux-amd64/etcd* $host:/usr/local/etcd/bin/;
ssh $host "ln -s /usr/local/etcd/bin/* /usr/bin/";
done
3. Create the etcd systemd unit files
Place an etcd.service unit file in /etc/systemd/system/ on each master.
Unit files for the master nodes:
Configure etcd on master1:
cat << EOF | tee /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
Restart=on-failure
LimitNOFILE=65536
ExecStart=/usr/local/etcd/bin/etcd \\
--name=etcd00 \\
--data-dir=/usr/local/etcd/data/ \\
--listen-peer-urls=https://$MASTER1_IP:2380 \\
--listen-client-urls=https://$MASTER1_IP:2379,https://127.0.0.1:2379 \\
--advertise-client-urls=https://$MASTER1_IP:2379 \\
--initial-advertise-peer-urls=https://$MASTER1_IP:2380 \\
--initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \\
--initial-cluster-token=etcd-cluster \\
--initial-cluster-state=new \\
--cert-file=/usr/local/etcd/ssl/etcd.pem \\
--key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/usr/local/etcd/ssl/etcd.pem \\
--peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/usr/local/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem
[Install]
WantedBy=multi-user.target
EOF
Configure etcd on master2 (written on master1 as etcd2.service, then copied over):
cat << EOF | tee /etc/systemd/system/etcd2.service
[Unit]
Description=Etcd Server
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
Restart=on-failure
LimitNOFILE=65536
ExecStart=/usr/local/etcd/bin/etcd \\
--name=etcd01 \\
--data-dir=/usr/local/etcd/data/ \\
--listen-peer-urls=https://$MASTER2_IP:2380 \\
--listen-client-urls=https://$MASTER2_IP:2379,https://127.0.0.1:2379 \\
--advertise-client-urls=https://$MASTER2_IP:2379 \\
--initial-advertise-peer-urls=https://$MASTER2_IP:2380 \\
--initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \\
--initial-cluster-token=etcd-cluster \\
--initial-cluster-state=new \\
--cert-file=/usr/local/etcd/ssl/etcd.pem \\
--key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/usr/local/etcd/ssl/etcd.pem \\
--peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/usr/local/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem
[Install]
WantedBy=multi-user.target
EOF
scp /etc/systemd/system/etcd2.service k8s-master2:/etc/systemd/system/etcd.service
rm -f /etc/systemd/system/etcd2.service
etcd unit for master3
cat << EOF | tee /etc/systemd/system/etcd3.service
[Unit]
Description=Etcd Server
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
Restart=on-failure
LimitNOFILE=65536
ExecStart=/usr/local/etcd/bin/etcd \\
--name=etcd02 \\
--data-dir=/usr/local/etcd/data/ \\
--listen-peer-urls=https://$MASTER3_IP:2380 \\
--listen-client-urls=https://$MASTER3_IP:2379,https://127.0.0.1:2379 \\
--advertise-client-urls=https://$MASTER3_IP:2379 \\
--initial-advertise-peer-urls=https://$MASTER3_IP:2380 \\
--initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \\
--initial-cluster-token=etcd-cluster \\
--initial-cluster-state=new \\
--cert-file=/usr/local/etcd/ssl/etcd.pem \\
--key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/usr/local/etcd/ssl/etcd.pem \\
--peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/usr/local/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem
[Install]
WantedBy=multi-user.target
EOF
scp /etc/systemd/system/etcd3.service k8s-master3:/etc/systemd/system/etcd.service
rm -f /etc/systemd/system/etcd3.service
4、Start etcd on all masters and enable it at boot
Start method 1 (run on each node separately; the trailing & lets the command return while the first member waits for its peers to join):
systemctl daemon-reload && systemctl enable etcd.service && systemctl restart etcd.service &
Start method 2 (run on k8s-master1):
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "--- $NODE ---"
ssh $NODE "systemctl daemon-reload"
ssh $NODE "systemctl enable --now etcd" &
done
wait
If any node fails to start, start it manually:
systemctl daemon-reload
systemctl restart etcd.service
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "--- $NODE ---"
ssh $NODE "systemctl daemon-reload"
ssh $NODE "systemctl start etcd" &
done
wait
5、Check the etcd cluster health
etcdctl \
--cacert=/usr/local/etcd/ssl/ca.pem \
--cert=/usr/local/etcd/ssl/etcd.pem \
--key=/usr/local/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.10.53:2379,\
https://192.168.10.51:2379,https://192.168.10.52:2379" endpoint health
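Beyond `endpoint health`, `endpoint status` shows which member currently leads. A sketch that rebuilds the endpoint list from the variables defined earlier (the etcdctl call is guarded so the snippet is harmless off-cluster):

```shell
MASTER1_IP=192.168.10.51; MASTER2_IP=192.168.10.52; MASTER3_IP=192.168.10.53
ENDPOINTS="https://$MASTER1_IP:2379,https://$MASTER2_IP:2379,https://$MASTER3_IP:2379"
echo "querying: $ENDPOINTS"

# -w table prints one row per member, including an IS LEADER column.
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl --cacert=/usr/local/etcd/ssl/ca.pem \
          --cert=/usr/local/etcd/ssl/etcd.pem \
          --key=/usr/local/etcd/ssl/etcd-key.pem \
          --endpoints="$ENDPOINTS" endpoint status -w table \
    || echo "cluster not reachable from this host"
fi
```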
When every endpoint reports healthy, the etcd cluster is up.
Further reading:
# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
Restart=on-failure #automatically restart the service when it fails
LimitNOFILE=65536
ExecStart=/usr/local/etcd/bin/etcd \ #path to the binary
--name=etcd00 \ #member name; when --initial-cluster-state=new, the --name value must appear in the --initial-cluster list
--data-dir=/usr/local/etcd/data/ \ #data directory for this member
--listen-peer-urls=https://192.168.10.51:2380 \ #address for traffic from the other cluster members
--listen-client-urls=https://192.168.10.51:2379,https://127.0.0.1:2379 \ #local listen addresses serving clients
--advertise-client-urls=https://192.168.10.51:2379 \ #client URL advertised to the rest of the cluster
--initial-advertise-peer-urls=https://192.168.10.51:2380 \ #this member's peer URL advertised to the other members
--initial-cluster=etcd00=https://192.168.10.51:2380,etcd01=https://192.168.10.52:2380,etcd02=https://192.168.10.53:2380 \ #all members of the cluster
--initial-cluster-token=etcd-cluster \ #cluster token; must be identical across the whole cluster
--initial-cluster-state=new \ #initial cluster state, defaults to new
--cert-file=/usr/local/etcd/ssl/etcd.pem \ #TLS certificate for client-to-server traffic
--key-file=/usr/local/etcd/ssl/etcd-key.pem \ #TLS key for client-to-server traffic
--peer-cert-file=/usr/local/etcd/ssl/etcd.pem \ #TLS certificate for peer traffic
--peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \ #TLS key for peer traffic
--trusted-ca-file=/usr/local/etcd/ssl/ca.pem \ #CA that signed the client certificates, used to verify them
--peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem #CA for peer certificates
[Install]
WantedBy=multi-user.target
Option | Description |
---|---|
wal | write-ahead log directory; every data change is recorded in the WAL before it is committed |
data-dir | member data directory (member ID, cluster ID, initial cluster config, snapshots, etc.); defaults to the current directory if unset |
wal-dir | dedicated write-ahead log directory; defaults to a subdirectory of --data-dir if unset |
name | member name; when --initial-cluster-state=new, this value must appear in the --initial-cluster list |
cert-file | TLS certificate for client-to-server traffic |
key-file | TLS key for client-to-server traffic |
trusted-ca-file | CA that signed the client certificates, used to verify them |
peer-cert-file | TLS certificate for peer (server-to-server) traffic |
peer-key-file | TLS key for peer traffic |
peer-client-cert-auth | enable peer client certificate authentication |
client-cert-auth | enable client certificate authentication |
listen-peer-urls | addresses for traffic from the other cluster members |
initial-advertise-peer-urls | this member's peer URLs advertised to the other members |
listen-client-urls | local listen addresses serving clients |
advertise-client-urls | client URLs advertised to the rest of the cluster |
initial-cluster-token | cluster token; must be identical across the whole cluster |
initial-cluster | all members of the initial cluster |
initial-cluster-state | initial cluster state, defaults to new |
auto-compaction-mode | selects one of the time-based compaction modes |
auto-compaction-retention | retention window for auto compaction, e.g. 1 hour |
max-request-bytes | maximum client request size the server will accept |
quota-backend-bytes | raise an alarm when the backend size exceeds this quota |
heartbeat-interval | heartbeat interval, in milliseconds |
election-timeout | election timeout, in milliseconds |
九: Deploy keepalived + HAProxy
1、Install haproxy and keepalived on all masters
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "--- $NODE ---"
ssh $NODE 'yum install haproxy keepalived -y' &
done
2、Verify the installation on all masters
for NODE in k8s-master1 k8s-master2 k8s-master3;do
echo "--- $NODE ---"
ssh $NODE "rpm -qa|grep haproxy"
ssh $NODE "rpm -qa|grep keepalived"
done
3、Edit the config files on k8s-master1, then distribute them to the other masters
1、Edit the HAProxy configuration
cat << EOF | tee /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
defaults
mode tcp
log global
retries 3
timeout connect 10s
timeout client 1m
timeout server 1m
frontend kubernetes
bind *:8443
mode tcp
option tcplog
default_backend kubernetes-apiserver
backend kubernetes-apiserver
mode tcp
balance roundrobin
server k8s-master1 192.168.10.51:6443 check maxconn 2000
server k8s-master2 192.168.10.52:6443 check maxconn 2000
server k8s-master3 192.168.10.53:6443 check maxconn 2000
EOF
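Before restarting HAProxy it is cheap to parse-check the file first; `haproxy -c` validates the configuration without binding any port. A guarded sketch:

```shell
CFG=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1 && [ -f "$CFG" ]; then
  # -c parses and validates only; nothing is started.
  haproxy -c -f "$CFG" || echo "config has errors - fix before restarting"
else
  echo "skipping: haproxy or $CFG not present on this machine"
fi
```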
Note: edit or add your master nodes on the backend lines at the end. The frontend here binds 8443 rather than the default 6443, because HAProxy shares these hosts with the apiservers, which already occupy port 6443.
2、Edit the keepalived configuration
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
router_id k8s-master1 #unique per node; the hostname is used here
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
fall 10
timeout 9
rise 2
}
vrrp_instance VI_1 {
state MASTER #change to BACKUP on the backup servers
interface ens33 #change to your interface name
virtual_router_id 51
priority 100 #lower than 100 on the backups, e.g. 90 and 80
advert_int 1
mcast_src_ip 192.168.10.51 #this node's IP
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
192.168.10.52 #the other two masters' IPs (everyone but this node)
192.168.10.53
}
virtual_ipaddress {
192.168.10.50 #the virtual VIP; choose your own
}
track_script {
check_haproxy
}
}
EOF
3、Add the keepalived health-check script
cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
A=\`ps -C haproxy --no-header | wc -l\`
if [ \$A -eq 0 ];then
systemctl stop keepalived
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
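The script's logic is simple: if no haproxy process exists, stop keepalived so this node leaves the VRRP election and the VIP fails over. The same logic, factored into a function for illustration (`n` stands in for the `ps -C haproxy --no-header | wc -l` count; the real script calls systemctl instead of echoing):

```shell
haproxy_check() {
  local n=$1   # number of running haproxy processes
  if [ "$n" -eq 0 ]; then
    echo "haproxy down: stop keepalived so the VIP can fail over"
    return 1
  fi
  echo "haproxy up ($n process(es)): keepalived keeps running"
}

haproxy_check 2
haproxy_check 0 || true
```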
4、Distribute the files to the other masters
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "--- $NODE ---"
scp -r /etc/haproxy/haproxy.cfg $NODE:/etc/haproxy/
scp -r /etc/keepalived/keepalived.conf $NODE:/etc/keepalived/
scp -r /etc/keepalived/check_haproxy.sh $NODE:/etc/keepalived/
done
After distribution the k8s-master2 config looks like this (you can also run the following directly on master2):
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
router_id k8s-master2 #unique per node; the hostname is used here
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
fall 10
timeout 9
rise 2
}
vrrp_instance VI_1 {
state BACKUP #BACKUP on the backup servers
interface ens33 #change to your interface name
virtual_router_id 51
priority 90 #lower than 100 on the backups: 90, 80
advert_int 1
mcast_src_ip 192.168.10.52 #this node's IP
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
192.168.10.51 #the other two masters' IPs (everyone but this node)
192.168.10.53
}
virtual_ipaddress {
192.168.10.50 #the virtual VIP; choose your own
}
track_script {
check_haproxy
}
}
EOF
After distribution the k8s-master3 config looks like this (you can also run the following directly on master3):
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
router_id k8s-master3 #unique per node; the hostname is used here
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
fall 10
timeout 9
rise 2
}
vrrp_instance VI_1 {
state BACKUP #BACKUP on the backup servers
interface ens33 #change to your interface name
virtual_router_id 51
priority 80 #lower than 100 on the backups: 90, 80
advert_int 1
mcast_src_ip 192.168.10.53 #this node's IP
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
192.168.10.51 #the other two masters' IPs (everyone but this node)
192.168.10.52
}
virtual_ipaddress {
192.168.10.50 #the virtual VIP; choose your own
}
track_script {
check_haproxy
}
}
EOF
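The 100/90/80 priorities are what make failover deterministic: VRRP gives the VIP to the highest-priority live node, so it moves from master1 to master2 to master3 as nodes fail. A toy illustration of that ordering (keepalived does this election internally; `elect_vip_holder` is ours):

```shell
# Pick the highest-priority node from "name:priority" pairs.
elect_vip_holder() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -1 | cut -d: -f1
}

elect_vip_holder k8s-master1:100 k8s-master2:90 k8s-master3:80   # → k8s-master1
elect_vip_holder k8s-master2:90 k8s-master3:80                   # master1 down → k8s-master2
```

Note that `nopreempt` in the configs means a recovered higher-priority node does not grab the VIP back; the ordering above applies when the current holder dies.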
If the VIP does not come up, keepalived is not running: restart keepalived on each node, or check /etc/keepalived/keepalived.conf to confirm the interface name and IPs were filled in correctly.
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "--- $NODE ---"
ssh $NODE 'systemctl enable --now haproxy keepalived'
ssh $NODE 'systemctl restart haproxy keepalived'
done
systemctl status keepalived
systemctl status haproxy
十: Deploy the master components
1、Enable tab completion for kubectl
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
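The two `source` lines only affect the current shell. To keep completion across logins, append the line to the shell profile exactly once (the `append_once` helper is ours, not part of kubectl):

```shell
append_once() {
  local file=$1 line=$2
  # -x: whole-line match, -F: fixed string; append only when missing.
  grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

append_once "$HOME/.bashrc" 'source <(kubectl completion bash)'
```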
2、Create audit-policy.yaml
cat > /var/lib/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# The following requests were manually identified as high-volume and low-risk, so drop them.
- level: None
resources:
- group: ""
resources:
- endpoints
- services
- services/status
users:
- 'system:kube-proxy'
verbs:
- watch
- level: None
resources:
- group: ""
resources:
- nodes
- nodes/status
userGroups:
- 'system:nodes'
verbs:
- get
- level: None
namespaces:
- kube-system
resources:
- group: ""
resources:
- endpoints
users:
- 'system:kube-controller-manager'
- 'system:kube-scheduler'
- 'system:serviceaccount:kube-system:endpoint-controller'
verbs:
- get
- update
- level: None
resources:
- group: ""
resources:
- namespaces
- namespaces/status
- namespaces/finalize
users:
- 'system:apiserver'
verbs:
- get
# Don't log HPA fetching metrics.
- level: None
resources:
- group: metrics.k8s.io
users:
- 'system:kube-controller-manager'
verbs:
- get
- list
# Don't log these read-only URLs.
- level: None
nonResourceURLs:
- '/healthz*'
- /version
- '/swagger*'
# Don't log events requests.
- level: None
resources:
- group: ""
resources:
- events
# node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- nodes/status
- pods/status
users:
- kubelet
- 'system:node-problem-detector'
- 'system:serviceaccount:kube-system:node-problem-detector'
verbs:
- update
- patch
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- nodes/status
- pods/status
userGroups:
- 'system:nodes'
verbs:
- update
- patch
# deletecollection calls can be large, don't log responses for expected namespace deletions
- level: Request
omitStages:
- RequestReceived
users:
- 'system:serviceaccount:kube-system:namespace-controller'
verbs:
- deletecollection
# Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- secrets
- configmaps
- group: authentication.k8s.io
resources:
- tokenreviews
# Get responses can be large; skip them.
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
- group: admissionregistration.k8s.io
- group: apiextensions.k8s.io
- group: apiregistration.k8s.io
- group: apps
- group: authentication.k8s.io
- group: authorization.k8s.io
- group: autoscaling
- group: batch
- group: certificates.k8s.io
- group: extensions
- group: metrics.k8s.io
- group: networking.k8s.io
- group: policy
- group: rbac.authorization.k8s.io
- group: scheduling.k8s.io
- group: settings.k8s.io
- group: storage.k8s.io
verbs:
- get
- list
- watch
# Default level for known APIs
- level: RequestResponse
omitStages:
- RequestReceived
resources:
- group: ""
- group: admissionregistration.k8s.io
- group: apiextensions.k8s.io
- group: apiregistration.k8s.io
- group: apps
- group: authentication.k8s.io
- group: authorization.k8s.io
- group: autoscaling
- group: batch
- group: certificates.k8s.io
- group: extensions
- group: metrics.k8s.io
- group: networking.k8s.io
- group: policy
- group: rbac.authorization.k8s.io
- group: scheduling.k8s.io
- group: settings.k8s.io
- group: storage.k8s.io
# Default level for all other requests.
- level: Metadata
omitStages:
- RequestReceived
EOF
1、Copy the policy file to the other master nodes
for host in k8s-master2 k8s-master3; do
echo "---$host---"
scp /var/lib/kubernetes/audit-policy.yaml $host:/var/lib/kubernetes/audit-policy.yaml
done
3、Create the kube-apiserver.service unit file
The API server exposes HTTP REST interfaces for creating, deleting, updating, querying, and watching every kind of Kubernetes resource (Pod, RC, Service, and so on); it is the data bus and data hub of the whole system.
Functions of the kube-apiserver:
provides the REST API for cluster management (authentication and authorization, validation, and cluster state changes);
serves as the hub for data exchange and communication between the other components (they query or modify data through the API server, and only the API server operates on etcd directly);
is the entry point for resource quota control;
implements a complete cluster security mechanism.
kube-apiserver architecture diagram: (figure omitted)
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=$VIP \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--feature-gates=DynamicAuditing=true \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--etcd-cafile=/usr/local/etcd/ssl/ca.pem \\
--etcd-certfile=/usr/local/etcd/ssl/etcd.pem \\
--etcd-keyfile=/usr/local/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://$MASTER1_IP:2379,https://$MASTER2_IP:2379,https://$MASTER3_IP:2379 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--insecure-port=0 \\
--audit-dynamic-configuration \\
--audit-log-maxage=15 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-truncate-enabled \\
--audit-log-path=/var/log/audit.log \\
--audit-policy-file=/var/lib/kubernetes/audit-policy.yaml \\
--profiling \\
--anonymous-auth=false \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-bootstrap-token-auth \\
--requestheader-allowed-names="aggregator" \\
--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
--allow-privileged=true \\
--apiserver-count=3 \\
--event-ttl=168h \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--kubelet-timeout=10s \\
--proxy-client-cert-file=/var/lib/kube-proxy/kube-proxy.pem \\
--proxy-client-key-file=/var/lib/kube-proxy/kube-proxy-key.pem \\
--service-cluster-ip-range=10.250.0.0/16 \\
--service-node-port-range=30000-32767 \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
1、Copy the unit file to the other masters
scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/
scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/
Further reading:
Option | Description |
---|---|
--advertise-address | IP address on which to advertise the apiserver to the members of the cluster; it must be reachable by the rest of the cluster; if empty, --bind-address is used, and if --bind-address is unspecified, the host's default interface is used |
--default-not-ready-toleration-seconds | tolerationSeconds of the notReady:NoExecute toleration that is added by default to every Pod that does not already have one |
--default-unreachable-toleration-seconds | tolerationSeconds of the unreachable:NoExecute toleration that is added by default to every Pod that does not already have one |
--feature-gates=DynamicAuditing=true | a set of key=value pairs that toggle experimental features |
--max-mutating-requests-inflight=2000 | maximum number of mutating requests in flight at a given time; beyond it the server rejects requests; 0 means no limit (default 200) |
--max-requests-inflight=4000 | maximum number of non-mutating requests in flight at a given time; beyond it the server rejects requests; 0 means no limit (default 400) |
--default-watch-cache-size=200 | default watch cache size; 0 disables the watch cache for resources without a default watch size |
--delete-collection-workers=2 | number of workers for DeleteCollection calls, used to speed up namespace cleanup (default 1) |
--encryption-provider-config | config file for encrypting Secret data at rest in etcd |
--etcd-cafile | SSL CA file used for etcd communication |
--etcd-certfile | SSL certificate file used for etcd communication |
--etcd-keyfile | SSL key file used for etcd communication |
--etcd-servers | comma-separated list of etcd servers to connect to (scheme://ip:port) |
--bind-address | IP address on which to listen for --secure-port; the associated interface must be reachable by the rest of the cluster and by CLI/web clients; if blank, all interfaces are used (0.0.0.0) |
--secure-port=6443 | port serving HTTPS with authentication and authorization; default 6443 |
--tls-cert-file | file containing the default x509 certificate for HTTPS (CA certificate, if any, concatenated after the server certificate); if HTTPS is enabled and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved under /var/run/kubernetes |
--tls-private-key-file | file containing the x509 private key matching --tls-cert-file |
--insecure-port=0 | insecure (plain HTTP) port, default 8080; 0 disables it |
--audit-dynamic-configuration | enables dynamic audit configuration |
--audit-log-maxage=15 | maximum number of days to retain old audit log files, based on the timestamp in the file name |
--audit-log-maxbackup=3 | maximum number of old audit log files to retain |
--audit-log-maxsize=100 | maximum size in megabytes of an audit log file before it is rotated |
--audit-log-truncate-enabled | whether event and batch truncation is enabled |
--audit-log-path | if set, all requests to the apiserver are logged to this file; '-' means standard output |
--audit-policy-file | path to the audit policy configuration file; requires the 'AdvancedAuditing' feature gate, and AdvancedAuditing needs a policy to enable auditing |
--profiling | enable profiling on the web interface at host:port/debug/pprof/ (default true) |
--anonymous-auth | enables anonymous requests to the API server's secure port; requests not rejected by another authentication method are treated as anonymous, with username system:anonymous and group system:unauthenticated (default true) |
--client-ca-file | if set, any request presenting a client certificate signed by one of the authorities in client-ca-file is authenticated with the identity in the certificate's CommonName |
--enable-bootstrap-token-auth | allow 'bootstrap.kubernetes.io/token' secrets in the 'kube-system' namespace to be used for TLS bootstrap authentication |
--requestheader-allowed-names | list of client certificate common names allowed to provide usernames in the headers specified by --requestheader-username-headers; if empty, any client certificate validated by --requestheader-client-ca-file is allowed |
--requestheader-client-ca-file | root certificate bundle used to verify client certificates on incoming requests before trusting usernames in the headers specified by --requestheader-username-headers |
--requestheader-extra-headers-prefix="X-Remote-Extra-" | list of request-header prefixes to inspect; X-Remote-Extra- is suggested |
--requestheader-group-headers=X-Remote-Group | list of request headers to inspect for groups; X-Remote-Group is suggested |
--requestheader-username-headers=X-Remote-User | list of request headers to inspect for usernames; X-Remote-User is suggested |
--service-account-key-file | file containing a PEM-encoded x509 RSA or ECDSA private or public key, used to verify ServiceAccount tokens; if unset, --tls-private-key-file is used; the file may contain multiple keys, and the flag may be repeated with different files |
--authorization-mode=Node,RBAC | ordered, comma-separated list of authorization plugins for the secure port: AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node (default "AlwaysAllow") |
--runtime-config=api/all=true | set of key=value pairs describing the apiserver's runtime configuration |
--enable-admission-plugins=NodeRestriction | the admission plugins to enable |
--allow-privileged=true | if true, allow privileged containers |
--apiserver-count=3 | number of apiservers running in the cluster; must be positive (default 1) |
--event-ttl=168h | amount of time to retain events (default 1h0m0s) |
--kubelet-certificate-authority | path to a certificate authority file |
--kubelet-client-certificate | path to a client certificate file for TLS |
--kubelet-client-key | path to a client key file for TLS |
--kubelet-https=true | use https for kubelet connections (default true) |
--kubelet-timeout=10s | timeout for kubelet operations (default 5s) |
--proxy-client-cert-file | client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out: proxying requests to a user api-server and calling webhook admission plugins; it is expected to be signed by the CA given in --requestheader-client-ca-file, which is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace; components receiving calls from kube-aggregator should use that CA for their half of the mutual TLS verification |
--proxy-client-key-file | client certificate key used to prove the identity of the aggregator or kube-apiserver when it must call out, for proxied requests to a user api-server and for webhook admission calls |
--service-cluster-ip-range | CIDR range from which Service cluster IPs are assigned; must not overlap the ranges assigned to nodes or pods |
--service-node-port-range | port range available to NodePort Services |
--logtostderr=true | log to standard error instead of files |
--v=2 | log verbosity level |
4、Create the kube-controller-manager.service unit file
kube-controller-manager is a daemon that watches the shared cluster state through kube-apiserver (the resource states the apiserver collects or watches, available to kube-controller-manager and other watch clients) and keeps driving the current state toward the desired state. It is stateful and modifies cluster state, so several simultaneously active instances would cause consistency problems; its high availability must therefore be active/standby. Kubernetes implements the leader election with a lease lock, enabled by adding --leader-elect=true to the startup flags.
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--profiling \\
--cluster-name=kubernetes \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kube-api-qps=1000 \\
--kube-api-burst=2000 \\
--leader-elect \\
--use-service-account-credentials\\
--concurrent-service-syncs=2 \\
--bind-address=0.0.0.0 \\
--secure-port=10257 \\
--tls-cert-file=/var/lib/kubernetes/kube-controller-manager.pem \\
--tls-private-key-file=/var/lib/kubernetes/kube-controller-manager-key.pem \\
--port=10252 \\
--authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--experimental-cluster-signing-duration=876000h \\
--horizontal-pod-autoscaler-sync-period=10s \\
--concurrent-deployment-syncs=10 \\
--concurrent-gc-syncs=30 \\
--node-cidr-mask-size=24 \\
--service-cluster-ip-range=10.250.0.0/16 \\
--pod-eviction-timeout=6m \\
--terminated-pod-gc-threshold=10000 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
1、Copy the unit file to the other masters
scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/
scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/
Further reading
Option | Description |
---|---|
--profiling | enable the profiling web interface at host:port/debug/pprof/ |
--cluster-name=kubernetes | cluster name, default kubernetes |
--controllers=*,bootstrapsigner,tokencleaner | * enables all default controllers; bootstrapsigner and tokencleaner are disabled by default and must be enabled explicitly |
--kube-api-qps=1000 | QPS when talking to kube-apiserver |
--kube-api-burst=2000 | burst allowance when talking to kube-apiserver |
--leader-elect | enable leader election for high availability |
--use-service-account-credentials | if true, use a separate service account per controller |
--concurrent-service-syncs=2 | number of Services allowed to sync concurrently; a larger number means faster service management at the cost of more CPU and network |
--bind-address=0.0.0.0 | listen address |
--secure-port=10257 | HTTPS port, default 10257; 0 disables HTTPS |
--tls-cert-file | x509 certificate file; if HTTPS is enabled but --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved under the --cert-dir directory |
--tls-private-key-file | private key matching --tls-cert-file |
--port=10252 | unauthenticated HTTP port; 0 disables it; default 10252 |
--authentication-kubeconfig | kube-controller-manager is itself a client of kube-apiserver and can also reach it via a kubeconfig |
--client-ca-file | enable client certificate authentication |
--requestheader-allowed-names="aggregator" | list of client certificate Common Names allowed to provide usernames via the headers in --requestheader-username-headers; if empty, anything validated by --requestheader-client-ca-file passes |
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem | root certificate used to verify client certificates on incoming requests before trusting the usernames in the headers given by --requestheader-username-headers |
--requestheader-extra-headers-prefix="X-Remote-Extra-" | request-header prefixes to inspect; X-Remote-Extra- is suggested |
--requestheader-group-headers=X-Remote-Group | request headers to inspect for groups |
--requestheader-username-headers=X-Remote-User | request headers to inspect for usernames |
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig | path to the kubeconfig file |
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem | cluster-wide signing certificate (root CA) used for all cluster-scoped certificates |
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem | key of the cluster signing certificate |
--experimental-cluster-signing-duration=876000h | validity duration of signed certificates |
--horizontal-pod-autoscaler-sync-period=10s | HPA controller check period |
--concurrent-deployment-syncs=10 | number of Deployment objects allowed to sync concurrently; more workers mean faster rollouts |
--concurrent-gc-syncs=30 | number of garbage-collector workers allowed to sync concurrently (default 20) |
--node-cidr-mask-size=24 | CIDR mask size for node subnets, default 24 |
--service-cluster-ip-range=10.250.0.0/16 | CIDR range of cluster Services |
--pod-eviction-timeout=6m | grace period for deleting Pods on failed nodes, default 300s |
--terminated-pod-gc-threshold=10000 | number of terminated Pods allowed to exist before the Pod garbage collector starts deleting them, default 12500 |
--root-ca-file=/etc/kubernetes/cert/ca.pem | if set, this root CA is included in service account token secrets; it must be a valid PEM-encoded CA bundle |
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem | file containing the PEM-encoded RSA or ECDSA private key used to sign service account tokens |
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig | kubeconfig file |
--logtostderr=true | log errors to standard output instead of files |
--v=2 | log level |
1、The controller manager hosts the individual controllers; each controller watches the cluster state exposed by the apiserver and continuously tries to drive the current state toward the desired state;
2、It is configured to reach the kube-apiserver secure port via kubeconfig;
3、The default insecure port is 10252 and the secure port is 10257;
4、The three kube-controller-manager instances compete for the lock; the winner becomes leader.
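You can also see which instance currently holds the lock: the election state is stored as a JSON annotation on the kube-controller-manager endpoints object in kube-system (fetch it on a live cluster with `kubectl -n kube-system get ep kube-controller-manager -o yaml`). A sketch of extracting the leader from that payload, using a made-up sample annotation:

```shell
# Sample control-plane.alpha.kubernetes.io/leader annotation (illustrative only).
ANNOTATION='{"holderIdentity":"k8s-master1_2f6c1ab0-7d1e","leaseDurationSeconds":15}'

# holderIdentity is "<node>_<uid>"; strip the uid suffix to get the node name.
LEADER=$(printf '%s' "$ANNOTATION" | sed -n 's/.*"holderIdentity":"\([^"_]*\)_.*/\1/p')
echo "current kube-controller-manager leader: $LEADER"   # → k8s-master1
```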
5、Create the kube-scheduler.service unit file
kube-scheduler runs on the master nodes as a core control-plane component. It watches kube-apiserver for Pods that have not yet been scheduled, selects the best-fitting Node with its scheduling algorithm, and writes the result (pod name, node name, and so on) through kube-apiserver into etcd, completing the scheduling. Like kube-controller-manager, its high availability is achieved through leader election.
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
ssh $host "mkdir /etc/kubernetes/config/ -p"
done
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
burst: 200
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
leaderElect: true
metricsBindAddress: $MASTER1_IP:10251
EOF
cat <<EOF | tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--bind-address=$MASTER1_IP \\
--secure-port=10259 \\
--port=10251 \\
--tls-cert-file=/var/lib/kubernetes/kube-scheduler.pem \\
--tls-private-key-file=/var/lib/kubernetes/kube-scheduler-key.pem \\
--authentication-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--requestheader-allowed-names="aggregator" \\
--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Copy the config files to the other masters and adjust them
scp /etc/kubernetes/config/kube-scheduler.yaml k8s-master2:/etc/kubernetes/config/
scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/
scp /etc/kubernetes/config/kube-scheduler.yaml k8s-master3:/etc/kubernetes/config/
scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/
The copies must be edited after transfer; the k8s-master2 and k8s-master3 files are shown below
k8s-master2:
# cat /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
burst: 200
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
leaderElect: true
metricsBindAddress: 192.168.10.52:10251
# cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--bind-address=192.168.10.52 \
--secure-port=10259 \
--port=10251 \
--tls-cert-file=/var/lib/kubernetes/kube-scheduler.pem \
--tls-private-key-file=/var/lib/kubernetes/kube-scheduler-key.pem \
--authentication-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--requestheader-allowed-names="aggregator" \
--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
--requestheader-extra-headers-prefix="X-Remote-Extra-" \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--authorization-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
k8s-master3:
# cat /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
burst: 200
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
leaderElect: true
metricsBindAddress: 192.168.10.53:10251
# cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--bind-address=192.168.10.53 \
--secure-port=10259 \
--port=10251 \
--tls-cert-file=/var/lib/kubernetes/kube-scheduler.pem \
--tls-private-key-file=/var/lib/kubernetes/kube-scheduler-key.pem \
--authentication-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--requestheader-allowed-names="aggregator" \
--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
--requestheader-extra-headers-prefix="X-Remote-Extra-" \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--authorization-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
扩展知识:
配置选项 | 选项说明 |
---|---|
–config=/etc/kubernetes/kube-scheduler.yaml | 配置文件的路径 |
–bind-address= | 监控地址 |
–secure-port=10259 | 监听的安全端口,设置为0,不提供安全端口 |
–port=10251 | 监听非安全端口,设置为0,不提供非安全端口 |
–tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem | 包含默认的 HTTPS x509 证书的文件,(CA证书(如果有)在服务器证书之后并置),如果启用了 HTTPS 服务,并且未提供 --tls-cert-file 和 --tls-private-key-file,则会为公共地址生成一个自签名证书和密钥,并将其保存到 --cert-dir 指定的目录中 |
–tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem | 包含与 --tls-cert-file 匹配的默认 x509 私钥的文件 |
–authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig | 指定kube-scheduler做为kube-apiserver客户端时使用kubeconfig文件 |
–client-ca-file=/etc/kubernetes/cert/ca.pem | 如果已设置,由 client-ca-file 中的授权机构签名的客户端证书的任何请求都将使用与客户端证书的 CommonName 对应的身份进行身份验证 |
–requestheader-allowed-names=“aggregator” | 客户端证书通用名称列表允许在 --requestheader-username-headers 指定的头部中提供用户名。如果为空,则允许任何由权威机构 --requestheader-client-ca-file 验证的客户端证书。 |
–requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem | 在信任 --requestheader-username-headers 指定的头部中的用户名之前用于验证传入请求上的客户端证书的根证书包。警告:通常不依赖于传入请求已经完成的授权。 |
–requestheader-extra-headers-prefix=“X-Remote-Extra-” | 要检查请求头部前缀列表。建议使用 X-Remote-Extra- |
–requestheader-group-headers=X-Remote-Group | 用于检查组的请求头部列表。建议使用 X-Remote-Group |
–requestheader-username-headers=X-Remote-User | 用于检查用户名的请求头部列表。X-Remote-User 很常见。 |
–authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig | 指向具有足够权限以创建 subjectaccessreviews.authorization.k8s.io 的 ‘core’ kubernetes 服务器的 kubeconfig 文件,这是可选的,如果为空,则禁止所有未经授权跳过的请求 |
–logtostderr=true | 日志记录到标准错误而不是文件 |
–v=2 | 日志级别详细程度的数字 |
kube-scheduler提供非安全端口10251, 安全端口10259;
kube-scheduler 部署3节点高可用,通过选举产生leader;
它监视kube-apiserver提供的watch接口,它根据预选和优选策略两个环节找一个最佳适配,然后调度到此节点;
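kube-scheduler 的 leader 信息记录在 kube-system 名字空间下 endpoints 的 control-plane.alpha.kubernetes.io/leader 注解中,实际可通过 `kubectl -n kube-system get endpoints kube-scheduler -o yaml` 查看。下面的脚本演示如何从该注解中解析出当前 leader 节点(离线示意,注解内容为假设的样例):

```shell
# 假设这是该注解的样例内容(holderIdentity 形如 "<主机名>_<随机串>")
leader_json='{"holderIdentity":"k8s-master1_8a2f0c","leaseDurationSeconds":15,"renewTime":"2020-04-01T08:00:00Z"}'

# 用 sed 提取 holderIdentity 中下划线之前的主机名部分
leader_node=$(echo "$leader_json" | sed -n 's/.*"holderIdentity":"\([^_"]*\)_.*/\1/p')
echo "current leader: $leader_node"
```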
6、启动各组件
#启动并设置开机自启
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
ssh $host "systemctl daemon-reload"
ssh $host "systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler"
done
#如需重启
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
ssh $host "systemctl restart kube-apiserver kube-controller-manager kube-scheduler"
done
#如需停止
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
ssh $host "systemctl stop kube-apiserver kube-controller-manager kube-scheduler"
done
请等待10秒左右,让 kube-apiserver 完成初始化
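等待初始化也可以用一个简单的轮询函数来做,下面是一个通用示意(探测命令这里用 true 代替以便演示,实际可换成对 https://$VIP:6443/healthz 的 curl 探测,仅为假设示例):

```shell
# 轮询执行探测命令:成功返回 0,超过最大重试次数返回 1
wait_for() {
  local cmd="$1" retries="$2" i
  for i in $(seq 1 "$retries"); do
    if eval "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# 示例:探测命令立即成功(实际使用时替换为 curl -ks https://$VIP:6443/healthz)
wait_for "true" 10 && echo "apiserver ready"
```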
7、在每个master上检查服务状态
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
8、拷贝admin 的kubeconfig 为kubectl默认读取的.kube/config
cd
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
ssh $host "mkdir /root/.kube -p"
scp /root/ssl/kubernetes/admin.kubeconfig $host:/root/.kube/config
done
9、查看集群状态
kubectl cluster-info
输出信息
查看cs信息
kubectl get cs
输出信息
十一:Kubelet RBAC 授权
本节将会配置 API Server 访问 Kubelet API 的 RBAC 授权。访问 Kubelet API 是获取 metrics、日志以及执行容器命令所必需的;
所有操作均在主节点操作;
这里设置 Kubelet --authorization-mode 为 Webhook 模式。Webhook 模式使用 SubjectAccessReview API 来决定授权;
1、创建 system:kube-apiserver-to-kubelet ClusterRole 以允许请求 Kubelet API 和执行大部分来管理 Pods 的任务
cd /root/ssl/kubernetes
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
输出信息:略
Kubernetes API Server 使用客户端凭证授权 Kubelet 为 kubernetes 用户,此凭证用 --kubelet-client-certificate flag 来定义。
2、绑定 system:kube-apiserver-to-kubelet ClusterRole 到 kubernetes 用户
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
输出信息:略
十二:部署node节点
注意:因为我们的 master 也作为 node 节点,所以本节所有操作需在所有节点执行
本部分将会部署 Kubernetes 工作节点。每个节点上将会安装以下服务:
container networking plugins
kubelet
kube-proxy
1、安装依赖
1、安装 OS 依赖组件
for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1;do
echo "---$host---"
ssh $host "yum install -y socat conntrack ipset";
done
补充知识:socat 命令用于支持 kubectl port-forward 命令。
2、下载worker 二进制文件
wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
1、解压cni插件
mkdir -p /opt/cni/bin/
tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/

for host in k8s-master2 k8s-master3 k8s-node1; do
echo "---$host---"
ssh $host "mkdir /opt/cni/bin/ -p"
scp /opt/cni/bin/* $host:/opt/cni/bin/
done
2、生成kubelet.service systemd服务文件
cat << EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--pod-infra-container-image=cargo.caicloud.io/caicloud/pause-amd64:3.1 \\
--network-plugin=cni \\
--register-node=true \\
--v=2 \\
--container-runtime=docker \\
--container-runtime-endpoint=unix:///var/run/dockershim.sock \\
--image-pull-progress-deadline=15m

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
1、拷贝启动文件到其他节点
for host in k8s-master2 k8s-master3 k8s-node1; do
echo "---$host---"
scp /etc/systemd/system/kubelet.service $host:/etc/systemd/system/
done
扩展知识
配置选项 | 选项说明 |
---|---|
–bootstrap-kubeconfig | 指定令牌认证文件 |
–cert-dir | 设置kube-controller-manager生成证书和私钥的目录 |
–cni-conf-dir= | 指定cni配置文件目录 |
–container-runtime=docker | 指定容器运行时引擎 |
–container-runtime-endpoint= | 监听的unix socket位置(Windows上面为 tcp 端口)。 |
–root-dir= | kubelet 保存数据的目录,默认:/var/lib/kubelet |
–kubeconfig= | kubelet作为客户端使用的kubeconfig认证文件,此文件是由kube-controller-mananger生成的 |
–config= | 指定kubelet配置文件 |
–hostname-override= | 用来配置该节点在集群中显示的主机名,kubelet设置了-–hostname-override参数后,kube-proxy也需要设置,否则会出现找不到Node的情况 |
–pod-infra-container-image= | 每个pod中的network/ipc 名称空间容器将使用的镜像 |
–image-pull-progress-deadline= | 镜像拉取进度最大时间,如果在这段时间拉取镜像没有任何进展,将取消拉取,默认:1m0s |
–volume-plugin-dir= | 第三方卷插件的完整搜索路径,默认:"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" |
–logtostderr=true | 日志记录到标准错误而不是文件 |
–v=2 | 日志级别详细程度的数字 |
3、生成kubelet-config.yaml文件
1、k8s-master1节点:
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.250.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/k8s-master1.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master1-key.pem"
address: "$MASTER1_IP"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "$MASTER1_IP"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "10.244.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
2、k8s-master2节点:
cat << EOF | sudo tee /var/lib/kubelet/k8s-master2.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.250.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/k8s-master2.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master2-key.pem"
address: "$MASTER2_IP"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "$MASTER2_IP"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "10.244.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

scp /var/lib/kubelet/k8s-master2.yaml k8s-master2:/var/lib/kubelet/kubelet-config.yaml
rm -rf /var/lib/kubelet/k8s-master2.yaml
3、k8s-master3节点:
cat << EOF | sudo tee /var/lib/kubelet/k8s-master3.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.250.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/k8s-master3.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master3-key.pem"
address: "$MASTER3_IP"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "$MASTER3_IP"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "10.244.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

scp /var/lib/kubelet/k8s-master3.yaml k8s-master3:/var/lib/kubelet/kubelet-config.yaml
rm -rf /var/lib/kubelet/k8s-master3.yaml
4、k8s-node1节点
cat << EOF | sudo tee /var/lib/kubelet/k8s-node1.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.250.0.10"
runtimeRequestTimeout: "15m"
address: "$NODE1_IP"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "$NODE1_IP"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "10.244.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
tlsCertFile: "/var/lib/kubelet/k8s-node1.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/k8s-node1-key.pem"
EOF

scp /var/lib/kubelet/k8s-node1.yaml k8s-node1:/var/lib/kubelet/kubelet-config.yaml
rm -rf /var/lib/kubelet/k8s-node1.yaml
扩展知识:
配置选项 | 选项说明 |
---|---|
address | kubelet 服务监听的地址 |
staticPodPath: “” | kubelet 会定期的扫描这个文件夹下的YAML/JSON 文件来创建/删除静态Pod,使用kubeadm安装时非常有用 |
syncFrequency: 1m | 同步运行容器和配置之间的最大时间间隔,默认为1m |
fileCheckFrequency: 20s | 检查新数据配置文件的周期,默认 20s |
httpCheckFrequency: 20s | 通过 http 检查新数据的周期,默认 20s |
staticPodURL: “” | |
port: 10250 | kubelet 服务的端口,默认 10250 |
readOnlyPort: 0 | 没有认证/授权的只读 kubelet 服务端口 ,设置为 0 表示禁用,默认 10255 |
rotateCertificates | 远程认证,默认为false |
serverTLSBootstrap: true | kubelet安全引导认证,重启 Kubelet,会发现出现了新的 CSR |
authentication: | 认证方式有以下几种 |
anonymous: | 匿名 |
enabled: false | 值为false |
webhook: | webhook的方式 |
enabled: true | 值为true |
x509: | x509认证 |
clientCAFile: “/etc/kubernetes/cert/ca.pem” | 集群ca证书 |
authorization: | 授权 |
mode: Webhook | 授权webhook |
registryPullQPS | 限制每秒拉取镜像个数,设置为 0 表示不限制, 默认 5 |
registryBurst | 仅当 --registry-qps 大于 0 时使用,设置拉取镜像的最大并发数,允许同时拉取的镜像数,不能超过 registry-qps ,默认 10 |
eventRecordQPS | 限制每秒创建的事件数目,设置为0则不限制,默认为5 |
eventBurst | 当–event-qps大于0时,临时允许该事件记录值超过设定值,但不能超过 event-qps 的值,默认10 |
enableDebuggingHandlers | 启用用于日志收集和本地运行容器和命令的服务端端点,默认值:true |
enableContentionProfiling | 如果启用了 profiling,则启用性能分析锁 |
healthzPort | 本地 healthz 端点的端口,设置为 0 将禁用,默认值:10248 |
healthzBindAddress | 监听healthz 端口的地址 |
clusterDomain: “cluster.local” | 集群域名, kubelet 将配置所有容器除了主机搜索域还将搜索当前域 |
clusterDNS: | DNS 服务器的IP地址列表 |
- “10.254.0.2” | 指定一个DNS服务器地址 |
nodeStatusUpdateFrequency: 10s | node节点状态更新上报频率,默认:10s |
nodeStatusReportFrequency: 1m | node节点上报自身状态频率,默认:1m |
imageMinimumGCAge: 2m | 设置镜像最少多久没有被使用才会被清理 |
imageGCHighThresholdPercent: 85 | 设置镜像占用磁盘比率最大值,超过此值将执行镜像垃圾回收,默认 85 |
imageGCLowThresholdPercent: 80 | 设置镜像占用磁盘比率最小值,低于此值将停止镜像垃圾回收,默认 80 |
volumeStatsAggPeriod: 1m | 指定kubelet计算和缓存所有容器组及卷的磁盘使用量时间间隔,设置为0禁用卷计算,默认:1m |
kubeletCgroups: “” | 可选的 cgroups 的绝对名称来创建和运行 kubelet |
systemCgroups: “” | 可选的 cgroups 的绝对名称,用于将未包含在 cgroup 内的所有非内核进程放置在根目录 / 中,修改这个参数需要重启 |
cgroupRoot: “” | Pods 可选的root cgroup, 这是由container runtime,在最佳工作的基础上处理的,默认值: ‘’,意思是使用container runtime的默认处理 |
cgroupsPerQOS: true | 支持创建QoS cgroup的层级结构,如果是true,最高层级的 |
cgroupDriver: systemd | Kubelet用来操作主机cgroups的驱动 |
runtimeRequestTimeout: 10m | 除了长时间运行的请求(pull,logs,exec,attach)以外,所有runtime请求的超时时间,当触发超时时,kubelet会取消请求,抛出一个错误并稍后重试,默认值:2m0s |
hairpinMode: promiscuous-bridge | Kubelet该怎么设置hairpin promiscuous-bridge.这个参数允许Service的端点试图访问自己的Service。如果网络没有正确配置为 “hairpin” 流量,通常当 kube-proxy 以 iptables 模式运行,并且 Pod 与桥接网络连接时,就会发生这种情况。Kubelet 公开了一个 hairpin-mode 标志,如果 pod 试图访问他们自己的 Service VIP,就可以让 Service 的 endpoints 重新负载到他们自己身上。hairpin-mode 标志必须设置为 hairpin-veth 或者 promiscuous-bridge。 |
maxPods | 当前 kubelet 可以运行的容器组数目,默认:110 |
podCIDR | pod使用的CIDR网段 |
podPidsLimit: -1 | pod中设置PID限制 |
resolvConf | 用作容器DNS解析配置的解析器配置文件,默认: “/etc/resolv.conf” |
maxOpenFiles: 1000000 | kubelet 进程可以打开的文件数目,默认:1000000 |
kubeAPIQPS | 与kube-apiserver会话时的QPS,默认:15 |
kubeAPIBurst | 与kube-apiserver会话时的并发数,默认:10 |
serializeImagePulls: false | 禁止一次只拉取一个镜像 |
evictionHard: | 一个清理阈值的集合,达到该阈值将触发一次容器清理 |
memory.available: “100Mi” | 可用内存小于 100Mi 时触发清理 |
nodefs.available: “10%” | |
nodefs.inodesFree: “5%” | |
imagefs.available: “15%” | |
evictionSoft: {} | 清理阈值的集合,如果达到一个清理周期将触发一次容器清理 |
enableControllerAttachDetach: true | 允许附加/分离控制器来管理调度到当前节点的附加/分离卷,并禁用kubelet执行任何附加/分离操作,默认:true |
failSwapOn: true | 如果在节点上启用了swap,kubelet将启动失败 |
containerLogMaxSize: 20Mi | 容器日志容量最大值 |
containerLogMaxFiles: 10 | 容器日志文件数最大值 |
systemReserved: {} | https://kubernetes.io/zh/docs/tasks/administer-cluster/reserve-compute-resources/,系统守护进程争取资源预留,这里均设置为默认值 |
kubeReserved: {} | kubernetes 系统守护进程争取资源预留,这里均设置为默认值 |
systemReservedCgroup: “” | 要选择性的在系统守护进程上执行 kube-reserved,需要把 kubelet 的 --kube-reserved-cgroup 标志的值设置为 kube 守护进程的父控制组要想在系统守护进程上可选地执行 system-reserved,请指定 --system-reserved-cgroup kubelet 标志的值为 OS 系统守护进程的父级控制组,这里均设置为默认值 |
kubeReservedCgroup: “” | 要选择性的在系统守护进程上执行 kube-reserved,需要把 kubelet 的 --kube-reserved-cgroup 标志的值设置为 kube 守护进程的父控制组这里均设置为默认值 |
enforceNodeAllocatable: [“pods”] | 无论何时,如果所有 pod 的总用量超过了 Allocatable,驱逐 pod 的措施将被执行 |
kubelet组件采用主动的查询机制,定期向kube-apiserver获取当前节点应该处理的任务,如果有任务分配到了自己身上(如创建Pod),就去处理这些任务;
kubelet暴露了两个端口10248,http形式的healthz服务,另一个是10250,https服务,其实还有一个只读的10255端口,这里是禁用的。
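可以在节点上用 ss -lntp 确认这几个端口确实由 kubelet 监听。下面以一段假设的样例输出演示如何提取端口号(离线示意,实际执行 ss -lntp | grep kubelet 即可):

```shell
# 假设的 ss -lntp 样例输出(仅保留 kubelet 相关行)
ss_output='LISTEN 0 128 192.168.10.51:10248 *:* users:(("kubelet",pid=1234,fd=20))
LISTEN 0 128 192.168.10.51:10250 *:* users:(("kubelet",pid=1234,fd=21))'

# 第 4 列是 本地地址:端口,用 awk 按冒号切分取端口号
ports=$(echo "$ss_output" | awk '{split($4,a,":"); print a[2]}')
echo "$ports"
```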
十三:生成kube-proxy 服务启动文件
kube-proxy是什么,这里就不得不说下service,service是一组Pod的抽象集合,它相当于一组Pod的负载均衡器,负责将请求分发到对应的pod,kube-proxy就是负责service的实现的,当请求到达service时,它通过label关联到后端并转发到某个Pod;kube-proxy提供了三种负载均衡模式:用户空间、iptables、ipvs,网上有很多关于这三种模式的区别,这里先不详述,本文采用ipvs。
kube-proxy需要运行在所有节点上(因为我们master节点也有Pod,如果没有的话,可以只部署在非master节点上),kube-proxy它主动的去监听kube-apiserver中service和endpoint的变化情况,然后根据定义的模式,创建路由规则,并提供服务service IP(headless类型的service无IP)和负载均衡功能。注意:在所有节点安装ipvsadm和ipset命令,加载ip_vs内核模块,前面章节已经执行过。
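kube-proxy 以 ipvs 模式运行后,可在节点上执行 ipvsadm -Ln 查看生成的虚拟服务规则。下面以一段假设的样例输出演示如何筛选出所有 rr 调度的虚拟服务地址(离线示意):

```shell
# 假设的 ipvsadm -Ln 样例输出
ipvs_output='TCP  10.250.0.1:443 rr
  -> 192.168.10.51:6443  Masq  1  0  0
  -> 192.168.10.52:6443  Masq  1  0  0
TCP  10.250.0.10:53 rr
  -> 10.244.0.2:53  Masq  1  0  0'

# 虚拟服务行以 TCP/UDP 开头且第 3 列为调度算法,取第 2 列 VIP:PORT
services=$(echo "$ipvs_output" | awk '$1 ~ /^(TCP|UDP)$/ && $3 == "rr" {print $2}')
echo "$services"
```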
1、master节点上操作
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
  qps: 100
bindAddress: $VIP
healthzBindAddress: $VIP:10256
metricsBindAddress: $VIP:10249
enableProfiling: true
clusterCIDR: 10.244.0.0/16
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF

for host in k8s-master2 k8s-master3;do
echo "---$host---"
scp /var/lib/kube-proxy/kube-proxy-config.yaml $host:/var/lib/kube-proxy/
done

2、node节点配置
cat << EOF | sudo tee k8s-node1.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
  qps: 100
bindAddress: $NODE1_IP
healthzBindAddress: $NODE1_IP:10256
metricsBindAddress: $NODE1_IP:10249
enableProfiling: true
clusterCIDR: 10.244.0.0/16
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF

scp k8s-node1.yaml k8s-node1:/var/lib/kube-proxy/kube-proxy-config.yaml
rm -rf k8s-node1.yaml
扩展知识:
配置选项 | 选项说明 |
---|---|
clientConnection | 与kube-apiserver交互时的参数设置 |
burst: 200 | 临时允许该事件记录值超过qps设定值 |
kubeconfig | kube-proxy 客户端连接 kube-apiserver 的 kubeconfig 文件路径设置 |
qps: 100 | 与kube-apiserver交互时的QPS,默认值5 |
bindAddress | kube-proxy监听地址 |
healthzBindAddress | 用于检查服务的IP地址和端口,默认:0.0.0.0:10256 |
metricsBindAddress | metrics服务的ip地址和端口,默认:127.0.0.1:10249 |
enableProfiling | 如果设为true,则通过/debug/pprof处理程序上的web界面进行概要分析 |
clusterCIDR | kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all 选项后 kube-proxy 才会对访问 Service IP 的请求做 SNAT |
hostnameOverride | 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 Node,从而不会创建任何 ipvs 规则; |
mode | 使用 ipvs 模式 |
portRange | 主机端口的范围(beginPort- endport,单端口或beginPort+偏移量),可用于代理服务流量。如果(未指定,0,或0-0)端口将被随机选择 |
kubeProxyIPTablesConfiguration: | |
masqueradeAll: false | 如果使用纯iptables代理,SNAT所有通过服务集群ip发送的通信 |
kubeProxyIPVSConfiguration: | |
scheduler: rr | 当proxy为ipvs模式时,ipvs调度类型 |
excludeCIDRs: [] |
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
1、拷贝启动配置文件到其他节点
for host in k8s-master2 k8s-master3 k8s-node1; do
echo "---$host---"
scp /etc/systemd/system/kube-proxy.service $host:/etc/systemd/system/
done
2、启动worker服务
for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1; do
echo "---$host---"
ssh $host "systemctl daemon-reload"
ssh $host "systemctl enable --now kubelet kube-proxy"
done
systemctl status kubelet
systemctl status kube-proxy
此时所有节点的kubelet启动之后,会自动加入集群
查看集群节点
kubectl get nodes
输出信息
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   9s    v1.17.4
k8s-master2   NotReady   <none>   9s    v1.17.4
k8s-master3   NotReady   <none>   19s   v1.17.4
k8s-node1     NotReady   <none>   19s   v1.17.4
可以看到 所有的节点都已经成功加入集群
状态显示 NotReady 是因为还没有配置网络插件,配置好网络插件后就会变为 Ready
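也可以直接从 kubectl get nodes 的输出里筛选出 NotReady 的节点,下面以一段假设的样例输出演示(离线示意):

```shell
# 假设的 kubectl get nodes 样例输出
nodes_output='NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   9s    v1.17.4
k8s-master2   NotReady   <none>   9s    v1.17.4
k8s-node1     Ready      <none>   19s   v1.17.4'

# 跳过表头,过滤出 STATUS 为 NotReady 的节点名
not_ready=$(echo "$nodes_output" | awk 'NR>1 && $2=="NotReady" {print $1}')
echo "$not_ready"
```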
十四:部署网络配置
本节主要介绍部署网络 dns 插件;
本节的网络插件选用的是calico 以容器的方式运行;
本节的dns插件选用的是coredns以容器的方式运行;
1、安装calico网络
1、下载calico网络插件的yaml文件
#kubernetes 1.16及以上版本,calico网络插件应选用3.9及以上的版本
curl https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O
2、修改网络插件yaml文件内容
sed -i 's|192.168.0.0|10.244.0.0|' calico.yaml
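这条 sed 将 calico.yaml 中默认的 CALICO_IPV4POOL_CIDR 网段 192.168.0.0 替换为本集群的 Pod CIDR 10.244.0.0,下面用一小段假设的 yaml 片段演示替换效果(离线示意):

```shell
# 假设的 calico.yaml 片段(默认 Pod 网段)
snippet='- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"'

# 与正文相同的替换:把默认网段换成集群 Pod CIDR
result=$(echo "$snippet" | sed 's|192.168.0.0|10.244.0.0|')
echo "$result"
```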
3、应用yaml文件
kubectl apply -f calico.yaml
4、查看pod状态
kubectl get pods -n kube-system
输出信息:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7489ff5b7c-828pz   1/1     Running   0          3m40s
calico-node-6fld7                          1/1     Running   0          3m40s
calico-node-8z747                          1/1     Running   0          3m40s
calico-node-d7cc5                          1/1     Running   0          3m40s
网络插件安装成功
2、安装coredns插件
1、先下载jq命令
(coredns提供的部署脚本需要使用jq命令对生成的yaml文件进行整理)
wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -O /usr/bin/jq
chmod a+x /usr/bin/jq
2、克隆GitHub仓库
yum -y install git
git clone https://github.com/coredns/deployment.git
3、执行
cd deployment/kubernetes/
./deploy.sh -i 10.250.0.10 | kubectl apply -f -
-i 指定的 IP 为前面规划的集群 DNS 地址(10.250.0.10)
输出信息:略
4、查看pod
kubectl scale -n kube-system deployment coredns --replicas=3
kubectl get pods -n kube-system
输出信息:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7489ff5b7c-828pz 1/1 Running 8 11m
calico-node-6fld7 1/1 Running 5 11m
calico-node-8z747 1/1 Running 6 11m
calico-node-d7cc5 1/1 Running 4 11m
calico-node-nr5dt 1/1 Running 4 11m
coredns-59845f77f8-9cn69 1/1 Running 5 3m31s
coredns-59845f77f8-d9vs4 1/1 Running 3 3m31s
coredns-59845f77f8-z7xgp 1/1 Running 3 3m31s
5、验证
进入 pod 内部 ping baidu.com,如果能 ping 通,则 DNS 插件部署完毕
至此,恭喜大家,二进制安装部署部分完成!!!!
验证
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
输出信息:
leader节点为k8s-master1
ip a
输出信息:
关闭k8s-master1主机
poweroff
在k8s-master2上检查VIP以及leader节点
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
leader已切换到k8s-master2上
检查VIP
ip a
VIP已切换到k8s-master2上
注:当执行kubectl get csr时,如果所有节点状态为Pending,处理办法如下
# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-48t2k 84m system:node:k8s-master2 Pending
csr-4g9cz 69m system:node:k8s-master2 Pending
csr-6v547 23m system:node:k8s-master2 Pending
csr-8bjcd 12m system:node:k8s-master1 Pending
csr-97qzk 85m system:node:k8s-master1 Pending
csr-d85x7 68m system:node:k8s-master3 Pending
csr-gpkdb 9m20s system:node:k8s-master3 Pending
csr-jgk9q 83m system:node:k8s-master3 Pending
csr-k8nm2 70m system:node:k8s-master1 Pending
csr-nmddc 10m system:node:k8s-master2 Pending
csr-qtz9q 12m system:node:k8s-master1 Pending
csr-qzwcc 24m system:node:k8s-master1 Pending
csr-rqlrf 23m system:node:k8s-master3 Pending
csr-xzv2b 9m27s system:node:k8s-master3 Pending
csr-zv7nz 23m system:node:k8s-master2 Pending
需要在 master 允许其证书申请
# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
#再次执行kubectl get csr正常
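上面批量 approve 命令的核心是 grep Pending | awk '{print $1}' 这段管道,下面以一段假设的 kubectl get csr 样例输出演示它筛选出的 CSR 名称(离线示意,实际再经 xargs 交给 kubectl certificate approve):

```shell
# 假设的 kubectl get csr 样例输出
csr_output='NAME        AGE   REQUESTOR                 CONDITION
csr-48t2k   84m   system:node:k8s-master2   Pending
csr-97qzk   85m   system:node:k8s-master1   Approved,Issued'

# 只保留 Pending 状态的行,取第 1 列 CSR 名称
pending=$(echo "$csr_output" | grep Pending | awk '{print $1}')
echo "$pending"
```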
补充知识:
查看单独服务日志
journalctl -xefu kubelet
实时输出日志
journalctl -f 和tail -f 命令一样效果
查看pod列表及所在节点
kubectl get pods -A -o wide
查看pod详细信息(事件、状态等)
kubectl describe pod nginx-6db489d4b7-5xxt4
查看pod日志
kubectl logs nginx-6db489d4b7-5xxt4