
0156.K Installing a Highly Available K8S Cluster with kubeadm (2/2)




(2/2) Deploy a multi-master, multi-worker Kubernetes cluster with kubeadm using a kubeadm-config.yaml file, and use the VIP provided by haproxy and keepalived to make the masters highly available.




For part 1/2, see:
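Since everything below goes through the VIP set up in part 1/2, it can be worth confirming that the load-balancer endpoint is reachable before initializing. The following is only a sanity-check sketch; the VIP 192.168.80.134 and the haproxy port 16443 are the values used throughout this article.

ping -c 2 192.168.80.134        # the keepalived VIP should answer
ss -lntp | grep 16443           # on the node currently holding the VIP, haproxy should be listening on 16443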


2.5 Deploying the k8s Master nodes

2.5.1 Deploying the k8s Master nodes with a YAML configuration file (choose one of the two methods)

1) Create kubeadm-config.yaml on rh-master01

# vim kubeadm-config.yaml    # contents as follows
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.135      # IP of this host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: rh-master01                     # hostname of this host
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.80.134:16443"    # VIP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # image registry
kind: ClusterConfiguration
kubernetesVersion: v1.23.2    # k8s version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
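To double-check the file against what this kubeadm release expects, you can print kubeadm's own defaults and compare the field names and API versions. This is only a sanity check and is not part of the deployment itself:

kubeadm config print init-defaults --component-configs KubeProxyConfiguration    # default InitConfiguration/ClusterConfiguration plus the kube-proxy component config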


2) [Optional] If the configuration format has been updated, the following command migrates kubeadm-config.yaml to the new format; make sure the k8s version in the file matches the version being deployed.

[root@rh-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config "new.yaml"


Copy kubeadm-config.yaml (or the migrated new.yaml) to all master nodes:

[root@rh-master01 ~]# scp kubeadm-config.yaml rh-master02:/root/
[root@rh-master01 ~]# scp kubeadm-config.yaml rh-master03:/root/


3) Using kubeadm-config.yaml (or the migrated new.yaml), pre-pull the images on all master nodes to save time during initialization.

[root@rh-master01 ~]# kubeadm config images pull --config /root/kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6


4) Initialize the rh-master01 node

After the first node is initialized, the corresponding certificates and configuration files are generated under /etc/kubernetes; the remaining Master nodes then only need to be joined to rh-master01.
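Once the initialization below has finished, a quick way to confirm that the certificates were generated is to list the pki directory (just a check; the file names are the standard ones kubeadm creates):

ls /etc/kubernetes/pki        # should contain ca.crt, apiserver.crt, sa.key, the etcd/ subdirectory, etc.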

Run the initialization on rh-master01:

[root@rh-master01 ~]# kubeadm init --config /root/kubeadm-config.yaml --upload-certs
[init] Using Kubernetes version: v1.23.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rh-master01] and IPs [10.96.0.1 192.168.80.135 192.168.80.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rh-master01] and IPs [192.168.80.135 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rh-master01] and IPs [192.168.80.135 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.139226 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2a453e5c58581b3b5c10f7dcc2efd9796a6b29b2262224d5634b371c53e26af5
[mark-control-plane] Marking the node rh-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node rh-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

# This line indicates that the control plane was created successfully
Your Kubernetes control-plane has initialized successfully!

# If the cluster will be managed by a regular user, create and configure the kubeconfig directory
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# If you are the root user, only the following environment variable is needed
Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# How to deploy the pod network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Command for joining the other master nodes to the cluster; a successful init produces
# the token values that the other nodes use when joining
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd \
    --control-plane --certificate-key 2a453e5c58581b3b5c10f7dcc2efd9796a6b29b2262224d5634b371c53e26af5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd


5) Clean up the environment (run only if initialization fails)

If initialization fails, clean up the initialization state:

kubeadm reset -f; ipvsadm --clear; rm -rf ~/.kube

Once the cause has been identified and the configuration corrected, run the initialization again.
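On some hosts a bare reset leaves stale CNI and iptables state behind. The following is a more aggressive clean-up sketch; the extra paths and commands are common practice rather than part of this article's procedure, so adapt them to your environment:

kubeadm reset -f
ipvsadm --clear                      # flush IPVS rules created by kube-proxy
iptables -F && iptables -t nat -F    # flush iptables rules (be careful on hosts with other firewall rules)
rm -rf ~/.kube /etc/cni/net.d        # remove the kubeconfig and any leftover CNI configuration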


6) Configure the environment variable on rh-master01 for accessing the kubernetes cluster

For the root user, configure the environment variable:

cat >> /root/.bash_profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF


Make the environment variable take effect:

source ~/.bash_profile


7) Check the node status on rh-master01

[root@rh-master01 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
rh-master01   NotReady   control-plane,master   27m   v1.23.2


8) Check the pod status

With this initialization-based installation, all system components run as containers inside the kube-system namespace.


8.1) Normal state

You can check the Pod status on rh-master01; apart from coredns, which shows Pending, the other pods should be Running.

[root@rh-master01 ~]# kubectl get pod -n kube-system
# coredns only changes from Pending to Running after the CNI network plugin (kube-flannel.yml) is deployed on the first master
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-7znjc               0/1     Pending   0          97s
coredns-6d8c4cb4d-x45xn               0/1     Pending   0          97s
etcd-rh-master01                      1/1     Running   2          101s
kube-apiserver-rh-master01            1/1     Running   2          102s
kube-controller-manager-rh-master01   1/1     Running   5          102s
kube-proxy-vsh9s                      1/1     Running   0          97s
kube-scheduler-rh-master01            1/1     Running   5          101s


8.2) Notes on handling kube-proxy CrashLoopBackOff errors

For details, see:
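As a starting point for that kind of troubleshooting, the usual first step is to read the failing pod's logs, events, and configuration. The sketch below uses only standard kubectl commands; the pod name is a placeholder to be replaced with the one from your cluster:

kubectl -n kube-system get pods | grep kube-proxy            # find the crashing pod
kubectl -n kube-system logs kube-proxy-xxxxx --previous      # logs from the crashed container (replace the name)
kubectl -n kube-system describe pod kube-proxy-xxxxx         # events and restart reasons
kubectl -n kube-system get cm kube-proxy -o yaml             # the kube-proxy configuration (e.g. the ipvs mode set above)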


2.5.2 Deploying the k8s Master nodes from the command line (choose one of the two methods)

Run the following command on the rh-master01, rh-master02, and rh-master03 nodes:

kubeadm config images pull --kubernetes-version=v1.23.2 --image-repository=registry.aliyuncs.com/google_containers


Run the following command on the rh-master01 node:

kubeadm init \
    --apiserver-advertise-address=192.168.80.135 \
    --image-repository registry.aliyuncs.com/google_containers \
    --control-plane-endpoint=192.168.80.134:16443 \
    --kubernetes-version v1.23.2 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs


2.6 Highly available Masters

2.6.1 Join the rh-master02 node to the cluster

1) Join the rh-master02 node to the cluster

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd \
    --control-plane --certificate-key 2a453e5c58581b3b5c10f7dcc2efd9796a6b29b2262224d5634b371c53e26af5


2) For the root user, configure the environment variable

Configure the environment variable for the root user:

cat >> /root/.bash_profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

For a regular user, see the hints printed at the end of the rh-master01 initialization.


Make the environment variable take effect:

source ~/.bash_profile


3) Check the node status on rh-master02

[root@rh-master02 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
rh-master01   NotReady   control-plane,master   47m     v1.23.2
rh-master02   NotReady   control-plane,master   2m47s   v1.23.2


2.6.2 Join the rh-master03 node to the cluster

1) Join the rh-master03 node to the cluster

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd \
    --control-plane --certificate-key 2a453e5c58581b3b5c10f7dcc2efd9796a6b29b2262224d5634b371c53e26af5


2) For the root user, configure the environment variable

Configure the environment variable for the root user:

cat >> /root/.bash_profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

For a regular user, see the hints printed at the end of the rh-master01 initialization.


Make the environment variable take effect:

source ~/.bash_profile


2.6.3 Handling token expiry

The default token is valid for 24 hours; once it has expired, it can no longer be used.

1) Generate a new token

If the token has expired, generate a new one (on rh-master01):

kubeadm token create --print-join-command


# Generate a token that never expires

kubeadm token create --ttl 0 --print-join-command
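To check which tokens currently exist and when they expire (for example, to confirm the never-expiring token above), kubeadm can list them:

kubeadm token list        # shows TOKEN, TTL, EXPIRES and USAGES for every bootstrap token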


2) Generate the --certificate-key

If a Master node is to join the cluster, the --certificate-key must be generated (on rh-master01):

kubeadm init phase upload-certs --upload-certs
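The command above re-uploads the control-plane certificates and prints a new certificate key. As noted in the init output, the uploaded certificates are stored in the kubeadm-certs Secret and deleted after two hours, so you can check that they are still present before joining:

kubectl -n kube-system get secret kubeadm-certs      # should exist while the certificate key is still usable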


3) Join the other masters to the cluster with the new token

Then join the other Master nodes to the cluster:

# Adjust the values below to match your own token, hash, and certificate key

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:505e373bae6123fc3e27e778c5fedbccbf0f91a51efdcc11b32c4573605b8e71 \
    --control-plane --certificate-key 70aef5f76111a5824085c644b3f34cf830efad00c1b16b878701166bf069664e


2.7 Configuring the Node (worker) nodes

1) Join rh-node01 to the cluster

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd

For example:

[root@rh-node01 ~]# kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


2) Join rh-node02 to the cluster

kubeadm join 192.168.80.134:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1e79626eebaa7ab9c29bb410ebc0c11587efefb7f207aca4b1c8767ed80629dd

    

2.8 Deploying the CNI network plugin

Before the CNI network plugin is deployed, checking the node status with kubectl on a Master node shows that all nodes are NotReady:

[root@rh-master01 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
rh-master01   NotReady   control-plane,master   59m   v1.23.2
rh-master02   NotReady   control-plane,master   14m   v1.23.2
rh-master03   NotReady   control-plane,master   10m   v1.23.2
rh-node01     NotReady   <none>                 68s   v1.23.2
rh-node02     NotReady   <none>                 19s   v1.23.2

Kubernetes supports a number of network plugins, such as flannel, calico, and canal; any one of them will do, and this article uses flannel. If you cannot download it, you can use the kube-flannel.yml provided by 许大仙. Alternatively, you can install calico from calico.yaml, which is the recommended choice.
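If you go with calico instead of flannel, the flow is the same as below. This is only a sketch and assumes you have already obtained a calico.yaml whose pod CIDR matches the podSubnet (10.244.0.0/16) configured earlier:

kubectl apply -f calico.yaml      # apply the calico manifest on the first master, then wait for the calico pods to reach Running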


Fetch the flannel configuration file on the Master node (this may fail; if it does, download the file locally first and then install from it):

[root@rh-master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
...
2022-03-31 09:56:11 (53.8 MB/s) - ‘kube-flannel.yml’ saved [5692/5692]


Start flannel with the configuration file on all Master nodes (online):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Or start flannel with the kube-flannel.yml downloaded locally earlier (offline):

[root@rh-master01 ~]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


Check the progress of the CNI network plugin deployment (until all pods are Running):

[root@rh-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS      AGE     IP               NODE          NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-7znjc               1/1     Running   0             14m     10.244.4.2       rh-node02     <none>           <none>
coredns-6d8c4cb4d-x45xn               1/1     Running   0             14m     10.244.4.3       rh-node02     <none>           <none>
etcd-rh-master01                      1/1     Running   2             15m     192.168.80.135   rh-master01   <none>           <none>
etcd-rh-master02                      1/1     Running   1             12m     192.168.80.136   rh-master02   <none>           <none>
etcd-rh-master03                      1/1     Running   0             11m     192.168.80.137   rh-master03   <none>           <none>
kube-apiserver-rh-master01            1/1     Running   2             15m     192.168.80.135   rh-master01   <none>           <none>
kube-apiserver-rh-master02            1/1     Running   1             12m     192.168.80.136   rh-master02   <none>           <none>
kube-apiserver-rh-master03            1/1     Running   1             11m     192.168.80.137   rh-master03   <none>           <none>
kube-controller-manager-rh-master01   1/1     Running   6 (12m ago)   15m     192.168.80.135   rh-master01   <none>           <none>
kube-controller-manager-rh-master02   1/1     Running   1             12m     192.168.80.136   rh-master02   <none>           <none>
kube-controller-manager-rh-master03   1/1     Running   1             10m     192.168.80.137   rh-master03   <none>           <none>
kube-flannel-ds-2m7dj                 1/1     Running   0             9m41s   192.168.80.136   rh-master02   <none>           <none>
kube-flannel-ds-kclhd                 1/1     Running   0             9m41s   192.168.80.135   rh-master01   <none>           <none>
kube-flannel-ds-qx4b9                 1/1     Running   0             9m41s   192.168.80.139   rh-node02     <none>           <none>
kube-flannel-ds-s9f52                 1/1     Running   0             9m41s   192.168.80.138   rh-node01     <none>           <none>
kube-flannel-ds-wqgnb                 1/1     Running   0             9m41s   192.168.80.137   rh-master03   <none>           <none>
kube-proxy-9sl9k                      1/1     Running   0             12m     192.168.80.136   rh-master02   <none>           <none>
kube-proxy-h5lnd                      1/1     Running   0             10m     192.168.80.138   rh-node01     <none>           <none>
kube-proxy-vhg9g                      1/1     Running   0             10m     192.168.80.139   rh-node02     <none>           <none>
kube-proxy-vsh9s                      1/1     Running   0             14m     192.168.80.135   rh-master01   <none>           <none>
kube-proxy-xqrk4                      1/1     Running   0             11m     192.168.80.137   rh-master03   <none>           <none>
kube-scheduler-rh-master01            1/1     Running   6 (12m ago)   15m     192.168.80.135   rh-master01   <none>           <none>
kube-scheduler-rh-master02            1/1     Running   1             12m     192.168.80.136   rh-master02   <none>           <none>
kube-scheduler-rh-master03            1/1     Running   1             11m     192.168.80.137   rh-master03   <none>           <none>


After the CNI network plugin has been deployed, check the node status again with kubectl on a Master node; all nodes are now Ready:

[root@rh-master01 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
rh-master01   Ready    control-plane,master   19m   v1.23.2
rh-master02   Ready    control-plane,master   16m   v1.23.2
rh-master03   Ready    control-plane,master   15m   v1.23.2
rh-node01     Ready    <none>                 14m   v1.23.2
rh-node02     Ready    <none>                 14m   v1.23.2


Check the cluster's health:

[root@rh-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
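Since ComponentStatus is deprecated (as the warning above notes), an alternative health check is to query the API server's readiness endpoint directly. This is an extra check, not part of the original procedure:

kubectl get --raw='/readyz?verbose'      # per-check readiness report from the API server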


Check the cluster information:

[root@rh-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.80.134:16443
CoreDNS is running at https://192.168.80.134:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.




3 Service deployment



3.1 Introduction

Deploy an Nginx application in the Kubernetes cluster to test whether the cluster works properly.


3.2 Steps

Deploy Nginx:

[root@rh-master01 ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created


Expose the port:

[root@rh-master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed


Check the service status until the pod's status is Running:

[root@rh-master01 ~]# kubectl get pods,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-7cbb8cd5d-x6rxg    0/1     ContainerCreating   0          15s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        19m
service/nginx        NodePort    10.108.73.69   <none>        80:32072/TCP   9s


http://192.168.80.134:32072    # the "Welcome to nginx!" page should now be visible
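The same check can be done from the command line; this is just a quick verification using the VIP and the NodePort (32072) shown in the output above, which will differ on your cluster:

curl -s http://192.168.80.134:32072 | grep -i title      # should print the "Welcome to nginx!" title line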




4. kubectl command auto-completion in kubernetes



kubectl provides auto-completion support for Bash, Zsh, Fish, and PowerShell, which can save a great deal of typing.

The kubectl completion script for Bash can be generated with a single command; it depends on the bash-completion package. The following demonstrates setting up auto-completion for Bash.

For setting up completion in Fish and Zsh, see the links at the end of this article.


4.1 Install bash-completion

yum install -y bash-completion


4.2 Configure bash completion

1) Per-user completion (choose one of the two methods)

echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc


2) System-wide completion (choose one of the two methods)

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null


4.3 Setting up a kubectl alias

If you want an alias for kubectl, you can extend shell completion to work with that alias:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc


4.4 Notes

bash-completion places all completion scripts in /etc/bash_completion.d.

The two methods are equivalent. After reloading the shell, kubectl auto-completion should work.


4.5 Verify kubectl auto-completion

[root@rh-master01 ~]# . .bashrc      # reload the configuration so it takes effect
[root@rh-master01 ~]# k v            # type "k v" and press Tab; the "version" argument is completed automatically
[root@rh-master01 ~]# k version




5. References


https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/#%E5%A0%86%E5%8F%A0-stacked-etcd-%E6%8B%93%E6%89%91
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%87%86%E5%A4%87%E5%BC%80%E5%A7%8B
https://www.yuque.com/fairy-era/yg511q/gf89ub#3ed9fdbb
https://www.yuque.com/fairy-era/yg511q/hg3u04
https://stackoverflow.com/questions/49250310/kube-proxy-crashloopbackoff-after-kubeadm-init/49258389
https://blog.csdn.net/qq_15138049/article/details/122558489





- End -



This article is intended as an exchange of ideas; comments on any shortcomings are welcome.




 

