
k8s series - 10 - Cluster Verification and Accessing k8s Through a Graphical Interface



 

In the previous article we successfully deployed the latest version of k8s with kubespray. In this one we look at exactly which services were installed, and how to access the k8s cluster through a graphical interface.


 

Inspecting the Cluster

1. List the namespaces

[root@node1 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
ingress-nginx     Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
[root@node1 ~]#


2. What is in the default namespace

[root@node1 ~]# kubectl get all -n default
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP   23h
[root@node1 ~]#


3. What is in the ingress-nginx namespace

[root@node1 ~]# kubectl get all -n ingress-nginx
NAME                                 READY   STATUS    RESTARTS      AGE
pod/ingress-nginx-controller-68hbq   1/1     Running   1 (14m ago)   23h
pod/ingress-nginx-controller-clt8p   1/1     Running   1 (14m ago)   23h
pod/ingress-nginx-controller-hmcf6   1/1     Running   1 (14m ago)   23h

NAME                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/ingress-nginx-controller    3         3         3       3            3           kubernetes.io/os=linux   23h
[root@node1 ~]#
There are three pods and one DaemonSet, which means every node runs an ingress-nginx controller.
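To confirm the pod-to-node distribution for yourself, you can add -o wide to the pod listing; this is just a quick check, not part of the original walkthrough:

# -o wide adds a NODE column, so you should see one controller pod per node
kubectl get pod -n ingress-nginx -o wide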


4. What is in kube-node-lease and kube-public

[root@node1 ~]# kubectl get all -n kube-node-lease
No resources found in kube-node-lease namespace.
[root@node1 ~]# kubectl get all -n kube-public
No resources found in kube-public namespace.
[root@node1 ~]#
These two namespaces appear to be empty.
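Strictly speaking, kubectl get all only covers a fixed set of workload and Service resource types. kube-node-lease does hold the node heartbeat Lease objects; you can list them explicitly if you are curious (a quick check, not in the original article):

# Lease objects are not included in "kubectl get all"
kubectl get lease -n kube-node-lease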


5. What is in kube-system

[root@node1 ~]# kubectl get all -n kube-system
NAME                                              READY   STATUS    RESTARTS      AGE
pod/calico-kube-controllers-5788f6558-tv62d       1/1     Running   2 (21m ago)   24h
pod/calico-node-lv2mq                             1/1     Running   1 (21m ago)   24h
pod/calico-node-nvlvd                             1/1     Running   1 (21m ago)   24h
pod/calico-node-r9znq                             1/1     Running   1 (21m ago)   24h
pod/coredns-8474476ff8-2zmkb                      1/1     Running   1 (21m ago)   23h
pod/coredns-8474476ff8-bjssc                      1/1     Running   1 (21m ago)   23h
pod/dns-autoscaler-5ffdc7f89d-pstjw               1/1     Running   1 (21m ago)   23h
pod/kube-apiserver-node1                          1/1     Running   2 (21m ago)   24h
pod/kube-apiserver-node2                          1/1     Running   2 (21m ago)   24h
pod/kube-controller-manager-node1                 1/1     Running   2 (21m ago)   24h
pod/kube-controller-manager-node2                 1/1     Running   2 (21m ago)   24h
pod/kube-proxy-8qfm5                              1/1     Running   1 (21m ago)   24h
pod/kube-proxy-d8d7d                              1/1     Running   1 (21m ago)   24h
pod/kube-proxy-vlfb2                              1/1     Running   1 (21m ago)   24h
pod/kube-scheduler-node1                          1/1     Running   3 (21m ago)   24h
pod/kube-scheduler-node2                          1/1     Running   2 (21m ago)   24h
pod/kubernetes-dashboard-548847967d-5fk9w         1/1     Running   1 (21m ago)   23h
pod/kubernetes-metrics-scraper-6d49f96c97-thk7v   1/1     Running   1 (21m ago)   23h
pod/nginx-proxy-node3                             1/1     Running   1 (21m ago)   24h
pod/nodelocaldns-5lfw2                            0/1     Pending   0             23h
pod/nodelocaldns-9pk5m                            1/1     Running   1 (21m ago)   23h
pod/nodelocaldns-d8m7h                            0/1     Pending   0             23h

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/coredns                     ClusterIP   10.200.0.3       <none>        53/UDP,53/TCP,9153/TCP   23h
service/dashboard-metrics-scraper   ClusterIP   10.200.177.189   <none>        8000/TCP                 23h
service/kubernetes-dashboard        ClusterIP   10.200.30.147    <none>        443/TCP                  23h

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node    3         3         3       3            3           kubernetes.io/os=linux   24h
daemonset.apps/kube-proxy     3         3         3       3            3           kubernetes.io/os=linux   24h
daemonset.apps/nodelocaldns   3         3         1       3            1           kubernetes.io/os=linux   23h

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers      1/1     1            1           24h
deployment.apps/coredns                      2/2     2            2           23h
deployment.apps/dns-autoscaler               1/1     1            1           23h
deployment.apps/kubernetes-dashboard         1/1     1            1           23h
deployment.apps/kubernetes-metrics-scraper   1/1     1            1           23h

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-5788f6558       1         1         1       24h
replicaset.apps/coredns-8474476ff8                      2         2         2       23h
replicaset.apps/dns-autoscaler-5ffdc7f89d               1         1         1       23h
replicaset.apps/kubernetes-dashboard-548847967d         1         1         1       23h
replicaset.apps/kubernetes-metrics-scraper-6d49f96c97   1         1         1       23h
[root@node1 ~]#
This namespace contains many pods, Services, DaemonSets, Deployments, and ReplicaSets. We have already covered pods; what do the other resource types represent? Let's go through them.


6. Resource types

Pod: the smallest deployable unit in k8s
ReplicaSet: a controller that keeps a given number of pod replicas running, selected by labels
Deployment: a controller that manages stateless applications (it creates and manages ReplicaSets)
StatefulSet: a controller that manages stateful applications
DaemonSet: runs one copy of a pod on every matching node
Job: one-off batch workloads
CronJob: scheduled batch workloads
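To see how these fit together in the running cluster, you can chase a Deployment down to its ReplicaSet and pods. A quick sketch; the k8s-app=kube-dns label value is assumed from the usual coredns convention, adjust it if your deployment uses different labels:

# a Deployment creates a ReplicaSet, which in turn creates the pods
kubectl get deployment,replicaset,pod -n kube-system -l k8s-app=kube-dns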


Everything above was run on node1, which is a master node; feel free to repeat it on the other nodes yourself.


nginx-proxy

From the kube-system namespace above you can see that nginx-proxy runs only on node3, so we need to switch to node3 to look at it. As for why it only runs on node3, we will find out in a moment.
[root@node3 ~]# crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME                         ATTEMPT   POD ID
570b3da43f846   a9f76bcccfb5f   35 minutes ago   Running   ingress-nginx-controller     1         a6c5c45508d8c
a11842fca1afe   6570786a0fd3b   36 minutes ago   Running   calico-node                  1         641441cd33f53
c6b5f26c06708   7801cfc6d5c07   36 minutes ago   Running   kubernetes-metrics-scraper   1         526b653df8827
dae61986d67eb   296a6d5035e2d   36 minutes ago   Running   coredns                      1         a04b5d5cf27a2
f3f2e6a0677c1   72f07539ffb58   36 minutes ago   Running   kubernetes-dashboard         1         6d28d27cd415b
ce6269e3270f8   fcd3512f2a7c5   36 minutes ago   Running   calico-kube-controllers      2         5f489f39e9c14
fc1f16997de4a   5bae806f8f123   36 minutes ago   Running   node-cache                   1         331eac1c6a981
7dd92d5eeeebf   8f8fdd6672d48   36 minutes ago   Running   kube-proxy                   1         69655ac71243d
1d5821872d6ac   f6987c8d6ed59   36 minutes ago   Running   nginx-proxy                  1         f5e8c2f953ca7
[root@node3 ~]# cat /etc/kubernetes/manifests/nginx-proxy.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
  annotations:
    nginx-cfg-checksum: "a9814dd8ff52d61bc33226a61d3159315ba1c9ad"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.21.4
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
[root@node3 ~]#


From the "volumeMounts" section above you can see that the host directory /etc/nginx is mounted into the pod, so let's look at what is inside it:

[root@node3 ~]# cat /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.112.130:6443;
    server 192.168.112.131:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
[root@node3 ~]#


Pay particular attention to this section:

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.112.130:6443;
    server 192.168.112.131:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}



Now it makes sense: this nginx load-balances the kube-apiserver instances on the master nodes. Since node1 and node2 are the masters and their API servers already occupy port 6443, nginx-proxy is not started on them; it only runs on node3, which reaches the API servers through 127.0.0.1:6443.
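You can verify this from node3 itself; a quick check, assuming ss and curl are installed on the node:

# nginx-proxy should be listening on 127.0.0.1:6443 on node3
ss -lntp | grep 6443

# the API server answers through the local proxy (-k skips certificate verification)
curl -k https://127.0.0.1:6443/version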


Cleaning Up the Proxy

Remember the outbound proxy we added during installation and never removed? We no longer need it, so let's clean it up now.



This needs to be done on every node:
[root@node1 ~]# rm -f /etc/systemd/system/containerd.service.d/http-proxy.conf
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart containerd
[root@node1 ~]# grep 8118 -r /etc/yum*
/etc/yum.conf:proxy=http://192.168.112.130:8118
[root@node1 ~]#
[root@node1 ~]# vim /etc/yum.conf
# Delete the proxy line found by the grep above.
[root@node1 ~]#
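If you want to confirm that containerd really dropped the proxy settings after the restart, you can inspect its environment; a quick optional check:

# should no longer show any HTTP_PROXY/HTTPS_PROXY entries
systemctl show containerd --property=Environment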


Testing the Cluster

1. Create an nginx DaemonSet:
[root@node1 ~]# mkdir k8s
[root@node1 ~]# cd k8s/
[root@node1 k8s]# touch nginx-ds.yml
[root@node1 k8s]# vim nginx-ds.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
[root@node1 k8s]# kubectl apply -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@node1 k8s]#


2. Check the running status:
# -o wide prints extra information; for pods it shows the node each pod runs on
[root@node1 k8s]# kubectl get pod -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
nginx-ds-kwdp6   1/1     Running   0          8m11s   10.233.28.9    node3   <none>           <none>
nginx-ds-tf6mh   1/1     Running   0          8m11s   10.233.44.5    node2   <none>           <none>
nginx-ds-v5w8c   1/1     Running   0          8m11s   10.233.154.5   node1   <none>           <none>
[root@node1 k8s]#
You can see an nginx pod running on each of the three nodes. If the command above does not show IP addresses yet, wait a moment and run it again.


3. Ping the pod IPs to check connectivity. Ideally run this from every node; I only show it from one:
[root@node1 k8s]# ping 10.233.28.9
PING 10.233.28.9 (10.233.28.9) 56(84) bytes of data.
64 bytes from 10.233.28.9: icmp_seq=1 ttl=63 time=0.662 ms
64 bytes from 10.233.28.9: icmp_seq=2 ttl=63 time=0.505 ms
^C
--- 10.233.28.9 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.505/0.583/0.662/0.082 ms
[root@node1 k8s]# ping 10.233.44.5
PING 10.233.44.5 (10.233.44.5) 56(84) bytes of data.
64 bytes from 10.233.44.5: icmp_seq=1 ttl=63 time=0.409 ms
64 bytes from 10.233.44.5: icmp_seq=2 ttl=63 time=1.40 ms
^C
--- 10.233.44.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.409/0.908/1.407/0.499 ms
[root@node1 k8s]# ping 10.233.154.5
PING 10.233.154.5 (10.233.154.5) 56(84) bytes of data.
64 bytes from 10.233.154.5: icmp_seq=1 ttl=64 time=0.163 ms
64 bytes from 10.233.154.5: icmp_seq=2 ttl=64 time=0.061 ms
^C
--- 10.233.154.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.061/0.112/0.163/0.051 ms
[root@node1 k8s]#


4. Check Service reachability:
[root@node1 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.200.0.1      <none>        443/TCP        27h
nginx-ds     NodePort    10.200.255.83   <none>        80:30962/TCP   32m
[root@node1 k8s]#


5. Access the service:
[root@node1 k8s]# curl http://10.200.255.83:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@node1 k8s]#


6. Check node reachability, i.e. access through the physical servers' IP addresses and the NodePort:
[root@node1 k8s]# curl http://192.168.112.130:30962
[root@node1 k8s]# curl http://192.168.112.131:30962
[root@node1 k8s]# curl http://192.168.112.132:30962
Each of them should return the same welcome page as above.
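If you only care whether each NodePort answers, you can compare HTTP status codes instead of reading the full page every time; a small optional sketch using the node IPs and port from the output above:

# print just the HTTP status code for each node's NodePort
for ip in 192.168.112.130 192.168.112.131 192.168.112.132; do
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" http://$ip:30962
done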

Checking DNS Availability
Create another nginx pod:
[root@node1 k8s]# vim nginx-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.19
    ports:
    - containerPort: 80
[root@node1 k8s]#


Create the pod:
[root@node1 k8s]# kubectl apply -f nginx-pod.yml
pod/nginx created
[root@node1 k8s]#


Check DNS:
[root@node1 k8s]# kubectl get pod -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
nginx            1/1     Running   0          5m14s   10.233.28.10   node3   <none>           <none>
nginx-ds-kwdp6   1/1     Running   0          51m     10.233.28.9    node3   <none>           <none>
nginx-ds-tf6mh   1/1     Running   0          51m     10.233.44.5    node2   <none>           <none>
nginx-ds-v5w8c   1/1     Running   0          51m     10.233.154.5   node1   <none>           <none>
[root@node1 k8s]# kubectl exec nginx -it -- /bin/bash
root@nginx:/# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
nameserver 169.254.25.10
options ndots:5
root@nginx:/#


The nameserver IP is exactly the DNS address we specified during deployment. Let's verify name resolution by curling the nginx-ds Service we created in the previous step, by name:
root@nginx:/# curl nginx-ds
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx:/#


OK, everything above looks fine.



Remember to exit the pod.
root@nginx:/# exit
exit
[root@node1 k8s]#


Logs

[root@node1 k8s]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
nginx            1/1     Running   0          61m
nginx-ds-kwdp6   1/1     Running   0          108m
nginx-ds-tf6mh   1/1     Running   0          108m
nginx-ds-v5w8c   1/1     Running   0          108m
[root@node1 k8s]# kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
[root@node1 k8s]#
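Two standard variations of kubectl logs are often handy: streaming the log instead of dumping it once, and reading the log of the previous container instance after a restart.

# stream logs until you press Ctrl+C
kubectl logs -f nginx

# logs of the previous container instance, useful after a crash or restart
kubectl logs nginx --previous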


Exec

[root@node1 k8s]# kubectl get pods -l app=nginx-ds
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-kwdp6   1/1     Running   0          109m
nginx-ds-tf6mh   1/1     Running   0          109m
nginx-ds-v5w8c   1/1     Running   0          109m
[root@node1 k8s]# kubectl exec nginx-ds-kwdp6 -it -- nginx -v
nginx version: nginx/1.19.10
[root@node1 k8s]#


Accessing the Dashboard

Create a Service to expose the dashboard port:
[root@node1 k8s]# vim dashboard-svc.yml
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: dashboard
  labels:
    app: dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - name: https
    nodePort: 30000
    port: 443
    targetPort: 8443
[root@node1 k8s]# kubectl apply -f dashboard-svc.yml
service/dashboard created
[root@node1 k8s]#
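Before opening the browser, it can be worth confirming that the new Service actually selected the dashboard pod; a quick optional check using the names created above:

# should show NodePort 30000 and an endpoint pointing at the dashboard pod's IP:8443
kubectl get svc,endpoints dashboard -n kube-system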


Open it in your browser (substitute your own IP address):
https://192.168.112.130:30000/#/login


A login page appears, prompting for a token.



We need a token, which we fetch from the server:
[root@node1 k8s]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@node1 k8s]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@node1 k8s]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@node1 k8s]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9JcGxDOGtHeFZ3YWVZN2FpY19sek5CTVh4dVI5NmRKRURnMGV5dUZTN3cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdGN3c3EiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGMxOTMzZWQtMDRlMC00NGE4LTg2MmYtOWFmNWVhNTJiNGJkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.VyDiRYKTppNxemq9cVXHNTSeAxeqmtlHjVq5VD8steWg9Az8KcPrryk0bL42XruHZEZi6vZUEd-iZfl0BPCp4UdNHqYSsdPKnUzNzwD-kwBoZfZEtnI9poqwVjaSWakiDTolKeBEMOaHT1TWqA4rffu0DAlxoXkTs8Vu42bc0sfAN2A6ER57VR115-DeGRRvqG4cjrLC5QdLIOiB7w9KHgo1mngk5lffEBLWRZUz3jv6ecFDytSYaGFJ5FdrwYFqID-dKGShQu9y6DXZu8sjkiAr4tUhtga35m4OakbYWCrxFq29jdCj5zbSDQr1Bokxe9Z2zXOSu3rCqoI_3ODIPg
[root@node1 k8s]#
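Note that this relies on the ServiceAccount token Secret being created automatically, which newer Kubernetes releases (v1.24+) no longer do. On such clusters you can request a short-lived token directly instead; a hedged alternative:

# request a token for the service account directly (Kubernetes v1.24+)
kubectl -n kube-system create token dashboard-admin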


Copy the token and paste it into the login page in the browser.



After entering the token, click "Sign in" and the dashboard overview appears.



From here you can see all of the cluster's information, edit YAML, view logs, and open a shell inside a pod. Below we look at viewing logs and exec'ing into a pod; I won't walk through every other feature, so click around and explore them yourself.


To view a pod's logs, open the pod in the dashboard and choose the logs view.


From the top-right corner of the log view you can download the logs and perform a few other small operations.


To get a shell inside a pod, use the exec option on the pod's page in the same way.



That wraps up this article. Practice, practice, practice.

