
k8s Deployment - 17 - Network Plugin and DNS Resolution Service Configuration

 

In the previous articles we deployed nearly all the components of our binary-installed k8s cluster. All that remains is the network plugin and DNS. Here we will use Calico as the network plugin and CoreDNS for DNS resolution.


Network plugin installation

PS: this step only needs to be executed on node1, i.e. the master node.
The link below is the official documentation; have a look if you are interested:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises


If your cluster has fewer than 50 nodes, download the manifest with:
[root@node1 ~]# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
[root@node1 ~]# ls calico.yaml
calico.yaml
[root@node1 ~]#


If you have more than 50 nodes, download with this command instead:
[root@node1 ~]# curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml
[root@node1 ~]# ls calico.yaml
calico.yaml
[root@node1 ~]#
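
Either way, if you want to see which container images the manifest will pull (handy if you plan to pre-pull or mirror them), an optional quick check; the exact tags depend on when you downloaded the file:
# List the images referenced by the manifest you just downloaded.
[root@node1 ~]# grep "image:" calico.yaml | sort -u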


Modify IP autodetection and the pod CIDR:
Why? If a server has multiple IP addresses or a number of virtual NICs, Calico can sometimes autodetect the wrong address, so configure it as follows.
[root@node1 ~]# vim calico.yaml

# There are two settings to change.

# Before:
            - name: IP
              value: "autodetect"
# After:
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP

# Before (commented out by default):
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
# After:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.200.0.0/16"

[root@node1 ~]#
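
Before applying, you can optionally confirm that both edits landed; a minimal check, assuming the default layout of the downloaded manifest:
# Show the IP autodetection block and the pool CIDR after editing.
[root@node1 ~]# grep -A 3 "name: IP$" calico.yaml
[root@node1 ~]# grep -A 1 "CALICO_IPV4POOL_CIDR" calico.yaml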


Apply it:
[root@node1 ~]# kubectl apply -f calico.yaml
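
If you prefer not to poll by hand, these optional commands block until the Calico workloads defined in calico.yaml finish rolling out:
# Wait until every calico-node pod is ready and the controllers Deployment is available.
[root@node1 ~]# kubectl -n kube-system rollout status daemonset/calico-node
[root@node1 ~]# kubectl -n kube-system rollout status deployment/calico-kube-controllers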


After a few minutes the status should look like the output below. If a pod's status is not Running, the image is probably still being pulled; wait a few more minutes.
[root@node1 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    <none>   15h   v1.20.2
node3   Ready    <none>   15h   v1.20.2
[root@node1 ~]#
[root@node1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-858c9597c8-6gzd5   1/1     Running   0          43m
calico-node-6k479                          1/1     Running   0          43m
calico-node-bnbxx                          1/1     Running   0          43m
nginx-proxy-node3                          1/1     Running   1          15h
[root@node1 ~]#


DNS resolution

Set the cluster IP for DNS:
# This IP must match the one used in the apiserver configuration earlier in the series.
[root@node1 ~]# COREDNS_CLUSTER_IP=10.233.0.10
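
Not sure which range was configured back then? One optional way to double-check, assuming kube-apiserver is running on this node, is to look at the flags it was started with; the value exported above has to be an address inside that service CIDR:
# COREDNS_CLUSTER_IP must fall inside the apiserver's --service-cluster-ip-range.
[root@node1 ~]# ps -ef | grep kube-apiserver | grep -o "service-cluster-ip-range=[^ ]*"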


Create the coredns.yaml manifest:
# The manifest below can be used as-is; nothing in it needs to be modified.
[root@node1 ~]# vim coredns.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            prefer_udp
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "coredns"
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: ${COREDNS_CLUSTER_IP}
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "coredns"
  namespace: kube-system
  labels:
    k8s-app: "kube-dns"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "coredns"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 10%
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  k8s-app: kube-dns
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: In
                    values:
                      - ""
      containers:
        - name: coredns
          image: "docker.io/coredns/coredns:1.6.7"
          imagePullPolicy: IfNotPresent
          resources:
            # TODO: Set memory limits when we've profiled the container for large
            # clusters, then set request = limit to keep this container in
            # guaranteed class. Currently, this container falls into the
            # "burstable" category so the kubelet doesn't backoff from restarting it.
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 10
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile

[root@node1 ~]# sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" coredns.yaml
[root@node1 ~]#
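
Optionally confirm that the placeholder was actually replaced:
# The clusterIP field should now show 10.233.0.10 rather than ${COREDNS_CLUSTER_IP}.
[root@node1 ~]# grep "clusterIP" coredns.yaml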


Apply it:
[root@node1 ~]# kubectl apply -f coredns.yaml
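
To verify that the Service picked up the expected cluster IP and that both replicas start, these optional checks use the names and labels from the manifest above:
# The coredns Service should report CLUSTER-IP 10.233.0.10.
[root@node1 ~]# kubectl -n kube-system get svc coredns
# Both coredns pods (label k8s-app=kube-dns) should reach Running.
[root@node1 ~]# kubectl -n kube-system get pod -l k8s-app=kube-dns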


Deploy NodeLocal DNSCache:

[root@node1 ~]# COREDNS_CLUSTER_IP=10.233.0.10

# The manifest below can be used as-is; nothing in it needs to be modified.
[root@node1 ~]# vim nodelocaldns.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . ${COREDNS_CLUSTER_IP} {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . ${COREDNS_CLUSTER_IP} {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . ${COREDNS_CLUSTER_IP} {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: nodelocaldns
  template:
    metadata:
      labels:
        k8s-app: nodelocaldns
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9253'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: nodelocaldns
      hostNetwork: true
      dnsPolicy: Default  # Don't use cluster DNS.
      tolerations:
        - effect: NoSchedule
          operator: "Exists"
        - effect: NoExecute
          operator: "Exists"
      containers:
        - name: node-cache
          image: "registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0"
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-localip", "169.254.25.10", "-conf", "/etc/coredns/Corefile", "-upstreamsvc", "coredns" ]
          securityContext:
            privileged: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9253
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              host: 169.254.25.10
              path: /health
              port: 9254
              scheme: HTTP
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              host: 169.254.25.10
              path: /health
              port: 9254
              scheme: HTTP
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 10
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
            - name: xtables-lock
              mountPath: /run/xtables.lock
      volumes:
        - name: config-volume
          configMap:
            name: nodelocaldns
            items:
              - key: Corefile
                path: Corefile
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 20%
    type: RollingUpdate
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile

[root@node1 ~]#
[root@node1 ~]# sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" nodelocaldns.yaml
[root@node1 ~]#
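
As with coredns.yaml, you can optionally confirm the substitution before applying:
# The forward targets should now point at 10.233.0.10 instead of the placeholder.
[root@node1 ~]# grep "forward \." nodelocaldns.yaml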


Apply it:
[root@node1 ~]# kubectl apply -f nodelocaldns.yaml
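
Two optional checks, assuming the ss utility is available on the node: the DaemonSet should schedule one cache pod per node, and each node should have a listener on the link-local address 169.254.25.10 configured above:
# One nodelocaldns pod per node, all Ready.
[root@node1 ~]# kubectl -n kube-system get daemonset nodelocaldns
# The node-local cache listens on 169.254.25.10:53 (plus :9253/:9254 for metrics and health).
[root@node1 ~]# ss -lntup | grep 169.254.25.10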


The final state should look like this, which means everything succeeded:
[root@node1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-858c9597c8-6gzd5   1/1     Running   0          62m
calico-node-6k479                          1/1     Running   0          62m
calico-node-bnbxx                          1/1     Running   0          62m
coredns-84646c885d-6fsjk                   1/1     Running   0          7m11s
coredns-84646c885d-sdb6l                   1/1     Running   0          7m11s
nginx-proxy-node3                          1/1     Running   1          15h
nodelocaldns-gj9xf                         1/1     Running   0          3m17s
nodelocaldns-sw9jh                         1/1     Running   0          3m17s
[root@node1 ~]#
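
If you want a quick smoke test ahead of the next article, resolving a cluster-internal name from a throwaway pod is enough; this assumes the busybox:1.28 image can be pulled from your nodes:
# nslookup should return the cluster IP of the kubernetes Service, answered by the cluster DNS.
[root@node1 ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default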


With that, the binary-based k8s cluster is fully deployed. In the next article we will run a simple test to see whether everything in the cluster works as expected.


