
Deploying an ELK Logging System with K8s + Docker





Deploying an ELK logging system with K8s + Docker, an essential skill for distributed/microservice architectures!



Prerequisite: this assumes you already have Docker, a private Docker registry, and a basic K8s environment installed and integrated!


Deploying Elasticsearch



1. Official reference documentation:

https://www.elastic.co/guide/en/elasticsearch/reference/7.8/docker.html


The ELK components must run matching versions, so everything here is standardized on 7.8.0 (if you'd rather not use the official images, building your own works too).


2. Pick the node that will run ES, for example node7, and give it the ES deployment label:


kubectl label nodes node7 deploy.elk=true
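To double-check, a label selector query will show whether the node now carries the label:

kubectl get nodes -l deploy.elk=true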


3. On node7, create the bind-mount directory and open up its permissions:


mkdir -p /opt/apps-mgr/es/
chmod -R 777 /opt/apps-mgr/es


4. Tune the kernel parameters so the system meets ES's production-mode requirements:


# /etc/security/limits.conf, append at the end:
* - nofile 65535
* - nproc 4096

# /etc/fstab: comment out every line containing "swap"

# /etc/sysctl.conf, append at the end, then run `sysctl -p` to apply:
vm.max_map_count=262144
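Note that commenting out swap in /etc/fstab only takes effect after a reboot, so swap should also be disabled immediately; the other settings can be verified from a fresh login shell:

swapoff -a                 # disable swap right away (the fstab edit applies on reboot)
ulimit -n                  # expect 65535
ulimit -u                  # expect 4096
sysctl vm.max_map_count    # expect vm.max_map_count = 262144
free -m | grep -i swap     # expect 0 used/total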


5. Pull the image from Docker's official registry and push it to the internal private registry:


docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0
docker tag docker.elastic.co/elasticsearch/elasticsearch:7.8.0 10.68.60.103:5000/elasticsearch:7.8.0
docker push 10.68.60.103:5000/elasticsearch:7.8.0
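Assuming the private registry at 10.68.60.103:5000 exposes the standard Docker Registry v2 HTTP API, you can confirm the push succeeded:

curl http://10.68.60.103:5000/v2/_catalog
curl http://10.68.60.103:5000/v2/elasticsearch/tags/list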



6. Write es-deployment.yaml with the following content:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-deployment
  namespace: my-namespace
  labels:
    app: es-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: es-pod
  template:
    metadata:
      labels:
        app: es-pod
    spec:
      nodeSelector:
        deploy.elk: "true"
      restartPolicy: Always
      containers:
      - name: tsale-server-container
        image: "10.68.60.103:5000/elasticsearch:7.8.0"
        ports:
        - containerPort: 9200
        env:
        - name: node.name
          value: "es01"
        - name: cluster.name
          value: "tsale-sit2-es"
        - name: discovery.seed_hosts
          value: "10.68.60.111"
        - name: cluster.initial_master_nodes
          value: "es01"
        - name: bootstrap.memory_lock
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx1g"
        resources:
          limits:
            memory: "1G"
            cpu: "1"
          requests:
            memory: "512Mi"
            cpu: "500m"
        volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: "es-data-volume"
        - mountPath: "/usr/share/elasticsearch/logs"
          name: "es-logs-volume"
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: "es-data-volume"
        hostPath:
          path: "/opt/apps-mgr/es/data"
          type: DirectoryOrCreate
      - name: "es-logs-volume"
        hostPath:
          path: "/opt/apps-mgr/es/logs"
          type: DirectoryOrCreate


Note: by default, the Elasticsearch Docker image runs ES with uid:gid 1000:0.


7. Apply the Deployment:


kubectl apply -f es-deployment.yaml
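To watch the rollout complete before moving on:

kubectl rollout status deployment/es-deployment -n my-namespace
kubectl get pods -n my-namespace -o wide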


8. Create Services to expose the ports externally. Note that NodePort values are restricted to 30000-32767 by default, so using 9200/9300 directly, as below, requires widening the kube-apiserver --service-node-port-range:


apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: es-service-9200
spec:
  type: NodePort
  selector:
    app: es-pod
  ports:
  - protocol: TCP
    port: 9200
    targetPort: 9200
    nodePort: 9200
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: es-service-9300
spec:
  type: NodePort
  selector:
    app: es-pod
  ports:
  - protocol: TCP
    port: 9300
    targetPort: 9300
    nodePort: 9300


9. Apply the Services:


kubectl apply -f es-service.yaml


10. Open in a browser:


http://node7:9200/_cat/health
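The same check works from a shell; the ?v parameter adds column headers to the _cat output:

curl 'http://node7:9200/_cat/health?v'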


If it returns a response like the following, the deployment succeeded:


[Screenshot: _cat/health response from Elasticsearch]



11. If startup fails, analyze it through the logs:


kubectl get pods -n my-namespace -o wide
kubectl logs -f es-deployment-67f47f8d44-7cj5p -n my-namespace
kubectl describe pod es-deployment-986bc449f-gnjb4 -n my-namespace
# Or log in to node7 directly and read the logs bound to the host:
less /opt/apps-mgr/es/logs/




Deploying Kibana




1. Official reference documentation:

https://www.elastic.co/guide/en/kibana/7.8/docker.html#environment-variable-config


2. Pull the image from Docker's official registry and push it to the internal private registry:


docker pull docker.elastic.co/kibana/kibana:7.8.0
docker tag docker.elastic.co/kibana/kibana:7.8.0 10.68.60.103:5000/kibana:7.8.0
docker push 10.68.60.103:5000/kibana:7.8.0


If you'd rather not use the official image, building your own works too.


3. Make sure the node that will run Kibana, again node7, carries the deploy.elk label (already applied in the Elasticsearch section):


kubectl label nodes node7 deploy.elk=true


4. On node7, create the bind-mount directory and open up its permissions:


mkdir -p /opt/apps-mgr/kibana/
chmod -R 777 /opt/apps-mgr/kibana


5. Create the kibana.yml configuration file and set a few key options:


vi /opt/apps-mgr/kibana/kibana.yml, with the following content (the file must be named kibana.yml to match the hostPath mount below):

elasticsearch.hosts: http://10.68.60.111:9200
server.host: 0.0.0.0
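Before starting Kibana it's worth confirming that Elasticsearch is actually reachable at that address, for example via the cluster health API:

curl -s 'http://10.68.60.111:9200/_cluster/health?pretty'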


For more kibana.yml options, consult the official documentation:


https://www.elastic.co/guide/en/kibana/7.8/settings.html



6. Write kibana-deployment.yaml:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-deployment
  namespace: my-namespace
  labels:
    app: kibana-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana-pod
  template:
    metadata:
      labels:
        app: kibana-pod
    spec:
      nodeSelector:
        deploy.elk: "true"
      restartPolicy: Always
      containers:
      - name: kibana-container
        image: "10.68.60.103:5000/kibana:7.8.0"
        ports:
        - containerPort: 5601
        volumeMounts:
        - mountPath: "/usr/share/kibana/config/kibana.yml"
          name: "kibana-conf-volume"
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: "kibana-conf-volume"
        hostPath:
          path: "/opt/apps-mgr/kibana/kibana.yml"
          type: File


7. Apply the Deployment:


kubectl apply -f kibana-deployment.yaml


8. Check the startup status:


kubectl get pods -n my-namespace -o wide
kubectl logs -f kibana-deployment-67f47f8d44-7cj5p -n my-namespace
kubectl describe pod kibana-deployment-986bc449f-gnjb4 -n my-namespace


9. Create a Service to expose the port externally (the same NodePort range caveat as for Elasticsearch applies to 5601):


apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: kibana-service
spec:
  type: NodePort
  selector:
    app: kibana-pod
  ports:
  - protocol: TCP
    port: 5601
    targetPort: 5601
    nodePort: 5601


10. Apply the Service:


kubectl apply -f kibana-service.yaml
kubectl get service -n my-namespace


11. Open in a browser:


http://node7:5601
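Kibana also exposes a status API, which is handy for scripted health checks:

curl -s 'http://node7:5601/api/status'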


If the page below appears, Kibana is deployed and working:


[Screenshot: Kibana home page]


So far we have only deployed Kibana itself; once Logstash is configured in the next section, we can go further and use the UI to set up index patterns and browse the corresponding ES log indices.


Deploying Logstash


1. For Logstash we build our own image, because the official image sets restrictive directory ownership that triggers a Read-only file system error when configuration is injected. Create the image build directory:


mkdir -p /opt/docker/build/logstash


2. Download the matching Logstash version:


wget https://artifacts.elastic.co/downloads/logstash/logstash-7.8.0.tar.gz
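Optionally, the download can be verified first: Elastic publishes a .sha512 checksum file next to each artifact:

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.8.0.tar.gz.sha512
sha512sum -c logstash-7.8.0.tar.gz.sha512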


3. Write the Dockerfile with the following content:


FROM 10.68.60.103:5000/jdk:v1.8.0_181_x64
LABEL maintainer="lazy"
ADD logstash-7.8.0.tar.gz /
RUN mkdir -p /opt/soft && \
    mv /logstash-7.8.0 /opt/soft/logstash && \
    mkdir -p /opt/soft/logstash/pipeline && \
    cp -R /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
WORKDIR /opt/soft/logstash
ENV LOGSTASH_HOME /opt/soft/logstash
ENV PATH $LOGSTASH_HOME/bin:$PATH
ENTRYPOINT [ "sh", "-c", "/opt/soft/logstash/bin/logstash" ]


Note: the base image 10.68.60.103:5000/jdk:v1.8.0_181_x64 is a JDK 8 image we built ourselves and pushed to the internal registry.


4. Build the image and push it to the private registry:


docker build -t 10.68.60.103:5000/logstash:7.8.0 -f Dockerfile .
docker push 10.68.60.103:5000/logstash:7.8.0
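The freshly built image can be smoke-tested locally with an inline stdin-to-stdout pipeline; the entrypoint is overridden so the -e flag (an inline pipeline definition) can be passed. Type a line and it is echoed back as an event; Ctrl+C to stop:

docker run --rm -i --entrypoint /opt/soft/logstash/bin/logstash \
  10.68.60.103:5000/logstash:7.8.0 \
  -e 'input { stdin {} } output { stdout {} }'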


5. Write logstash-daemonset.yaml with the following content:


---
# ConfigMap defining the Logstash configuration files
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: my-namespace
  labels:
    k8s-app: logstash
data:
  logstash.yml: |-
    # Node description
    node.name: ${NODE_NAME}
    # Path for persisted data
    path.data: /opt/soft/logstash/data
    # Pipeline ID
    pipeline.id: ${NODE_NAME}
    # Main pipeline configuration directory (must be created manually)
    path.config: /opt/soft/logstash/pipeline
    # Periodically check whether the configuration changed and reload it when it has
    config.reload.automatic: true
    # Interval at which Logstash checks the configuration files
    config.reload.interval: 30s
    # Bind address
    http.host: "0.0.0.0"
    # Bind port
    http.port: 9600
    # Logstash log directory
    path.logs: /opt/soft/logstash/logs
  logstash.conf: |-
    # Input section
    input {
      # File input plugin
      file {
        # Collector ID
        id => "admin-server"
        # Each line ending in \n is emitted as one event
        # Exclude files ending in .gz
        exclude => "*.gz"
        # tail mode
        mode => tail
        # Start reading from the end of the file
        start_position => "end"
        # Paths of the source files to collect
        path => ["/opt/apps-mgr/admin-server/logs/*.log"]
        # Add a type field to every event
        type => "admin-server"
        # Per-event codec, similar to a filter;
        # multiline joins several lines into a single event
        codec => multiline {
          # Lines starting with a timestamp...
          pattern => "^%{TIMESTAMP_ISO8601}"
          # ...negated: lines NOT matching the pattern trigger the "what" strategy
          negate => true
          # "previous" appends such lines to the preceding event
          what => "previous"
          # In short: any line that does not start with a timestamp is appended
          # to the previous event, so e.g. an entire Java stack trace becomes one
          # event, because Java error lines do not start with a timestamp.
        }
      }
    }
    input {
      # File input plugin
      file {
        # Collector ID
        id => "api-server"
        # Each line ending in \n is emitted as one event
        # Exclude files ending in .gz
        exclude => "*.gz"
        # tail mode
        mode => tail
        # Start reading from the end of the file
        start_position => "end"
        # Paths of the source files to collect
        path => ["/opt/apps-mgr/api-server/logs/*.log"]
        # Add a type field to every event
        type => "api-server"
        # Same multiline codec as above
        codec => multiline {
          pattern => "^%{TIMESTAMP_ISO8601}"
          negate => true
          what => "previous"
        }
      }
    }
    # Filter section
    filter {
      # No filtering; events pass to the output stage unchanged
    }
    # Output section
    output {
      # Ship to Elasticsearch
      if [type] == "admin-server" {
        elasticsearch {
          hosts => ["http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
          index => "admin_server_%{+YYYY.MM.dd}"
        }
      }
      if [type] == "api-server" {
        elasticsearch {
          hosts => ["http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
          index => "api_server_%{+YYYY.MM.dd}"
        }
      }
    }
---
# A DaemonSet runs one Pod on each of the selected nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash
  namespace: my-namespace
  labels:
    k8s-app: logstash
spec:
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
    spec:
      # Node label that selects where the DaemonSet runs
      nodeSelector:
        deploy.type: biz_app
      # Service account granting the DaemonSet its permissions
      serviceAccountName: logstash
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      # Disable service-link environment variables
      enableServiceLinks: false
      containers:
      - name: logstash-container
        image: 10.68.60.103:5000/logstash:7.8.0
        # Do not pull if the image already exists locally
        imagePullPolicy: IfNotPresent
        # Environment variables injected into the container
        env:
        - name: ELASTICSEARCH_HOST
          value: "10.68.60.111"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # resources:
        #   limits:
        #     memory: 256Mi
        #   requests:
        #     cpu: 200m
        #     memory: 100Mi
        securityContext:
          # Run as user 0 (root)
          runAsUser: 0
        # Mounts
        volumeMounts:
        - name: logstash-config-volume
          mountPath: /opt/soft/logstash/config/logstash.yml
          subPath: logstash.yml
        - name: logstash-pipeline-volume
          mountPath: /opt/soft/logstash/pipeline/logstash.conf
          subPath: logstash.conf
        - name: logstash-collect-volume
          mountPath: /opt/apps-mgr
        - name: logstash-data-volume
          mountPath: /opt/soft/logstash/data
        - name: logstash-logs-volume
          mountPath: /opt/soft/logstash/logs
      volumes:
      - name: logstash-config-volume
        configMap:
          name: logstash-config
          defaultMode: 0777
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-config
          defaultMode: 0777
          items:
          - key: logstash.conf
            path: logstash.conf
      - name: logstash-collect-volume
        hostPath:
          path: /opt/apps-mgr
          type: DirectoryOrCreate
      - name: logstash-data-volume
        hostPath:
          path: /opt/soft/logstash/data
          type: DirectoryOrCreate
      - name: logstash-logs-volume
        hostPath:
          path: /opt/soft/logstash/logs
          type: DirectoryOrCreate

---
# Bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logstash
subjects:
- kind: ServiceAccount
  name: logstash
  namespace: my-namespace
roleRef:
  kind: ClusterRole
  name: logstash
  apiGroup: rbac.authorization.k8s.io
---
# Create a cluster-level role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: logstash
  labels:
    k8s-app: logstash
# Role permissions
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
# Create the ServiceAccount object
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logstash
  namespace: my-namespace
  labels:
    k8s-app: logstash
---





6. Apply the manifest:


kubectl apply -f logstash-daemonset.yaml
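To verify that the DaemonSet scheduled a Pod on each deploy.type=biz_app node and that log data is flowing into the daily indices:

kubectl get daemonset logstash -n my-namespace
kubectl get pods -n my-namespace -l k8s-app=logstash -o wide
curl -s 'http://10.68.60.111:9200/_cat/indices?v' | grep -E 'admin_server|api_server'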



Configuring Kibana


1. Log in to Kibana:


http://node7:5601/app/kibana


2. Open the management menu (Stack Management):


[Screenshot: Stack Management menu]



3. Open the Index Patterns menu:


[Screenshot: Index Patterns menu]


4. Create an index pattern:


[Screenshot: Create index pattern button]


5. Enter the index prefix configured in the Logstash output section, for example admin_server_*:


[Screenshot: index pattern name input with admin_server_*]
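To see in advance which concrete indices the pattern will match, query the _cat API with the same wildcard:

curl -s 'http://10.68.60.111:9200/_cat/indices/admin_server_*?v'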


6. Once created, the pattern can be selected in the Discover menu for browsing:


[Screenshot: selecting the pattern in Discover]



7. The end result:

[Screenshot: Discover view of the collected logs]






