ConfigMap in k8s ~ setting the Elasticsearch index prefix for Fluentd

Posted by Iokina on Wed, 20 May 2020 18:16:58 +0200

The Fluentd component is responsible for collecting logs. It can collect them from the Docker console or from a specified folder. For log files stored in a folder, we first configure Logback to write them there, and then configure Fluentd's ConfigMap, so that the persisted logs are collected and pushed into the Elasticsearch storage backend.

Logback controls the storage location

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="logPath" value="/var/log"/>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date-%level-%X{X-B3-TraceId:-}-%X{X-B3-SpanId:-}-[%file:%line]-%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="fileInfoLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers class="net.logstash.logback.composite.loggingevent.LoggingEventJsonProviders">
                <pattern>
                    <pattern>
                        {
                        "level": "%level",
                        "application": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--Rolling strategy-->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--route-->
            <fileNamePattern>${logPath}/info.%d.log</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="fileInfoLog"/>
    </root>
</configuration>
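With this composite JSON encoder, each line written to the rolled `info.%d.log` file is a single JSON object. A hypothetical log line (all field values invented for illustration) looks roughly like:

```json
{
  "level": "INFO",
  "application": "hello-world",
  "trace": "5f3a1c2b9d8e7f60",
  "span": "9d8e7f605f3a1c2b",
  "exportable": "true",
  "pid": "1",
  "thread": "http-nio-9001-exec-1",
  "class": "c.e.h.HelloWorldController",
  "message": "request handled"
}
```

This one-JSON-object-per-line format is exactly what the `format json` setting in the Fluentd tail source expects.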

Fluentd runs in the pod as a sidecar

This sidecar design is mainly about decoupling. The sidecar shares a storage volume with the application container in the pod: it reads the logs the container writes there and pushes them to the storage medium, in this case Elasticsearch, where they can be queried and analyzed through Kibana. The Kubernetes deployment YAML is as follows

kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-world-deployment
  namespace: saas
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: 172.17.0.22:8888/saas/hello-world:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 9001
          env:
            - name: spring.profiles.active
              value: prod
          volumeMounts:
            - name: varlog
              mountPath: /var/log
        - name: fluent-sidecar
          image: registry.cn-beijing.aliyuncs.com/k8s-mqm/fluentd-elasticsearch:v2.1.0
          env:
            - name: FLUENTD_ARGS
              value: -c /etc/fluentd-config/fluentd.conf
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: config-volume
              mountPath: /etc/fluentd-config
      volumes:
        - name: varlog
          emptyDir: {}
        - name: config-volume
          configMap:
            name: fluentd-config

Finally, add the configuration for Fluentd, i.e. a ConfigMap in k8s. Note that a ConfigMap belongs to a specific namespace and cannot be accessed across namespaces

<source>
  @type tail
  format json
  path /var/log/*.log
  # pos_file must be a single literal file, not a glob
  pos_file /var/log/fluentd.log.pos
  tag test.*
</source>

<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level debug
  type_name fluentd
  host elasticsearch.elk
  port 9200
  logstash_format true
  # logstash_prefix sets the index prefix
  logstash_prefix test
  flush_interval 10s
</match>
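These two blocks live under a `fluentd.conf` key inside the ConfigMap that the Deployment mounts as `config-volume`. A minimal sketch of the wrapping manifest (the name `fluentd-config` and the namespace `saas` must match the Deployment above):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config    # must match configMap.name in the Deployment's volumes
  namespace: saas         # ConfigMaps are namespaced; must be the pod's namespace
data:
  fluentd.conf: |
    # paste the <source> and <match> blocks shown above here
```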

Finally, create the index pattern in Kibana

Management -> Create index pattern, enter test-*, and save

Topics: Java ElasticSearch Spring Docker xml