Tutorial: setting up ELK on k8s

【YC的迷路青春】

All the ELK components must use the same version from start to finish; here we use 7.12.0.

In principle, this is just creating a big pile of YAML files.
You should be able to simply add all the YAML files in one go; the wiring between them is already written. Nothing here touches log shipping yet, so in theory everyone's setup will look exactly the same, a sort of common-baseline article. I feel an article like this ought to exist; maybe I just never found one.

With the content of this article, standing up a basic ELK stack on k8s should be a lot easier.

If you feed everything in exactly as written and still hit any error, please report it to me so I get a chance to fix it. I will try to reply as soon as I see it, thanks.

There are surely plenty of articles that can set the whole thing up for you directly, but if you build it piece by piece like this, it should be easier to understand what each part is doing.
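
If you want to apply everything at once, here is a minimal sketch (the directory and file names are my own choice, not fixed by this article):

# save each manifest in this article as its own .yaml file inside ./elk/,
# then apply the whole folder in one go
kubectl apply -f ./elk/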

1. Elasticsearch

Add these three yaml files:
1. the Elasticsearch config file
2. the Deployment
3. the Service

We use a Deployment first because it is easier to stand up and easier to understand; once everything works you can convert it to a StatefulSet.

kind: ConfigMap
apiVersion: v1
metadata:
  name: elasticsearch-config-yc
data:
  elasticsearch.yml: |
    cluster.name: "docker-cluster" 
    network.host: 0.0.0.0
    xpack.license.self_generated.type: trial 
    xpack.monitoring.collection.enabled: true
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-elasticsearch
  template:
    metadata:
      labels:
        app: yc-elasticsearch
    spec:
      volumes:
        - name: config
          configMap:
            name: elasticsearch-config-yc
            defaultMode: 420
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command:
            - sysctl
            - '-w'
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: yc-elasticsearch
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.12.0'
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
---
kind: Service
apiVersion: v1
metadata:
  name: yc-elasticsearch
spec:
  ports:
    - name: yc-elasticsearch
      protocol: TCP
      port: 80
      targetPort: 9200
  selector:
    app: yc-elasticsearch
  type: ClusterIP
  sessionAffinity: None

Now curl the Service IP; if the response contains "tagline" : "You Know, for Search", it is working.
Elasticsearch does not depend on any of the other components, which is why an ELK setup usually starts with the E.

You can run kubectl logs <pod name> to check whether it started successfully.
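
A minimal verification sketch (the Service maps port 80 to the container's 9200; a ClusterIP is only reachable from inside the cluster, so port-forward is an easy way to test from your own machine):

# look up the Service and forward a local port to it
kubectl get svc yc-elasticsearch
kubectl port-forward svc/yc-elasticsearch 9200:80

# in another terminal; the response should contain "tagline" : "You Know, for Search"
curl http://localhost:9200/

# check the container logs for startup errors
kubectl logs deploy/yc-elasticsearch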

2. Logstash

Next up: Logstash. Add these four things:

  1. the pipeline config under /usr/share/logstash/pipeline
  2. the config file at /usr/share/logstash/config/logstash.yml
  3. the Deployment
  4. the Service

kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-config-yc
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]

When you start shipping logs later on, the pipeline ConfigMap below is the part you will modify.

kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-pipelines-yc
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://yc-elasticsearch.default.svc.cluster.local:80"]
        index => "log_test"
      }
    }

When you start shipping logs later on, you may also need to add a few volumes to the Deployment below.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-logstash
  template:
    metadata:
      labels:
        app: yc-logstash
    spec:
      volumes:
        - name: config
          configMap:
            name: logstash-config-yc
            defaultMode: 420
        - name: pipelines
          configMap:
            name: logstash-pipelines-yc
            defaultMode: 420
      containers:
        - name: yc-logstash
          image: 'docker.elastic.co/logstash/logstash:7.12.0'
          ports:
            - containerPort: 5044
              protocol: TCP
            - containerPort: 5000
              protocol: TCP
            - containerPort: 5000
              protocol: UDP
            - containerPort: 9600
              protocol: TCP
          env:
            - name: ELASTICSEARCH_HOST
              value: 'http://yc-elasticsearch.default.svc.cluster.local'
            - name: LS_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
          volumeMounts:
            - name: pipelines
              mountPath: /usr/share/logstash/pipeline
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
---
kind: Service
apiVersion: v1
metadata:
  name: yc-logstash
spec:
  ports:
    - name: logstash
      protocol: TCP
      port: 80
      targetPort: 9600
    - name: filebeat
      protocol: TCP
      port: 5044
      targetPort: 5044
  selector:
    app: yc-logstash
  type: ClusterIP
  sessionAffinity: None
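
A quick way to check that Logstash came up (a sketch; Service port 80 forwards to Logstash's monitoring API on 9600):

kubectl port-forward svc/yc-logstash 9600:80
curl http://localhost:9600/?pretty
# a JSON document describing the node (version, http_address, status, ...) means
# Logstash is running; kubectl logs deploy/yc-logstash shows the pipeline starting up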

3. Kibana

Next comes Kibana. Add:
1. the config file at /usr/share/kibana/config/kibana.yml
2. the Deployment
3. the Service

kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config-yc
data:
  kibana.yml: |
    server.name: kibana
    server.host: 0.0.0.0
    elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]
    monitoring.ui.container.elasticsearch.enabled: true

kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: yc-kibana
  template:
    metadata:
      labels:
        component: yc-kibana
    spec:
      volumes:
        - name: config
          configMap:
            name: kibana-config-yc
            defaultMode: 420
      containers:
        - name: elk-kibana
          image: 'docker.elastic.co/kibana/kibana:7.12.0'
          ports:
            - name: yc-kibana
              containerPort: 5601
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
---
kind: Service
apiVersion: v1
metadata:
  name: yc-kibana
spec:
  ports:
    - name: yc-kibana
      protocol: TCP
      port: 80
      targetPort: 5601
  selector:
    component: yc-kibana
  type: LoadBalancer
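
The Service is type LoadBalancer, so once your cloud provider assigns an external IP you can open Kibana in a browser. A quick check (sketch):

# wait for EXTERNAL-IP to be populated, then browse to http://<EXTERNAL-IP>/
# (Service port 80 forwards to Kibana's 5601)
kubectl get svc yc-kibana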

At this point the ELK stack itself is up; next is getting logs into it.

I will introduce two ways to import logs.
One is to declare storage: when deploying your services, mount a volume so their logs are written into that storage, and then have Logstash read from it
(add a matching volume on the Logstash side and you can already run a simple test).
The other is Filebeat, to be continued in the next part.

Hopefully this will be quite helpful.

For the volume test, our approach is to create a persistent volume in the storage account that connects to the file share (Azure Files).

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: log-azurefile
  storageClassName: ''
  volumeMode: Filesystem
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  capacity:
    storage: 2Gi
  azureFile:
    secretName: elk-secret
    shareName: yc/logs
    secretNamespace: null
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: log-azurefile
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
  volumeMode: Filesystem
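
You can confirm that the claim bound to the volume (sketch):

kubectl get pv log-azurefile
kubectl get pvc log-azurefile
# both should report STATUS Bound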

Then we need a Secret that lets the cluster connect to the file share. Note that the values under data must be base64-encoded (or use stringData to provide them as plain text).

kind: Secret
apiVersion: v1
metadata:
  name: elk-secret
  namespace: default
data:
  azurestorageaccountkey: xxxxxxx
  azurestorageaccountname: xxxxxxxxx
type: Opaque
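
Instead of base64-encoding by hand, you can let kubectl build the Secret for you (a sketch; fill in your own storage account name and key):

kubectl create secret generic elk-secret \
  --from-literal=azurestorageaccountname=<STORAGE_ACCOUNT_NAME> \
  --from-literal=azurestorageaccountkey=<STORAGE_ACCOUNT_KEY>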

With that, the PVC log-azurefile can reach the yc/logs file share.

Then we go back to the Logstash Deployment and add:

      # under spec.template.spec, next to the existing ConfigMap volumes
      volumes:
        - name: volume-log
          persistentVolumeClaim:
            claimName: log-azurefile

          # and under the container's volumeMounts
          volumeMounts:
            - name: volume-log
              mountPath: /usr/local/tomcat/logs

This connects /usr/local/tomcat/logs inside the Logstash Deployment to the yc/logs file share.

Create a file in yc/logs and the same file shows up under /usr/local/tomcat/logs in the Logstash pod, and vice versa.

Now add a file input in logstash-pipelines:

input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/local/tomcat/logs/*.log"
  }
}
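
By default the file input only picks up lines appended after Logstash discovers a file, so a freshly created file may show nothing at first. If that happens, a variant like this (my suggestion, not part of the original setup, and only meant for testing) reads files from the beginning:

  file {
    path => "/usr/local/tomcat/logs/*.log"
    start_position => "beginning"   # read existing content, not just newly appended lines
    sincedb_path => "/dev/null"     # do not remember read positions; testing only
  }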

First exec into the pod:

kubectl exec -it logstash-xxxx -- bash
cd /usr/local/tomcat/logs/

Check that the files really are shared, then create a .log file and watch whether anything arrives in Kibana.
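
A concrete end-to-end test (sketch; the pod name is a placeholder and port-forward runs in a separate terminal):

# append a line to a log file inside the shared directory
kubectl exec -it logstash-xxxx -- bash -c 'echo "hello from yc" >> /usr/local/tomcat/logs/test.log'

# check that the log_test index (the pipeline's output) received documents
kubectl port-forward svc/yc-elasticsearch 9200:80
curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/log_test/_search?pretty'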

