How to Monitor Kubernetes Container Logs with Loki
I. Introduction
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system developed by the Grafana Labs team and written in Go. It is designed to be cost-effective and easy to operate: instead of indexing the contents of the logs, it indexes only a set of labels for each log stream. The Loki project was inspired by Prometheus.
The official tagline sums it up: "Like Prometheus, but for logs", that is, a log system in the spirit of Prometheus.
1. Loki Architecture
- Loki: the main server, responsible for storing logs and processing queries.
- Promtail: the agent that collects logs and forwards them to Loki.
- Grafana: provides the web UI for visualizing, querying, and alerting on the log data.
2. How Loki Works
Promtail collects the logs first and sends them to the Distributor component. The Distributor validates each incoming log stream and forwards the validated logs to the Ingester component in parallel batches. The Ingester assembles the incoming streams into chunks, compresses them, and flushes them to the configured backend storage.

The Querier component receives HTTP query requests and forwards them to the Ingesters, returning whatever matching data is still held in Ingester memory. If no matching data is found there, the Querier queries the backend storage directly (with built-in deduplication).
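To make the ingestion path concrete, here is a minimal, hedged sketch of pushing a single log line into the Distributor over Loki's HTTP push API (/loki/api/v1/push, the same endpoint Promtail uses later in this post). The host address and the job label are placeholders, not values from the original setup:

[root@k8s-master01 ~]# curl -s -X POST "http://<loki-host>:3100/loki/api/v1/push" \
    -H "Content-Type: application/json" \
    -d '{"streams":[{"stream":{"job":"manual-test"},"values":[["'$(date +%s%N)'","hello from curl"]]}]}'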
II. Monitoring Container Logs with Loki
1. Installing Loki
1) Create the RBAC resources
[root@k8s-master01 ~]# cat <<END > loki-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: logging
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: [loki]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
END
[root@k8s-master01 ~]# kubectl create -f loki-rbac.yaml
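Optionally, you can confirm that the namespace and the RBAC objects were created where expected before moving on:

[root@k8s-master01 ~]# kubectl get namespace logging
[root@k8s-master01 ~]# kubectl get serviceaccount,role,rolebinding -n logging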
2) Create the ConfigMap
[root@k8s-master01 ~]# cat <<END > loki-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: "2022-05-15"
        store: boltdb-shipper
        object_store: filesystem
        schema: v11
        index:
          prefix: index_
          period: 24h
    server:
      http_listen_port: 3100
    storage_config:
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: true
      retention_period: 48h
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: filesystem
END
[root@k8s-master01 ~]# kubectl create -f loki-configmap.yaml
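If you want to double-check what was stored (for example the 48h retention set under table_manager), an optional readback of the ConfigMap looks like this:

[root@k8s-master01 ~]# kubectl get configmap loki -n logging -o yaml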
3) Create the StatefulSet
[root@k8s-master01 ~]# cat <<END > loki-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  type: NodePort
  ports:
  - port: 3100
    protocol: TCP
    name: http-metrics
    targetPort: http-metrics
    nodePort: 30100
  selector:
    app: loki
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
  serviceName: loki
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
    spec:
      serviceAccountName: loki
      initContainers:
      - name: chmod-data
        image: busybox:1.28.4
        imagePullPolicy: IfNotPresent
        command: ["chmod","-R","777","/loki/data"]
        volumeMounts:
        - name: storage
          mountPath: /loki/data
      containers:
      - name: loki
        image: grafana/loki:2.3.0
        imagePullPolicy: IfNotPresent
        args:
        - -config.file=/etc/loki/loki.yaml
        volumeMounts:
        - name: config
          mountPath: /etc/loki
        - name: storage
          mountPath: /data
        ports:
        - name: http-metrics
          containerPort: 3100
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
        readinessProbe:
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
        securityContext:
          readOnlyRootFilesystem: true
      terminationGracePeriodSeconds: 4800
      volumes:
      - name: config
        configMap:
          name: loki
      - name: storage
        hostPath:
          path: /app/loki
END
[root@k8s-master01 ~]# kubectl create -f loki-statefulset.yaml
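Once the pod is running, Loki should answer on the NodePort exposed by the Service above. A quick smoke test, assuming 192.168.1.1 is one of your node IPs (replace it with your own); the /ready endpoint should return HTTP 200 once the ingester ring is up:

[root@k8s-master01 ~]# kubectl get pods -n logging -l app=loki
[root@k8s-master01 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.1:30100/ready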
2. Installing Promtail
1) Create the RBAC resources
[root@k8s-master01 ~]# cat <<END > promtail-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-promtail
  labels:
    app: promtail
  namespace: logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: promtail
  name: promtail-clusterrole
  namespace: logging
rules:
- apiGroups: [""]
  resources: ["nodes","nodes/proxy","services","endpoints","pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail-clusterrolebinding
  labels:
    app: promtail
  namespace: logging
subjects:
- kind: ServiceAccount
  name: loki-promtail
  namespace: logging
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
END
[root@k8s-master01 ~]# kubectl create -f promtail-rbac.yaml
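As an optional sanity check, you can verify that the ClusterRole exists and is effective for the loki-promtail ServiceAccount:

[root@k8s-master01 ~]# kubectl get clusterrole promtail-clusterrole
[root@k8s-master01 ~]# kubectl get clusterrolebinding promtail-clusterrolebinding
[root@k8s-master01 ~]# kubectl auth can-i list pods --as=system:serviceaccount:logging:loki-promtail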
2) Create the ConfigMap
Promtail configuration file: see the official documentation for the full reference.
[root@k8s-master01 ~]# cat <<"END" > promtail-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576
      batchwait: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
END
[root@k8s-master01 ~]# kubectl create -f promtail-configmap.yaml
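Every scrape job above ends by assembling __path__ from the pod UID (or the mirror-pod annotation for static pods) and the container name, which yields globs of the form /var/log/pods/*<pod-uid>/<container>/*.log. On a standard kubelet setup the directories under /var/log/pods are named <namespace>_<pod-name>_<pod-uid>, so the leading wildcard absorbs the namespace and pod name. An optional way to see what those globs will match on a node:

[root@k8s-master01 ~]# ls /var/log/pods/
[root@k8s-master01 ~]# ls /var/log/pods/kube-system_*/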
3) Create the DaemonSet
[root@k8s-master01 ~]# cat <<END > promtail-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: loki-promtail
      containers:
      - name: promtail
        image: grafana/promtail:2.3.0
        imagePullPolicy: IfNotPresent
        args:
        - -config.file=/etc/promtail/promtail.yaml
        - -client.url=http://192.168.1.1:30100/loki/api/v1/push
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/promtail
          name: config
        - mountPath: /run/promtail
          name: run
        - mountPath: /data/k8s/docker/data/containers
          name: docker
          readOnly: true
        - mountPath: /var/log/pods
          name: pods
          readOnly: true
        ports:
        - containerPort: 3101
          name: http-metrics
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsGroup: 0
          runAsUser: 0
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: config
        configMap:
          name: loki-promtail
      - name: run
        hostPath:
          path: /run/promtail
          type: ""
      - name: docker
        hostPath:
          path: /data/k8s/docker/data/containers
      - name: pods
        hostPath:
          path: /var/log/pods
END
[root@k8s-master01 ~]# kubectl create -f promtail-daemonset.yaml
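After the rollout, one Promtail pod should be scheduled on every node (the toleration above also covers the master nodes). An optional check:

[root@k8s-master01 ~]# kubectl get daemonset loki-promtail -n logging
[root@k8s-master01 ~]# kubectl get pods -n logging -l app=promtail -o wide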
4) Key Promtail configuration
volumeMounts:
- mountPath: /data/k8s/docker/data/containers
  name: docker
  readOnly: true
- mountPath: /var/log/pods
  name: pods
  readOnly: true
volumes:
- name: docker
  hostPath:
    path: /data/k8s/docker/data/containers
- name: pods
  hostPath:
    path: /var/log/pods
Note that the hostPath and the mountPath must use the same path, meaning the path mounted inside the Promtail container must be identical to the container log directory on the host. When Promtail tails container logs, it looks up the log file locations through the Kubernetes API, which returns the paths as they exist on the host (the source paths). If the two paths differ, the httpGet readiness check will fail.

If the probe reports "Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.", the likely cause is that the mount paths do not match, so Promtail cannot tail any log files. You can temporarily disable the readiness check and inspect the Promtail container logs to confirm, as shown below.
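One way to do that without editing the manifest by hand is to strip the readiness probe with a JSON patch and then read the container logs; the container index 0 below is an assumption that matches the single-container DaemonSet defined earlier:

[root@k8s-master01 ~]# kubectl -n logging patch daemonset loki-promtail --type=json \
    -p='[{"op":"remove","path":"/spec/template/spec/containers/0/readinessProbe"}]'
[root@k8s-master01 ~]# kubectl -n logging logs -l app=promtail --tail=50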
3. Installing Grafana
[root@k8s-master01 ~]# cat <<END > grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:8.4.7
        imagePullPolicy: IfNotPresent
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            cpu: '1'
            memory: 2Gi
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
        volumeMounts:
        - name: storage
          mountPath: /var/lib/grafana
      volumes:
      - name: storage
        hostPath:
          path: /app/grafana
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    app: grafana
  namespace: logging
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30030
  selector:
    app: grafana
END
[root@k8s-master01 ~]# mkdir -p /app/grafana
[root@k8s-master01 ~]# chmod -R 777 /app/grafana
[root@k8s-master01 ~]# kubectl create -f grafana-deploy.yaml
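Before opening the UI, it does no harm to confirm that the Deployment and the NodePort Service are up:

[root@k8s-master01 ~]# kubectl get pods -n logging -l app=grafana
[root@k8s-master01 ~]# kubectl get svc grafana -n logging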
4. Verification
Open the Grafana console at http://192.168.1.1:30030 (username/password: admin/admin).
1) Add a data source
2) Select the Loki data source
3) Configure the Loki URL
4) Verify the result
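For step 3, either the cluster-internal address http://loki.logging:3100 or the NodePort address http://192.168.1.1:30100 should work as the data source URL (an assumption based on the Service created earlier, not a value from the original post). To verify, open Explore, pick the Loki data source, and run a simple label selector such as {namespace="logging"}; the namespace label comes from the Promtail relabel rules above. The same query can also be sent straight to Loki's query_range API as a hedged command-line check:

[root@k8s-master01 ~]# curl -s -G "http://192.168.1.1:30100/loki/api/v1/query_range" \
    --data-urlencode 'query={namespace="logging"}' \
    --data-urlencode 'limit=5'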
That concludes this article on monitoring Kubernetes container logs with Loki. For more on Kubernetes log monitoring, please search 腳本之家's earlier articles, and thank you for your continued support of 腳本之家!