
Connecting a K8s Container Cluster to Filebeat + ES + Kibana

Overview

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

Filebeat works as follows: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output you have configured for Filebeat.

Elasticsearch is an open-source search engine that is straightforward to deploy both standalone and as a cluster, providing a distributed search service. Within the ELK ecosystem, an Elasticsearch cluster supports multi-condition retrieval, statistics, and reporting over structured and unstructured text.

Kibana is an open-source data analysis and visualization platform designed to work with Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and to present that data as charts, tables, maps, and more.

Official Kibana documentation: https://www.elastic.co/guide/en/kibana/current/index.html

Installation

Prepare the servers

IP address     Specs                                 Component
172.22.1.48    8 cores, 32 GB RAM, 200 GB storage    es
172.22.1.49    8 cores, 32 GB RAM, 200 GB storage    es
172.22.1.50    8 cores, 32 GB RAM, 200 GB storage    es + kibana

Modify system parameters

vim /etc/security/limits.conf

* hard nofile 65535
* soft nofile 65535
* hard nproc 65535
* soft nproc 65535
vim /etc/sysctl.conf

vm.max_map_count=655360

sysctl -p
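To confirm that the settings took effect (the limits.conf values apply to new login sessions, while sysctl -p applies the kernel setting immediately), a quick check:

```shell
# should report 655360 on the ES hosts after sysctl -p
cat /proc/sys/vm/max_map_count
# open-file limit for the current shell; new sessions pick up limits.conf
ulimit -n
```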

Prepare the installation packages

Download page: https://www.elastic.co/cn/start

Two packages are needed:

  • elasticsearch-8.1.0-linux-x86_64.tar.gz
  • kibana-8.1.0-linux-x86_64.tar.gz

Install Elasticsearch

Extract the package

cd /data/servers/
tar zxf elasticsearch-8.1.0-linux-x86_64.tar.gz

Create a user and change ownership

useradd dig
chown dig:dig -R elasticsearch-8.1.0

Create the data and log directories

mkdir -p /data/servers/es_data/
mkdir -p /data/servers/es_logs/
chown -R dig:dig /data/servers/es_data/ /data/servers/es_logs/

Generate certificates

Reference: https://blog.csdn.net/qq330983778/article/details/103537252

bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: {①}
Enter password for elastic-stack-ca.p12 : {②}

①: Sets the output path and file name. The default name is elastic-stack-ca.p12. This file is a PKCS#12 keystore that contains the public certificate of your CA and the private key used to sign a certificate for each node.

②: Sets the password for the CA. If you plan to add more nodes to the cluster later, remember this password.

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file

* An instance is any piece of the Elastic Stack that requires a SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.

* All certificates generated by this tool will be signed by a certificate authority (CA).
* The tool can automatically generate a new CA for you, or you can provide your own with the
-ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate

If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be a zip file containing individual certificate/key files

Enter password for CA (/usr/local/es-cluster/elastic-stack-ca.p12) : ①
Please enter the desired output file [elastic-certificates.p12]: ②
Enter password for elastic-certificates.p12 : ③

①: Enter the password of the elastic-stack-ca.p12 CA certificate.

②: The output path and file name for the node certificate.

③: The password for the certificate. Press Enter to leave it empty.
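The elasticsearch.yml used later references the certificate at certs/elastic-certificates.p12 inside the ES config directory, and every node needs a copy. A sketch of distributing it, assuming the same directory layout on all three hosts from this guide:

```shell
cd /data/servers/elasticsearch-8.1.0
mkdir -p config/certs
cp elastic-certificates.p12 config/certs/
chown -R dig:dig config/certs

# copy to the other two nodes (assumes the same layout there)
scp config/certs/elastic-certificates.p12 172.22.1.49:/data/servers/elasticsearch-8.1.0/config/certs/
scp config/certs/elastic-certificates.p12 172.22.1.50:/data/servers/elasticsearch-8.1.0/config/certs/
```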

Edit the configuration

Note: apply this on all three nodes; node.name must be unique on each (k8s-es-1, k8s-es-2, k8s-es-3).

cp /data/servers/elasticsearch-8.1.0/config/elasticsearch.yml /data/servers/elasticsearch-8.1.0/config/elasticsearch.yml_bak_0317

cat >/data/servers/elasticsearch-8.1.0/config/elasticsearch.yml<<EOF
cluster.name: k8s-es
node.name: k8s-es-1 # unique per node: k8s-es-2 / k8s-es-3 on the other hosts
path.data: /data/servers/es_data/
path.logs: /data/servers/es_logs/
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["172.22.1.48", "172.22.1.49","172.22.1.50"]
cluster.initial_master_nodes: ["k8s-es-1", "k8s-es-2", "k8s-es-3"]
action.destructive_requires_name: false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF

Set passwords for the built-in users

bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Start Elasticsearch

bin/elasticsearch -d

Access Elasticsearch
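With xpack.security enabled, requests must be authenticated. For example, using the elastic password set above:

```shell
# cluster banner; the response should show "cluster_name" : "k8s-es"
curl -u elastic http://172.22.1.48:9200/
# list the three nodes
curl -u elastic 'http://172.22.1.48:9200/_cat/nodes?v'
```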

Install Kibana

Extract the package

tar zxf kibana-8.1.0-linux-x86_64.tar.gz

Change ownership

chown dig:dig -R kibana-8.1.0

Edit the configuration

cat >/data/servers/kibana-8.1.0/config/kibana.yml<<EOF
server.host: "0.0.0.0"
server.publicBaseUrl: "https://k8s-elk.zhaohongye.com"
i18n.locale: "zh-CN"
elasticsearch.hosts: ["http://172.22.1.48:9200","http://172.22.1.49:9200","http://172.22.1.50:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "*******"
xpack.encryptedSavedObjects.encryptionKey: 78218e353bd1f01ed605743714145c6e
xpack.reporting.encryptionKey: 69ec2fa16b8d222020a1e5b3fa8ed394
xpack.security.encryptionKey: 6778c4d6fb7ed45e535e29524fb114e8
EOF
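The three xpack *.encryptionKey values above are arbitrary 32-character secrets; one way to generate fresh ones (assumes openssl is installed):

```shell
# each invocation prints a random 32-character hex string
openssl rand -hex 16
openssl rand -hex 16
openssl rand -hex 16
```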

Start Kibana

bin/kibana             # run in the foreground
nohup bin/kibana &     # or run in the background

Install Filebeat

https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html

YAML manifest

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ["http://172.22.1.48:9200","http://172.22.1.49:9200","http://172.22.1.50:9200"]
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.1.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        # - name: ELASTICSEARCH_HOST
        #   value: 172.22.1.23
        # - name: ELASTICSEARCH_PORT
        #   value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: izuche@2022
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

Apply the manifest

kubectl apply -f filebeat.yaml

configmap/filebeat-config unchanged
daemonset.apps/filebeat configured
clusterrolebinding.rbac.authorization.k8s.io/filebeat unchanged
rolebinding.rbac.authorization.k8s.io/filebeat unchanged
rolebinding.rbac.authorization.k8s.io/filebeat-kubeadm-config unchanged
clusterrole.rbac.authorization.k8s.io/filebeat unchanged
role.rbac.authorization.k8s.io/filebeat unchanged
role.rbac.authorization.k8s.io/filebeat-kubeadm-config unchanged
serviceaccount/filebeat unchanged

Check the Filebeat logs
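The DaemonSet runs one Filebeat pod per node in kube-system; the pods and their logs can be inspected with kubectl (pod names will differ in your cluster):

```shell
kubectl -n kube-system get pods -l k8s-app=filebeat
kubectl -n kube-system logs -l k8s-app=filebeat --tail=50
```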

Result