Step-by-Step Installation of the Elasticsearch Operator (ECK) on Kubernetes, with Metricbeat, Filebeat, and Heartbeat on EKS

Izek Chen
Feb 20, 2020 · 9 min read

ECK (Elastic Cloud on Kubernetes) is a new orchestration product based on the Kubernetes Operator pattern that lets users provision, manage, and operate Elasticsearch clusters on Kubernetes. In this post I will deploy each component step by step.

Components:
- elasticsearch-operator (ECK)
- Elasticsearch
- Kibana
- Metricbeat
- Filebeat
- Heartbeat

Step 1: Install the custom resource definitions and the operator with its RBAC rules, then monitor the operator logs:

kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
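The operator runs as a StatefulSet in the elastic-system namespace; you can follow its logs to confirm it came up cleanly:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator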

Step 2: Deploy an Elasticsearch cluster. Make sure your nodes have enough CPU and memory resources for Elasticsearch.

# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: ai-dev-prod
spec:
  version: 7.6.0
  nodeSets:
  - name: default
    count: 3
    config:
      # Most Elasticsearch configuration parameters can be set here, e.g. node.attr.attr_name: attr_value
      node.master: true
      node.data: true
      node.ingest: true
      node.ml: false
      # This allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost.
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          app: elasticsearch
      spec:
        # This changes the kernel setting on the node to allow ES to use mmap.
        # If you uncomment this init container you will likely also want to remove
        # the "node.store.allow_mmap: false" setting above.
        # initContainers:
        # - name: sysctl
        #   securityContext:
        #     privileged: true
        #   command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        ###
        # Uncomment the line below if you are using a service mesh such as linkerd2
        # that uses service account tokens for pod identification.
        # automountServiceAccountToken: true
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    # Request 100Gi of persistent data storage for pods in this topology element.
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: gp2
  # Inject secure settings into Elasticsearch nodes from k8s secret references.
  # secureSettings:
  # - secretName: ref-to-secret
  # - secretName: another-ref-to-secret
  #   # expose only a subset of the secret keys (optional)
  #   entries:
  #   - key: value1
  #     path: newkey # project a key to a specific path (optional)
  # http:
  #   service:
  #     spec:
  #       # expose this cluster Service with a LoadBalancer
  #       type: LoadBalancer
  #   tls:
  #     selfSignedCertificate:
  #       # add a list of SANs into the self-signed HTTP certificate
  #       subjectAltNames:
  #       - ip: 192.168.1.2
  #       - ip: 192.168.1.3
  #       - dns: elasticsearch-sample.example.com
  #     certificate:
  #       # provide your own certificate
  #       secretName: my-cert
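After applying the manifest, you can watch the cluster come up and retrieve the password that ECK auto-generates for the elastic user (stored in the <cluster-name>-es-elastic-user secret). The commands below assume the cluster lives in the elasticsearch namespace, which is what the service DNS names used later in this post imply:

kubectl -n elasticsearch get elasticsearch ai-dev-prod
kubectl -n elasticsearch get pods -l elasticsearch.k8s.elastic.co/cluster-name=ai-dev-prod
kubectl -n elasticsearch get secret ai-dev-prod-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'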

Step 3: If you want to expose the Elasticsearch service as type LoadBalancer, remember to modify the generated service (ai-dev-prod-es-http) accordingly. If you only want an internal ELB, you need to add this annotation to the service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
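Note that changes made directly to the operator-generated service can be overwritten on the next reconciliation. A more durable sketch, assuming your ECK version supports the http.service template (the spec part already appears commented out in Step 2; setting metadata.annotations there is an assumption on my side), is to declare the annotation on the Elasticsearch resource itself:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: ai-dev-prod
spec:
  http:
    service:
      metadata:
        annotations:
          # keep the ELB internal (assumption: merged into the generated service by the operator)
          service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      spec:
        type: LoadBalancer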

Step 4: Deploy Kibana.

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: ai-dev-kibana
spec:
  version: 7.6.0
  count: 1
  elasticsearchRef:
    name: "ai-dev-prod"
  #http:
  #  service:
  #    spec:
  #      type: LoadBalancer
  # This shows how to customize the Kibana pod
  # with labels and resource limits.
  podTemplate:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
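Before exposing Kibana you can smoke-test it through a port-forward to the generated <kibana-name>-kb-http service, logging in as elastic with the password retrieved in Step 2 (again assuming the elasticsearch namespace):

kubectl -n elasticsearch port-forward service/ai-dev-kibana-kb-http 5601

Then open https://localhost:5601 in your browser.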

Step 5: Modify the Kibana service if you want to expose it as a LoadBalancer; refer to Step 3:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
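After modifying it, confirm that the service type changed and an ELB hostname was assigned:

kubectl -n elasticsearch get service ai-dev-kibana-kb-http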

Step 6: Install Filebeat via filebeat-kubernetes.yaml.

If you are facing an x509 certificate error (ECK generates a self-signed certificate by default), disable verification by setting ssl.verification_mode: "none" in the Elasticsearch output:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.verification_mode: "none"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "https://ai-dev-prod-es-http.elasticsearch.svc"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "*********"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # The data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart.
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
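Apply the manifest and confirm that a Filebeat pod is running on every node:

kubectl apply -f filebeat-kubernetes.yaml
kubectl -n kube-system get pods -l k8s-app=filebeat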

Step 7: Install Metricbeat via metricbeat-kubernetes.yaml.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    #metricbeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.verification_mode: "none"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${HOSTNAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If using Red Hat OpenShift remove ssl.verification_mode entry and
      # uncomment these settings:
      #ssl.certificate_authorities:
      #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.6.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "https://ai-dev-prod-es-http.elasticsearch.svc"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "**********"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: data
          mountPath: /usr/share/metricbeat/data
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-deployment-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.verification_mode: "none"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        # Uncomment this to get k8s events:
        #- event
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics:8080"]
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.6.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "https://ai-dev-prod-es-http.elasticsearch.svc"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "**********"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
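As noted in the metricbeat-deployment-modules ConfigMap, the state_* metricsets require kube-state-metrics running in the kube-system namespace, so verify it exists before applying:

kubectl -n kube-system get deployment kube-state-metrics
kubectl apply -f metricbeat-kubernetes.yaml
kubectl -n kube-system get pods -l k8s-app=metricbeat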

Step 8: Set up Heartbeat.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |-
    heartbeat.monitors:
    - type: http
      schedule: '@every 5s'
      urls:
        - "http://apms-service.ai.svc:80"
      fields:
        env: dev
        tags: dev-env
        site: ASIA
        name: apms-service
      check.response.status: 404
      fields_under_root: true

    heartbeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition:
            equals:
              kubernetes.labels.heartbeat_type: http
          config:
          - type: http
            urls: ["http://${data.host}:${data.kubernetes.labels.heartbeat_port}/${data.kubernetes.labels.heartbeat_url}"]
            id: "${data.kubernetes.pod.name}"
            check.response.status: 200
            schedule: "@every 1s"
            timeout: 2s
        - condition:
            equals:
              kubernetes.labels.heartbeat_type: tcp
          config:
          - type: tcp
            hosts: ["${data.host}:${data.kubernetes.labels.heartbeat_port}"]
            schedule: "@every 1s"
            timeout: 2s
      - type: kubernetes
        templates:
        - condition:
            contains:
              kubernetes.labels.app: apms
          config:
          - type: icmp
            hosts: ["${data.host}:${data.port}"]
            schedule: "@every 1s"
            timeout: 5s
      - type: kubernetes
        templates:
        - config:
          - type: icmp
            hosts: ["${data.host}"]
            schedule: '*/5 * * * * * *'
      - type: kubernetes
        templates:
        - config:
          - type: tcp
            hosts: ["${data.host}:${data.port}"]
            schedule: "@every 1s"
            timeout: 5s

    name: heartbeat-monitoring

    output.elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.verification_mode: "none"

    setup.kibana:
      host: ${KIBANA_HOST}
      ssl.verification_mode: "none"

    processors:
      - add_cloud_metadata:
      - add_docker_metadata:
      - add_host_metadata:
      - add_id: ~
      - add_kubernetes_metadata:
          include_annotations:
            - annotation_to_include
          add_resource_metadata:
            namespace:
              enabled: true
            node:
              enabled: true

    xpack.monitoring.enabled: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
spec:
  selector:
    matchLabels:
      k8s-app: heartbeat
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      hostNetwork: false
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:7.6.0
        args: [
          "-c", "/etc/heartbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTIC_CLOUD_ID
        - name: ELASTIC_CLOUD_AUTH
        - name: ELASTICSEARCH_HOSTS
          value: "https://ai-dev-prod-es-http.elasticsearch.svc"
        - name: KIBANA_HOST
          value: "https://ai-dev-kibana-kb-http.elasticsearch.svc"
        - name: ELASTICSEARCH_USERNAME
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          value: "**********"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-deployment-config
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartbeat
subjects:
- kind: ServiceAccount
  name: heartbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heartbeat
  labels:
    k8s-app: heartbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
After all the steps above, you should be able to see the beautiful graphs in Kibana:

- Kubernetes status view
- Inventory view
- Node metrics
- Pod metrics
- Service uptime
- Live streaming log collection
