Virtalis Reach Help

Deploying the Virtalis Reach Monitoring Service Stack

Overview

This section describes the deployment of various monitoring services that allow a Kubernetes administrator to monitor health, metrics, and logs for all cluster services, including Virtalis Reach.

List of services to be deployed:

  • Prometheus Stack (health, metrics)
      • Grafana
      • Prometheus
      • Alertmanager
  • ELK Stack (logging)
      • Elasticsearch
      • Kibana
      • Logstash

Variables and Commands

In this section, variables enclosed in angle brackets (<>) should be replaced with the appropriate values. For example:

docker login -u <my_id> -p <my_password>


becomes

docker login -u admin -p admin


Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console
This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console
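
As a concrete illustration (this snippet is ours, not part of the deployment), the shell treats a backslash-continued block as one command:

```shell
# A trailing backslash continues the command onto the next line,
# so all three lines below form a single echo invocation.
echo "this multi-line block" \
  "is executed as" \
  "one command"
# prints: this multi-line block is executed as one command
```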


Set Up the Deployment Shell

Export some environment variables which will be used throughout the installation:

export MONITORING_DOMAIN=<the domain monitoring services will be hosted on>
export MONITORING_NAMESPACE=monitoring
export MONITORING_TLS_SECRET=reach-tls-secret
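
Optionally (this check is our addition, not part of the original procedure), you can confirm the variables are non-empty before continuing; the loop below relies on bash's indirect expansion (${!var}):

```shell
# Report whether each required variable is set (bash-specific: ${!var}).
for var in MONITORING_DOMAIN MONITORING_NAMESPACE MONITORING_TLS_SECRET; do
  if [ -n "${!var}" ]; then
    echo "$var is set"
  else
    echo "WARNING: $var is empty"
  fi
done
```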


Create a new namespace:

kubectl create namespace "${MONITORING_NAMESPACE}"
kubectl label ns "${MONITORING_NAMESPACE}" release=prometheus-stack


The command below uses the pwgen package to generate a random string of 30 alphanumeric characters.

Before proceeding, make sure pwgen is installed on your machine, or generate the string with a different tool by replacing the command inside the $( ) command substitution:

$(pwgen 30 1 -s) → $(someOtherPackage --arg1 --arg2)
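
For example, if pwgen is not available, openssl (one possible substitute, not the tool the rest of this guide assumes) can produce a 30-character alphanumeric string:

```shell
# 15 random bytes rendered as 30 hex characters -- a pwgen substitute.
openssl rand -hex 15
```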

Create Secrets

Create a secret which will store Grafana credentials:

kubectl create secret generic grafana \
-n "${MONITORING_NAMESPACE}" \
--from-literal="user"=$(pwgen 30 1 -s) \
--from-literal="password"=$(pwgen 30 1 -s)
kubectl create secret generic elastic-credentials  -n $MONITORING_NAMESPACE \
--from-literal=password=$(pwgen -c -n -s 30 1 | tr -d '\n') \
--from-literal=username=elastic
kubectl create secret generic kibana-credentials -n $MONITORING_NAMESPACE \
--from-literal=encryption-key=$(pwgen -c -n -s 32 1 | tr -d '\n')
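
Kubernetes stores secret values base64-encoded, which is why the retrieval commands later in this guide pipe through base64 --decode. The round trip can be illustrated locally (the example password here is made up):

```shell
# Encode a value the way Kubernetes stores it, then decode it back.
encoded=$(printf '%s' 'examplePassword123' | base64)
printf '%s' "$encoded" | base64 --decode; echo
# prints: examplePassword123
```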

Storage

Express

If you only want to modify the storage class and leave all other parameters, such as size, at their defaults, export these variables:

export MONITORING_SC=<name of storage class>
export ELASTICSEARCH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export LOGSTASH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export PROMETHEUS_SC_ARGS="
--set alertmanager.alertmanagerSpec.storage.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set prometheus.prometheusSpec.storageSpec.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set grafana.persistence.storageClassName=${MONITORING_SC}
"

Custom Parameters

Here is a list of different monitoring services and how to customize their storage.

Elasticsearch

Please refer to the volumeClaimTemplate: section found in the values.yaml file in the elasticsearch helm chart repository for a list of available parameters to customise such as size, access modes and so on.

These values can be added/tweaked in the following files:

  • k8s/misc/elk/elasticsearch/values-prod.yaml
  • k8s/misc/elk/elasticsearch/values-common.yaml

Logstash

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the logstash helm chart repository for a list of available parameters to customise such as size, access modes and so on.

These values can be added/tweaked in the following files:

  • k8s/misc/elk/logstash/values-prod.yaml
  • k8s/misc/elk/logstash/values-common.yaml

Prometheus Stack

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the prometheus-stack helm chart repository for a list of available parameters to customise such as size, access modes and so on.

These values can be added/tweaked in the following files:

  • k8s/misc/prometheus/values-prod.yaml
  • k8s/misc/prometheus/values-common.yaml

Monitoring TLS

Manually create a TLS secret from a TLS key and certificate, or use the Let's Encrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${MONITORING_NAMESPACE}" \
"${MONITORING_TLS_SECRET}" --key="tls.key" --cert="tls.crt"
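
If you have no certificate yet and only need to test the deployment, a self-signed pair can be generated with openssl (for testing only; production clusters should use a real certificate or the Let's Encrypt route below). The CN placeholder should be replaced with your monitoring domain:

```shell
# Self-signed certificate for testing only (browsers will warn).
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -days 365 -subj "/CN=<the domain monitoring services will be hosted on>"
```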

Let's Encrypt with cert-manager

Export the following:

export KIBANA_INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export PROMETHEUS_INGRESS_ANNOTATIONS="--set prometheus.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export GRAFANA_INGRESS_ANNOTATIONS="--set grafana.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export ALERTMANAGER_INGRESS_ANNOTATIONS="--set alertmanager.ingress.\
annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"


Installing Grafana, Alertmanager, and Prometheus

Add this repo to Helm and update:

helm repo add prometheus-community https://\
prometheus-community.github.io/helm-charts && \
helm repo update


Export the following:

export ALERTMANAGER_INGRESS="--set alertmanager.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set alertmanager.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set alertmanager\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export PROMETHEUS_INGRESS="--set prometheus.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set prometheus.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set prometheus\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export GRAFANA_INGRESS="--set grafana.ingress.hosts[0]\
=${MONITORING_DOMAIN} --set grafana.ingress.tls[0]\
.secretName=$MONITORING_TLS_SECRET --set grafana.ingress\
.tls[0].hosts={${MONITORING_DOMAIN}}"


Install:

helm install prometheus-stack \
--namespace "${MONITORING_NAMESPACE}"  \
--set grafana.admin.existingSecret="grafana" \
--set grafana.admin.userKey="user" \
--set grafana.admin.passwordKey="password" \
--set grafana.'grafana\.ini'.server.root_url\
="https://${MONITORING_DOMAIN}/grafana" \
--set grafana.'grafana\.ini'.server.domain="${MONITORING_DOMAIN}" \
--set grafana.'grafana\.ini'.server.serve_from_sub_path='true' \
$ALERTMANAGER_INGRESS \
$PROMETHEUS_INGRESS \
$GRAFANA_INGRESS \
$PROMETHEUS_INGRESS_ANNOTATIONS \
$GRAFANA_INGRESS_ANNOTATIONS \
$ALERTMANAGER_INGRESS_ANNOTATIONS \
$PROMETHEUS_SC_ARGS \
-f misc/prometheus/values-common.yaml \
-f misc/prometheus/values-prod.yaml \
prometheus-community/kube-prometheus-stack


Check the status of deployed pods:

kubectl get pods -n "${MONITORING_NAMESPACE}"

Accessing the Grafana Frontend

Retrieve the Grafana admin user:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.user}" | base64 --decode; echo


Retrieve the Grafana admin password:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.password}" | base64 --decode; echo


Grafana can now be accessed at https://${MONITORING_DOMAIN}/grafana/ from a web browser using the admin user and password retrieved above.

Installing Elasticsearch, Kibana, Logstash, and Filebeat

Add this helm repo and update:

helm repo add elastic https://helm.elastic.co
helm repo update


Export this variable:

export KIBANA_INGRESS="--set ingress.hosts[0]\
=$MONITORING_DOMAIN --set ingress.tls[0].secretName\
=$MONITORING_TLS_SECRET --set ingress.tls[0]\
.hosts[0]=$MONITORING_DOMAIN"

Install Elasticsearch

helm install elasticsearch \
--version 7.10 elastic/elasticsearch \
-f misc/elk/elasticsearch/values-common.yaml \
-f misc/elk/elasticsearch/values-prod.yaml \
$ELASTICSEARCH_SC_ARGS \
-n $MONITORING_NAMESPACE


Install Kibana

helm install kibana \
--version 7.10 elastic/kibana \
-n $MONITORING_NAMESPACE \
$KIBANA_INGRESS_ANNOTATIONS \
$KIBANA_INGRESS \
-f misc/elk/kibana/values-common.yaml \
-f misc/elk/kibana/values-prod-first-time.yaml \
-f misc/elk/kibana/values-prod.yaml

Patch Kibana

kubectl patch deploy kibana-kibana \
-n "${MONITORING_NAMESPACE}" -p "$(cat misc/elk/kibana/probe-patch.yaml)"

Get the Elasticsearch admin password:

kubectl get secret elastic-credentials -o jsonpath\
="{.data.password}" -n $MONITORING_NAMESPACE | \
base64 --decode; echo


Open Kibana in a web browser, log in with the username "elastic" and the Elasticsearch admin password, then add any additional unprivileged users who should have access to the logging system:

https://$MONITORING_DOMAIN/kibana/app/management/security/users

Install Filebeat

helm install filebeat \
--version 7.10 elastic/filebeat \
-n $MONITORING_NAMESPACE \
-f misc/elk/filebeat/values-common.yaml \
-f misc/elk/filebeat/values-prod.yaml

Install Logstash

helm install logstash \
--version 7.10 elastic/logstash \
-n $MONITORING_NAMESPACE \
$LOGSTASH_SC_ARGS \
-f misc/elk/logstash/values-prod.yaml \
-f misc/elk/logstash/values-common.yaml

Clean-up Post Monitoring Installation

Unset environment variables:

unset MONITORING_DOMAIN MONITORING_NAMESPACE MONITORING_TLS_SECRET \
MONITORING_SC ELASTICSEARCH_SC_ARGS LOGSTASH_SC_ARGS PROMETHEUS_SC_ARGS \
ALERTMANAGER_INGRESS PROMETHEUS_INGRESS GRAFANA_INGRESS KIBANA_INGRESS \
KIBANA_INGRESS_ANNOTATIONS PROMETHEUS_INGRESS_ANNOTATIONS \
GRAFANA_INGRESS_ANNOTATIONS ALERTMANAGER_INGRESS_ANNOTATIONS


Clear bash history:

history -c

This removes any secrets that would otherwise remain visible in the shell's command history.

2021.4
October 20, 2021 19:06