Virtalis Reach Help

Deploying Virtalis Reach on a Kubernetes Cluster

Overview

This section describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell commands that should be executed in the cluster administration shell.

The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration. Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

If you are unsure of the usage or impact of a particular system command, seek advice; improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

  • A Kubernetes cluster (either on premises or in the cloud) with:
      • at least version v1.21.3
      • 8 CPU cores
      • at least 64GB of memory available to a single node (128GB total recommended)
      • 625GB of storage (see the Storage section for more information)
  • Nginx as the cluster ingress controller
  • Access to the internet during the software deployment and update
  • A network plugin that supports NetworkPolicies

Virtalis Reach does not require:

  • A GPU in the server
  • A connection to the internet following the software deployment

The following administration tools are required, along with their recommended tested versions:

  • kubectl v1.21.3 - this package allows us to communicate with a Kubernetes cluster on the command line.
  • helm 3 v3.6.3 - this package is used to help us install large Kubernetes charts consisting of numerous resources.
  • oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach.
  • azure cli stable - this package is used to authenticate with our internal registry hosted on Azure.
  • jq v1.6 - this package is used to parse and traverse JSON on the command line.
  • yq v4.6.1 - this package is used to parse and traverse YAML on the command line.

These tools do not need to be installed on the Virtalis Reach server, only on the machine that will communicate with the Kubernetes cluster for the duration of the installation.

On recent versions of Ubuntu, the Azure CLI installed via Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repositories. Alias azure-cli to az if needed.
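
For example, in a bash shell:

alias az='azure-cli'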

Variables and Commands

In this section, variables enclosed in <> angle brackets should be replaced with the appropriate values. For example:

docker login -u <my_id> -p <my_password>

becomes 

docker login -u admin -p admin


Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console
This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console


Some steps have been included in a single bash script which can be inspected before being run.

Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:

mkdir /home/root/Reach && cd /home/root/Reach


Export the following variables:

export REACH_VERSION=2021.4.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1


Substitute the variable values and export them:

export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export TLS_SECRET_NAME=<the name of the secret containing the tls cert>
export REACH_NAMESPACE=<name of kubernetes namespace to deploy Virtalis Reach on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>
export reach_licence__key=<licence xml snippet>
export reach_licence__signature=<licence signature>
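
For example, with purely illustrative values (your own domain, secret name and namespace will differ):

export REACH_DOMAIN=reach.example.com
export TLS_SECRET_NAME=reach-tls-cert
export REACH_NAMESPACE=reach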


Export the following environment variables if Virtalis Reach TLS is, or will be, configured to use LetsEncrypt:

export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"


Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export OFFLINE_INSTALL=<when set to true, patch Virtalis Reach so that it can be taken off the internet>
export MQ_WINDCHILL=<when set to 1, deploy rabbitmq with windchill support>
export MQ_TEAMCENTER=<when set to 1, deploy rabbitmq with teamcenter support>
export MQ_CLIENT_KEY_PASS=<password that was/will be used for the client_key password, see windchill/teamcenter installation document>

If both Windchill and Teamcenter are enabled, the certificates and client key password must be the same for both instances.

Windchill Pre-installation

If Virtalis Reach is going to be configured to connect to Windchill then run the following commands:

export MQ_WINDCHILL=1
export WINDCHILL_HOSTNAME=<hostname of the windchill instance>
export WINDCHILL_AUTHTYPE=Basic
export WINDCHILL_USERNAME=<windchill auth username>
export WINDCHILL_PASSWORD=<windchill auth password>
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: windchill
  namespace: $REACH_NAMESPACE
type: Opaque
stringData:
  hostname: ${WINDCHILL_HOSTNAME}
  authType: ${WINDCHILL_AUTHTYPE}
  username: ${WINDCHILL_USERNAME}
  password: ${WINDCHILL_PASSWORD}
EOF
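
You can optionally confirm that the secret has been created (this check is not part of the required steps):

kubectl get secret windchill -n $REACH_NAMESPACE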

Teamcenter Pre-installation

If Virtalis Reach is going to be configured to connect to Teamcenter then run the following commands:

export MQ_TEAMCENTER=1
export TEAMCENTER_HOSTNAME=<hostname of the teamcenter instance>
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: teamcenter
  namespace: $REACH_NAMESPACE
type: Opaque
stringData:
  hostname: ${TEAMCENTER_HOSTNAME}
  authType: OAuth
EOF

Checking the Nginx Ingress Controller

kubectl get pods -n ingress-nginx

This should return at least one running pod:

ingress-nginx   nginx-ingress-controller-...   1/1   Running


If Nginx is not installed then please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.

If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && \
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
-n ingress-nginx \
--create-namespace
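
To optionally confirm that the controller is running and has been assigned an address, check its service:

kubectl get svc -n ingress-nginx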

Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements.

List of supported volume plugins

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total; this is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node they are deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:

kubectl apply -f \
https://raw.githubusercontent.com/rancher/\
local-path-provisioner/master/deploy/local-path-storage.yaml
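
You can optionally verify that the provisioner is running and that its local-path storage class has been registered:

kubectl get pods -n local-path-storage && \
kubectl get storageclass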

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class to use. The storage class must be created by a Kubernetes administrator beforehand, although in some environments a suitable default class already exists. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters, such as size, at their defaults, export these variables:

export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence\
.storageClass="${REACH_SC}" --set core\
.persistentVolume.storageClass\
="${REACH_SC}" --set master.persistence\
.storageClass="${REACH_SC}" "
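
For example, on an Azure Kubernetes Service cluster the built-in managed-premium class could be used (an illustrative value; substitute the class available in your environment):

export REACH_SC=managed-premium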

Custom Parameters

Here is a list of different databases in use by Virtalis Reach and how to customize their storage.

MinIO

Please refer to the persistence: section found in the values.yaml file in the Bitnami MinIO helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

These values can be added/tweaked in the following files:

  • k8s/misc/artifact-binary-store/values-prod.yaml
  • k8s/misc/import-binary-store/values-prod.yaml
  • k8s/misc/import-folder-binary-store/values-prod.yaml
  • k8s/misc/vrdb-binary-store/values-prod.yaml
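
For example, an override increasing the volume size might look like this (an illustrative sketch; the exact keys are documented in the chart's values.yaml):

persistence:
  size: 200Gi
  accessModes:
    - ReadWriteOnce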

Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on.

These values can be added/tweaked in the following files:

  • k8s/misc/artifact-store/values-prod.yaml
  • k8s/misc/vrdb-store/values-prod.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here: https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/
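
For example, an override increasing the volume size might look like this (an illustrative sketch following the chart's core: persistentVolume: section):

core:
  persistentVolume:
    size: 200Gi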

MySQL

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami MySQL helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

  • k8s/misc/collaboration-session-db/values-prod.yaml
  • k8s/misc/import-folder-db/values-prod.yaml
  • k8s/misc/job-db/values-prod.yaml
  • k8s/misc/background-job-db/values-prod.yaml
  • k8s/misc/keycloak-db/values-prod.yaml
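
For example (an illustrative sketch following the chart's master: persistence: section):

master:
  persistence:
    size: 50Gi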

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami RabbitMQ helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

  • k8s/misc/message-queue/values-prod.yaml

Deploying Virtalis Reach

Create a namespace:

kubectl create namespace "${REACH_NAMESPACE}"


Add namespace labels required by NetworkPolicies:

kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true


The ‘ingress-nginx’ entry in the first command will have to be modified if your Nginx ingress is deployed to a different namespace in your cluster.

Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:

  • The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
  • A domain that you own (LetsEncrypt cannot issue certificates for domains ending in .local)

Create a namespace for cert-manager:

kubectl create namespace cert-manager


Install the recommended version v1.0.2 of cert-manager:

kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.0.2/cert-manager.yaml

Create a new file:

nano prod_issuer.yaml


Paste in the following and replace variables wherever appropriate:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email_address>
    privateKeySecretRef:
      name: <value of the $TLS_SECRET_NAME variable you exported before>
    solvers:
    - http01:
        ingress:
          class: nginx


Press Ctrl+O and then Enter to save, then press Ctrl+X to exit nano. Now apply the file:

kubectl apply -f prod_issuer.yaml
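
You can optionally confirm that the issuer has registered with the ACME server; the READY column should show True:

kubectl get clusterissuer letsencrypt-prod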


Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes


If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it with Virtalis Reach.

Download Installation Files

Log in with oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"


Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}"\
.azurecr.io/misc/k8s:$REACH_VERSION &&
tar -zxvf k8s.tar.gz


Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices.

The script below uses the pwgen package to generate random strings of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.
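
For example, on a Debian-based distribution pwgen can be installed with:

sudo apt-get install pwgen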

./create-secrets.sh

Deploy Virtalis Reach and Database Services

./deploy.sh


Wait until all pods are showing up as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE


You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally, install the automated backup system by referring to Virtalis Reach Automated Backup System or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:

kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo


Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up

Unset exported environment variables:

unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature


Clear bash history:

history -c


This will clean up any secrets exported into the shell environment.

Test Network Policies

Virtalis Reach utilizes NetworkPolicies, which restrict communication between the internal services at the network level.

Please note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.

To test these policies, run a temporary pod:

kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian


Install the curl package:

apt update && apt install curl


Run a request to test the connection to one of our backend APIs. This should return a timeout error:

curl http://artifact-access-api-service:5000


Exit the pod, which will delete it:

exit


Additionally, you can test the egress by checking that any outbound connections made to a public address are denied.

Get the name of the edge-server pod:

kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server


Exec inside the running pod using the pod name from above:

kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash


Running a command like apt update, which makes an outbound request, should time out:

apt update


Exit the pod:

exit