This document covers deploying a complete Virtalis Reach system into a Kubernetes cluster. The target audience is system administrators, and the content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell.
The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration. Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use cases and environments.
If you are unsure of the usage or impact of a particular system command then seek advice. Improper use of server infrastructure can have serious consequences.
Virtalis Reach requires:
Kubernetes cluster (either on premises or in the cloud):
- At least version v1.22.7
- 8 cores
- At least 64GB of memory available to a single node (128GB total recommended)
- 625GB of storage (see the storage section for more information)
- Nginx as the cluster ingress controller
- Access to the internet during the software deployment and update
- A network policy compatible network plugin
Virtalis Reach does not require:
- A GPU in the server
- A connection to the internet following the software deployment
The following administration tools are required, along with their recommended tested versions:
- kubectl v1.22.7 - this package allows us to communicate with a Kubernetes cluster on the command line
- helm 3 v3.6.3 - this package is used to help us install large Kubernetes charts consisting of numerous resources
- oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
- azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
- jq v1.6 - this package is used to parse and traverse JSON on the command line
- yq v4.6.1 - this package is used to parse and traverse YAML on the command line
These tools do not need to be installed on the Virtalis Reach server itself, only on the machine that will communicate with the Kubernetes cluster for the duration of the installation.
If using recent versions of Ubuntu, note that the Azure CLI installed via Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.
In this document, variables enclosed in angled brackets <VARIABLE> should be replaced with the appropriate values. For example:
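For instance, if a later step reads export REACH_DOMAIN=\<DOMAIN\>, you would substitute your own value (the variable name and domain here are purely illustrative):

```shell
# <DOMAIN> has been replaced with the real domain for this deployment
# (variable name and value are illustrative)
export REACH_DOMAIN=reach.example.com
echo "$REACH_DOMAIN"
```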
In this document, commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted.
Some steps have been included in a single bash script which can be inspected before being run.
Set Up the Deployment Shell
Make a directory to store temporary installation files:
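A minimal sketch, assuming the temporary files live under the current user's home directory (the path is an assumption):

```shell
# Directory for temporary installation files; the path is illustrative
mkdir -p ~/reach-install
cd ~/reach-install
```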
Export the following variables:
Substitute the variable values and export them:
Substitute and export the following variables, wrap the values in single quotes to prevent bash substitution:
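For example, single quotes stop bash from expanding characters such as $ inside the value (the variable name and value are hypothetical):

```shell
# Single quotes prevent bash from expanding $ and ` in the value
export REGISTRY_PASSWORD='pa$$word!'
echo "$REGISTRY_PASSWORD"
```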
Export the environment variables if Virtalis Reach TLS will be configured to use LetsEncrypt:
Optional configuration variables:
Configuring External File Sources
For further information on how to configure this section, please refer to Authentication with External Systems.
Export a JSON object of the external file sources for the Translator Service and the Job API.
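The exact schema depends on your configuration; below is a hedged sketch with illustrative field names:

```shell
# Field names are illustrative; see Authentication with External Systems
# for the schema your deployment expects
export EXTERNAL_FILE_SOURCES='{"sources":[{"name":"corp-share","url":"https://files.example.com"}]}'
echo "$EXTERNAL_FILE_SOURCES"
```

You can sanity-check that the value parses as JSON with jq, e.g. echo "$EXTERNAL_FILE_SOURCES" | jq .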
Create network policies for the Translator Service and the Job API to allow them to talk to the configured URL source.
Note: The port will need to be tweaked depending on the protocol and/or port of the URL source server.
Allowing https traffic (port 443):
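A sketch of such a policy, assuming the translator pods are matched by an app: import-translator label (the label, policy name and namespace variable are assumptions):

```shell
kubectl apply -n <REACH_NAMESPACE> -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-translator-egress-https
spec:
  podSelector:
    matchLabels:
      app: import-translator   # label is an assumption
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 443
EOF
```

For a non-standard port, change the port value accordingly.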
Allow traffic on a non-standard port:
Installing File Source Certificates
If any of the configured file sources are secured with TLS and use a certificate or certificates signed by a private authority, then those certificates must be installed so that they are trusted.
For every certificate that you wish to install, create a secret:
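Creating such a secret from a PEM file might look like this (the secret name, file name and namespace variable are placeholders):

```shell
kubectl create secret generic <CERT_NAME> \
  --from-file=ca.crt=./<CERT_FILE>.crt \
  -n <REACH_NAMESPACE>
```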
Export a JSON array of the secrets in the following format for the Translator Service and the Job API:
Note: For the above steps to take effect, the create-secrets.sh and deploy.sh scripts must be run.
Example - installing three different certificates for the import-translator service and only two for the Job-API:
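A hedged sketch of what those two arrays might look like (the variable names and secret names are assumptions):

```shell
# Three CA secrets for the import-translator, two for the Job API
# (variable and secret names are illustrative)
export TRANSLATOR_CERTS='["corp-ca","proxy-ca","share-ca"]'
export JOB_API_CERTS='["corp-ca","proxy-ca"]'
```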
Checking the Nginx Ingress Controller
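A typical check, assuming the controller was installed into the ingress-nginx namespace:

```shell
kubectl get pods -n ingress-nginx
```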
This should return at least 1 running pod.
ingress-nginx   nginx-ingress-controller-…   1/1   Running
If Nginx is not installed, then please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.
If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:
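A minimal sketch using the upstream ingress-nginx Helm chart (chart values and namespace are assumptions; review them against your cluster's requirements first):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```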
Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements.
All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total. This is a provisional amount which will likely change depending on your workload.
By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node they’re deployed on, which hinders the performance of the system.
To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:
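The provisioner is typically installed from the manifest in Rancher's repository; pin a released tag rather than a branch (the version below is an assumption, check for the current release):

```shell
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
```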
You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. This must be created by a Kubernetes administrator beforehand or, in some environments, a default class is also suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.
If you only want to modify the storage class and leave all other parameters, such as size, at their defaults, export these variables:
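The variable name below is hypothetical; check the Virtalis-supplied values files for the names the deploy scripts actually consume:

```shell
# Variable name and class value are illustrative
# (e.g. an AKS-style default class)
export REACH_STORAGE_CLASS=managed-premium
```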
A list of different databases in use by Virtalis Reach and how to customize their storage is shown below.
The default values can be found in /home/root/Reach/k8s/misc/<chartname>/values-common.yaml and /home/root/Reach/k8s/misc/<chartname>/values-prod.yaml
Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize such as size, access modes and so on.
Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on.
Alternatively, the Neo4j helm chart configuration documentation can also be found here https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/
Please refer to the master: persistence: section found in the values.yaml file in the Bitnami MySQL helm chart repository for a list of available parameters to customize such as size, access modes and so on.
Please refer to the persistence: section found in the values.yaml file in the Bitnami RabbitMQ helm chart repository for a list of available parameters to customize such as size, access modes and so on.
Deploying Virtalis Reach
Create a namespace:
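Assuming the namespace name was exported earlier:

```shell
kubectl create namespace <REACH_NAMESPACE>
```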
Add namespace labels used by NetworkPolicies:
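Labelling namespaces generally looks like the following; the label keys and values that your NetworkPolicies actually match on are deployment-specific, so treat these as assumptions:

```shell
# Label keys and values are illustrative
kubectl label namespace ingress-nginx reach-ingress=true
kubectl label namespace <REACH_NAMESPACE> reach-core=true
```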
The ingress-nginx entry on the first line will have to be modified if your Nginx ingress is deployed to a different namespace.
Configure Virtalis Reach TLS
Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.
Manually Creating a TLS Cert Secret
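Given an existing key and certificate pair, the secret can be created like so (the secret and file names are placeholders):

```shell
kubectl create secret tls reach-tls \
  --cert=./tls.crt --key=./tls.key \
  -n <REACH_NAMESPACE>
```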
LetsEncrypt with Cert-manager
Using LetsEncrypt with cert-manager requires that:
- The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
- A domain that you own (cannot be used for domains ending with .local)
- Inbound connections on port 80 are allowed
Create a namespace for cert-manager:
Install the recommended version of cert-manager:
Create a new file:
Paste in the following and replace variables wherever appropriate:
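A typical cert-manager ACME ClusterIssuer looks roughly like this; the issuer name, secret name and ingress class are assumptions, so adjust them to your deployment:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod   # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx
```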
Press Ctrl+O and then Enter to save, then press Ctrl+X to exit nano. Now apply the file:
If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach.
Download Installation Files
Log in with Oras:
Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:
Make the installation scripts executable:
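The three steps above might be sketched as follows; the registry address, artifact path, archive name and script locations are all assumptions:

```shell
# Authenticate against the Virtalis registry (names are placeholders)
oras login <REGISTRY>.azurecr.io -u <USERNAME> -p <PASSWORD>
# Pull and unpack the deployment archive
oras pull <REGISTRY>.azurecr.io/<ARTIFACT>:latest
unzip <ARCHIVE>.zip
# Make the installation scripts executable
chmod +x ./k8s/*.sh
```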
Create and Deploy Secrets
Randomised secrets are used to securely interconnect the Virtalis Reach microservices.
The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.
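pwgen can produce such a string with pwgen -s 30 1. As a sanity check, the same effect is achievable with POSIX tools alone:

```shell
# Generate a 30-character alphanumeric secret
# (equivalent in spirit to: pwgen -s 30 1)
SECRET="$(head -c 512 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 30)"
echo "$SECRET"
```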
Deploy Virtalis Reach and Database Services
Wait until all pods are showing up as Ready:
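Two common ways to monitor this, using the namespace variable exported earlier:

```shell
# Watch pod status interactively
kubectl get pods -n <REACH_NAMESPACE> -w
# Or block until every pod reports Ready (10 minute timeout)
kubectl wait --for=condition=Ready pods --all \
  -n <REACH_NAMESPACE> --timeout=600s
```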
You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.
Install the Automated Backup System
Optionally install the automated backup system by referring to the Virtalis Reach Automated Backup System document or activate your own backup solution.
Retrieving the Keycloak Admin Password
Run the following command:
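The admin password is typically stored in a Kubernetes secret; a hedged sketch in which the secret name and key are assumptions:

```shell
kubectl get secret keycloak-admin -n <REACH_NAMESPACE> \
  -o jsonpath='{.data.password}' | base64 -d; echo
```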
Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.
Post Deployment Clean-up
Clear bash history:
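A minimal sketch:

```shell
# Clear the shell's in-memory history (bash builtin) and the saved file
history -c 2>/dev/null || true
rm -f ~/.bash_history
```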
This will clean up any secrets exported in the system.
Test Network Policies
Virtalis Reach utilizes NetworkPolicies, which restrict communication between the internal services at the network level.
Note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.
To test these policies, run a temporary pod:
Install the curl package:
Run a request to test the connection to one of our backend APIs; this should return a timeout error:
Exit the pod, which will delete it:
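The sequence above might be sketched as follows; the pod name, image and target service are assumptions:

```shell
# Start a throwaway pod with an interactive shell
kubectl run netpol-test --rm -it --image=ubuntu:22.04 \
  -n <REACH_NAMESPACE> -- bash
# Inside the pod: install curl, probe a backend API, then leave
apt update && apt install -y curl
curl --max-time 10 http://<BACKEND_SERVICE>   # expect a timeout error
exit
```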
Additionally, you can test egress by checking that any outbound connections made to a public address are denied.
Get the name of the edge-server pod:
Exec inside the running pod using the pod name from above:
Running a command like apt update, which makes an outbound request, should time out:
Exit the pod:
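The egress check above might be sketched as follows; the edge-server label selector is an assumption:

```shell
# Get the name of the edge-server pod (label is illustrative)
POD="$(kubectl get pods -n <REACH_NAMESPACE> -l app=edge-server \
  -o jsonpath='{.items[0].metadata.name}')"
# Exec inside the running pod
kubectl exec -it "$POD" -n <REACH_NAMESPACE> -- bash
# Inside the pod: an outbound request should time out
apt update
exit
```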