Virtalis Reach Help
Technical Document

System Administrator Guide

This document has 7 chapters.

Automated Backup System for Virtalis Reach 2022.1

Virtalis Reach comes with an automated backup system allowing an Administrator to restore to an earlier snapshot in the event of a disaster.
Automated Backup System for Virtalis Reach 2022.1

Overview

Virtalis Reach comes with an automated backup system allowing an Administrator to restore to an earlier snapshot in the event of a disaster. We will install Velero to back up the state of your Kubernetes cluster and use a custom-built solution that leverages Restic to back up the persistent data imported into Virtalis Reach.

You should consider creating regular backups of the buckets which hold the backed-up data in case of failure. This can be done through your cloud provider, or manually if you host your own bucket.

Alternatively, you can consider using your own backup solution. A good option is the PersistentVolumeSnapshot, which creates a snapshot of a persistent volume at a point in time. The biggest caveat is that it is only supported on a limited number of platforms, such as Azure and AWS. If you opt for a different solution to the one we provide, be mindful that not all databases used by Virtalis Reach support live backups. This means that the databases have to be taken offline before backing up. The following databases in use by Virtalis Reach must be taken offline for the duration of the backup:

• Minio
• Neo4j

Variables and Commands

In this section, variables enclosed in <> angle brackets should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console
This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Set Up the Deployment Shell

Substitute the variable values and export them:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Navigate to the directory containing the Reach installation files:

cd /home/root/Reach/k8s

Installation

Creating a Storage Location

Recommended: Follow the “Create S3 Bucket” and “Set permissions for Velero” sections from the link below and ensure that you create the following 2 buckets under your s3 bucket:

• reach-restic
• reach-velero

https://github.com/vmware-tanzu/velero-plugin-for-aws#create-s3-bucket

Export the address and port of the bucket you have created:

export S3_BUCKET_ADDRESS=<address> #e.g. S3_BUCKET_ADDRESS=192.168.1.3, S3_BUCKET_ADDRESS=mydomain.com
export S3_BUCKET_PORT=<port>
export S3_BUCKET_PROTOCOL=<http or https>
export S3_BUCKET_REGION=<region>

Not recommended - create an s3 bucket on the same cluster, alongside Virtalis Reach. Customize persistence.size if the total size of your data exceeds 256GB, and change the storage class REACH_SC if needed:

export REACH_SC=local-path

kubectl create ns reach-backup

#check if pwgen is installed for the next step
command -v pwgen

kubectl create secret generic reach-s3-backup -n reach-backup \
--from-literal='access-key'=$(pwgen 30 1 -s | tr -d '\n') \
--from-literal='secret-key'=$(pwgen 30 1 -s | tr -d '\n')

helm upgrade --install reach-s3-backup bitnami/minio \
-n reach-backup --version 3.6.1 \
--set persistence.storageClass=$REACH_SC \
--set persistence.size=256Gi \
--set mode=standalone \
--set resources.requests.memory='150Mi' \
--set resources.requests.cpu='250m' \
--set resources.limits.memory='500Mi' \
--set resources.limits.cpu='500m' \
--set disableWebUI=true \
--set useCredentialsFile=true \
--set volumePermissions.enabled=true \
--set defaultBuckets="reach-velero reach-restic" \
--set global.minio.existingSecret=reach-s3-backup

cat <<EOF > credentials-velero
[default]
aws_access_key_id=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode)
aws_secret_access_key=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)
EOF

Export the address and port of the bucket you have created:

export S3_BUCKET_ADDRESS=reach-s3-backup-minio.reach-backup.svc.cluster.local
export S3_BUCKET_PORT=9000
export S3_BUCKET_PROTOCOL=http
export S3_BUCKET_REGION=local

Set Up Variables

For the duration of this installation, you must navigate to the k8s folder that is downloadable by following the Virtalis Reach Installation Guide.

Make the scripts executable:

sudo chmod +x \
trigger-database-restore.sh \
trigger-database-backup.sh \
trigger-database-backup-prune.sh \
install-backup-restore.sh

Export the following variables:

export ACR_REGISTRY_NAME=virtaliscustomer

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export DISABLE_CRON=<set to true to install without an automated cronSchedule>

Velero Installation

The following steps assume you named your Velero bucket “reach-velero”.

Add the VMware helm repository and update:

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update

Install Velero:

helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud\
=./credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name\
=reach-velero \
--set configuration.backupStorageLocation.bucket\
=reach-velero \
--set configuration.backupStorageLocation.config.region\
=$S3_BUCKET_REGION \
--set configuration.backupStorageLocation.config.s3Url\
=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.publicUrl\
=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.s3ForcePathStyle\
=true \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--set snapshotsEnabled=false \
--version 2.23.1 \
--set deployRestic=true

Install the Velero CLI client:

wget https://github.com/vmware-tanzu/velero/releases\
/download/v1.5.3/velero-v1.5.3-linux-amd64.tar.gz
tar -xzvf velero-v1.5.3-linux-amd64.tar.gz
rm -f velero-v1.5.3-linux-amd64.tar.gz
sudo mv $(pwd)/velero-v1.5.3-linux-amd64/velero /usr/bin/
sudo chmod +x /usr/bin/velero

Manually create a single backup to verify that the connection to the S3 bucket is working:

velero backup create test-backup-1 \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

Watch the status of the backup until it has finished; it should show as Completed if everything was set up correctly:

watch -n2 velero backup get

Create a scheduled backup:

velero create schedule cluster-backup --schedule="45 23 * * 6" \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

This schedule will run a backup every Saturday at 23:45.
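Additional schedules can be created with the same command if a different cadence is needed; an illustrative sketch (the schedule name and cron expression below are examples only, not part of the standard installation):

# List the schedules Velero currently knows about
velero schedule get

# Example only: an additional nightly backup at 01:30
# (cron fields: minute hour day-of-month month day-of-week)
velero create schedule cluster-backup-nightly --schedule="30 1 * * *" \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE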
Restic Integration

The custom Restic integration uses Kubernetes jobs to mount the data, encrypt it, and send it to a bucket. Kubernetes CustomResourceDefinitions are used to store the information about the Restic repositories as well as any created backups.

The scheduled data backup runs every Friday at 23:45 by default. This can be modified by editing the cronSchedule field in all values.yaml files located in backup-restore/helmCharts/<release_name>/, with the exception of common-lib.

All the performed backups are offline backups, therefore Virtalis Reach will be unavailable for that period as a number of databases have to be taken down.

Create an AWS bucket with the name “reach-restic” by following the same guide from the Velero section.

Replace the keys and create a secret containing the reach-restic bucket credentials:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'='<ACCESS_KEY>' \
--from-literal='AWS_SECRET_KEY'='<SECRET_KEY>'

If you instead opted to deploy an s3 bucket on the same cluster, run this instead:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode) \
--from-literal='AWS_SECRET_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Run the installation:

./install-backup-restore.sh

Check that all the -init-repository- jobs have completed:

kubectl get pods -n $REACH_NAMESPACE | grep init-repository

Query the list of repositories:

kubectl get repository -n $REACH_NAMESPACE

The output should look something like this, with the status of all repositories showing as Initialized:

NAME                    STATUS        SIZE   CREATIONDATE
artifact-binary-store   Initialized   0B     2021-03-01T10:21:53Z
artifact-store          Initialized   0B     2021-03-01T10:21:57Z
job-db                  Initialized   0B     2021-03-01T10:21:58Z
keycloak-db             Initialized   0B     2021-03-01T10:21:58Z
vrdb-binary-store       Initialized   0B     2021-03-01T10:21:58Z
vrdb-store              Initialized   0B     2021-03-01T10:22:00Z

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-init-repository

Trigger a manual backup:

./trigger-database-backup.sh

After a while, all the -triggered-backup- jobs should show up as Completed:

kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-backup

Query the list of snapshots:

kubectl get snapshot -n "$REACH_NAMESPACE"

The output should look something like this, with the status of all snapshots showing as Completed:

NAME                       STATUS      ID      CREATIONDATE
artifact-binary-store...   Completed   62e...  2021...
artifact-store-neo4j-...   Completed   6ae...  2021...
job-db-mysql-master-1...   Completed   944...  2021...
keycloak-db-mysql-mas...   Completed   468...  2021...
vrdb-binary-store-min...   Completed   729...  2021...
vrdb-store-neo4j-core...   Completed   1c2...  2021...

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-triggered-backup

Triggering a Manual Backup

Consider scheduling system downtime and scaling down the ingress to prevent people from accessing the server during the backup procedure.
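The next step records the current replica count of the nginx ingress controller so it can be restored later. If you prefer, the count can be captured directly into the variable instead of being noted down by hand; a minimal sketch, assuming the deployment and namespace names used in the commands below:

# Optional alternative to noting the count manually
export NGINX_REPLICAS=$(kubectl get deploy ingress-nginx-ingress-controller \
-n ingress-nginx -o jsonpath='{.spec.replicas}')
echo "$NGINX_REPLICAS"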
Note down the replica count for nginx before scaling it down: kubectl get deploy -n ingress-nginx export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT> Scale down the nginx ingress service: kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \ -n ingress-nginx Create a cluster resource level backup: velero backup create cluster-backup-$(date +"%m-%d-%Y") \ --storage-location=reach-velero --include-namespaces $REACH_NAMESPACE Check the status of the velero backup: watch -n2 velero backup get Create a database level backup: ./trigger-database-backup.sh Check the status of the database backup: watch -n2 kubectl get snapshot -n "$REACH_NAMESPACE" Once you are happy to move on, delete the completed job pods: kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-triggered-backup Restoring Data Restoration Plan Plan your restoration by gathering a list of the snapshot IDs you will be restoring from and export them. Begin by querying the list of repositories: kubectl get repo -n "$REACH_NAMESPACE" NAME STATUS SIZE CREATIONDATE artifact-binary-store Initialized 12K 2021-07-02T12:03:26Z artifact-store Initialized 527M 2021-07-02T12:03:29Z comment-db Initialized 180M 2021-07-02T12:03:37Z job-db Initialized 181M 2021-07-02T12:03:43Z keycloak-db Initialized 193M 2021-07-02T12:03:43Z vrdb-binary-store Initialized 12K 2021-07-02T12:03:46Z vrdb-store Initialized 527M 2021-07-02T12:02:44Z Run a dry run of the restore script to gather a list of the variables you can export: DRY_RUN=true ./trigger-database-restore.sh Sample output: Error: ARTIFACT_BINARY_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-binary-store' to see a list of available snapshots. Error: ARTIFACT_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-store'... ... Query available snapshots or use the commands returned in the output above to query by specific repositories: kubectl get snapshot -n "$REACH_NAMESPACE" This should return a list of available snapshots: NAME STATUS ID CREATIONDATE artifact-binary-store-mini... Completed 4a2... 2021-07-0... artifact-store-neo4j-core-... Completed 41d... 2021-07-0... comment-db-mysql-master-16... Completed e72... 2021-07-0... job-db-mysql-master-162522... Completed eb5... 2021-07-0... keycloak-db-mysql-master-1... Completed 919... 2021-07-0... vrdb-binary-store-minio-16... Completed cf0... 2021-07-0... vrdb-store-neo4j-core-1625... Completed 08d... 2021-07-0... It’s strongly advised to restore all the backed-up data using snapshots from the same day to avoid data corruption. Note down the replica count for nginx before scaling it down: kubectl get deploy -n ingress-nginx export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT> Scale down the nginx ingress service to prevent people from accessing Virtalis Reach during the restoration process: kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \ -n ingress-nginx Run the Restore Script ./trigger-database-restore.sh Get list of backups created with velero: velero backup get Example: NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR test-backup-1 Completed 0 0 2022-0... 29d reach-velero <none> test-backup-2 Completed 0 0 2022-0... 
29d reach-velero <none> Restore: velero restore create --from-backup <backup name> Unset exported restore id’s: charts=( $(ls backup-restore/helmCharts/) ); \ for chart in "${charts[@]}"; do if [ $chart == "common-lib" ]; \ then continue; fi; id_var="$(echo ${chart^^} | \ sed 's/-/_/g')_RESTORE_ID"; unset ${id_var}; done After a while, all the -triggered-restore- jobs should show up as Completed: kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-restore Once you are happy to move on, delete the completed job pods: kubectl delete jobs -n $REACH_NAMESPACE \ -l app=backup-restore-triggered-restore Watch and wait for all pods that are running to be Ready: watch -n2 kubectl get pods -n "$REACH_NAMESPACE" Scale back nginx: kubectl scale deploy --replicas="$NGINX_REPLICAS" \ ingress-nginx-ingress-controller -n ingress-nginx Pruning Backups Velero Get list of backups created with velero: velero backup get Example: NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR test-backup-1 Completed 0 0 2022-0... 29d reach-velero <none> test-backup-2 Completed 0 0 2022-0... 29d reach-velero <none> Delete a single backup: velero backup delete <name> Example: velero backup delete test-backup-1 Restic Get list of reach backups: kubectl get snapshot -n $REACH_NAMESPACE Example: artifact-binary-store-minio-1642502735 Completed 5daa1580 2022-01-18T10:45:35Z artifact-binary-store-minio-1642512232 Completed b1fb6a15 2022-01-18T13:23:52Z artifact-binary-store-minio-1642512501 Completed 764c989e 2022-01-18T13:28:21Z artifact-store-neo4j-core-1642502736 Completed f8233016 2022-01-18T10:45:36Z artifact-store-neo4j-core-1642512234 Completed df2606b1 2022-01-18T13:23:54Z artifact-store-neo4j-core-1642512502 Completed a9d9470b 2022-01-18T13:28:22Z comment-db-mysql-master-1642502737 Completed da0de6a7 2022-01-18T10:45:37Z comment-db-mysql-master-1642512237 Completed 37a860e4 2022-01-18T13:23:57Z comment-db-mysql-master-1642512503 Completed d5e04e5e 2022-01-18T13:28:23Z job-db-mysql-master-1642502739 Completed 51db2aa5 2022-01-18T10:45:39Z job-db-mysql-master-1642512235 Completed 40c5fae1 2022-01-18T13:23:55Z job-db-mysql-master-1642512505 Completed dbc0921e 2022-01-18T13:28:25Z keycloak-db-mysql-master-1642502742 Completed 70ab969a 2022-01-18T10:45:42Z keycloak-db-mysql-master-1642512235 Completed 6df99e96 2022-01-18T13:23:55Z keycloak-db-mysql-master-1642512506 Completed 25c93be7 2022-01-18T13:28:26Z product-tree-db-mysql-master-1642502740 Completed ba4edb53 2022-01-18T10:45:40Z product-tree-db-mysql-master-1642512238 Completed 47880e3b 2022-01-18T13:23:58Z product-tree-db-mysql-master-1642512504 Completed 378dd1e1 2022-01-18T13:28:24Z vrdb-binary-store-minio-1642502746 Completed c7109c6b 2022-01-18T10:45:46Z vrdb-binary-store-minio-1642512247 Completed 18f03082 2022-01-18T13:24:07Z vrdb-binary-store-minio-1642512510 Completed 5c47d40c 2022-01-18T13:28:30Z vrdb-store-neo4j-core-1642502748 Completed bd692195 2022-01-18T10:45:48Z vrdb-store-neo4j-core-1642512241 Completed df6cbaf9 2022-01-18T13:24:01Z vrdb-store-neo4j-core-1642512510 Completed 89ae8fcc 2022-01-18T13:28:30Z Prune backups based on a restic policy. 
Learn more about the different policies here: https://restic.readthedocs.io/en/latest/060_forget.html#removing-snapshots-according-to-a-policy PRUNE_FLAG="<policy>" ./trigger-database-backup-prune.sh Example: Keep the latest snapshot and delete everything else: PRUNE_FLAG="--keep-last 1" ./trigger-database-backup-prune.sh After: NAME STATUS ID CREATIONDATE artifact-binary-store-minio-1642512501 Completed 764c989e 2022-01-18T13:28:21Z artifact-store-neo4j-core-1642512502 Completed a9d9470b 2022-01-18T13:28:22Z comment-db-mysql-master-1642512503 Completed d5e04e5e 2022-01-18T13:28:23Z job-db-mysql-master-1642512505 Completed dbc0921e 2022-01-18T13:28:25Z keycloak-db-mysql-master-1642512506 Completed 25c93be7 2022-01-18T13:28:26Z product-tree-db-mysql-master-1642512504 Completed 378dd1e1 2022-01-18T13:28:24Z vrdb-binary-store-minio-1642512510 Completed 5c47d40c 2022-01-18T13:28:30Z vrdb-store-neo4j-core-1642512510 Completed 89ae8fcc 2022-01-18T13:28:30Z
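Other retention flags from the restic policy documentation linked above can be combined in the same PRUNE_FLAG; an illustrative sketch (the numbers are examples only):

# Example only: keep the last 7 daily and 4 weekly snapshots per repository
PRUNE_FLAG="--keep-daily 7 --keep-weekly 4" ./trigger-database-backup-prune.sh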

Deploying Virtalis Reach 2022.1 on a Kubernetes Cluster

This document describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell.
Deploying Virtalis Reach 2021.5 on a Kubernetes Cluster

Overview

This document describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell. The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration. Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use cases and environments. Seek advice if you are unsure of the usage or impact of a particular system command, as the improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

• A Kubernetes cluster (either on premises or in the cloud) with:
  • at least version v1.21.3
  • 8 cores
  • at least 64GB of memory available to a single node (128GB total recommended)
  • 625GB of storage (see the storage section for more information)
  • Nginx as the cluster ingress controller
  • access to the internet during the software deployment and update
  • a network policy compatible network plugin

Virtalis Reach does not require:

• A GPU in the server
• A connection to the internet following the software deployment

The following administration tools are required, along with their recommended tested versions:

• kubectl v1.21.3 - this package allows us to communicate with a Kubernetes cluster on the command line
• helm 3 v3.6.3 - this package is used to help us install large Kubernetes charts consisting of numerous resources
• oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
• azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
• jq v1.6 - this package is used to parse and traverse JSON on the command line
• yq v4.6.1 - this package is used to parse and traverse YAML on the command line

These tools are not installed on the Virtalis Reach server - only on the machine that will communicate with the Kubernetes cluster for the duration of the installation.

If using recent versions of Ubuntu, the Azure CLI as installed by Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.

Variables and Commands

In this section, variables enclosed in <> angle brackets should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console
This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.
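Before starting, it is worth confirming that the administration tools listed above are installed and at suitable versions; a minimal sketch (on Ubuntu Snap installs, az may need to be invoked as azure-cli, as noted above, and the oras version subcommand may differ between oras releases):

# Confirm the required tooling is available on the administration machine
kubectl version --client
helm version
oras version
az --version
jq --version
yq --version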
Pre-installation Set Up the Deployment Shell Make a directory to store temporary installation files: mkdir /home/root/Reach && cd /home/root/Reach Export the following variables: export REACH_VERSION=2021.5.0 export ACR_REGISTRY_NAME=virtaliscustomer export SKIP_MIGRATIONS=1 Substitute the variable values and export them: export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on> export TLS_SECRET_NAME=<the name of the secret containing the tls cert> export REACH_NAMESPACE=<name of kubernetes namespace to deploy Virtalis Reach on> export ACR_USERNAME=<service principal id> export ACR_PASSWORD=<service principal password> export reach_licence__key=<licence xml snippet> export reach_licence__signature=<licence signature> Export the environment variables if Virtalis Reach TLS is/will be configured to use LetsEncrypt: export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\ .cert-manager\.io/cluster-issuer=letsencrypt-prod" export INGRESS_ANNOTATIONS="--set ingress.annotations\ .cert-manager\.io/cluster-issuer=letsencrypt-prod" Optional configuration variables: export MANAGED_TAG=<custom image tag for Virtalis Reach services> export OFFLINE_INSTALL=<when set to true, patch Virtalis Reach so that it can be taken off the internet> export MQ_EXPOSE_INGRESS=<when set to 1, expose rabbitmq on the ingress> export MQ_CLIENT_KEY_PASS=<password that was/will be used for the client_key password, see windchill/teamcenter installation document> Configuring External File Sources For further information on how to configure this section, please refer to Authentication with External Systems <LINK>. Export a JSON object of the external file sources for the Translator Service and the Job API: Example: read -r -d '' IMPORT_TRANSLATOR_SOURCE_CREDENTIALS <<'EOF' { "Hostname": "windchill.local", "AuthType": "Basic", "BasicConfig": { "Username": "windchilluser", "Password": "windchill" }, "Mapping": { "Protocol": "http", "Host": "windchill.local", }, "AllowInsecureRequestUrls": "true" }, { "Hostname": "mynotauthenticatedserver.local", "AuthType": "None", "AllowInsecureRequestUrls": "false" }, EOF export IMPORT_TRANSLATOR_SOURCE_CREDENTIALS=$(echo -n ${IMPORT_TRANSLATOR_SOURCE_CREDENTIALS} | tr -d '\n') read -r -d '' JOB_API_SOURCE_CREDENTIALS <<'EOF' { "Hostname": "mynotauthenticatedserver.local", "AuthType": "None", "AllowInsecureRequestUrls": "false" }, EOF export JOB_API_SOURCE_CREDENTIALS=$(echo -n ${JOB_API_SOURCE_CREDENTIALS} | tr -d '\n') Create network policies for the Translator Service and the Job API to allow them to communicate with the configured URL source. Please note: The port will need to be modified depending on the protocol and/or port of the URL source server. 
Example: Allowing https traffic (port 443): cat <<EOF | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: import-translator-url-source namespace: $REACH_NAMESPACE spec: egress: - ports: - port: 443 protocol: TCP podSelector: matchLabels: app: import-translator policyTypes: - Egress EOF cat <<EOF | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: job-api-url-source namespace: $REACH_NAMESPACE spec: egress: - ports: - port: 443 protocol: TCP podSelector: matchLabels: app: job-api policyTypes: - Egress EOF Example: Allow traffic on a non-standard port: cat <<EOF | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: import-translator-url-source namespace: $REACH_NAMESPACE spec: egress: - ports: - port: 1337 protocol: TCP podSelector: matchLabels: app: import-translator policyTypes: - Egress EOF Installing File Source Certificates If any of the configured file sources are secured with TLS and use a certificate or certificates signed by a private authority, they have to be installed to make them trusted. For every certificate that you wish to install, create a secret: kubectl create secret generic \ -n $REACH_NAMESPACE \ <certificate secret name> --from-file="<certificate filename>"=<certificate filename> Export a JSON array of the secrets in the following format for the Translator Service and the Job API: read -r -d '' IMPORT_TRANSLATOR_CERTIFICATES <<'EOF' [ {"secretName": "<certificate secret name>", "key": "<certificate filename>"} ] EOF export IMPORT_TRANSLATOR_CERTIFICATES=${IMPORT_TRANSLATOR_CERTIFICATES} read -r -d '' JOB_API_CERTIFICATES <<'EOF' [ {"secretName": "<certificate secret name>", "key": "<certificate filename>"} ] EOF export JOB_API_CERTIFICATES=${JOB_API_CERTIFICATES} For the above steps to take effect, the create-secrets.sh and deploy.sh scripts must be run. Example - installing three different certificates for the import-translator service and only two for the Job API: root@master:~/# ls -latr -rw-r--r-- 1 root root 2065 Nov 25 12:02 fileserver.crt -rw-r--r-- 1 root root 2065 Nov 25 12:02 someother.crt -rw-r--r-- 1 root root 2065 Nov 25 12:02 example.crt drwxr-xr-x 7 root root 4096 Nov 25 12:02 . kubectl create secret generic \ -n $REACH_NAMESPACE \ myfileserver-cert --from-file="fileserver.crt"=fileserver.crt kubectl create secret generic \ -n $REACH_NAMESPACE \ someother-cert --from-file="someother.crt"=someother.crt kubectl create secret generic \ -n $REACH_NAMESPACE \ example-cert --from-file="example.crt"=example.crt read -r -d '' IMPORT_TRANSLATOR_CERTIFICATES <<'EOF' [ {"secretName": "myfileserver-cert", "key": "fileserver.crt"}, {"secretName": "someother-cert", "key": "fileserver.crt"}, {"secretName": "example-cert", "key": "example.crt"} ] EOF export IMPORT_TRANSLATOR_CERTIFICATES=${IMPORT_TRANSLATOR_CERTIFICATES} read -r -d '' JOB_API_CERTIFICATES <<'EOF' [ {"secretName": "someother-cert", "key": "fileserver.crt"}, {"secretName": "example-cert", "key": "example.crt"} ] EOF export JOB_API_CERTIFICATES=${JOB_API_CERTIFICATES} Checking the Nginx Ingress Controller kubectl get pods -n ingress-nginx This should return at least 1 running pod. ingress-nginx nginx-ingress-controller…….. 1/1 Running If Nginx is not installed please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx. 
If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, you can execute these commands to install it: helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && \ helm repo update helm install nginx-ingress ingress-nginx/ingress-nginx \ -n ingress-nginx \ --create-namespace Storage Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements. List of supported volume plugins All PersistentVolumes used by Virtalis Reach reserve 625gb of storage space in total. Please note: This is a provisional amount which will likely change depending on your workload. Default By default, Virtalis Reach is deployed with the local volume plugin which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test level deployments as all databases are tied to the single disk of the node that they are deployed on which hinders the performance of the system. To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher: kubectl apply -f \ https://raw.githubusercontent.com/rancher/\ local-path-provisioner/master/deploy/local-path-storage.yaml Custom You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. This must be created by a Kubernetes Administrator beforehand or, in some environments, a default class is also suitable. For example, when deploying to an Azure Kubernetes Service instance, it comes with a default storage class on the cluster which can be used to request storage from Azure. Express If you only want to modify the storage class and leave all other parameters like size as default, export these variables out: export REACH_SC=<name of storage class> export REACH_SC_ARGS=" --set persistence\ .storageClass="${REACH_SC}" --set core\ .persistentVolume.storageClass\ ="${REACH_SC}" --set master.persistence\ .storageClass="${REACH_SC}" " Custom Parameters A list of different databases in use by Virtalis Reach and how to customize their storage is given below. Minio Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files: · k8s/misc/artifact-binary-store/values-prod.yaml · k8s/misc/import-binary-store/values-prod.yaml · k8s/misc/import-folder-binary-store/values-prod.yaml · k8s/misc/vrdb-binary-store/values-prod.yaml Neo4j Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files: · k8s/misc/artifact-store/values-prod.yaml · k8s/misc/vrdb-store/values-prod.yaml Alternatively, the Neo4j helm chart configuration documentation can also be found here https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/ Mysql Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize such as size, access modes and so on. 
These values can be added/tweaked in the following files: · k8s/misc/collaboration-session-db/values-prod.yaml · k8s/misc/import-folder-db/values-prod.yaml · k8s/misc/job-db/values-prod.yaml · k8s/misc/background-job-db/values-prod.yaml · k8s/misc/keycloak-db/values-prod.yaml Miscellaneous Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files: · k8s/misc/message-queue/values-prod.yaml Deploying Virtalis Reach Create a namespace: kubectl create namespace "${REACH_NAMESPACE}" Add namespace labels required by NetworkPolicies: kubectl label namespace ingress-nginx reach-ingress=true; \ kubectl label namespace kube-system reach-egress=true The ‘ingress-nginx’ entry on line 1 will have to be modified if your nginx ingress is deployed to a different namespace in your cluster. Configure Virtalis Reach TLS Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager. Manually Creating a TLS Cert Secret kubectl create secret tls -n "${REACH_NAMESPACE}" \ "${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt" LetsEncrypt with Cert-manager Requirements: · The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain) · A domain that you own (cannot be used for domains ending with .local) Create a namespace for cert-manager: kubectl create namespace cert-manager Install the recommended version v1.0.2 of cert-manager: kubectl apply -f https://github.com/jetstack/\ cert-manager/releases/download/v1.0.2/cert-manager.yaml Create a new file: nano prod_issuer.yaml Paste in the following and replace variables wherever appropriate: apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: <your_email_address> privateKeySecretRef: name: <value of the $TLS_SECRET_NAME variable you exported before> solvers: - http01: ingress: class: nginx Press ctrl+o and then enter to save and then press ctrl+x to exit nano, now apply the file: kubectl apply -f prod_issuer.yaml Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes If you wish to do so, you can follow the digital ocean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach. Download Installation Files Log in with Oras: oras login "${ACR_REGISTRY_NAME}".azurecr.io \ --username "${ACR_USERNAME}" \ -p "${ACR_PASSWORD}" Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it: oras pull "${ACR_REGISTRY_NAME}"\ .azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz Make the installation scripts executable: cd k8s && sudo chmod +x *.sh Create and Deploy Secrets Randomised secrets are used to securely interconnect the Virtalis Reach microservices. The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding make sure pwgen is installed on your machine. 
./create-secrets.sh Deploy Virtalis Reach and Database Services ./deploy.sh Wait until all pods are showing up as Ready: watch -n2 kubectl get pods -n $REACH_NAMESPACE You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web-browser. Install the Automated Backup System Optionally install the automated backup system by referring to the “Automated Backup System for Virtalis Reach” <LINK> document or activate your own backup solution. Retrieving the Keycloak Admin Password Run the following command: kubectl get secret --namespace ${REACH_NAMESPACE} \ keycloak -o jsonpath="{.data.admin_password}" \ | base64 --decode; echo Refer to Virtalis Reach User Management <LINK> for more information on how to administer the system inside Keycloak. Post Deployment Clean-up Unset exported environment variables: unset REACH_DOMAIN && \ unset TLS_SECRET_NAME && \ unset REACH_NAMESPACE && \ unset ACR_USERNAME && \ unset ACR_PASSWORD && \ unset ACR_REGISTRY_NAME && \ unset REACH_SC && \ unset REACH_SC_ARGS && \ unset reach_licence__key && \ unset reach_licence__signature Clear bash history: history -c This will clean up any secrets exported in the system. Test Network Policies Virtalis Reach utilizes NetworkPolicies which restrict the communication of the internal service on a network level. Please note: NetworkPolicies require a supported Kubernetes network plugin such as Cilium. To test these policies, run a temporary pod: kubectl run -it --rm test-pod \ -n ${REACH_NAMESPACE} --image=debian Install the curl package: apt update && apt install curl Run a request to test the connection to one of our backend APIs. This should return a timeout error: curl http://artifact-access-api-service:5000 Exit the pod, which will delete it: exit Additionally, you can test the egress by checking that any outbound connections made to a public address are denied. Get the name of the edge-server pod: kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server Exec inside the running pod using the pod name from above: kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash Running a command like apt update which makes an outbound request should timeout: apt update Exit the pod: exit
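Finally, you can confirm that legitimate traffic is still served through the ingress after the network policy tests; a minimal sketch run from a machine outside the cluster (replace the placeholder with your Virtalis Reach domain; -k skips certificate verification and can be dropped once a trusted certificate is in place):

# Expect an HTTP response from the Virtalis Reach frontend
curl -kI https://<the domain Virtalis Reach is hosted on>/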

Deploying The Virtalis Reach Monitoring Service Stack

Deployment of various monitoring services which allow a Kubernetes Administrator to monitor the health, metrics, and logs for all cluster services, including Virtalis Reach.
Overview

This section describes the deployment of various monitoring services which allow a Kubernetes Administrator to monitor the health, metrics, and logs for all cluster services including Virtalis Reach.

List of services to be deployed:

• Prometheus Stack (health, metrics)
  • Grafana
  • Prometheus
  • Alertmanager
• ELK Stack (logging)
  • Elasticsearch
  • Kibana
  • Logstash

Variables and Commands

In this section, variables enclosed in <> angle brackets should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console
This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Set Up the Deployment Shell

Export some environment variables which will be used throughout the installation:

export MONITORING_DOMAIN=<the domain monitoring services will be hosted on>
export MONITORING_NAMESPACE=monitoring
export MONITORING_TLS_SECRET=reach-tls-secret

Create a new namespace:

kubectl create namespace "${MONITORING_NAMESPACE}"
kubectl label ns "${MONITORING_NAMESPACE}" release=prometheus-stack

The commands below use the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine, or use a different package to generate the string, replacing the command inside the brackets:

$(pwgen 30 1 -s) → $(someOtherPackage --arg1 --arg2)

Create Secrets

Create a secret which will store the Grafana credentials:

kubectl create secret generic grafana \
-n "${MONITORING_NAMESPACE}" \
--from-literal="user"=$(pwgen 30 1 -s) \
--from-literal="password"=$(pwgen 30 1 -s)

kubectl create secret generic elastic-credentials -n $MONITORING_NAMESPACE \
--from-literal=password=$(pwgen -c -n -s 30 1 | tr -d '\n') \
--from-literal=username=elastic

kubectl create secret generic kibana-credentials -n $MONITORING_NAMESPACE \
--from-literal=encryption-key=$(pwgen -c -n -s 32 1 | tr -d '\n')

Storage

Express

If you only want to modify the storage class and leave all other parameters such as size as default, export these variables out:

export MONITORING_SC=<name of storage class>

export ELASTICSEARCH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"

export LOGSTASH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"

export PROMETHEUS_SC_ARGS="--set alertmanager.alertmanagerSpec.storage.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set prometheus.prometheusSpec.storageSpec.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set grafana.persistence.storageClassName=${MONITORING_SC}"

Custom Parameters

Here is a list of the different monitoring services and how to customize their storage.

Elasticsearch

Please refer to the volumeClaimTemplate: section found in the values.yaml file in the elasticsearch helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

These values can be added/tweaked in the following files:

• k8s/misc/elk/elasticsearch/values-prod.yaml
• k8s/misc/elk/elasticsearch/values-common.yaml

Logstash

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the logstash helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

These values can be added/tweaked in the following files:

• k8s/misc/elk/logstash/values-prod.yaml
• k8s/misc/elk/logstash/values-common.yaml

Prometheus Stack

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the prometheus-stack helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

These values can be added/tweaked in the following files:

• k8s/misc/elk/prometheus/values-prod.yaml
• k8s/misc/elk/prometheus/values-common.yaml

Monitoring TLS

Manually create a TLS secret from a TLS key and cert, or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${MONITORING_NAMESPACE}" \
"${MONITORING_TLS_SECRET}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Export the following:

export KIBANA_INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export PROMETHEUS_INGRESS_ANNOTATIONS="--set prometheus.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export GRAFANA_INGRESS_ANNOTATIONS="--set grafana.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export ALERTMANAGER_INGRESS_ANNOTATIONS="--set alertmanager.ingress.\
annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Installing Grafana, Alertmanager, and Prometheus

Add these repos to Helm and update:

helm repo add prometheus-community https://\
prometheus-community.github.io/helm-charts && \
helm repo update

Export the following:

export ALERTMANAGER_INGRESS="--set alertmanager.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set alertmanager.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set alertmanager\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"

export PROMETHEUS_INGRESS="--set prometheus.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set prometheus.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set prometheus\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"

export GRAFANA_INGRESS="--set grafana.ingress.hosts[0]\
=${MONITORING_DOMAIN} --set grafana.ingress.tls[0]\
.secretName=$MONITORING_TLS_SECRET --set grafana.ingress\
.tls[0].hosts={${MONITORING_DOMAIN}}"

Install:

helm install prometheus-stack \
--namespace "${MONITORING_NAMESPACE}" \
--set grafana.admin.existingSecret="grafana" \
--set grafana.admin.userKey="user" \
--set grafana.admin.passwordKey="password" \
--set grafana.'grafana\.ini'.server.root_url\
="https://${MONITORING_DOMAIN}/grafana" \
--set grafana.'grafana\.ini'.server.domain="${MONITORING_DOMAIN}" \
--set grafana.'grafana\.ini'.server.serve_from_sub_path='true' \
$ALERTMANAGER_INGRESS \
$PROMETHEUS_INGRESS \
$GRAFANA_INGRESS \
$PROMETHEUS_INGRESS_ANNOTATIONS \
$GRAFANA_INGRESS_ANNOTATIONS \
$ALERTMANAGER_INGRESS_ANNOTATIONS \
$PROMETHEUS_SC_ARGS \
-f misc/prometheus/values-common.yaml \
-f misc/prometheus/values-prod.yaml \
prometheus-community/kube-prometheus-stack

Check the status of the deployed pods:

kubectl get pods -n "${MONITORING_NAMESPACE}"

Accessing the Grafana Frontend

Retrieve the Grafana admin user:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.user}" | base64 --decode; echo

Retrieve the Grafana admin password:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.password}" | base64 --decode; echo

Grafana can now be accessed at https://${MONITORING_DOMAIN}/grafana/ from a web browser using the admin user and admin password.

Installing Elasticsearch, Kibana and Logstash

Add this helm repo and update:

helm repo add elastic https://helm.elastic.co
helm repo update

Export this variable:

export KIBANA_INGRESS="--set ingress.hosts[0]\
=$MONITORING_DOMAIN --set ingress.tls[0].secretName\
=$MONITORING_TLS_SECRET --set ingress.tls[0]\
.hosts[0]=$MONITORING_DOMAIN"

Install Elasticsearch:

helm install elasticsearch \
--version 7.10 elastic/elasticsearch \
-f misc/elk/elasticsearch/values-common.yaml \
-f misc/elk/elasticsearch/values-prod.yaml \
$ELASTICSEARCH_SC_ARGS \
-n $MONITORING_NAMESPACE

Install Kibana:

helm install kibana \
--version 7.10 elastic/kibana \
-n $MONITORING_NAMESPACE \
$KIBANA_INGRESS_ANNOTATIONS \
$KIBANA_INGRESS \
-f misc/elk/kibana/values-common.yaml \
-f misc/elk/kibana/values-prod-first-time.yaml \
-f misc/elk/kibana/values-prod.yaml

Patch Kibana:

kubectl patch deploy kibana-kibana \
-n monitoring -p "$(cat misc/elk/kibana/probe-patch.yaml)"

Get the elasticsearch admin password:

kubectl get secret elastic-credentials -o jsonpath\
="{.data.password}" -n $MONITORING_NAMESPACE | \
base64 --decode; echo

Open up Kibana in a web browser, log in using the elasticsearch admin password and the username “elastic”, and add any additional underprivileged users that you want to have access to the logging system:

https://$MONITORING_DOMAIN/kibana/app/management/security/users

Install Filebeat:

helm install filebeat \
--version 7.10 elastic/filebeat \
-n $MONITORING_NAMESPACE \
-f misc/elk/filebeat/values-common.yaml \
-f misc/elk/filebeat/values-prod.yaml

Install Logstash:

helm install logstash \
--version 7.10 elastic/logstash \
-n $MONITORING_NAMESPACE \
$LOGSTASH_SC_ARGS \
-f misc/elk/logstash/values-prod.yaml \
-f misc/elk/logstash/values-common.yaml

Clean-up Post Monitoring Installation

Unset environment variables:

unset MONITORING_DOMAIN && \
unset MONITORING_NAMESPACE

Clear bash history:

history -c

This will clean up any secrets exported in the system.
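If the monitoring ingress or DNS is not yet in place, Grafana can still be reached by port-forwarding its service; a minimal sketch (the service name below assumes the prometheus-stack release name used in this chapter and may differ between chart versions, so list the services first to confirm it):

kubectl get svc -n monitoring
# Forward local port 3000 to the Grafana service, then browse to http://localhost:3000
kubectl port-forward -n monitoring svc/prometheus-stack-grafana 3000:80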

Virtalis Reach User Management

Manage Virtalis Reach user details, create user groups, and add selected users to the groups.
Overview

This section describes how to manage Virtalis Reach user details, create user groups, and add selected users to the groups.

Virtalis Reach uses Keycloak for Identity and Access Management (IAM). This section assumes Keycloak has been installed on your system and that you have administration access rights.

Accessing the Keycloak Admin Panel

Navigate to https://<reach domain>/auth/admin/, replacing <reach domain> with the domain Virtalis Reach is hosted on. Enter the Keycloak administrator credentials that were extracted during the Virtalis Reach deployment.

Ensure that the currently selected realm in the top left corner is Reach. If not, select it from the drop-down menu.

Managing Users

Go to Manage > Users and use this page to:

• View all users currently in the system
• Add users to the system
• Edit the details of a user
• Add users to groups

Please note: AAD users must log in at least once to become visible in the system.

Adding a User

To add a user:

1. Click Add user.
2. Enter the user details.
3. Click Save.

Setting User Credentials

To set the user credentials:

1. Click the Credentials tab and set a password for the user. Set Temporary to OFF if you do not want the user to have to enter a new password when they first log in.
2. Click Set Password.

Adding Users to Groups

To edit the groups a user is in:

1. Select the user you wish to edit.
2. Click the Groups tab.
3. Select a single group from the list that you wish to add/remove the user to/from.
4. Click Join.

You will see the groups that the user belongs to on the left-hand side of the page and the available groups that the user can be added to on the right-hand side.

Managing Groups

Go to Manage > Groups and use this page to:

• View all the groups currently in the system
• Create new groups for the purpose of access control on certain assets, projects, or visualisations

Virtalis Reach Specific Groups

Virtalis Reach has three main system groups:

• data-uploaders - access to /import, controls who can import assets into the system
• project-authors - access to /hub, controls who can create and publish projects
• reach_script_publishers - controls whether a user can enable scripts for their projects

Creating a New Group

To create a new group:

1. Click New to create a new group.
2. Enter a name for the group.
3. Click Save.

You will now be able to edit users individually in the system and assign them to the new group.
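The same administration can also be scripted with Keycloak's admin CLI if you prefer the command line; a minimal sketch, assuming kcadm.sh is run from inside the Keycloak pod (the pod name, the kcadm.sh path and the example names are placeholders and depend on your Keycloak image and deployment):

# Open a shell in the Keycloak pod (adjust the pod name and namespace to your deployment)
kubectl exec -it <keycloak pod> -n <reach namespace> -- bash

# Inside the pod: authenticate against the local Keycloak instance
# (the kcadm.sh path shown here is typical for WildFly-based Keycloak images)
/opt/jboss/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080/auth --realm master \
  --user admin --password <keycloak admin password>

# Create a group in the Reach realm and look up a user's id
/opt/jboss/keycloak/bin/kcadm.sh create groups -r Reach -s name=<new group name>
/opt/jboss/keycloak/bin/kcadm.sh get users -r Reach -q username=<username>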

Installing Virtalis Reach Translator Plugins

Enable the end-user to import more file formats. This section describes how to install translator plugins into a live Virtalis Reach system.
Overview

Virtalis Reach supports numerous translator plugins which enable the end-user to import more file formats. This section describes how to install translator plugins into a live Virtalis Reach system.

Installation

Export the following:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Extract the plugin on to a machine with access to the Kubernetes cluster Virtalis Reach is running on.

Example: Installing an OpenJT Reader plugin; the OpenJTReader folder will contain .dll files and libraries:

root@reach-azure-develop:~/TranslatorPlugins# ls -latr
total 12
drwx------ 18 root   root   4096 Aug 17 14:11 ..
drwxr-xr-x  2 ubuntu ubuntu 4096 Aug 17 14:11 OpenJTReader
drwxr-xr-x  3 ubuntu ubuntu 4096 Aug 17 14:27 .

Get the full name of a running translator pod:

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")

Copy the folder containing the plugins onto the persistent plugins folder on the translator pod. This might take a while depending on your connection and the size of the plugin folder. The kubectl cp format when pushing a file is <source> <namespace>/<pod-name>:<pod-destination-path>:

kubectl cp OpenJTReader/ \
$REACH_NAMESPACE/$TRANSLATOR_POD_NAME:/app/Translators/

After the transfer is complete, restart the translator pod:

kubectl delete pods -n $REACH_NAMESPACE -l app=import-translator

Check the logs to verify that the plugin has been loaded:

kubectl logs -l app=import-translator -n $REACH_NAMESPACE

You should see a log message containing the name of the plugin:

[14:41:56 develop@5739eea INF] Adding translator OpenJTReader for extension .jt.
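You can also confirm that the plugin files landed in the persistent Translators folder; a minimal sketch (re-export the pod name first, because it changes when the pod is restarted):

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")
# List the installed plugin folders inside the translator pod
kubectl exec -n $REACH_NAMESPACE $TRANSLATOR_POD_NAME -- ls -la /app/Translators/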

Authentication with External Systems

In addition to users of Virtalis Reach being able to upload files for importing via the Hub’s user interface, Virtalis Reach can also download files from external locations.
Authentication with External Systems

Introduction

In addition to users of Virtalis Reach being able to upload files for importing via the Hub’s user interface, Virtalis Reach can also download files from external locations. Examples are PLM systems such as Windchill or TeamCenter, which can be extended to notify Virtalis Reach of data changes. Often these URLs are protected, meaning that in order for the download to succeed, the authentication scheme and credentials must be configured. This document explains how to do this.

Supported Authentication Modes

• None - The resource is not protected and can be downloaded without any additional authentication steps.
• Basic - The resource is protected with basic authentication. For example, it will require a valid username and password, supplied by the site administrator.
• BearerToken - The resource is protected by a bearer token. For example, if the bearer token “MyBearerToken” is provided, the authorisation header sent with the request will read “Authorisation: Bearer MyBearerToken”.
• CustomAuthHeader - The resource is protected by a custom authorisation header. For example, if the header specified is “MyAuthHeader” and the token specified is “MyAuthToken”, the header sent with the request will read “MyAuthHeader: MyAuthToken”. Contrast this with Bearer mode, where the standard Authorisation header would be used instead.
• ServiceAccount - The resource is protected by OAuth client credentials that are internal to Virtalis Reach. For example, using the reach-service account in your own instance of Keycloak to request an access token before presenting that token to the resource. This is used for downloading files from the internal ImportFolder service used when someone uses the Hub to upload a file. Note: This was previously called OAuth but has been renamed to better reflect its purpose and reduce ambiguity between this mode and the new OAuth2 mode described below.
• OAuth2ClientCredentials - Uses OAuth2’s client credentials flow to download the file. This is used when the resource is protected by client credentials that are held in an external OAuth2-compatible identity system. For example, you may have your own instance of Keycloak or Microsoft Identity Server where the client protecting the resource is defined.

Configuration

Translator Service

The Translator Service and Job API perform the download operation and, in order to make a successful request, the host name must be configured in the UrlSourceCredentials section belonging to the services. This is a list of host names and credentials; there should be one entry for each host.
An example section from the configuration is shown below:

"UrlSourceCredentials": [
  {
    // A server that has no authentication, files can be downloaded
    "Hostname": "totallyinsecure.com",
    "AuthType": "None"
  },
  {
    // The local Import Folder API used when the user imports a file via the Reach Hub
    "Hostname": "virtalis.platform.importfolder.api",
    "AuthType": "ServiceAccount"
  },
  {
    // A server that is protected by BASIC authentication
    "Hostname": "rootwindchill.virtalis.local",
    "AuthType": "Basic",
    "BasicConfig": {
      "Username": "someuser",
      "Password": "somepassword"
    }
  },
  {
    "Hostname": "afilesource-ccf-file-export-function-app.azurewebsites.net",
    "AuthType": "OAuth2ClientCredentials",
    "OAuth2ClientCredentialsConfig": {
      "ClientId": "<clientId>",
      "ClientSecret": "<secret>",
      "AccessTokenEndpoint": "https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token",
      "Scope": "api://<scope-value>/.default"
    }
  }
]

Job API

After a file has been imported, the Job API attempts to make a call back to the system that provided it as a way of letting it know that the import was successful. For this reason, the Job API also has a UrlSourceCredentials section in its secrets file. In the example below, there are two systems that support the callback functionality: one for localhost, which represents the built-in Virtalis Reach Hub callback (for when a user uploads a file through the Virtalis Reach user interface), and another for a provider in Microsoft Azure, which is protected by OAuth2 client credentials.

"UrlSourceCredentials": [
  {
    "Hostname": "localhost",
    "AuthType": "ServiceAccount",
    "AllowInsecureRequestUrls": true
  },
  {
    "Hostname": "afilesource-ccf-file-export-function-app.azurewebsites.net",
    "AuthType": "OAuth2ClientCredentials",
    "OAuth2ClientCredentialsConfig": {
      "ClientId": "<clientId>",
      "ClientSecret": "<secret>",
      "AccessTokenEndpoint": "https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token",
      "Scope": "api://<scope-value>/.default"
    }
  }
]

Common Properties

Hostname

When Virtalis Reach receives a message that includes a URL to download a file from, it will extract the hostname from it and then look for a matching section in TranslatorServiceSecrets. If no match is found, the message will be rejected. If a match is found, it will use the details in the matching section to configure the authentication required.

AuthType

This is the authentication mode used by the external system. Refer to Supported Authentication Modes for further information.

Basic Mode Properties

These should be set in a “BasicConfig” section alongside the “Hostname” and “AuthType” properties.

Username - The username required to access the resource.
Password - The password required to access the resource.

Bearer Token Mode Properties

These should be set in a “BearerTokenConfig” section alongside the “Hostname” and “AuthType” properties.

BearerToken - The bearer token required to access the resource.

Custom Auth Header Mode Properties

These should be set in a “CustomAuthHeaderConfig” section alongside the “Hostname” and “AuthType” properties.

AuthHeader - The custom auth header required to access the resource.
AuthToken - The header value to specify.

ServiceAccount Mode Properties

There are no additional properties for this mode. Virtalis Reach is already configured internally with the correct credentials for requesting access tokens with the reach-service client.

OAuth2ClientCredentials

These should be set in a “OAuth2ClientCredentialsConfig” section alongside the “Hostname” and “AuthType” properties.
ClientId

This is the name of a client that exists in the external system’s identity server. In the example above, this is shown as the <clientId> placeholder. A site administrator would be responsible for making sure this client exists.

ClientSecret

This is the client secret for the above client.

AccessTokenEndpoint

This is the endpoint Virtalis Reach needs to call when requesting an access token. A site administrator will need to provide this. If this is wrong, you will likely see an exception in the logs saying that an access token could not be retrieved, for example:

Failed to obtain an access token. Check that the source credentials for the named resource (including ClientId, ClientSecret and Scope) are valid in the target identity system

Note: This URL is also subject to the same checks for HTTP/HTTPS. If the URL is not over HTTPS, you must explicitly allow insecure requests by setting AllowInsecureRequestUrls to true.

Scope (Optional)

If the resource requires a specific claim called ‘scope’ to be included in the request, you can specify it here. When Virtalis Reach requests the access token, it will also specify the scope and the resulting token will include it.

AllowInsecureRequestUrls

By default, Virtalis Reach does not allow files to be downloaded over HTTP because this is insecure. If a file source is configured with HTTP (via a mapping in the UrlSourceCredentials configuration section), or if a URL for a configured host comes into Virtalis Reach in a queue message and it specifies HTTP rather than HTTPS, an exception will be thrown and the request will not be made. It may be necessary to allow HTTP in some cases, however, and this can be done by explicitly setting AllowInsecureRequestUrls to true in the configuration for that specific URL source. For example:

{
  "Hostname": "virtalis.platform.importfolder.api",
  "AuthType": "ServiceAccount",
  "AllowInsecureRequestUrls": true
}
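The two header-based modes described above follow the same pattern; a hedged sketch of what those entries could look like under UrlSourceCredentials (the hostnames and token values are placeholders, and the property names are those listed in this section):

{
  "Hostname": "<bearer protected server>",
  "AuthType": "BearerToken",
  "BearerTokenConfig": {
    "BearerToken": "<token>"
  }
},
{
  "Hostname": "<custom header protected server>",
  "AuthType": "CustomAuthHeader",
  "CustomAuthHeaderConfig": {
    "AuthHeader": "MyAuthHeader",
    "AuthToken": "MyAuthToken"
  }
}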

Adding Trusted Certificates for External HTTP Requests

Introduction

To request that Virtalis Reach import a file, you specify the URL that its Translator Service will download the file from and, optionally, the URL that its Job API will notify on completion of the translation. A request to an HTTPS URL that is secured with an untrusted certificate, for example for an internal service in an organisation or for testing purposes, will fail. This document explains how to enable certificates to be trusted by Virtalis Reach by adding them to the secrets of the Translator Service and the Job API.

Certificates can be loaded in the secrets of the Translator Service and the Job API to specify that they should be accepted as trusted. Each certificate is logged at start-up as it is loaded by the Translator Service and, if there are any errors loading certificates or requesting resources, these are also logged appropriately. After the certificates have been loaded, they will be trusted and external requests to servers using those certificates will succeed.

Please note: You may need to configure this for both the Translator Service and the Job API, since there are two places where external HTTP requests are made: the Translator Service can download data from the specified URL, and the Job API can optionally call an external URL when a submitted job is complete.

See also:
• https://kubernetes.io/docs/concepts/configuration/secret/ for further information if you are not familiar with this feature of Kubernetes.
• https://www.virtalis.com/chapter/deploying-virtalis-reach-on-a-kubernetes-cluster#toc-create-and-deploy-secrets (Create and Deploy Secrets)
• Installing File Source Certificates

How to Use Self-signed Certificates in the Platform

Adding a certificate assumes that you already have an HTTPS server set up with a self-signed certificate. Certificates in PEM format are supported, for example:

-----BEGIN CERTIFICATE-----
(The hex data of the certificate)
-----END CERTIFICATE-----

Once you have your certificate in this format, save it, for example as “example.crt”, and add it to the Translator Service as a secret. Then update the main TranslatorServiceSecrets to include the secret name in the “Certificates” field. For example:

"Certificates": [
  "example.crt",
  "anotherExample.crt"
]

You may need to do this for both the Translator Service secrets and the Job API secrets, as noted above. Then, when importing an asset, use your HTTPS server's URL in the translation message to download the file and, optionally, use JobCallbackUrl to call the external service to report that the job has completed successfully.

Error Messages

VirtalisException: Unable to load the certificate called { certificate }, check if the certificate is in the secret folder.
If this exception message occurs, check the certificate file specified, as it may have been formatted incorrectly.

VirtalisException: Unable to load the certificate called { certificate } because the certificate is expired.
If this exception message occurs, check the certificate expiration date, as it has most likely expired. The default expiration date is one year for a certificate issued by a stand-alone Certificate Authority (CA).

Error message: Certificate failed validation due to multiple remote certificate chain errors.
Below this error message, the chain errors that occurred are listed along with their reasons. If an UntrustedRoot error is listed as the only error, the problem is that the certificate used by the server is not configured as a trusted certificate.
Error message: An ssl error occurred when loading {requestUri}: {sslPolicyError}
If this error message occurs, the policy error will be either “RemoteCertificateNotAvailable” or “RemoteCertificateNameMismatch”, which indicates that there’s some configuration issue with the remote server’s certificate beyond its trusted status.
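The following is a minimal sketch of the “add it to the Translator Service as a secret” step above, assuming the certificate is delivered as a Kubernetes secret. The secret name (translator-service-certs) and deployment name (translator-service) are placeholders, not the actual names used by your installation; follow the Create and Deploy Secrets section of the deployment guide for the authoritative procedure, and repeat the step for the Job API if it also needs to trust the certificate.

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

# Store the PEM certificate in a secret so it can be mounted by the service
# (translator-service-certs is a placeholder secret name)
kubectl create secret generic translator-service-certs \
  -n $REACH_NAMESPACE \
  --from-file=example.crt=./example.crt \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the service so the newly trusted certificate is loaded at start-up
# (translator-service is a placeholder deployment name)
kubectl rollout restart deploy translator-service -n $REACH_NAMESPACE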

Manually Erasing Virtalis Reach Data

Overview

Virtalis Reach does not provide a GUI mechanism to delete visualisations from the artifact store or models from the hub. This section describes how to achieve that outcome by showing how to access the Hub and Artifact databases using the existing tools for Neo4j and Minio, and it also includes a Python script to automate typical tasks.

This section assumes that you have already installed Virtalis Reach and your shell is in the directory containing the files that were downloaded during the installation. This is usually stored in the home directory, for example “/home/root/Reach/k8s”.

Please note: The actions in this section directly modify the databases used by the Virtalis Reach services. No consideration is given to the current activity of the system and system-wide transactions are not used. Before performing these actions, prevent access to users of the system by temporarily disabling the ingress server.

Pre-installation

Before continuing with the next section, please refer to Virtalis Reach Automated Backup System and perform a full backup of the system.

Installing the Service

export REACH_NAMESPACE=<namespace>

helm install reach-data-eraser -n $REACH_NAMESPACE data-eraser/chart/

Turn On the Service

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=1

List arguments:

kubectl exec -it -n $REACH_NAMESPACE \
  reach-data-eraser-0 -- /bin/bash -c \
  "/del-obj.py --help"

Output:

usage: del-obj.py [-h] [-t TYPE] [-d DELETE] [-l] [-s] [-T]

Deletes visualisation artifacts and vrmodels from Virtalis Reach

optional arguments:
  -h, --help            show this help message and exit
  -t TYPE, --type TYPE  Choose data type to erase, either 'artifact' or 'vrmodel' (default artifact)
  -d DELETE, --delete DELETE
                        Deletes artifact or vrmodel by ID
  -l, --list            List artifacts or vrmodels
  -s, --size            List total size of artifacts or vrmodels. This will increase the time to retrieve the list depending on how much data is currently stored.
  -T, --test            Dry run - test delete

Deleting a Visualisation Artifact

List Artifacts to Extract Artifact IDs

kubectl exec -it -n $REACH_NAMESPACE \
  reach-data-eraser-0 -- /bin/bash -c \
  "/del-obj.py --list"

Sample output:

Connecting to Neo4j bolt://localhost:7687
JustAPointList : ID 8f3885c5-03ec-492f-9fca-8119ad2f4962
assembled : ID 787eae34-5764-4105-a50f-c441c100f66e
light_test_plus_cube : ID 7ae36ec6-ea6b-4639-973f-8fd16179b262
template_torusknot : ID ebd7d8fe-a846-4b70-ac86-01c275e5f3b1
template_torusknot : ID 81894536-d0d8-454e-816e-3db87d1e58c8

The above list will show each revision separately. As you can see, there are 2 revisions of template_torusknot.
You can use the UUID to cross-reference which version this refers to so that you can make sure you are deleting the right revision. In a web browser, navigate to the following URL, replacing <UUID> with the UUID of the artifact you want to check and replacing <YOUR_DOMAIN> with the domain of your Virtalis Reach installation:

https://<YOUR_DOMAIN>/viewer/<UUID>

Once opened, you can click the “Show all versions” link to bring up a list of all versions along with the information about the current revision.

Erase an Artifact

Optional but recommended: use the -T switch to test the deletion procedure without affecting the database.

kubectl exec -it -n $REACH_NAMESPACE \
  reach-data-eraser-0 -- /bin/bash -c \
  "/del-obj.py --test --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Remove the -T switch to delete the data. Input:

kubectl exec -it -n $REACH_NAMESPACE \
  reach-data-eraser-0 -- /bin/bash -c \
  "/del-obj.py --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Deleting VRModels

The process for deleting VRModels is the same as deleting visualisation artifacts, except that the object type should be changed from the default of artifact to vrmodel using the -t or --type parameter.

kubectl exec -it -n $REACH_NAMESPACE \
  reach-data-eraser-0 -- /bin/bash -c \
  "/del-obj.py --list --type vrmodel"

Sample output:

JustAPointList : ID a1e0544c-8985-4ca0-a50c-1856a81c7ca5
NX_Speedboat : ID 3232ae07-b0bd-4f3b-ac1d-c595126a8b20
SYSTEM_FILTER_BOX_WA_1_5T : ID 141d6136-3ba8-4a08-8462-8aa23e63ed5b
Solid Edge 853 : ID 3b3ca5ec-589a-4582-bf85-65603872985e
TwoModelsSameName : ID 86cbc92c-5159-4260-bd4a-22265debfa58

Turn Off the Service

Once done, scale down the service:

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=0

On Data Reuse Between Data Stores

Binary data items may be referenced by multiple artifacts, for example when a model is reused in different projects or by revisions of a project. The related binary data items are only deleted when the deletion of an artifact leaves them unreferenced.

In the diagram, the deletion of Visualisation A will not result in the deletion of the LOD Binary data because it is also referenced by Visualisation B. If A is deleted first, the LOD Binary data will be referenced only by B; then, when B is deleted, the LOD Binary data will also be deleted.
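When several artifacts need to be removed, the documented del-obj.py switches can be driven from a short shell loop. This is only a sketch: the IDs below are examples taken from the sample listing above and must be replaced with the IDs you actually want to delete; run the loop with --test first and check the output before removing the --test switch.

# Example IDs only - replace with IDs reported by "/del-obj.py --list"
IDS="ebd7d8fe-a846-4b70-ac86-01c275e5f3b1 81894536-d0d8-454e-816e-3db87d1e58c8"

# Dry run each deletion first; remove --test to actually delete
for id in $IDS; do
  kubectl exec -it -n $REACH_NAMESPACE \
    reach-data-eraser-0 -- /bin/bash -c \
    "/del-obj.py --test --delete $id"
done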

Updating the Virtalis Reach Licence Key

Overview

This section describes how to replace the currently installed licence key with a new one.

This section assumes that you have already installed Virtalis Reach and your shell is in the directory containing the files that were downloaded during the installation. This is usually stored in the home directory, for example “/home/root/Reach/k8s”.

Set Up Variables

Substitute and export the following variable:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Load the previous configuration:

. ./load-install-config.sh

Substitute and export the following variables:

export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Update Secrets

Run a script:

./create-secrets.sh

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__key="'\
$(echo -n $reach_licence__key | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__signature="'\
$(echo -n $reach_licence__signature | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

Gracefully restart any running pods for the two services below by doing a rolling restart:

kubectl rollout restart deploy artifact-access-api -n $REACH_NAMESPACE

kubectl rollout restart deploy project-management-api -n $REACH_NAMESPACE
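As an optional sanity check (a sketch, not part of the official procedure), you can decode the values now stored in the reach-install-config secret and confirm they match the new licence:

# Print the stored licence key and signature for visual comparison
kubectl get secret reach-install-config -n $REACH_NAMESPACE \
  -o jsonpath='{.data.reach_licence__key}' | base64 -d; echo

kubectl get secret reach-install-config -n $REACH_NAMESPACE \
  -o jsonpath='{.data.reach_licence__signature}' | base64 -d; echo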

Virtalis Reach Mesh Splitting Overview and Configuration

Introduction

Mesh Splitting in Virtalis Reach is the process of breaking apart high triangle-count meshes into smaller chunks. Currently, the implementation defaults to only splitting meshes that have more than 64 thousand triangles, and it aims to create balanced splits both in terms of triangle count and 3D dimensions.

This document is designed to help systems administrators to enable and configure Mesh Splitting in a Virtalis Reach environment.

Level of Detail (LOD)

When viewing a visualisation, the renderer chooses the LOD for each mesh such that it can maintain a certain framerate. With large meshes, this means it must choose the LOD for the entire mesh, regardless of how much of it the viewer can see. This can result in poor detail in large meshes because the triangle count is too high for the hardware. When large meshes are broken down into smaller chunks, the renderer can choose an LOD level for each split individually. Instead of rendering a high LOD for the entire original mesh, it can choose high LODs only for the splits which are closest to the viewer, or only the splits that may be on screen.

Configuration

A Virtalis Reach systems administrator can configure Mesh Splitting in two ways. A sketch of applying these settings is shown after the Known Issues list below.

Enabled/Disabled
To enable or disable Mesh Splitting, set the configuration variable in the TranslatorService to true or false via the following environment variable:

TranslatorServiceConfiguration__MeshSplittingEnabled

Adjusting Split Threshold (Advanced)
It is possible to adjust the threshold at which Mesh Splitting is performed. By default, it is set to 64000 triangles and adjusting this value is not recommended. The threshold can, however, be adjusted via the following environment variable:

TranslatorServiceConfiguration__MeshSplitTriangleThreshold

Please note: There are no sanity checks on this value. For example, if an administrator sets it to 10, practically every single mesh in a scene will be split, resulting in extremely poor performance of not only the rendering but also importing and publishing.

Known Issues

With Mesh Splitting enabled:
• Selecting a mesh that has been split will result in only a part of the mesh being highlighted
• Using the Fly-to button when selecting a mesh that was split will result in only a part of the mesh being fit to the view
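The following is a sketch of setting these environment variables on the Translator Service with kubectl. The deployment name translatorservice is an assumption and may differ in your installation (check with kubectl get deploy -n $REACH_NAMESPACE); if the deployment is managed by Helm, the equivalent values should instead be set in your Helm configuration so they are not overwritten on the next upgrade.

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

# Enable Mesh Splitting on the Translator Service (deployment name is a placeholder)
kubectl set env deployment/translatorservice -n $REACH_NAMESPACE \
  TranslatorServiceConfiguration__MeshSplittingEnabled=true

# Advanced and not recommended: adjust the split threshold (default 64000 triangles)
kubectl set env deployment/translatorservice -n $REACH_NAMESPACE \
  TranslatorServiceConfiguration__MeshSplitTriangleThreshold=64000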

Upgrading Virtalis Reach from Version 2021.5.0 to 2022.1.0

Introduction

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.

Pre-Installation

Before continuing to the next section, please refer to the “Virtalis Reach Automated Backup System” document and perform a full backup of the system.

Set Up Variables

Substitute and export the variables required for your installation.

Download Installation Files

• Log in with Oras
• Make a backup of the old installation files
• Make a directory to store the installation files
• Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it
• Make the installation scripts executable

Installation

• Load the previous configuration
• Create secrets
• Deploy Reach

A sketch of these steps is shown below.
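The exact commands depend on your registry credentials and the release you are upgrading to, so the following is only a sketch under stated assumptions: the registry address, username, password and archive name are placeholders; load-install-config.sh and create-secrets.sh are the scripts referenced elsewhere in this guide; and deploy.sh is a placeholder name for the deployment script shipped with the release. Follow the instructions supplied with your upgrade for the authoritative commands.

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

# Log in with Oras (registry address and credentials are placeholders)
oras login <virtalis registry address> -u <username> -p <password>

# Make a backup of the old installation files and a directory for the new ones
mv /home/root/Reach/k8s /home/root/Reach/k8s-2021.5.0
mkdir -p /home/root/Reach/k8s && cd /home/root/Reach/k8s

# Pull the Kubernetes deployment file archive from the registry and unzip it
# (the artifact reference and archive name are placeholders)
oras pull <virtalis registry address>/<deployment-archive>:2022.1.0
unzip <deployment-archive>.zip

# Make the installation scripts executable
chmod +x *.sh

# Load the previous configuration, create secrets and deploy Reach
. ./load-install-config.sh
./create-secrets.sh
./deploy.sh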

Support & Feedback

If support is required, please visit the support portal and knowledgebase at https://support.virtalis.com or email Virtalis Support at support@virtalis.com. Feedback is always welcome so that we can continue to develop and improve Virtalis Reach. Please speak to your Customer Success team.
