This document has 4 chapters.

Automated Backup System for Virtalis Reach 2022.2

Virtalis Reach comes with an automated backup system allowing an Administrator to restore to an earlier snapshot in the event of a disaster. We will install Ve...
Automated Backup System for Virtalis Reach 2022.1

Overview

Virtalis Reach comes with an automated backup system allowing an Administrator to restore to an earlier snapshot in the event of a disaster. We will install Velero to back up the state of your Kubernetes cluster and use a custom-built solution which leverages Restic to back up the persistent data imported into Virtalis Reach.

You should consider creating regular backups of the buckets which hold the backed-up data in case of failure. This can be done through your cloud provider, or manually if you host your own bucket.

Alternatively, you can consider using your own backup solution. A good option is the PersistentVolumeSnapshot, which creates a snapshot of a persistent volume at a point in time. The biggest caveat is that it is only supported on a number of platforms such as Azure and AWS.

If you opt for a different solution to the one we provide, be mindful that not all databases used by Virtalis Reach support live backups. This means that the databases have to be taken offline before backing up. The following databases in use by Virtalis Reach must be taken offline for the duration of the backup:
• Minio
• Neo4j

Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Set Up the Deployment Shell

Substitute the variable values and export them:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Navigate to the directory containing the Reach installation files:

cd /home/root/Reach/k8s

Installation

Creating a Storage Location

Recommended: Follow the “Create S3 Bucket” and “Set permissions for Velero” sections from the link below and ensure that you create the following 2 buckets under your S3 bucket:
• reach-restic
• reach-velero

https://github.com/vmware-tanzu/velero-plugin-for-aws#create-s3-bucket

Export the address and port of the bucket you have created:

export S3_BUCKET_ADDRESS=<address> #e.g. S3_BUCKET_ADDRESS=192.168.1.3, S3_BUCKET_ADDRESS=mydomain.com
export S3_BUCKET_PORT=<port>
export S3_BUCKET_PROTOCOL=<http or https>
export S3_BUCKET_REGION=<region>

Not recommended - create an S3 bucket on the same cluster, alongside Virtalis Reach. Customize persistence.size if the total size of your data exceeds 256GB and change the storage class REACH_SC if needed:

export REACH_SC=local-path

kubectl create ns reach-backup

#check if pwgen is installed for the next step
command -v pwgen

kubectl create secret generic reach-s3-backup -n reach-backup \
--from-literal='access-key'=$(pwgen 30 1 -s | tr -d '\n') \
--from-literal='secret-key'=$(pwgen 30 1 -s | tr -d '\n')

helm upgrade --install reach-s3-backup bitnami/minio \
-n reach-backup --version 3.6.1 \
--set persistence.storageClass=$REACH_SC \
--set persistence.size=256Gi \
--set mode=standalone \
--set resources.requests.memory='150Mi' \
--set resources.requests.cpu='250m' \
--set resources.limits.memory='500Mi' \
--set resources.limits.cpu='500m' \
--set disableWebUI=true \
--set useCredentialsFile=true \
--set volumePermissions.enabled=true \
--set defaultBuckets="reach-velero reach-restic" \
--set global.minio.existingSecret=reach-s3-backup

cat <<EOF > credentials-velero
[default]
aws_access_key_id=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode)
aws_secret_access_key=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)
EOF

Export the address and port of the bucket you have created:

export S3_BUCKET_ADDRESS=reach-s3-backup-minio.reach-backup.svc.cluster.local
export S3_BUCKET_PORT=9000
export S3_BUCKET_PROTOCOL=http
export S3_BUCKET_REGION=local

Set Up Variables

For the duration of this installation, you must navigate to the k8s folder that is downloadable by following the Virtalis Reach Installation Guide.

Make the scripts executable:

sudo chmod +x \
trigger-database-restore.sh \
trigger-database-backup.sh \
trigger-database-backup-prune.sh \
install-backup-restore.sh

Export the following variables:

export ACR_REGISTRY_NAME=virtaliscustomer

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export DISABLE_CRON=<set to true to install without an automated cronSchedule>

Velero Installation

The following steps assume you named your Velero bucket “reach-velero”.

Add the VMware helm repository and update:

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update

Install Velero:

helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud\
=./credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name\
=reach-velero \
--set configuration.backupStorageLocation.bucket\
=reach-velero \
--set configuration.backupStorageLocation.config.region\
=$S3_BUCKET_REGION \
--set configuration.backupStorageLocation.config.s3Url\
=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.publicUrl\
=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.s3ForcePathStyle\
=true \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--set snapshotsEnabled=false \
--version 2.23.1 \
--set deployRestic=true

Install the Velero CLI client:

wget https://github.com/vmware-tanzu/velero/releases\
/download/v1.5.3/velero-v1.5.3-linux-amd64.tar.gz
tar -xzvf velero-v1.5.3-linux-amd64.tar.gz
rm -f velero-v1.5.3-linux-amd64.tar.gz
sudo mv $(pwd)/velero-v1.5.3-linux-amd64/velero /usr/bin/
sudo chmod +x /usr/bin/velero

Manually create a single backup to verify that the connection to the S3 bucket is working:

velero backup create test-backup-1 \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

Watch the status of the backup until it has finished; it should show as Completed if everything was set up correctly:

watch -n2 velero backup get

Create a scheduled backup:

velero create schedule cluster-backup --schedule="45 23 * * 6" \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

This schedule will run a backup every Saturday at 23:45.
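Optionally, before relying on the schedule, you can confirm that Velero has registered the backup storage location. This is a brief check using the Velero CLI installed above; the exact output columns vary between Velero versions:

velero backup-location get

The reach-velero location should be listed and point at the bucket you configured.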
Restic Integration

The custom Restic integration uses Kubernetes jobs to mount the data, encrypt it, and send it to a bucket. Kubernetes CustomResourceDefinitions are used to store the information about the Restic repositories as well as any created backups.

The scheduled data backup runs every Friday at 23:45 by default. This can be modified by editing the cronSchedule field in all values.yaml files located in backup-restore/helmCharts/<release_name>/ with the exception of common-lib.

All the performed backups are offline backups, therefore Virtalis Reach will be unavailable for that period as a number of databases have to be taken down.

Create an AWS bucket with the name “reach-restic” by following the same guide from the Velero section.

Replace the keys and create a secret containing the reach-restic bucket credentials:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'='<ACCESS_KEY>' \
--from-literal='AWS_SECRET_KEY'='<SECRET_KEY>'

If you instead opted to deploy an S3 bucket on the same cluster, run this instead:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode) \
--from-literal='AWS_SECRET_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Run the installation:

./install-backup-restore.sh

Check if all the -init-repository- jobs have completed:

kubectl get pods -n $REACH_NAMESPACE | grep init-repository

Query the list of repositories:

kubectl get repository -n $REACH_NAMESPACE

The output should look something like this, with the status of all repositories showing as Initialized:

NAME                    STATUS        SIZE   CREATIONDATE
artifact-binary-store   Initialized   0B     2021-03-01T10:21:53Z
artifact-store          Initialized   0B     2021-03-01T10:21:57Z
job-db                  Initialized   0B     2021-03-01T10:21:58Z
keycloak-db             Initialized   0B     2021-03-01T10:21:58Z
vrdb-binary-store       Initialized   0B     2021-03-01T10:21:58Z
vrdb-store              Initialized   0B     2021-03-01T10:22:00Z

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-init-repository

Trigger a manual backup:

./trigger-database-backup.sh

After a while, all the -triggered-backup- jobs should show up as Completed:

kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-backup

Query the list of snapshots:

kubectl get snapshot -n "$REACH_NAMESPACE"

The output should look something like this, with the status of all snapshots showing as Completed:

NAME                       STATUS      ID       CREATIONDATE
artifact-binary-store...   Completed   62e...   2021...
artifact-store-neo4j-...   Completed   6ae...   2021...
job-db-mysql-master-1...   Completed   944...   2021...
keycloak-db-mysql-mas...   Completed   468...   2021...
vrdb-binary-store-min...   Completed   729...   2021...
vrdb-store-neo4j-core...   Completed   1c2...   2021...

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-triggered-backup
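Each snapshot carries a label naming the repository it belongs to, so you can also narrow the query to a single repository. This mirrors the label selector shown later in the restore dry-run output; the repository name below is just an example:

kubectl get snapshot -n "$REACH_NAMESPACE" -l repository=artifact-store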
Triggering a Manual Backup

Consider scheduling system downtime and scaling down the ingress to prevent people from accessing the server during the backup procedure.

Note down the replica count for nginx before scaling it down:

kubectl get deploy -n ingress-nginx
export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT>

Scale down the nginx ingress service:

kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \
-n ingress-nginx

Create a cluster resource level backup:

velero backup create cluster-backup-$(date +"%m-%d-%Y") \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

Check the status of the Velero backup:

watch -n2 velero backup get

Create a database level backup:

./trigger-database-backup.sh

Check the status of the database backup:

watch -n2 kubectl get snapshot -n "$REACH_NAMESPACE"

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-triggered-backup

Restoring Data

Restoration Plan

Plan your restoration by gathering a list of the snapshot IDs you will be restoring from and export them. Begin by querying the list of repositories:

kubectl get repo -n "$REACH_NAMESPACE"

NAME                    STATUS        SIZE   CREATIONDATE
artifact-binary-store   Initialized   12K    2021-07-02T12:03:26Z
artifact-store          Initialized   527M   2021-07-02T12:03:29Z
comment-db              Initialized   180M   2021-07-02T12:03:37Z
job-db                  Initialized   181M   2021-07-02T12:03:43Z
keycloak-db             Initialized   193M   2021-07-02T12:03:43Z
vrdb-binary-store       Initialized   12K    2021-07-02T12:03:46Z
vrdb-store              Initialized   527M   2021-07-02T12:02:44Z

Run a dry run of the restore script to gather a list of the variables you can export:

DRY_RUN=true ./trigger-database-restore.sh

Sample output:

Error: ARTIFACT_BINARY_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-binary-store' to see a list of available snapshots.
Error: ARTIFACT_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-store'...
...

Query available snapshots or use the commands returned in the output above to query by specific repositories:

kubectl get snapshot -n "$REACH_NAMESPACE"

This should return a list of available snapshots:

NAME                            STATUS      ID       CREATIONDATE
artifact-binary-store-mini...   Completed   4a2...   2021-07-0...
artifact-store-neo4j-core-...   Completed   41d...   2021-07-0...
comment-db-mysql-master-16...   Completed   e72...   2021-07-0...
job-db-mysql-master-162522...   Completed   eb5...   2021-07-0...
keycloak-db-mysql-master-1...   Completed   919...   2021-07-0...
vrdb-binary-store-minio-16...   Completed   cf0...   2021-07-0...
vrdb-store-neo4j-core-1625...   Completed   08d...   2021-07-0...

It is strongly advised to restore all the backed-up data using snapshots from the same day to avoid data corruption.
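Before running the restore script, export a restore ID for each repository reported by the dry run. The first two variable names are taken from the dry-run output above; the remaining names follow the same pattern (repository name upper-cased, dashes replaced with underscores, suffixed with _RESTORE_ID). The IDs shown are the truncated values from the sample listing, so substitute your own:

export ARTIFACT_BINARY_STORE_RESTORE_ID=4a2...
export ARTIFACT_STORE_RESTORE_ID=41d...
export COMMENT_DB_RESTORE_ID=e72...
export JOB_DB_RESTORE_ID=eb5...
export KEYCLOAK_DB_RESTORE_ID=919...
export VRDB_BINARY_STORE_RESTORE_ID=cf0...
export VRDB_STORE_RESTORE_ID=08d...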
Note down the replica count for nginx before scaling it down:

kubectl get deploy -n ingress-nginx
export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT>

Scale down the nginx ingress service to prevent people from accessing Virtalis Reach during the restoration process:

kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \
-n ingress-nginx

Run the Restore Script

./trigger-database-restore.sh

Get the list of backups created with Velero:

velero backup get

Example:

NAME            STATUS      ERRORS   WARNINGS   CREATED     EXPIRES   STORAGE LOCATION   SELECTOR
test-backup-1   Completed   0        0          2022-0...   29d       reach-velero       <none>
test-backup-2   Completed   0        0          2022-0...   29d       reach-velero       <none>

Restore:

velero restore create --from-backup <backup name>

Unset the exported restore IDs:

charts=( $(ls backup-restore/helmCharts/) ); \
for chart in "${charts[@]}"; do if [ $chart == "common-lib" ]; \
then continue; fi; id_var="$(echo ${chart^^} | \
sed 's/-/_/g')_RESTORE_ID"; unset ${id_var}; done

After a while, all the -triggered-restore- jobs should show up as Completed:

kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-restore

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE \
-l app=backup-restore-triggered-restore

Watch and wait for all pods that are running to be Ready:

watch -n2 kubectl get pods -n "$REACH_NAMESPACE"

Scale back nginx:

kubectl scale deploy --replicas="$NGINX_REPLICAS" \
ingress-nginx-ingress-controller -n ingress-nginx
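If you also restored cluster resources with Velero, you can confirm the restore finished without errors before handing the system back to users. These are standard Velero CLI commands; the restore name is whatever Velero generated for your --from-backup run:

velero restore get
velero restore describe <restore name>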
Pruning Backups

Velero

Get the list of backups created with Velero:

velero backup get

Example:

NAME            STATUS      ERRORS   WARNINGS   CREATED     EXPIRES   STORAGE LOCATION   SELECTOR
test-backup-1   Completed   0        0          2022-0...   29d       reach-velero       <none>
test-backup-2   Completed   0        0          2022-0...   29d       reach-velero       <none>

Delete a single backup:

velero backup delete <name>

Example:

velero backup delete test-backup-1

Restic

Get the list of Reach backups:

kubectl get snapshot -n $REACH_NAMESPACE

Example:

artifact-binary-store-minio-1642502735     Completed   5daa1580   2022-01-18T10:45:35Z
artifact-binary-store-minio-1642512232     Completed   b1fb6a15   2022-01-18T13:23:52Z
artifact-binary-store-minio-1642512501     Completed   764c989e   2022-01-18T13:28:21Z
artifact-store-neo4j-core-1642502736       Completed   f8233016   2022-01-18T10:45:36Z
artifact-store-neo4j-core-1642512234       Completed   df2606b1   2022-01-18T13:23:54Z
artifact-store-neo4j-core-1642512502       Completed   a9d9470b   2022-01-18T13:28:22Z
comment-db-mysql-master-1642502737         Completed   da0de6a7   2022-01-18T10:45:37Z
comment-db-mysql-master-1642512237         Completed   37a860e4   2022-01-18T13:23:57Z
comment-db-mysql-master-1642512503         Completed   d5e04e5e   2022-01-18T13:28:23Z
job-db-mysql-master-1642502739             Completed   51db2aa5   2022-01-18T10:45:39Z
job-db-mysql-master-1642512235             Completed   40c5fae1   2022-01-18T13:23:55Z
job-db-mysql-master-1642512505             Completed   dbc0921e   2022-01-18T13:28:25Z
keycloak-db-mysql-master-1642502742        Completed   70ab969a   2022-01-18T10:45:42Z
keycloak-db-mysql-master-1642512235        Completed   6df99e96   2022-01-18T13:23:55Z
keycloak-db-mysql-master-1642512506        Completed   25c93be7   2022-01-18T13:28:26Z
product-tree-db-mysql-master-1642502740    Completed   ba4edb53   2022-01-18T10:45:40Z
product-tree-db-mysql-master-1642512238    Completed   47880e3b   2022-01-18T13:23:58Z
product-tree-db-mysql-master-1642512504    Completed   378dd1e1   2022-01-18T13:28:24Z
vrdb-binary-store-minio-1642502746         Completed   c7109c6b   2022-01-18T10:45:46Z
vrdb-binary-store-minio-1642512247         Completed   18f03082   2022-01-18T13:24:07Z
vrdb-binary-store-minio-1642512510         Completed   5c47d40c   2022-01-18T13:28:30Z
vrdb-store-neo4j-core-1642502748           Completed   bd692195   2022-01-18T10:45:48Z
vrdb-store-neo4j-core-1642512241           Completed   df6cbaf9   2022-01-18T13:24:01Z
vrdb-store-neo4j-core-1642512510           Completed   89ae8fcc   2022-01-18T13:28:30Z

Prune backups based on a restic policy. Learn more about the different policies here: https://restic.readthedocs.io/en/latest/060_forget.html#removing-snapshots-according-to-a-policy

PRUNE_FLAG="<policy>" ./trigger-database-backup-prune.sh

Example: Keep the latest snapshot and delete everything else:

PRUNE_FLAG="--keep-last 1" ./trigger-database-backup-prune.sh

After:

NAME                                      STATUS      ID         CREATIONDATE
artifact-binary-store-minio-1642512501    Completed   764c989e   2022-01-18T13:28:21Z
artifact-store-neo4j-core-1642512502      Completed   a9d9470b   2022-01-18T13:28:22Z
comment-db-mysql-master-1642512503        Completed   d5e04e5e   2022-01-18T13:28:23Z
job-db-mysql-master-1642512505            Completed   dbc0921e   2022-01-18T13:28:25Z
keycloak-db-mysql-master-1642512506       Completed   25c93be7   2022-01-18T13:28:26Z
product-tree-db-mysql-master-1642512504   Completed   378dd1e1   2022-01-18T13:28:24Z
vrdb-binary-store-minio-1642512510        Completed   5c47d40c   2022-01-18T13:28:30Z
vrdb-store-neo4j-core-1642512510          Completed   89ae8fcc   2022-01-18T13:28:30Z
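The prune script accepts any restic forget policy via PRUNE_FLAG, so you can keep a rolling window rather than a single snapshot. For instance, a sketch that keeps a week of daily snapshots plus a month of weekly ones (the flags are standard restic options; adjust them to your retention needs):

PRUNE_FLAG="--keep-daily 7 --keep-weekly 4" ./trigger-database-backup-prune.sh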

Deploying Virtalis Reach 2022.2 on a Kubernetes Cluster

This document describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell com...
Deploying Virtalis Reach 2021.5 on a Kubernetes Cluster

Overview

This document describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell. The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration.

Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

Seek advice if you are unsure of the usage or impact of a particular system command, as the improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

· Kubernetes cluster (either on premises or in the cloud):
o at least version v1.21.3
o 8 cores
o At least 64GB of memory available to a single node (128GB total recommended)
o 625GB of storage (see the storage section for more information)
o Nginx as the cluster ingress controller
o Access to the internet during the software deployment and update
o A network policy compatible network plugin

Virtalis Reach does not require:

· A GPU in the server
· A connection to the internet following the software deployment

The following administration tools are required along with their recommended tested version:

· kubectl v1.21.3 - this package allows us to communicate with a Kubernetes cluster on the command line
· helm 3 v3.6.3 - this package is used to help us install large Kubernetes charts consisting of numerous resources
· oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
· azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
· jq v1.6 - this package is used to parse and traverse JSON on the command line
· yq v4.6.1 - this package is used to parse and traverse YAML on the command line

These tools are not installed on the Virtalis Reach server - only on the machine that will communicate with a Kubernetes cluster for the duration of the installation.

If using recent versions of Ubuntu, the Azure CLI as installed by Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.

Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.
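Before moving on to the pre-installation steps, it can be worth confirming that the required tools are on your PATH and at suitable versions. An optional check using each tool's own version command:

kubectl version --client
helm version
oras version
az --version
jq --version
yq --version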
Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:

mkdir /home/root/Reach && cd /home/root/Reach

Export the following variables:

export REACH_VERSION=2021.5.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1

Substitute the variable values and export them:

export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export TLS_SECRET_NAME=<the name of the secret containing the tls cert>
export REACH_NAMESPACE=<name of kubernetes namespace to deploy Virtalis Reach on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>
export reach_licence__key=<licence xml snippet>
export reach_licence__signature=<licence signature>

Export the environment variables if Virtalis Reach TLS is/will be configured to use LetsEncrypt:

export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export OFFLINE_INSTALL=<when set to true, patch Virtalis Reach so that it can be taken off the internet>
export MQ_EXPOSE_INGRESS=<when set to 1, expose rabbitmq on the ingress>
export MQ_CLIENT_KEY_PASS=<password that was/will be used for the client_key password, see windchill/teamcenter installation document>

Configuring External File Sources

For further information on how to configure this section, please refer to Authentication with External Systems <LINK>.

Export a JSON object of the external file sources for the Translator Service and the Job API. Example:

read -r -d '' IMPORT_TRANSLATOR_SOURCE_CREDENTIALS <<'EOF'
{
  "Hostname": "windchill.local",
  "AuthType": "Basic",
  "BasicConfig": {
    "Username": "windchilluser",
    "Password": "windchill"
  },
  "Mapping": {
    "Protocol": "http",
    "Host": "windchill.local"
  },
  "AllowInsecureRequestUrls": "true"
},
{
  "Hostname": "mynotauthenticatedserver.local",
  "AuthType": "None",
  "AllowInsecureRequestUrls": "false"
},
EOF
export IMPORT_TRANSLATOR_SOURCE_CREDENTIALS=$(echo -n ${IMPORT_TRANSLATOR_SOURCE_CREDENTIALS} | tr -d '\n')

read -r -d '' JOB_API_SOURCE_CREDENTIALS <<'EOF'
{
  "Hostname": "mynotauthenticatedserver.local",
  "AuthType": "None",
  "AllowInsecureRequestUrls": "false"
},
EOF
export JOB_API_SOURCE_CREDENTIALS=$(echo -n ${JOB_API_SOURCE_CREDENTIALS} | tr -d '\n')

Create network policies for the Translator Service and the Job API to allow them to communicate with the configured URL source.

Please note: The port will need to be modified depending on the protocol and/or port of the URL source server.
Example: Allowing HTTPS traffic (port 443):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: import-translator-url-source
  namespace: $REACH_NAMESPACE
spec:
  egress:
  - ports:
    - port: 443
      protocol: TCP
  podSelector:
    matchLabels:
      app: import-translator
  policyTypes:
  - Egress
EOF

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: job-api-url-source
  namespace: $REACH_NAMESPACE
spec:
  egress:
  - ports:
    - port: 443
      protocol: TCP
  podSelector:
    matchLabels:
      app: job-api
  policyTypes:
  - Egress
EOF

Example: Allow traffic on a non-standard port:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: import-translator-url-source
  namespace: $REACH_NAMESPACE
spec:
  egress:
  - ports:
    - port: 1337
      protocol: TCP
  podSelector:
    matchLabels:
      app: import-translator
  policyTypes:
  - Egress
EOF

Installing File Source Certificates

If any of the configured file sources are secured with TLS and use a certificate or certificates signed by a private authority, they have to be installed to make them trusted.

For every certificate that you wish to install, create a secret:

kubectl create secret generic \
-n $REACH_NAMESPACE \
<certificate secret name> --from-file="<certificate filename>"=<certificate filename>

Export a JSON array of the secrets in the following format for the Translator Service and the Job API:

read -r -d '' IMPORT_TRANSLATOR_CERTIFICATES <<'EOF'
[
  {"secretName": "<certificate secret name>", "key": "<certificate filename>"}
]
EOF
export IMPORT_TRANSLATOR_CERTIFICATES=${IMPORT_TRANSLATOR_CERTIFICATES}

read -r -d '' JOB_API_CERTIFICATES <<'EOF'
[
  {"secretName": "<certificate secret name>", "key": "<certificate filename>"}
]
EOF
export JOB_API_CERTIFICATES=${JOB_API_CERTIFICATES}

For the above steps to take effect, the create-secrets.sh and deploy.sh scripts must be run.

Example - installing three different certificates for the import-translator service and only two for the Job API:

root@master:~/# ls -latr
-rw-r--r-- 1 root root 2065 Nov 25 12:02 fileserver.crt
-rw-r--r-- 1 root root 2065 Nov 25 12:02 someother.crt
-rw-r--r-- 1 root root 2065 Nov 25 12:02 example.crt
drwxr-xr-x 7 root root 4096 Nov 25 12:02 .

kubectl create secret generic \
-n $REACH_NAMESPACE \
myfileserver-cert --from-file="fileserver.crt"=fileserver.crt

kubectl create secret generic \
-n $REACH_NAMESPACE \
someother-cert --from-file="someother.crt"=someother.crt

kubectl create secret generic \
-n $REACH_NAMESPACE \
example-cert --from-file="example.crt"=example.crt

read -r -d '' IMPORT_TRANSLATOR_CERTIFICATES <<'EOF'
[
  {"secretName": "myfileserver-cert", "key": "fileserver.crt"},
  {"secretName": "someother-cert", "key": "someother.crt"},
  {"secretName": "example-cert", "key": "example.crt"}
]
EOF
export IMPORT_TRANSLATOR_CERTIFICATES=${IMPORT_TRANSLATOR_CERTIFICATES}

read -r -d '' JOB_API_CERTIFICATES <<'EOF'
[
  {"secretName": "someother-cert", "key": "someother.crt"},
  {"secretName": "example-cert", "key": "example.crt"}
]
EOF
export JOB_API_CERTIFICATES=${JOB_API_CERTIFICATES}
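If you want to confirm the certificate secrets were created as expected before deploying, you can list them. The secret names below match the example above and will differ in your environment:

kubectl get secret -n $REACH_NAMESPACE myfileserver-cert someother-cert example-cert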
Checking the Nginx Ingress Controller

kubectl get pods -n ingress-nginx

This should return at least 1 running pod:

ingress-nginx nginx-ingress-controller…….. 1/1 Running

If Nginx is not installed, please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.

If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, you can execute these commands to install it:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && \
helm repo update

helm install nginx-ingress ingress-nginx/ingress-nginx \
-n ingress-nginx \
--create-namespace

Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements. List of supported volume plugins

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total.

Please note: This is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node that they are deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:

kubectl apply -f \
https://raw.githubusercontent.com/rancher/\
local-path-provisioner/master/deploy/local-path-storage.yaml

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. This must be created by a Kubernetes Administrator beforehand or, in some environments, a default class is also suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters like size as default, export these variables out:

export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence\
.storageClass="${REACH_SC}" --set core\
.persistentVolume.storageClass\
="${REACH_SC}" --set master.persistence\
.storageClass="${REACH_SC}" "
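To see which storage classes already exist on your cluster (and which one is marked as default) before exporting REACH_SC, you can list them. The AKS class name shown afterwards is only an example of a commonly available class and may differ on your cluster:

kubectl get storageclass

# e.g. on an AKS cluster you might then use:
export REACH_SC=managed-premium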
Custom Parameters

A list of different databases in use by Virtalis Reach and how to customize their storage is given below.

Minio

Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

· k8s/misc/artifact-binary-store/values-prod.yaml
· k8s/misc/import-binary-store/values-prod.yaml
· k8s/misc/import-folder-binary-store/values-prod.yaml
· k8s/misc/vrdb-binary-store/values-prod.yaml

Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

· k8s/misc/artifact-store/values-prod.yaml
· k8s/misc/vrdb-store/values-prod.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here: https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/

Mysql

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

· k8s/misc/collaboration-session-db/values-prod.yaml
· k8s/misc/import-folder-db/values-prod.yaml
· k8s/misc/job-db/values-prod.yaml
· k8s/misc/background-job-db/values-prod.yaml
· k8s/misc/keycloak-db/values-prod.yaml

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

· k8s/misc/message-queue/values-prod.yaml

Deploying Virtalis Reach

Create a namespace:

kubectl create namespace "${REACH_NAMESPACE}"

Add namespace labels required by NetworkPolicies:

kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true

The 'ingress-nginx' entry on line 1 will have to be modified if your nginx ingress is deployed to a different namespace in your cluster.

Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:

· The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
· A domain that you own (cannot be used for domains ending with .local)

Create a namespace for cert-manager:

kubectl create namespace cert-manager

Install the recommended version v1.0.2 of cert-manager:

kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.0.2/cert-manager.yaml

Create a new file:

nano prod_issuer.yaml

Paste in the following and replace variables wherever appropriate:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email_address>
    privateKeySecretRef:
      name: <value of the $TLS_SECRET_NAME variable you exported before>
    solvers:
    - http01:
        ingress:
          class: nginx

Press ctrl+o and then enter to save, then press ctrl+x to exit nano. Now apply the file:

kubectl apply -f prod_issuer.yaml

Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach.

Download Installation Files

Log in with Oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}"\
.azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz

Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices. The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.
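If pwgen is missing, the secret generation will fail, so it is worth checking first. A brief check that also installs it on Debian/Ubuntu-style systems (adjust the package manager for your distribution):

command -v pwgen || sudo apt-get install -y pwgen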
./create-secrets.sh

Deploy Virtalis Reach and Database Services

./deploy.sh

Wait until all pods are showing up as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE

You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally install the automated backup system by referring to the “Automated Backup System for Virtalis Reach” <LINK> document or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:

kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo

Refer to Virtalis Reach User Management <LINK> for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up

Unset exported environment variables:

unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature

Clear bash history:

history -c

This will clean up any secrets exported in the system.

Test Network Policies

Virtalis Reach utilizes NetworkPolicies which restrict the communication of the internal services on a network level.

Please note: NetworkPolicies require a supported Kubernetes network plugin such as Cilium.

To test these policies, run a temporary pod:

kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian

Install the curl package:

apt update && apt install curl

Run a request to test the connection to one of our backend APIs. This should return a timeout error:

curl http://artifact-access-api-service:5000

Exit the pod, which will delete it:

exit

Additionally, you can test the egress by checking that any outbound connections made to a public address are denied.

Get the name of the edge-server pod:

kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server

Exec inside the running pod using the pod name from above:

kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash

Running a command like apt update, which makes an outbound request, should time out:

apt update

Exit the pod:

exit
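As a final smoke test from outside the cluster, you can check that the ingress is serving the Virtalis Reach frontend over HTTPS. This is an optional check; substitute the domain you deployed to (the -k flag skips certificate verification and is only appropriate for self-signed test certificates):

curl -kI https://<REACH_DOMAIN>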

Deploying Virtalis Reach 2022.3 on a Kubernetes cluster

This document covers deploying a complete Virtalis Reach 2022.3 system into a Kubernetes cluster. The target audience is system administrators and the content ...
Deploying Virtalis Reach 2022.3 on a Kubernetes cluster

Overview

This document covers deploying a complete Virtalis Reach 2022.3 system into a Kubernetes cluster. The target audience is system administrators and the content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell. The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration.

Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

If you are unsure of the usage or impact of a particular system command then seek advice. Improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

Kubernetes cluster (either on premises or in the cloud):
· At least version v1.22.7
· 8 cores
· At least 64GB of memory available to a single node (128GB total recommended)
· 625GB of storage (see the storage section for more information)
· Nginx as the cluster ingress controller
· Access to the internet during the software deployment and update
· A network policy compatible network plugin

Virtalis Reach does not require:

· A GPU in the server
· A connection to the internet following the software deployment

The following administration tools are required along with their recommended tested version:

· kubectl v1.22.7 - this package allows us to communicate with a Kubernetes cluster on the command line
· helm 3 v3.9.0 - this package is used to help us install large Kubernetes charts consisting of numerous resources
· oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
· azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
· jq v1.6 - this package is used to parse and traverse JSON on the command line
· yq v4.6.1 - this package is used to parse and traverse YAML on the command line

These tools are not installed on the Virtalis Reach server but only on the machine that will communicate with a Kubernetes cluster for the duration of the installation.

If using recent versions of Ubuntu, the Azure CLI as installed by Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.

Document Style

In this document, variables enclosed in angled brackets <VARIABLE> should be replaced with the appropriate values. For example:

docker login <my_id> <my_password>

becomes

docker login admin admin

In this document, commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted.

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.
Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:

sudo mkdir -p /home/root/Reach && \
cd /home/root/Reach && \
sudo chown $(whoami) /home/root/Reach

Export the following variables:

export REACH_VERSION=2022.3.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1
export TLS_SECRET_NAME=reach-tls-cert
export REACH_NAMESPACE=reach

Substitute the variable values and export them:

export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>

Substitute and export the following variables, wrapping the values in single quotes to prevent bash substitution:

export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Example:

export reach_licence__key='<REACH><expires>123</expires></REACH>'
export reach_licence__signature='o8k0niq63bPYOMS53NjgOTUqA0xfaBjfP5uB1uma'

Export the environment variables if Virtalis Reach TLS will be configured to use LetsEncrypt:

export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export MQ_EXPOSE_INGRESS=<when set to 1, expose rabbitmq on the ingress>
export LOW_SPEC=<set to true to set memory requests to a \
low amount for low spec machines, used for development>
export USE_NEO4J_MEMREC=<set to true to use neo4j memrec, \
use in conjunction with LOW_SPEC>

Checking the Nginx Ingress Controller

kubectl get pods -n ingress-nginx

This should return at least 1 running pod:

ingress-nginx nginx-ingress-controller…….. 1/1 Running

If Nginx is not installed, then please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.

If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install ingress --create-namespace \
-n ingress-nginx \
bitnami/nginx-ingress-controller --version 9.1.10
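You can confirm the ingress controller has come up and, for cloud deployments, note the external address it was assigned. A brief, optional check:

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx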
Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements. List of supported volume plugins

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total. This is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node that they're deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:

kubectl apply -f \
https://raw.githubusercontent.com/rancher/\
local-path-provisioner/master/deploy/local-path-storage.yaml

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. This must be created by a Kubernetes Administrator beforehand or, in some environments, a default class is also suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters like size as default, export these variables out:

export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence\
.storageClass="${REACH_SC}" --set core\
.persistentVolume.storageClass\
="${REACH_SC}" --set master.persistence\
.storageClass="${REACH_SC}" "

Custom parameters

A list of different databases in use by Virtalis Reach and how to customize their storage is shown below. The default values can be found in /home/root/Reach/k8s/misc//values-common.yaml and /home/root/Reach/k8s/misc//values-prod.yaml.

Minio

Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize such as size, access modes and so on. values.yaml

Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on. https://github.com/neo4j-contrib/neo4j-helm/blob/4.2.6-1/values.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here: https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/

Mysql

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize such as size, access modes and so on. https://github.com/bitnami/charts/blob/eeda6fcba43e1e98f37174479eb994badd2f5241/bitnami/mysql/values.yaml

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize such as size, access modes and so on. values.yaml

Deploying Virtalis Reach

Create a namespace:

kubectl create namespace "${REACH_NAMESPACE}"

Add namespace labels used by NetworkPolicies:

kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true

The 'ingress-nginx' entry on line 1 will have to be modified if your nginx ingress is deployed to a different namespace.
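To double-check that the labels were applied (the NetworkPolicies match namespaces on them), you can display the labels on both namespaces:

kubectl get namespace ingress-nginx kube-system --show-labels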
Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:

· The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
· A domain that you own (cannot be used for domains ending with .local)
· Inbound connections on port 80 are allowed

Create a namespace for cert-manager:

kubectl create namespace cert-manager

Install the recommended version of cert-manager:

kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.7.1/cert-manager.yaml

Create a new file:

nano prod_issuer.yaml

Paste in the following and replace variables wherever appropriate:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL_ADDRESS>
    privateKeySecretRef:
      name: reach-tls-cert
    solvers:
    - http01:
        ingress:
          class: nginx

Press ctrl+o and then enter to save, then press ctrl+x to exit nano. Now apply the file:

kubectl apply -f prod_issuer.yaml

Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach.

Download Installation Files

Log in with Oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}"\
.azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz

Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices. The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.

./create-secrets.sh

Deploy Virtalis Reach and Database Services

./deploy.sh

Wait until all pods are showing up as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE

You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally install the automated backup system by referring to the Virtalis Reach Automated Backup System document or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:

kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo

Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up

Unset exported environment variables:

unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature

Clear bash history:

history -c

This will clean up any secrets exported in the system.

Test Network Policies

Virtalis Reach utilizes NetworkPolicies which restrict the communication of the internal services on a network level.
Note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.

To test these policies, run a temporary pod:

kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian

Install the curl package:

apt update && apt install curl

Run a request to test the connection to one of our backend APIs. This should return a timeout error:

curl http://artifact-access-api-service:5000

Exit the pod, which will delete it:

exit

Additionally, you can test the egress by checking that any outbound connections made to a public address are denied.

Get the name of the edge-server pod:

kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server

Exec inside the running pod using the pod name from above:

kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash

Running a command like apt update, which makes an outbound request, should time out:

apt update

Exit the pod:

exit
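To review which policies are actually in place for the deployment, you can list the NetworkPolicy objects in the Virtalis Reach namespace:

kubectl get networkpolicy -n ${REACH_NAMESPACE}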

Deploying Virtalis Reach 2022.4 on a Kubernetes Cluster

This document covers deploying a complete Virtalis Reach 2022.4 system into a Kubernetes cluster. The target audience is system administrators and the content ...
Deploying Virtalis Reach 2022.4 on a Kubernetes cluster

Overview

This document covers deploying a complete Virtalis Reach system into a Kubernetes cluster. The target audience is system administrators and the content is highly technical, consisting primarily of shell commands that should be executed on the cluster administration shell. The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration.

Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

If you are unsure of the usage or impact of a particular system command then seek advice. Improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

Kubernetes cluster (either on premises or in the cloud):
• At least version v1.22.7
• 8 cores
• At least 64GB of memory available to a single node (128GB total recommended)
• 625GB of storage (see the storage section for more information)
• Nginx as the cluster ingress controller
• Access to the internet during the software deployment and update
• A network policy compatible network plugin

Virtalis Reach does not require:

• A GPU in the server
• A connection to the internet following the software deployment

The following administration tools are required along with their recommended tested version:

• kubectl v1.22.7 - this package allows us to communicate with a Kubernetes cluster on the command line
• helm 3 v3.9.0 - this package is used to help us install large Kubernetes charts consisting of numerous resources
• oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
• azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
• jq v1.6 - this package is used to parse and traverse JSON on the command line
• yq v4.6.1 - this package is used to parse and traverse YAML on the command line

These tools are not installed on the Virtalis Reach server but only on the machine that will communicate with a Kubernetes cluster for the duration of the installation.

If using recent versions of Ubuntu, the Azure CLI as installed by Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.

Document Style

In this document, variables enclosed in angled brackets <VARIABLE> should be replaced with the appropriate values. For example:

docker login <my_id> <my_password>

becomes

docker login admin admin

In this document, commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted.

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copy and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.
Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:

sudo mkdir -p /home/root/Reach && \
cd /home/root/Reach && \
sudo chown $(whoami) /home/root/Reach

Export the following variables:

export REACH_VERSION=2022.4.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1
export TLS_SECRET_NAME=reach-tls-cert
export REACH_NAMESPACE=reach

Substitute the variable values and export them:

export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>

Substitute and export the following variables, wrapping the values in single quotes to prevent bash substitution:

export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Example:

export reach_licence__key='<REACH><expires>123</expires></REACH>'
export reach_licence__signature='o8k0niq63bPYOMS53NjgOTUqA0xfaBjfP5uB1uma'

Export the environment variables if Virtalis Reach TLS will be configured to use LetsEncrypt:

export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

export INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export MQ_EXPOSE_INGRESS=<when set to 1, expose rabbitmq on the ingress>
export LOW_SPEC=<set to true to set memory requests to a \
low amount for low spec machines, used for development>
export USE_NEO4J_MEMREC=<set to true to use neo4j memrec, \
use in conjunction with LOW_SPEC>

Checking the Nginx Ingress Controller

kubectl get pods -n ingress-nginx

This should return at least 1 running pod:

ingress-nginx nginx-ingress-controller…….. 1/1 Running

If Nginx is not installed, then please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.

If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install ingress --create-namespace \
-n ingress-nginx \
bitnami/nginx-ingress-controller --version 9.1.10

Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements. List of supported volume plugins

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total. This is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node that they're deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:

kubectl apply -f \
https://raw.githubusercontent.com/rancher/\
local-path-provisioner/master/deploy/local-path-storage.yaml

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use.
This must be created by a Kubernetes Administrator beforehand or, in some environments, a default class is also suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters like size as default, export these variables:

export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence\
.storageClass="${REACH_SC}" --set core\
.persistentVolume.storageClass\
="${REACH_SC}" --set master.persistence\
.storageClass="${REACH_SC}" "

Custom parameters

A list of the different databases in use by Virtalis Reach and how to customize their storage is shown below. The default values can be found in /home/root/Reach/k8s/misc//values-common.yaml and /home/root/Reach/k8s/misc//values-prod.yaml.

Minio

Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

https://github.com/neo4j-contrib/neo4j-helm/blob/4.2.6-1/values.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here: https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/

Mysql

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

https://github.com/bitnami/charts/blob/eeda6fcba43e1e98f37174479eb994badd2f5241/bitnami/mysql/values.yaml

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize, such as size, access modes and so on.

Deploying Virtalis Reach

Create a namespace:

kubectl create namespace "${REACH_NAMESPACE}"

Add namespace labels used by NetworkPolicies:

kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true

The ‘ingress-nginx’ entry on line 1 will have to be modified if your nginx ingress is deployed to a different namespace.

Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.
Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:
• The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
• A domain that you own (cannot be used for domains ending with .local)
• Inbound connections on port 80 are allowed

Create a namespace for cert-manager:

kubectl create namespace cert-manager

Install the recommended version of cert-manager:

kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.7.1/cert-manager.yaml

Create a new file:

nano prod_issuer.yaml

Paste in the following and replace variables wherever appropriate:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL_ADDRESS>
    privateKeySecretRef:
      name: reach-tls-cert
    solvers:
    - http01:
        ingress:
          class: nginx

Press ctrl+o and then Enter to save, then press ctrl+x to exit nano. Now apply the file:

kubectl apply -f prod_issuer.yaml

Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach.

Download Installation Files

Log in with Oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}"\
.azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz

Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices. The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.

./create-secrets.sh

Deploy Virtalis Reach and Database Services

./deploy.sh

Wait until all pods are showing up as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE

You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally install the automated backup system by referring to the Virtalis Reach Automated Backup System document or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:

kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo

Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up

unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature

Clear bash history:

history -c

This will clean up any secrets exported in the system.

Test Network Policies

Virtalis Reach utilizes NetworkPolicies which restrict the communication of the internal services at the network level.
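To see which policies the deployment created, you can list them with a quick optional check (substitute your namespace if REACH_NAMESPACE has already been unset):

kubectl get networkpolicy -n "${REACH_NAMESPACE}"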
Note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.

To test these policies, run a temporary pod:

kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian

Install the curl package:

apt update && apt install curl

Run a request to test the connection to one of the backend APIs; this should return a timeout error:

curl http://artifact-access-api-service:5000

Exit the pod, which will delete it:

exit

Additionally, you can test the egress by checking that any outbound connections made to a public address are denied. Get the name of the edge-server pod:

kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server

Exec inside the running pod using the pod name from above:

kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash

Running a command like apt update, which makes an outbound request, should time out:

apt update

Exit the pod:

exit
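As a final sanity check, you can confirm the frontend is being served over HTTPS from outside the cluster. A minimal optional check, substituting your own domain (the -k flag skips certificate verification and is only needed with a self-signed certificate):

curl -skI https://<the domain Virtalis Reach is hosted on> | head -n 1

A 200 or redirect status line indicates the ingress and TLS secret are serving correctly.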

Authentication with External Systems

Authentication with External Systems

Introduction

In addition to users of Virtalis Reach being able to upload files for importing via the Hub’s user interface, Virtalis Reach can also download files from external locations. Examples are PLM systems such as Windchill or TeamCenter, which can be extended to notify Virtalis Reach of data changes. Often these URLs are protected, meaning that in order for the download to succeed, the authentication scheme and credentials must be configured. This document explains how to do this.

Supported Authentication Modes

• None – The resource is not protected and can be downloaded without any additional authentication steps.
• Basic – The resource is protected with basic authentication. For example, it will require a valid username and password, supplied by the site administrator.
• BearerToken – The resource is protected by a bearer token. For example, if the bearer token “MyBearerToken” is provided, the authorisation header sent with the request will read “Authorisation: Bearer MyBearerToken”.
• CustomAuthHeader – The resource is protected by a custom authorisation header. For example, if the header specified is “MyAuthHeader” and the token specified is “MyAuthToken”, the header sent with the request will read “MyAuthHeader: MyAuthToken”. Contrast this with Bearer mode, where the standard Authorisation header would be used instead.
• ServiceAccount – The resource is protected by OAuth client credentials that are internal to Virtalis Reach. For example, using the reach-service account in your own instance of Keycloak to request an access token before presenting that token to the resource. This is used for downloading files from the internal ImportFolder service used when someone uses the Hub to upload a file. Note: This was previously called OAuth but has been renamed to better reflect its purpose and reduce ambiguity between this mode and the new OAuth2 mode described below.
• OAuth2ClientCredentials – Uses OAuth2’s client credentials flow to download the file. This is used when the resource is protected by client credentials that are held in an external OAuth2-compatible identity system. For example, you may have your own instance of Keycloak or Microsoft Identity Server where the client protecting the resource is defined.

Configuration

Translator Service

The Translator Service and Job API perform the download operation and, in order to make a successful request, the host name must be configured in the UrlSourceCredentials section belonging to the services. This is a list of host names and credentials; there should be one entry for each host.
An example section from the configuration is shown below:

"UrlSourceCredentials": [
  {
    // A server that has no authentication, files can be downloaded
    "Hostname": "totallyinsecure.com",
    "AuthType": "None"
  },
  {
    // The local Import Folder API used when the user imports a file via the Reach Hub
    "Hostname": "virtalis.platform.importfolder.api",
    "AuthType": "ServiceAccount"
  },
  {
    // A server that is protected by BASIC authentication
    "Hostname": "rootwindchill.virtalis.local",
    "AuthType": "Basic",
    "BasicConfig": {
      "Username": "someuser",
      "Password": "somepassword"
    }
  },
  {
    "Hostname": "afilesource-ccf-file-export-function-app.azurewebsites.net",
    "AuthType": "OAuth2ClientCredentials",
    "OAuth2ClientCredentialsConfig": {
      "ClientId": "<clientId>",
      "ClientSecret": "<secret>",
      "AccessTokenEndpoint": "https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token",
      "Scope": "api://<scope-value>/.default"
    }
  }
]

Job API

After a file has been imported, the Job API attempts to make a call back to the system that provided it as a way of letting it know that the import was successful. For this reason, the Job API also has a UrlSourceCredentials section in its secrets file. In the example below, there are two systems that support the callback functionality: one for localhost, which represents the built-in Virtalis Reach Hub callback (for when a user uploads a file through the Virtalis Reach user interface), and another for a provider in Microsoft Azure, which is protected by OAuth2 Client Credentials.

"UrlSourceCredentials": [
  {
    "Hostname": "localhost",
    "AuthType": "ServiceAccount",
    "AllowInsecureRequestUrls": true
  },
  {
    "Hostname": "afilesource-ccf-file-export-function-app.azurewebsites.net",
    "AuthType": "OAuth2ClientCredentials",
    "OAuth2ClientCredentialsConfig": {
      "ClientId": "<clientId>",
      "ClientSecret": "<secret>",
      "AccessTokenEndpoint": "https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token",
      "Scope": "api://<scope-value>/.default"
    }
  }
]

Common Properties

Hostname

When Virtalis Reach receives a message that includes a URL to download a file from, it will extract the hostname from it and then look for a matching section in TranslatorServiceSecrets. If no match is found, the message will be rejected. If a match is found, it will use the details in the matching section to configure the authentication required.

AuthType

This is the authentication mode used by the external system. Refer to Supported Authentication Modes for further information.

Basic Mode Properties

These should be set in a “BasicConfig” section alongside the “Hostname” and “AuthType” properties.

Username
The username required to access the resource.

Password
The password required to access the resource.

Bearer Token Mode Properties

These should be set in a “BearerTokenConfig” section alongside the “Hostname” and “AuthType” properties.

BearerToken
The bearer token required to access the resource.

Custom Auth Header Mode Properties

These should be set in a “CustomAuthHeaderConfig” section alongside the “Hostname” and “AuthType” properties.

AuthHeader
The custom auth header required to access the resource.

AuthToken
The header value to specify.

ServiceAccount Mode Properties

There are no additional properties for this mode. Virtalis Reach is already configured internally with the correct credentials for requesting access tokens with the reach-service client.

OAuth2ClientCredentials

These should be set in an “OAuth2ClientCredentialsConfig” section alongside the “Hostname” and “AuthType” properties.
ClientId

This is the name of a client that exists in the external system’s identity server; in the examples above, it replaces the <clientId> placeholder. A site administrator would be responsible for making sure this client exists.

ClientSecret

This is the client secret for the above client.

AccessTokenEndpoint

This is the endpoint Virtalis Reach needs to call when requesting an access token. A site administrator will need to provide this. If this is wrong, you will likely see an exception in the logs saying that an access token could not be retrieved, for example:

Failed to obtain an access token. Check that the source credentials for the named resource (including ClientId, ClientSecret and Scope) are valid in the target identity system

Note: This URL is also subject to the same checks for HTTP/HTTPS. If the URL is not over HTTPS you must explicitly allow insecure requests by setting AllowInsecureRequestUrls to true.

Scope (Optional)

If the resource requires a specific claim called ‘scope’ to be included in the request, you can specify it here. When Virtalis Reach requests the access token, it will also specify the scope and the resulting token will include it.

AllowInsecureRequestUrls

By default, Virtalis Reach does not allow files to be downloaded over HTTP because this is insecure. If a file source is configured with HTTP (via a mapping in the UrlSourceCredentials configuration section), or if a URL for a configured host comes into Virtalis Reach in a queue message and it specifies HTTP rather than HTTPS, an exception will be thrown and the request will not be made. It may be necessary to allow HTTP in some cases, however, and this can be done by explicitly setting AllowInsecureRequestUrls to true in the configuration for that specific URL source. For example:

{
  "Hostname": "virtalis.platform.importfolder.api",
  "AuthType": "ServiceAccount",
  "AllowInsecureRequestUrls": true
}
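Before adding an entry to UrlSourceCredentials, it can save time to confirm the credentials directly against the external system. A rough check using curl, with placeholder hostnames, paths and tokens:

# Basic authentication
curl -I -u <username>:<password> https://<hostname>/<path-to-file>
# Bearer token
curl -I -H "Authorization: Bearer <token>" https://<hostname>/<path-to-file>
# Custom authorisation header
curl -I -H "<MyAuthHeader>: <MyAuthToken>" https://<hostname>/<path-to-file>

If these return 200 rather than 401/403, the same values should work once mapped to the matching AuthType in the configuration.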

Deploying The Virtalis Reach Monitoring Service Stack

Overview

This section describes the deployment of various monitoring services which allow a Kubernetes Administrator to monitor the health, metrics, and logs for all cluster services including Virtalis Reach.

List of services to be deployed:
• Prometheus Stack (health, metrics)
  • Grafana
  • Prometheus
  • Alertmanager
• ELK Stack (logging)
  • Elasticsearch
  • Kibana
  • Logstash

Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console

Set Up the Deployment Shell

Export some environment variables which will be used throughout the installation:

export MONITORING_DOMAIN=<the domain monitoring services will be hosted on>
export MONITORING_NAMESPACE=monitoring
export MONITORING_TLS_SECRET=reach-tls-secret

Create a new namespace:

kubectl create namespace "${MONITORING_NAMESPACE}"
kubectl label ns "${MONITORING_NAMESPACE}" release=prometheus-stack

The command below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine or use a different package to generate the string, replacing the command inside the brackets:

$(pwgen 30 1 -s) → $(someOtherPackage --arg1 --arg2)

Create Secrets

Create a secret which will store Grafana credentials:

kubectl create secret generic grafana \
-n "${MONITORING_NAMESPACE}" \
--from-literal="user"=$(pwgen 30 1 -s) \
--from-literal="password"=$(pwgen 30 1 -s)

kubectl create secret generic elastic-credentials -n $MONITORING_NAMESPACE \
--from-literal=password=$(pwgen -c -n -s 30 1 | tr -d '\n') \
--from-literal=username=elastic

kubectl create secret generic kibana-credentials -n $MONITORING_NAMESPACE \
--from-literal=encryption-key=$(pwgen -c -n -s 32 1 | tr -d '\n')

Storage

Express

If you only want to modify the storage class and leave all other parameters such as size as default, export these variables:

export MONITORING_SC=<name of storage class>
export ELASTICSEARCH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export LOGSTASH_SC_ARGS="--set \
volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export PROMETHEUS_SC_ARGS="--set alertmanager.alertmanagerSpec.storage.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set prometheus.prometheusSpec.storageSpec.\
volumeClaimTemplate.spec.storageClassName=${MONITORING_SC}
--set grafana.persistence.storageClassName=${MONITORING_SC}"

Custom Parameters

Here is a list of the different monitoring services and how to customize their storage.

Elasticsearch

Please refer to the volumeClaimTemplate: section found in the values.yaml file in the elasticsearch helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

k8s/misc/elk/elasticsearch/values-prod.yaml
k8s/misc/elk/elasticsearch/values-common.yaml

Logstash

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the logstash helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

k8s/misc/elk/logstash/values-prod.yaml
k8s/misc/elk/logstash/values-common.yaml

Prometheus Stack

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the prometheus-stack helm chart repository for a list of available parameters to customize such as size, access modes and so on. These values can be added/tweaked in the following files:

k8s/misc/elk/prometheus/values-prod.yaml
k8s/misc/elk/prometheus/values-common.yaml

Monitoring TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${MONITORING_NAMESPACE}" \
"${MONITORING_TLS_SECRET}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Export the following:

export KIBANA_INGRESS_ANNOTATIONS="--set ingress.annotations\
.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export PROMETHEUS_INGRESS_ANNOTATIONS="--set prometheus.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export GRAFANA_INGRESS_ANNOTATIONS="--set grafana.ingress\
.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export ALERTMANAGER_INGRESS_ANNOTATIONS="--set alertmanager.ingress.\
annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Installing Grafana, Alertmanager, and Prometheus

Add these repos to Helm and update:

helm repo add prometheus-community https://\
prometheus-community.github.io/helm-charts && \
helm repo update

Export the following:

export ALERTMANAGER_INGRESS="--set alertmanager.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set alertmanager.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set alertmanager\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export PROMETHEUS_INGRESS="--set prometheus.ingress\
.hosts[0]=${MONITORING_DOMAIN} --set prometheus.ingress\
.tls[0].secretName=$MONITORING_TLS_SECRET --set prometheus\
.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export GRAFANA_INGRESS="--set grafana.ingress.hosts[0]\
=${MONITORING_DOMAIN} --set grafana.ingress.tls[0]\
.secretName=$MONITORING_TLS_SECRET --set grafana.ingress\
.tls[0].hosts={${MONITORING_DOMAIN}}"

Install:

helm install prometheus-stack \
--namespace "${MONITORING_NAMESPACE}" \
--set grafana.admin.existingSecret="grafana" \
--set grafana.admin.userKey="user" \
--set grafana.admin.passwordKey="password" \
--set grafana.'grafana\.ini'.server.root_url\
="https://${MONITORING_DOMAIN}/grafana" \
--set grafana.'grafana\.ini'.server.domain="${MONITORING_DOMAIN}" \
--set grafana.'grafana\.ini'.server.serve_from_sub_path='true' \
$ALERTMANAGER_INGRESS \
$PROMETHEUS_INGRESS \
$GRAFANA_INGRESS \
$PROMETHEUS_INGRESS_ANNOTATIONS \
$GRAFANA_INGRESS_ANNOTATIONS \
$ALERTMANAGER_INGRESS_ANNOTATIONS \
$PROMETHEUS_SC_ARGS \
-f misc/prometheus/values-common.yaml \
-f misc/prometheus/values-prod.yaml \
prometheus-community/kube-prometheus-stack

Check the status of deployed pods:

kubectl get pods -n "${MONITORING_NAMESPACE}"

Accessing the Grafana Frontend

Retrieve the Grafana admin user:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.user}" | base64 --decode; echo

Retrieve the Grafana admin password:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.password}" | base64 --decode; echo

Grafana can now be accessed at https://${MONITORING_DOMAIN}/grafana/ from a web browser using the admin user and admin password.

Installing Elasticsearch, Kibana and Logstash

Add this helm repo and update:

helm repo add elastic https://helm.elastic.co
helm repo update

Export this variable:

export KIBANA_INGRESS="--set \
ingress.hosts[0]\
=$MONITORING_DOMAIN --set ingress.tls[0].secretName\
=$MONITORING_TLS_SECRET --set ingress.tls[0]\
.hosts[0]=$MONITORING_DOMAIN"

Install Elasticsearch

helm install elasticsearch \
--version 7.10 elastic/elasticsearch \
-f misc/elk/elasticsearch/values-common.yaml \
-f misc/elk/elasticsearch/values-prod.yaml \
$ELASTICSEARCH_SC_ARGS \
-n $MONITORING_NAMESPACE

Install Kibana

helm install kibana \
--version 7.10 elastic/kibana \
-n $MONITORING_NAMESPACE \
$KIBANA_INGRESS_ANNOTATIONS \
$KIBANA_INGRESS \
-f misc/elk/kibana/values-common.yaml \
-f misc/elk/kibana/values-prod-first-time.yaml \
-f misc/elk/kibana/values-prod.yaml

Patch Kibana

kubectl patch deploy kibana-kibana \
-n monitoring -p "$(cat misc/elk/kibana/probe-patch.yaml)"

Get the elasticsearch admin password:

kubectl get secret elastic-credentials -o jsonpath\
="{.data.password}" -n $MONITORING_NAMESPACE | \
base64 --decode; echo

Open up Kibana in a web browser, log in using the elasticsearch admin password and the username “elastic”, and add any additional underprivileged users that you want to have access to the logging system:

https://$MONITORING_DOMAIN/kibana/app/management/security/users

Install Filebeat

helm install filebeat \
--version 7.10 elastic/filebeat \
-n $MONITORING_NAMESPACE \
-f misc/elk/filebeat/values-common.yaml \
-f misc/elk/filebeat/values-prod.yaml

Install Logstash

helm install logstash \
--version 7.10 elastic/logstash \
-n $MONITORING_NAMESPACE \
$LOGSTASH_SC_ARGS \
-f misc/elk/logstash/values-prod.yaml \
-f misc/elk/logstash/values-common.yaml

Clean-up Post Monitoring Installation

Unset environment variables:

unset MONITORING_DOMAIN && \
unset MONITORING_NAMESPACE

Clear bash history:

history -c

This will clean up any secrets exported in the system.
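Once the stack is up, a quick way to confirm Elasticsearch is healthy is to query its cluster health endpoint through a port-forward. This is a rough, optional check that assumes the default service name created by the elastic Helm chart (elasticsearch-master), the default monitoring namespace, and plain HTTP inside the cluster; adjust these to match your values files:

# In one shell, forward the Elasticsearch service locally
kubectl port-forward svc/elasticsearch-master 9200:9200 -n monitoring
# In another shell, query cluster health with the elastic password retrieved above
curl -u elastic:<elasticsearch admin password> "http://localhost:9200/_cluster/health?pretty"

A status of green (or yellow on a single-node install) indicates the logging backend is ready for Kibana and Logstash.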

Manually Erasing Virtalis Reach Data

Overview

Virtalis Reach does not provide a GUI mechanism to delete visualisations from the artifact store or models from the hub. This section describes how to achieve that outcome by showing how to access the Hub and Artifact databases using the existing tools for Neo4j and Minio, and also includes a Python script to automate typical tasks.

This section assumes that you have already installed Virtalis Reach and your shell is in the directory containing the files that were downloaded during the installation. This is usually stored in the home directory, for example “/home/root/Reach/k8s”.

Please Note: The actions in this section directly modify the databases used by the Virtalis Reach services. No consideration is given to the current activity of the system and system-wide transactions are not used. Before performing these actions, prevent access to users of the system by temporarily disabling the ingress server.

Pre-installation

Before continuing with the next section, please refer to Virtalis Reach Automated Backup System and perform a full backup of the system.

Installing the Service

export REACH_NAMESPACE=<namespace>
helm install reach-data-eraser -n $REACH_NAMESPACE data-eraser/chart/

Turn On the Service

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=1

List arguments:

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --help"

Output:

usage: del-obj.py [-h] [-t TYPE] [-d DELETE] [-l] [-s] [-T]

Deletes visualisation artifacts and vrmodels from Virtalis Reach

optional arguments:
  -h, --help            show this help message and exit
  -t TYPE, --type TYPE  Choose data type to erase, either 'artifact' or 'vrmodel' (default artifact)
  -d DELETE, --delete DELETE
                        Deletes artifact or vrmodel by ID
  -l, --list            List artifacts or vrmodels
  -s, --size            List total size of artifacts or vrmodels. This will increase the time to retrieve the list depending on how much data is currently stored.
  -T, --test            Dry run - test delete

Deleting a Visualisation Artifact

List Artifacts to Extract Artifact IDs

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --list"

Sample output:

Connecting to Neo4j bolt://localhost:7687
JustAPointList : ID 8f3885c5-03ec-492f-9fca-8119ad2f4962
assembled : ID 787eae34-5764-4105-a50f-c441c100f66e
light_test_plus_cube : ID 7ae36ec6-ea6b-4639-973f-8fd16179b262
template_torusknot : ID ebd7d8fe-a846-4b70-ac86-01c275e5f3b1
template_torusknot : ID 81894536-d0d8-454e-816e-3db87d1e58c8

The above list will show each revision separately. As you can see, there are 2 revisions of template_torusknot. You can use the UUID to cross-reference which version this refers to so that you can make sure you are deleting the right revision.

In a web browser, navigate to the following URL, replacing <UUID> with the UUID of the artifact you want to check and replacing <YOUR_DOMAIN> with the domain of your Reach installation:

https://<YOUR_DOMAIN>/viewer/<UUID>

Once opened, you can click the “Show all versions” link to bring up a list of all versions along with the information about the current revision.

Erase an Artifact

Optional but recommended: use the -T switch to test the deletion procedure without affecting the database.

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --test --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Remove the -T switch to delete the data. Input:

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Deleting VRModels

The process for deleting VRModels is the same as deleting visualisation artifacts, except that the object type should be changed from the default of artifact to vrmodel using the -t or --type parameter.

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --list --type vrmodel"

Sample output:

JustAPointList : ID a1e0544c-8985-4ca0-a50c-1856a81c7ca5
NX_Speedboat : ID 3232ae07-b0bd-4f3b-ac1d-c595126a8b20
SYSTEM_FILTER_BOX_WA_1_5T : ID 141d6136-3ba8-4a08-8462-8aa23e63ed5b
Solid Edge 853 : ID 3b3ca5ec-589a-4582-bf85-65603872985e
TwoModelsSameName : ID 86cbc92c-5159-4260-bd4a-22265debfa58

Turn Off the Service

Once done, scale down the service:

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=0

On Data Reuse Between Data Stores

Binary data items may be referenced by multiple artifacts, for example when a model is reused in different projects or by revisions of a project. Binary data items are only deleted when the deletion of an artifact leaves them unreferenced. For example, the deletion of Visualisation A will not result in the deletion of the LOD Binary data if it is also referenced by Visualisation B. If A is deleted first, the LOD Binary data will be referenced only by B; then, when B is deleted, the LOD Binary data will also be deleted.
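If you disabled the ingress before making these changes, as recommended in the overview, remember to re-enable it once you have finished. A minimal sketch assuming the nginx controller deployment name used elsewhere in this guide (check the actual name with kubectl get deploy -n ingress-nginx):

# Disable public access before erasing data
kubectl scale deploy ingress-nginx-ingress-controller -n ingress-nginx --replicas=0
# Re-enable it afterwards
kubectl scale deploy ingress-nginx-ingress-controller -n ingress-nginx --replicas=1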

Virtalis Reach User Management

Overview

This section describes how to manage Virtalis Reach user details, create user groups, and add selected users to those groups. Virtalis Reach uses Keycloak for Identity and Access Management (IAM). This section assumes Keycloak has been installed on your system and that you have administration access rights.

Accessing the Keycloak Admin Panel

Navigate to https://<reach domain>/auth/admin/ replacing <reach domain> with the domain Virtalis Reach is hosted on. Enter the Keycloak administrator credentials that were extracted during the Virtalis Reach deployment. Ensure that the currently selected realm in the top left corner is Reach. If not, select it from the drop-down menu.

Managing Users

Go to Manage > Users and use this page to:
• View all users currently in the system
• Add users to the system
• Edit the details of a user
• Add users to groups

Please note: AAD users must log in at least once to become visible in the system.

Adding a User

To add a user:
1. Click Add user.
2. Enter the user details.
3. Click Save.

Setting User Credentials

To set the user credentials:
1. Click the Credentials tab and set a password for the user. Set Temporary to OFF if you do not want the user to have to enter a new password when they first log in.
2. Click Set Password.

Adding Users to Groups

To edit the groups a user is in:
1. Select the user you wish to edit.
2. Click the Groups tab.
3. Select a single group from the list that you wish to add/remove the user to/from.
4. Click Join.

You will see the groups that the user belongs to on the left-hand side of the page and the available groups that the user can be added to on the right-hand side.

Managing Groups

Go to Manage > Groups and use this page to:
• View all the groups currently in the system
• Create new groups for the purpose of access control on certain assets, projects, or visualisations

Virtalis Reach Specific Groups

Virtalis Reach has three main system groups:
• data-uploaders - access to /import, controls who can import assets into the system
• project-authors - access to /hub, controls who can create and publish projects
• reach_script_publishers - controls whether a user can enable scripts for their project

Creating a New Group

To create a new group:
1. Click New to create a new group.
2. Enter a name for the group.
3. Click Save.

You will now be able to edit users individually in the system and assign them to the new group.

Updating the Virtalis Reach Licence Key

Updating the Virtalis Reach Licence Key

Overview

This section describes how to replace the currently installed licence key with a new one. It assumes that you have already installed Virtalis Reach and your shell is in the directory containing the files that were downloaded during the installation; this is usually stored in the home directory, for example “/home/root/Reach/k8s”.

Set Up Variables

Export the following variables:

export REACH_NAMESPACE=reach

Load the previous configuration:

. ./load-install-config.sh

Substitute and export the following variables:

export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Update Secrets

Run the script:

./create-secrets.sh

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__key="'\
$(echo -n $reach_licence__key | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__signature="'\
$(echo -n $reach_licence__signature | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

Gracefully restart any running pods for the two services below by doing a rolling restart:

kubectl rollout restart deploy artifact-access-api -n $REACH_NAMESPACE
kubectl rollout restart deploy project-management-api -n $REACH_NAMESPACE
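To confirm the new values were stored and the restarts completed, you can decode the updated secret and watch the rollouts. An optional check using the same secret and deployment names as above:

kubectl get secret reach-install-config -n $REACH_NAMESPACE \
-o jsonpath="{.data.reach_licence__key}" | base64 --decode; echo
kubectl rollout status deploy artifact-access-api -n $REACH_NAMESPACE
kubectl rollout status deploy project-management-api -n $REACH_NAMESPACE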

Installing Virtalis Reach Translator Plugins

Overview

Virtalis Reach supports numerous translator plugins which enable the end-user to import more file formats. This section describes how to install translator plugins into a live Virtalis Reach system.

Installation

Export the following:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Extract the plugin on to a machine with access to the Kubernetes cluster Virtalis Reach is running on. Example: when installing an OpenJT Reader plugin, the OpenJTReader folder will contain .dll files and libraries:

root@reach-azure-develop:~/TranslatorPlugins# ls -latr
total 12
drwx------ 18 root   root   4096 Aug 17 14:11 ..
drwxr-xr-x  2 ubuntu ubuntu 4096 Aug 17 14:11 OpenJTReader
drwxr-xr-x  3 ubuntu ubuntu 4096 Aug 17 14:27 .

Get the full name of a running translator pod:

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")

Copy the folder containing the plugins onto the persistent plugins folder on the translator pod. This might take a while depending on your connection and the size of the plugin folder. The kubectl cp format when pushing a file is <source> <namespace>/<pod-name>:<pod-destination-path>:

kubectl cp OpenJTReader/ \
$REACH_NAMESPACE/$TRANSLATOR_POD_NAME:/app/Translators/

After the transfer is complete, restart the translator pod:

kubectl delete pods -n $REACH_NAMESPACE -l app=import-translator

Check the logs to verify that the plugin has been loaded:

kubectl logs -l app=import-translator -n $REACH_NAMESPACE

You should see a log message containing the name of the plugin:

[14:41:56 develop@5739eea INF] Adding translator OpenJTReader for extension .jt.
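You can also confirm the plugin files landed in the persistent plugins folder by listing it inside the pod. A quick optional check; re-resolve the pod name first because the pod was recreated by the restart:

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")
kubectl exec -n $REACH_NAMESPACE "$TRANSLATOR_POD_NAME" -- ls /app/Translators/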

Installing the VirtalisPublish Plugin

Installing the VirtalisPublish Plugin Introduction VirtalisPublish is a plugin for PTC’s Windchill PLM server software. It allows assemblies to be published to Virtalis Reach when Windchill detects that they have been modified. Once installed and correctly configured, the general process is as follows. Prerequisites The processes outlined in this document have been developed and tested on Windchill 11.1. Throughout this document the base installation folder of Windchill will be referred to as <install-folder>, an example would be D:\ptc\Windchill_11.1\Windchill. Creo Elements/Pro (formerly Pro/Engineer) needs to be installed in the Windchill environment. This acts as the ‘Worker’ that Windchill’s Worker Agent will delegate the job of publishing a representation to, after a successful check-in event. It is assumed that you have already installed Virtalis Reach. After installing the plugin or making any configuration changes, the PTC Windchill service must be restarted for the changes to take effect. Pre-Installation You must configure Virtalis Reach to allow for Windchill to talk to the Message Queue. The communication between Windchill and the message queue will be encrypted using TLS. Export the variables replacing anything in <>: export REACH_NAMESPACE=<reach namespace> Load the Previous Configuration . ./load-install-config.sh The next command will generate a password for the client certificates and requires the pwgen package to be available on the command line. This can be substituted for any other command which generates a strong password. export MQ_CLIENT_KEY_PASS="$(pwgen 30 1 -s)" Before running the next steps ensure that openjdk8(java 8 ) is installed on your system where you will be generating the certificates. Running java -version should return something similar to the following: openjdk version "1.8.0_312" OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~21.10-b07) OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode) If a different version is installed, uninstall it and run the following: sudo apt install openjdk-8-jre-headless Generate self-signed certificates or get them from your site administrator (some of these will also be used when installing Windchill): git clone https://github.com/michaelklishin/tls-gen tls-gen cd tls-gen/basic make CN=$REACH_DOMAIN PASSWORD=$MQ_CLIENT_KEY_PASS cd result Create a keystore for Windchill, use a strong password to secure the keystore: keytool -import -alias server1 -trustcacerts -file server_certificate.pem -keystore rabbitstore.jks Create a rabbitmq-certs secret from the generated certs: kubectl create secret generic rabbitmq-certs -n $REACH_NAMESPACE \ --from-file='ca.crt'=ca_certificate.pem \ --from-file='tls.crt'=server_certificate.pem \ --from-file='tls.key'=server_key.pem Configure RabbitMQ Upgrade RabbitMQ (make sure you are inside the ~/k8s/ directory (default) or wherever else the installation files were downloaded to during the Virtalis Reach installation): MQ_EXPOSE_INGRESS=1 ./install-mq.sh Configure a file source for Windchill by referring to the Configuring External File Sources section of the Deploying Virtalis Reach document and run the script: ./create-secrets.sh Check if port 5671 is exposed in the service definition: kubectl get svc message-queue-rabbitmq -n $REACH_NAMESPACE -o yaml Under the port array you should find amqp-ssl: spec: ...... ports: ....... - name: amqp-ssl port: 5671 protocol: TCP targetPort: amqp-ssl ...... 
If it is missing, add it to the service and also to the statefulset:

- name: amqp-ssl
  port: 5671
  protocol: TCP
  targetPort: amqp-ssl

kubectl edit statefulset message-queue-rabbitmq -n $REACH_NAMESPACE

Add the following entry under spec > template > spec > containers[0] > ports:

- containerPort: 5671
  name: amqp-ssl
  protocol: TCP

Configure nginx

Run the command shown below:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5673: "$REACH_NAMESPACE/message-queue-rabbitmq:5671"
EOF

This tells nginx to listen on port 5673 and proxy the connection to rabbitmq on port 5671.

Run:

kubectl edit deploy ingress-nginx-ingress-controller -n ingress-nginx

Under containers.args add these args:

- --tcp-services-configmap=ingress-nginx/tcp-services
- --enable-ssl-passthrough

Edit the ingress-nginx-controller service and add the new port under spec.ports:

kubectl edit svc ingress-nginx-ingress-controller -n ingress-nginx

...
ports:
- name: mq
  port: 5673
  protocol: TCP
  targetPort: 5673
...

Create a Windchill User

Replace <WINDCHILL USER PASSWORD> with a strong randomly generated password, replace <REACH NAMESPACE> with the namespace Virtalis Reach is deployed in, and then run the command:

kubectl exec -n $REACH_NAMESPACE message-queue-rabbitmq-0 -- /bin/bash -c \
"rabbitmqctl add_user 'windchill' '<WINDCHILL USER PASSWORD>' && \
rabbitmqctl set_permissions -p '/' 'windchill' '^$' '^(amq.default|delay.exchange)$' '^$'"

Create a Windchill Group

In the Keycloak admin panel, create a new group with the name “windchill” and add any users to it that will use the Windchill integration. Refer to https://www.virtalis.com/chapter/virtalis-reach-user-management for further information.

Installation

The plugin comes as a Zip file containing two main files:
• VirtalisPublish.jar – this contains the compiled plugin code required to publish assemblies to Virtalis Reach.
• virtalisPublish.properties – a configuration file for tailoring the plugin to your environment.

To install, copy the files to the following locations:
1. VirtalisPublish.jar: <install-folder>/codebase/WEB-INF/lib
2. virtalisPublish.properties: <install-folder>/codebase

The Zip file also comes with some third-party JAR files (one RabbitMQ AMQP jar required for posting messages to Virtalis Reach’s queue, and three Jackson jar files for JSON serialisation) that will be required if this is a first-time installation. They should also be copied to the <install-folder>/codebase/WEB-INF/lib folder if they aren’t already there.

Configuring Windchill

To register the plugin with Windchill and instruct it to execute the plugin at the correct time, make the following modifications to these existing Windchill files.

wt.properties

Location: <install-folder>/codebase/wt.properties

The main class in the VirtalisPublisher plugin is called StandardCopyRepsService. It extends the Windchill class wt.service.StandardManager, allowing integration with Windchill’s infrastructure. This class must be registered as a new service within Windchill. In the example below, it is registered as service number 8000. See here for some important notes on the numbering used. It is important to pick a number that is higher than all the built-in Windchill services that it might need in order to work.
Add an entry similar to this one:

wt.services.service.8000=co.uk.rootsolutions.virtalis.CopyRepsService/co.uk.rootsolutions.virtalis.StandardCopyRepsService

<site-specific>.xconf

Location: <install-folder>

All installations have a site.xconf file in <install-folder>. This is one of the main Windchill configuration files. However, changes aren’t allowed to be made to this file because future upgrades could overwrite them. Therefore, installations will have customer-specific configuration files alongside site.xconf, such as virtalis.xconf in our instance. To locate your file, open site.xconf and look for a ConfigurationRef entry similar to the one below. This will tell you which file to make the next changes to.

<ConfigurationRef xlink:href="D:\ptc\Windchill_11.1\Windchill\virtalis.xconf"/>

If such a file doesn’t exist, you may need to create one. Use the attached virtalis.xconf file as an example, and refer to the documentation here.

Locate the entry with a property name set to publish.afterloadermethod, and edit it to look like this. If such an entry doesn’t exist, add one.

<Property name="publish.afterloadermethod" overridable="true" targetFile="codebase/WEB-INF/conf/wvs.properties" value="co.uk.rootsolutions.virtalis.StandardCopyRepsService/copyReps"/>

Next, add an entry for the new service, making sure the number (8000 in this case) matches what was specified in wt.properties.

<Property name="wt.services.service.8000" overridable="true" targetFile="codebase/wt.properties" value="co.uk.rootsolutions.virtalis.CopyRepsService/co.uk.rootsolutions.virtalis.StandardCopyRepsService"/>

wvs.properties

Location: <install-folder>/codebase/WEB-INF/conf

This file also requires a reference to the Virtalis service. Add the following entry:

publish.afterloadermethod=co.uk.rootsolutions.virtalis.StandardCopyRepsService/copyReps

Configuring the Plugin

The file virtalisPublish.properties allows various configuration options to be set, which are described below (property name, description, example value).

• copy.logger.location – Sets the location for the logger; if left without a value, logs will be saved to <install-folder>\logs, e.g. D:\\ptc\\VirtalisPublish.log. Example: D:/ptc/Virtalis_Download/logs/publisher.txt
• log.verbose – Set log.verbose=true for debugging purposes. Example: true
• download.metadata – If true, this will create a file for EPMDocument metadata. Example: true
• attribute.names – If download.metadata=true, set the list of attributes required here; use the internal name, comma delimited. Example: name,number,revision,iterationInfo.identifier.iterationId,state.state
• metadata.file.extension – Set the filetype for the metadata file, e.g. txt or csv. Example: txt
• download.use.http – If true, the download URL that gets built up and then sent in the message to Reach will begin with http. Otherwise (and also default) it will begin with https. Example: true/false
• downloadToLocal.dir – Set the location for files and metadata to be downloaded to; include the \ escape character; can be a UNC path, e.g. \\\\vmware-host\\Shared Folders\\share. Example: D:\\ptc\\Virtalis_Download
• check.for.state – If true, this will only download files/metadata if the published EPMDocument is at a state specified in the copy.publish.states property. Example: true
• copy.publish.states – States considered if check.for.state is true. Example: InWork,Released,Controlled
• force.republish – If a positioning assembly representation already exists, setting this property to true will overwrite the existing representation. Example: true
• queue.host – The IP address or hostname of the Virtalis Reach queue to which NotificationMessages will be published after an assembly is checked in. If using peer-verification (see below) this value must match either the CN or SAN fields in the server’s SSL certificate. Example: 123.456.789.098 or a hostname
• queue.port – The port number needed to connect to the Virtalis Reach queue. Example: 1234
• queue.name – The name of the Virtalis Reach queue to publish messages on. The value should never need changing from ‘changes’. Example: changes
• queue.username – The username required to connect to the Virtalis Reach queue (this is windchill by default if you follow the Pre-installation steps). Example: <queue-username>
• queue.password – The password required to connect to the Virtalis Reach queue. Example: <queue-password>
• queue.ssl – Whether to use SSL to connect to the Virtalis Reach queue. Setting this alone to true will encrypt the traffic between the Plugin and Virtalis Reach but will not perform peer-verification checks (see below). Nevertheless, this adds a layer of security. Example: true/false
• queue.usePeerVerification – If set to true (and queue.ssl is also true), will attempt to perform peer verification when connecting to the Virtalis Reach queue. This adds more security but requires client keys to be installed. See the section on SSL/TLS. Example: true/false
• queue.client.key – The client key. This file should be provided by the site administrator who provided the certificate used to secure the Virtalis Reach queue. Example: D:/ptc/Virtalis_Download/truststore/client_key.p12
• queue.trust.store – This is where the Windchill plugin will store its certificates, keys, etc. Should be a Java keystore file. Example: D:/ptc/Virtalis_Download/truststore/rabbitstore.jks
• client.key.password – The password for the client key specified in queue.client.key. Example: <client-key-password>
• trust.store.password – The password for the trust store specified when generating certificates in the “Create a keystore for Windchill” step of the pre-installation section. Example: <trust-store-password>
• exportable.item.filters – Regular expressions to limit which assemblies will be sent to Virtalis Reach. See the Filtering section for more information. Examples: .*\.asm or .*\.asm;.*\.prt or .*Aviation.*\.asm;
• groups – A comma-separated list of Virtalis Reach groups that the resulting asset will be available to (in Virtalis Reach). Examples: windchill or windchill,engineers

Verifying the Installation

Once installed and configured, restart the PTC Windchill service. Locate the folder D:\ptc\Windchill_11.1\Windchill\logs and monitor the most recent MethodServer-<date>-log4j.log file. You should see messages appear like this:

2021-03-18 23:01:01,242 INFO [main] wt.system.out - VirtalisPublish - Loading properties
2021-03-18 23:01:01,245 INFO [main] wt.system.out - VirtalisPublish - Setting up logger
2021-03-18 23:01:01,245 INFO [main] wt.system.out - VirtalisPublish - logger setup complete

This indicates that the plugin has been successfully registered as a service within Windchill. Note, these are low-level log messages logged to stdout. After these messages appear, all future log messages from the plugin will be logged to the location specified by the configuration property copy.logger.location, which by default is D:\ptc\Virtalis_Download\logs\publisher.txt.
Testing the Plugin

Make a change to an assembly in Windchill and perform a check-in. Shortly after, you should see some activity in the log file specified as copy.logger.location (publisher.txt by default). There should be a message similar to the following:

[2021-03-19 14:39:19.834] - co.uk.rootsolutions.virtalis.StandardCopyRepsService - co.uk.rootsolutions.virtalis.MessageSender Message sent

This indicates that Windchill has posted a message to the Virtalis Reach queue and you should see an import job running in the jobs screen in Virtalis Reach. If not, please refer to publisher.txt to see if there are any obvious error messages, the main Windchill logs, and the troubleshooting guide below.

Troubleshooting

Test the Connection

We strongly recommend that before proceeding you test the Windchill queue connection using the following commands. If there are connectivity issues in the future, the following can also assist with troubleshooting.

Test the connection with the rabbitmq python client. Replace the variables wrapped in the # comment block and save the file as mq.py:

#!/usr/bin/env python3
import pika
import logging
import ssl

logging.basicConfig(level=logging.INFO)

###
### Replace everything in <> below
cafile_path="<full path to dir>/ca_certificate.pem"
client_cert_path="<full path to dir>/client_certificate.pem"
client_key_path="<full path to dir>/client_key.pem"
mq_domain="<rabbit message queue domain/address>"
windchill_mq_password="<windchill user password>"
###
###

context = ssl.create_default_context(cafile=cafile_path)
context.load_cert_chain(client_cert_path, client_key_path)
ssl_options = pika.SSLOptions(context, mq_domain)
credentials = pika.PlainCredentials("windchill", windchill_mq_password)
connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_domain,
    port=5673,
    virtual_host='/',
    credentials=credentials,
    ssl_options=ssl_options))
channel = connection.channel()

Install dependencies and run the script:

pip3 install pika
chmod +x mq.py
./mq.py

The connection should be instantly accepted and then closed:

root@master:~# ./mq.py
INFO:pika.adapters.utils.connection_workflow:Pika version 1.1.0 connecting to ('10.209.65.143', 5673)
INFO:pika.adapters.utils.io_services_utils:Socket connected: <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.209.65.143', 45154), raddr=('10.209.65.143', 5673)>
INFO:pika.adapters.utils.io_services_utils:SSL handshake completed successfully: <ssl.SSLSocket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.209.65.143', 45154), raddr=('10.209.65.143', 5673)>
INFO:pika.adapters.utils.connection_workflow:Streaming transport linked up: (<pika.adapters.utils.io_services_utils._AsyncSSLTransport object at 0x7fd7c7550910>, _StreamingProtocolShim: <SelectConnection PROTOCOL transport=<pika.adapters.utils.io_services_utils._AsyncSSLTransport object at 0x7fd7c7550910> params=<ConnectionParameters host=reach.local port=5673 virtual_host=/ ssl=True>>).
INFO:pika.adapters.utils.connection_workflow:AMQPConnector - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncSSLTransport object at 0x7fd7c7550910> params=<ConnectionParameters host=reach.local port=5673 virtual_host=/ ssl=True>>
INFO:pika.adapters.utils.connection_workflow:AMQPConnectionWorkflow - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncSSLTransport object at 0x7fd7c7550910> params=<ConnectionParameters host=reach.local port=5673 virtual_host=/ ssl=True>>
INFO:pika.adapters.blocking_connection:Connection workflow succeeded: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncSSLTransport object at 0x7fd7c7550910> params=<ConnectionParameters host=reach.local port=5673 virtual_host=/ ssl=True>>
INFO:pika.adapters.blocking_connection:Created channel=1

Windchill Doesn’t Start After Installing the Plugin

Check the main Windchill logs, which can be found in <install-folder>/logs. There are lots of different log files; you should first check MethodServer-xxxxxxxxxxxx.log and BackgroundMethodServer-xxxxxxxxxxxx.log.

Nothing Happens After Performing a Check-in on an Assembly

Check the main logs in <install-folder>/logs for errors. Check the log file of the VirtalisPublish plugin for errors. This is the file specified by the copy.logger.location configuration property in virtalisPublish.properties. Check the Worker Agent to see if the job was processed. You can access the worker agent jobs by logging into Windchill as an admin user and choosing Browse > Site > Utilities. You can see the general health of the Worker Agent in the window that opens up. There should be a PROE worker registered. This performs the work after a check-in and is required for the VirtalisPublish plugin to be triggered. Ask your Windchill server administrator to assist if the worker is not there. If it is listed but says anything other than ‘Available’ in the Job column, or the check against ‘On-Line’ is missing, you can try restarting it using the green flag icon in the appropriate row.

Check the WVS Job Monitor to see if the job failed. You can access this via Quick Links > WVS Job Monitor. You might see a message saying that the publishing job failed. Note, this is Windchill’s internal publishing job (performed by the Worker). If you see an error similar to the one shown below, you will need to contact the Windchill server administrator to find out why the PROE Worker Agent isn’t working. It could be that the PROE licence has expired or any number of other reasons.

Worker Agent Fails to Start

The worker agent can also fail to start if the license for Creo Parametric has expired. You may see these errors in the worker agent logs as it tries to start:

[2022-02-16 10:28:30] timestamps added to logging from command argument.
[2022-02-16 10:28:30] workerhelper - Version : Creo 6.0.0.0 - (16.0.0.24) x86e_win64
[2022-02-16 10:28:30] Increasing initial timeout to 180 seconds from command line to accomodate helper startup sleep time.
[2022-02-16 10:28:30] Connecting to server "rootwindchill.virtalis.local" on port 5600
[2022-02-16 10:28:30] Connection established
[2022-02-16 10:28:30] Helper keep alive set - 300000 milliseconds
[2022-02-16 10:28:31] Worker Cache Cleared
[2022-02-16 10:28:31] Started worker
[2022-02-16 10:28:36] Worker process 9004 has exited unexpectedly. Shutting down helper.
[2022-02-16 10:28:36] Worker helper shutting down.
[2022-02-16 10:28:36] WorkerHelper completed, exiting

These are indicative of a Creo Parametric license error, which you can then confirm by running Creo Parametric. If Creo Parametric generates a license error at startup then contact the Windchill server administrator.
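As an extra check from the cluster side, you can confirm that the TCP proxy configured during pre-installation is exposing RabbitMQ's TLS endpoint on port 5673. A rough optional probe with openssl, substituting your Reach domain; the certificate details printed should match the certificates generated earlier:

openssl s_client -connect <reach domain>:5673 </dev/null 2>/dev/null \
| openssl x509 -noout -subject -issuer -dates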

Adding Trusted Certificates for External HTTP Requests

Introduction

To request that Virtalis Reach import a file, you specify the URL that its Translator Service will download the file from and, optionally, the URL that its Job API will notify on completion of the translation. A request to an HTTPS URL that is secured with an untrusted certificate, for example for an internal service in an organisation or for testing purposes, will fail. This document explains how to enable certificates to be trusted by Virtalis Reach by adding them to the secrets of the Translator Service and the Job API.

Certificates can be loaded in the secrets of the Translator Service and the Job API to specify that they should be accepted as trusted. Each certificate will be logged at start-up as it is loaded by the Translator Service and, if there are any errors loading them or requesting any resources, that will also be logged appropriately. After the certificates have been loaded, those certificates will be trusted and external requests to servers using those certificates will succeed.

Please note: You may need to configure this for both the Translator Service and the Job API, since there are two places where external HTTP requests are made: the Translator Service can download data from the specified URL, and the Job API can optionally call an external URL when a submitted job is complete.

See also:
• https://kubernetes.io/docs/concepts/configuration/secret/ for further information if you are not familiar with this feature of Kubernetes.
• https://www.virtalis.com/chapter/deploying-virtalis-reach-on-a-kubernetes-cluster#toc-create-and-deploy-secrets (Create and Deploy Secrets, Installing File Source Certificates)

How to Use Self-signed Certificates in the Platform

Adding a certificate assumes that you already have an HTTPS server set up with a self-signed certificate. Certificates in PEM format are supported, for example:

-----BEGIN CERTIFICATE-----
(The hex data of the certificate)
-----END CERTIFICATE-----

Once you have your certificate in this format, save it, for example as “example.crt”, and add it to the Translator Service as a secret. Then update the main TranslatorServiceSecrets to include the secret name in the “Certificates” field. For example:

"Certificates": [
  "example.crt",
  "anotherExample.crt"
]

You may need to do this for both the Translator Service secrets and the Job API secrets, as noted above. Then, when importing an asset, use your HTTPS server's URL in the translation message to download the file and then optionally use JobCallbackUrl to call the external service to report that the job has completed successfully.

Error Messages

VirtalisException: Unable to load the certificate called { certificate }, check if the certificate is in the secret folder.
If this exception message occurs, check the certificate file specified, as it may have been formatted incorrectly.

VirtalisException: Unable to load the certificate called { certificate } because the certificate is expired.
If this exception message occurs, check the certificate expiration date, as it is most likely expired. The default expiration date is one year for a certificate that is issued by a stand-alone Certificate Authority (CA).

Error message: Certificate failed validation due to multiple remote certificate chain errors.
Under this error message it should list the chain errors that have occurred and the reason behind them. If an UntrustedRoot error is listed as the only error, then the problem is that the certificate used by the server is not configured as a trusted certificate.
Error message: An ssl error occurred when loading {requestUri}: {sslPolicyError}
If this error message occurs, the policy error will be either "RemoteCertificateNotAvailable" or "RemoteCertificateNameMismatch", either of which indicates a configuration issue with the remote server's certificate beyond its trust status.
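For reference, a minimal sketch of loading a certificate file into a Kubernetes secret is shown below. The secret name used here is an assumption for illustration, not a value defined by this document; use the secret names referenced by your TranslatorServiceSecrets (and Job API secrets) configuration, and consult the Create and Deploy Secrets chapter for how secrets are deployed in your installation.
# Hypothetical example only: the secret name "translator-service-certificates" is assumed
kubectl create secret generic translator-service-certificates \
 -n $REACH_NAMESPACE \
 --from-file=example.crt=./example.crt
# Confirm the certificate key is present in the secret
kubectl describe secret translator-service-certificates -n $REACH_NAMESPACE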

Virtalis Reach Mesh Splitting Overview and Configuration

This document is designed to help systems administrators to enable and configure Mesh Splitting in a Virtalis Reach environment....
virtalis-reach-mesh-splitting-overview-and-configuration
Introduction
Mesh Splitting in Virtalis Reach is the process of breaking apart high triangle-count meshes into smaller chunks. Currently, the implementation defaults to only splitting meshes that have more than 64 thousand triangles and aims to create balanced splits in terms of both triangle count and 3D dimensions. This document is designed to help systems administrators enable and configure Mesh Splitting in a Virtalis Reach environment.
Level of Detail (LOD)
When viewing a visualisation, the renderer chooses the LOD for each mesh such that it can maintain a certain framerate. With large meshes, this means it must choose one LOD for the entire mesh, regardless of how much of it the viewer can see. This can result in poor detail in large meshes because the triangle count is too high for the hardware. When large meshes are broken down into smaller chunks, the renderer can choose an LOD level for each split individually. Instead of rendering a high LOD for the entire original mesh, it can choose high LODs only for the splits closest to the viewer, or only the splits that may be on screen.
Configuration
A Virtalis Reach systems administrator can configure Mesh Splitting in two ways:
Enabled/Disabled
To enable or disable Mesh Splitting, set the configuration variable in the TranslatorService to true or false via the following environment variable:
TranslatorServiceConfiguration__MeshSplittingEnabled
Adjusting Split Threshold (Advanced)
It is possible to adjust the threshold at which Mesh Splitting is performed. By default, it is set to 64000 triangles, and adjusting this value is not recommended. The threshold can, however, be adjusted via the following environment variable:
TranslatorServiceConfiguration__MeshSplitTriangleThreshold
Please note: There are no sanity checks on this value. For example, if an administrator sets it to 10, practically every mesh in a scene will be split, resulting in extremely poor performance not only of rendering but also of importing and publishing.
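As a rough sketch of one way to apply these settings, the commands below set the environment variables on a Kubernetes deployment. The deployment name translator-service is an assumption for illustration; your environment may instead supply these variables through its Helm values or deployment manifests.
# Hypothetical example: enable Mesh Splitting on a deployment assumed to be named translator-service
kubectl set env deployment/translator-service -n $REACH_NAMESPACE \
 TranslatorServiceConfiguration__MeshSplittingEnabled=true
# Optionally adjust the split threshold (not recommended; the default is 64000)
kubectl set env deployment/translator-service -n $REACH_NAMESPACE \
 TranslatorServiceConfiguration__MeshSplitTriangleThreshold=64000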

Changing the Domain of an Existing Virtalis Reach Installation

This document describes how to reconfigure an existing installation of Virtalis Reach to use a different domain. ...
changing-the-domain-of-an-existing-virtalis-reach-installation
Changing the Domain of an Existing Virtalis Reach Installation
Introduction
This document describes how to reconfigure an existing installation of Virtalis Reach to use a different domain. It assumes that you have already installed Virtalis Reach and that your shell is in the directory containing the files downloaded during the installation. This is usually stored in the home directory, for example "/home/root/Reach/k8s".
Set up variables
Export the following variables:
export REACH_NAMESPACE=reach
export SKIP_MIGRATIONS=1
export REACH_VERSION=$(kubectl get secret reach-version -n $REACH_NAMESPACE -o json | jq ".data.version" -r | base64 -d)
export ACR_REGISTRY_NAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_REGISTRY_NAME" -r | base64 -d)
export ACR_USERNAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_USERNAME" -r | base64 -d)
export ACR_PASSWORD=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_PASSWORD" -r | base64 -d)
Load previous configuration:
. ./load-install-config.sh
Substitute and export the following variable:
export REACH_DOMAIN=<new domain>
Apply Change
./deploy.sh
Applying Manual Changes in Keycloak
Navigate to https://<new domain>/auth/admin in your browser and log in to access the Keycloak admin panel.
Navigate to https://<new domain>/auth/admin/master/console/#/realms/reach/clients
Open and edit the following clients from the list:
· file-import
· job-status
· project-author
· reach-client
Staying on the default Settings section, edit any fields that contain the old domain to reflect the new domain, making sure to keep the protocol and any arguments in the field. For example, change fields containing reach.local to the new domain, reachnew.local.
Press Save at the bottom of each client's settings page.
Updating TLS Certificates (Optional)
Depending on your configuration, if you provision a TLS certificate for your ingress it will need to be replaced with a certificate signed for the new domain.
Delete the old certificate:
kubectl delete secret "${TLS_SECRET_NAME}" -n "${REACH_NAMESPACE}"
Create the new certificate:
kubectl create secret tls -n "${REACH_NAMESPACE}" \
 "${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"
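As an optional sanity check after redeploying, and assuming openssl is available on your administration console, you can confirm that the ingress is now serving a certificate issued for the new domain:
# Replace reachnew.local with your new domain
openssl s_client -connect reachnew.local:443 -servername reachnew.local </dev/null 2>/dev/null \
 | openssl x509 -noout -subject -dates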

Upgrading Virtalis Reach from Version 2022.2.0 to 2022.3.0

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version....
upgrading-virtalis-reach-from-version-2022-2-0-to-2022-3-0
Upgrading Virtalis Reach from Version 2022.2.0 to 2022.3.0
Introduction
This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.
Notable Changes
• Required Helm version 3.6.3 → 3.9.0
Pre-Installation
Before continuing to the next section, please refer to Automated Backup System for Virtalis Reach 2022.2 and perform a full backup of the system.
Upgrading Helm to version 3.9.0
Run the following or alternatively follow the official Helm installation instructions:
wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
Set Up Variables
Export the following variables:
export REACH_NAMESPACE=reach
export REACH_VERSION=2022.3.0
export SKIP_MIGRATIONS=0
export ACR_REGISTRY_NAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_REGISTRY_NAME" -r | base64 -d)
export ACR_USERNAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_USERNAME" -r | base64 -d)
export ACR_PASSWORD=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_PASSWORD" -r | base64 -d)
Download Installation Files
Log in with Oras:
oras login "${ACR_REGISTRY_NAME}".azurecr.io \
 --username "${ACR_USERNAME}" \
 -p "${ACR_PASSWORD}"
Make a backup of the old installation files:
sudo mv /home/root/Reach /home/root/.Reach
Make a directory to store installation files:
sudo mkdir -p /home/root/Reach && \
 cd /home/root/Reach && \
 sudo chown $(whoami) /home/root/Reach
Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:
oras pull "${ACR_REGISTRY_NAME}".azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz
Make the installation scripts executable:
cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh
Installation
Load Previous Configuration
. ./load-install-config.sh
Create Secrets
./create-secrets.sh
Deploy Virtalis Reach
./deploy.sh
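After moving the Helm binary into place, and before running deploy.sh, it is worth confirming that the upgraded Helm client is the one on your PATH:
# Should report v3.9.0
helm version --short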

Upgrading Virtalis Reach from Version 2022.3.0 to 2022.4.0

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version....
upgrading-virtalis-reach-from-version-2022-3-0-to-2022-4-0
Upgrading Virtalis Reach from Version 2022.3.0 to 2022.4.0
Introduction
This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.
Notable Changes
• Keycloak version upgraded from 14.0 → 18.0
Pre-Installation
Before continuing to the next section, please refer to Automated Backup System for Virtalis Reach 2022.2 and perform a full backup of the system.
Set Up Variables
Export the following variables:
export REACH_NAMESPACE=reach
export REACH_VERSION=2022.4.0
export SKIP_MIGRATIONS=0
export ACR_REGISTRY_NAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_REGISTRY_NAME" -r | base64 -d)
export ACR_USERNAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_USERNAME" -r | base64 -d)
export ACR_PASSWORD=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_PASSWORD" -r | base64 -d)
Download Installation Files
Log in with Oras:
oras login "${ACR_REGISTRY_NAME}".azurecr.io \
 --username "${ACR_USERNAME}" \
 -p "${ACR_PASSWORD}"
Make a backup of the old installation files:
sudo mv /home/root/Reach /home/root/.Reach
Make a directory to store installation files:
sudo mkdir -p /home/root/Reach && \
 cd /home/root/Reach && \
 sudo chown $(whoami) /home/root/Reach
Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:
oras pull "${ACR_REGISTRY_NAME}".azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz
Make the installation scripts executable:
cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh
Installation
Load Previous Configuration
. ./load-install-config.sh
Create Secrets
./create-secrets.sh
Deploy Virtalis Reach
./deploy.sh
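Once deploy.sh has completed, you can confirm the deployed version by reading the reach-version secret, using the same command that appears elsewhere in these documents:
# Should print 2022.4.0
kubectl get secret reach-version -n $REACH_NAMESPACE -o json | jq ".data.version" -r | base64 -d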

Upgrading Virtalis Reach from Version 2022.1.0 to 2022.2.0

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version....
upgrading-virtalis-reach-from-version-2022-1-0-to-2022-2-0
Upgrading Virtalis Reach from Version 2022.1.0 to 2022.2.0
Introduction
This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.
Notable Changes
· Required Kubernetes version 1.21.3 → 1.22.7
· Required Kubectl version 1.21.3 → 1.22.7
· ingress-nginx helm chart repository has been switched from https://kubernetes.github.io/ingress-nginx to https://charts.bitnami.com/bitnami
· Required cert-manager version 1.0.2 → 1.7.1
· Replaced Cilium with Calico as the self-managed node network plugin
Pre-Installation
Before continuing to the next section, please refer to the "Virtalis Reach Automated Backup System" document and perform a full backup of the system.
Upgrading Kubernetes from version 1.21.x to 1.22.7
Upgrading cloud-managed Kubernetes
Refer to the guide provided by your cloud provider.
Upgrading a self-managed single-node Kubernetes cluster
Optionally create an etcd backup.
Download etcdctl:
wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
tar -xzvf etcd-v3.5.2-linux-amd64.tar.gz
sudo mv etcd-v3.5.2-linux-amd64/etcdctl /usr/bin/
sudo chmod +x /usr/bin/etcdctl
Create a snapshot:
sudo ETCDCTL_API=3 etcdctl snapshot save $(date +%Y%m%d)-etcd-cluster-state-backup.db --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
Drain the master node:
kubectl drain master --ignore-daemonsets --delete-emptydir-data
Update kubeadm:
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.22.7-00 && \
apt-mark hold kubeadm
Verify the version:
kubeadm version
The reported version should be v1.22.7.
Check that the cluster can be upgraded to 1.22.7:
kubeadm upgrade plan
Run the upgrade:
sudo kubeadm upgrade apply v1.22.7
Update kubelet and kubectl:
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.22.7-00 kubectl=1.22.7-00 && \
apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the master node:
kubectl uncordon master
Check node version and status:
kubectl get nodes
Expected output:
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5h45m   v1.22.7
Uninstall Cilium:
helm uninstall cilium -n kube-system
Install Calico:
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico -n calico \
 --create-namespace projectcalico/tigera-operator --version v3.22.1
Switch ingress-nginx repository
Uninstall the current chart:
helm uninstall ingress -n ingress-nginx
Install the new chart:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install ingress \
 --create-namespace \
 -n ingress-nginx \
 --set service.externalIPs[0]=$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}') \
 bitnami/nginx-ingress-controller --version 9.1.10
The --set service.externalIPs flag sets the listen address of the ingress to the IP of the eth0 interface; change it if necessary.
Update cert-manager:
kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.7.1/cert-manager.yaml
Create a new cert issuer:
kubectl delete ClusterIssuer -n cert-manager letsencrypt-prod
nano prod_issuer.yaml
Paste in the following and replace variables wherever appropriate:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL_ADDRESS>
    privateKeySecretRef:
      name: reach-tls-cert
    solvers:
    - http01:
        ingress:
          class: nginx
Press ctrl+o and then enter to save, then press ctrl+x to exit nano. Now apply the file:
kubectl apply -f prod_issuer.yaml
Set up variables
Export the following variables:
export REACH_NAMESPACE=reach
export REACH_VERSION=2022.2.0
export SKIP_MIGRATIONS=0
export ACR_REGISTRY_NAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_REGISTRY_NAME" -r | base64 -d)
export ACR_USERNAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_USERNAME" -r | base64 -d)
export ACR_PASSWORD=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_PASSWORD" -r | base64 -d)
Download installation files
Log in with Oras:
oras login "${ACR_REGISTRY_NAME}".azurecr.io \
 --username "${ACR_USERNAME}" \
 -p "${ACR_PASSWORD}"
Make a backup of the old installation files:
mv /home/root/Reach /home/root/.Reach
Make a directory to store installation files:
mkdir /home/root/Reach && cd /home/root/Reach
Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:
oras pull "${ACR_REGISTRY_NAME}".azurecr.io/misc/k8s:$REACH_VERSION && tar -zxvf k8s.tar.gz
Make the installation scripts executable:
cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh
Installation
Load previous configuration:
. ./load-install-config.sh
Create secrets:
./create-secrets.sh
Deploy Reach:
./deploy.sh
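After switching the network plugin, ingress chart, and cert-manager, it can be useful to confirm that each came up cleanly before and after deploying Virtalis Reach. A quick check is shown below; exact pod names will vary.
# Calico operator, ingress controller, and cert-manager pods should all be Running
kubectl get pods -n calico
kubectl get pods -n ingress-nginx
kubectl get pods -n cert-manager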

Support & Feedback

If support is required, please visit the support portal and knowledgebase at https://support.virtalis.com or email Virtalis Support at support@virtalis.com...
support-feedback-adminstrator
If support is required, please visit the support portal and knowledgebase at https://support.virtalis.com or email Virtalis Support at support@virtalis.com. Feedback is always welcome so that we can continue to develop and improve Virtalis Reach. Please speak to your Customer Success team.

Upgrading Virtalis Reach from 2022.4.0 to 2023.1.0

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version....
upgrading-virtalis-reach-from-2022-4-0-to-2023-1-0
This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.

Deploying Virtalis Reach 2023.1.0 on a Kubernetes cluster

...
deploying-virtalis-reach-2023-1-0-on-a-kubernetes-cluster
Deploying Virtalis Reach 2023.1.0 on a Kubernetes cluster

Upgrading Virtalis Reach from Version 2023.1.0 to 2023.2.0

This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version....
upgrading-virtalis-reach-from-version-2023-1-0-to-2023-2-0
This document is designed to help a systems administrator upgrade Virtalis Reach to the next available version.

Support

If you have questions or need additional support, we are here to help.