Virtalis Reach Help

System Administrator Guide


Deploying Virtalis Reach on a Kubernetes Cluster

Overview

This section describes deploying a complete Virtalis Reach system into a Kubernetes cluster. The content is highly technical, consisting primarily of shell commands that should be executed in the cluster administration shell. The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do and be aware that your cluster or deployment may have a specific configuration. Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

If you are unsure of the usage or impact of a particular system command, seek advice; improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:
- A Kubernetes cluster (either on-premises or in the cloud) with:
  - at least version v1.21.3
  - 8 cores
  - at least 64GB of memory available to a single node (128GB total recommended)
  - 625GB of storage (see the Storage section for more information)
- Nginx as the cluster ingress controller
- Access to the internet during the software deployment and update
- A network-policy-compatible network plugin

Virtalis Reach does not require:
- A GPU in the server
- A connection to the internet following the software deployment

The following administration tools are required, along with their recommended tested versions:
- kubectl v1.21.3 - used to communicate with a Kubernetes cluster on the command line
- helm 3 v3.6.3 - used to install large Kubernetes charts consisting of numerous resources
- oras v0.8.1 - used to download an archive from our internal registry containing configuration files which will be used to deploy Virtalis Reach
- azure cli stable - used to authenticate with our internal registry hosted on Azure
- jq v1.6 - used to parse and traverse JSON on the command line
- yq v4.6.1 - used to parse and traverse YAML on the command line

These tools are not installed on the Virtalis Reach server - only on the machine that will communicate with the Kubernetes cluster for the duration of the installation.

If using recent versions of Ubuntu, the Azure CLI as installed by Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repos. Alias azure-cli to az if needed.
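Before moving on, it can be worth confirming that each tool is on your PATH and reports a suitable version. The snippet below is only a quick sanity check; the exact version sub-commands can vary slightly between tool releases:

command -v kubectl helm oras az jq yq
kubectl version --client
helm version
oras version
az version
jq --version
yq --version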
Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.

Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:

mkdir /home/root/Reach && cd /home/root/Reach

Export the following variables:

export REACH_VERSION=2021.4.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1

Substitute the variable values and export them:

export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export TLS_SECRET_NAME=<the name of the secret containing the tls cert>
export REACH_NAMESPACE=<name of kubernetes namespace to deploy Virtalis Reach on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>
export reach_licence__key=<licence xml snippet>
export reach_licence__signature=<licence signature>

Export the following environment variables if Virtalis Reach TLS is/will be configured to use LetsEncrypt:

export KEYCLOAK_ANNOTATIONS="--set ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export INGRESS_ANNOTATIONS="--set ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Optional configuration variables:

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export OFFLINE_INSTALL=<when set to true, patch Virtalis Reach so that it can be taken off the internet>
export MQ_WINDCHILL=<when set to 1, deploy rabbitmq with windchill support>
export MQ_TEAMCENTER=<when set to 1, deploy rabbitmq with teamcenter support>
export MQ_CLIENT_KEY_PASS=<password that was/will be used for the client_key password, see the windchill/teamcenter installation document>

If both Windchill and Teamcenter are enabled, the certificates and client key pass must be the same for both instances.

Windchill Pre-installation

If Virtalis Reach is going to be configured to connect to Windchill, run the following commands:

export MQ_WINDCHILL=1
export WINDCHILL_HOSTNAME=<hostname of the windchill instance>
export WINDCHILL_AUTHTYPE=Basic
export WINDCHILL_USERNAME=<windchill auth username>
export WINDCHILL_PASSWORD=<windchill auth password>

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: windchill
  namespace: $REACH_NAMESPACE
type: Opaque
stringData:
  hostname: ${WINDCHILL_HOSTNAME}
  authType: ${WINDCHILL_AUTHTYPE}
  username: ${WINDCHILL_USERNAME}
  password: ${WINDCHILL_PASSWORD}
EOF

Teamcenter Pre-installation

If Virtalis Reach is going to be configured to connect to Teamcenter, run the following commands:

export MQ_TEAMCENTER=1
export TEAMCENTER_HOSTNAME=<hostname of the teamcenter instance>

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: teamcenter
  namespace: $REACH_NAMESPACE
type: Opaque
stringData:
  hostname: ${TEAMCENTER_HOSTNAME}
  authType: OAuth
EOF

Checking the Nginx Ingress Controller

kubectl get pods -n ingress-nginx

This should return at least 1 running pod:

ingress-nginx   nginx-ingress-controller-...   1/1   Running

If Nginx is not installed, please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.
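If you are unsure which ingress controllers are present, you can list the cluster's ingress classes and the services in the ingress-nginx namespace. This is only a quick check, and it assumes nginx was installed into the ingress-nginx namespace as in the example above:

kubectl get ingressclass
kubectl get svc -n ingress-nginx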
If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && \
helm repo update

helm install nginx-ingress ingress-nginx/ingress-nginx \
-n ingress-nginx \
--create-namespace

Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically, as well as with constraints, depending on your requirements. A list of supported volume plugins can be found in the Kubernetes documentation.

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total; this is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node that they are deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:

kubectl apply -f \
https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. This must be created by a Kubernetes Administrator beforehand or, in some environments, a default class is also suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters, such as size, as default, export these variables:

export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence.storageClass="${REACH_SC}" --set core.persistentVolume.storageClass="${REACH_SC}" --set master.persistence.storageClass="${REACH_SC}" "

Custom Parameters

Here is a list of the different databases in use by Virtalis Reach and how to customize their storage.

Minio

Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/artifact-binary-store/values-prod.yaml
- k8s/misc/import-binary-store/values-prod.yaml
- k8s/misc/import-folder-binary-store/values-prod.yaml
- k8s/misc/vrdb-binary-store/values-prod.yaml

Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/artifact-store/values-prod.yaml
- k8s/misc/vrdb-store/values-prod.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here: https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/
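As an illustration of editing one of these values files, the sketch below uses yq (listed in the prerequisites) to set a larger volume size for the artifact store. The core.persistentVolume.size key is assumed from the Neo4j chart layout referenced above, and 200Gi is only a placeholder; size the volume for your own workload:

# Assumed key path and example size only - adjust to your environment
yq eval -i '.core.persistentVolume.size = "200Gi"' \
k8s/misc/artifact-store/values-prod.yaml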
Mysql

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/collaboration-session-db/values-prod.yaml
- k8s/misc/import-folder-db/values-prod.yaml
- k8s/misc/job-db/values-prod.yaml
- k8s/misc/background-job-db/values-prod.yaml
- k8s/misc/keycloak-db/values-prod.yaml

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/message-queue/values-prod.yaml

Deploying Virtalis Reach

Create a namespace:

kubectl create namespace "${REACH_NAMESPACE}"

Add namespace labels required by NetworkPolicies:

kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true

The 'ingress-nginx' namespace in the first command will have to be modified if your nginx ingress is deployed to a different namespace in your cluster.

Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert, or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:
- The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
- A domain that you own (cannot be used for domains ending with .local)

Create a namespace for cert-manager:

kubectl create namespace cert-manager

Install the recommended version v1.0.2 of cert-manager:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.yaml

Create a new file:

nano prod_issuer.yaml

Paste in the following and replace variables wherever appropriate:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email_address>
    privateKeySecretRef:
      name: <value of the $TLS_SECRET_NAME variable you exported before>
    solvers:
    - http01:
        ingress:
          class: nginx

Press ctrl+o and then enter to save, then press ctrl+x to exit nano. Now apply the file:

kubectl apply -f prod_issuer.yaml

Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

If you wish to do so, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it on Virtalis Reach.

Download Installation Files

Log in with Oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}".azurecr.io/misc/k8s:$REACH_VERSION && \
tar -zxvf k8s.tar.gz

Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices. The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.
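A minimal way to check for pwgen and install it if it is missing is shown below; the apt-get line assumes a Debian/Ubuntu administration machine, so use your own package manager otherwise:

# Install pwgen only if it is not already available (Debian/Ubuntu assumption)
command -v pwgen || sudo apt-get install -y pwgen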
Run the script:

./create-secrets.sh

Deploy Virtalis Reach and Database Services

./deploy.sh

Wait until all pods are showing up as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE

You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally, install the automated backup system by referring to Virtalis Reach Automated Backup System, or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:

kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo

Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up

Unset exported environment variables:

unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature

Clear bash history:

history -c

This will clean up any secrets exported in the system.

Test Network Policies

Virtalis Reach utilizes NetworkPolicies which restrict the communication of the internal services on a network level. Please note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.

To test these policies, run a temporary pod:

kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian

Install the curl package:

apt update && apt install curl

Run a request to test the connection to one of our backend APIs. This should return a timeout error:

curl http://artifact-access-api-service:5000

Exit the pod, which will delete it:

exit

Additionally, you can test the egress by checking that any outbound connections made to a public address are denied.

Get the name of the edge-server pod:

kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server

Exec inside the running pod using the pod name from above:

kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash

Running a command like apt update, which makes an outbound request, should time out:

apt update

Exit the pod:

exit

Deploying the Virtalis Reach Monitoring Service Stack

Overview

This section describes the deployment of various monitoring services which allow a Kubernetes Administrator to monitor the health, metrics, and logs of all cluster services, including Virtalis Reach.

List of services to be deployed:
- Prometheus Stack (health, metrics)
  - Grafana
  - Prometheus
  - Alertmanager
- ELK Stack (logging)
  - Elasticsearch
  - Kibana
  - Logstash

Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console

Set Up the Deployment Shell

Export some environment variables which will be used throughout the installation:

export MONITORING_DOMAIN=<the domain monitoring services will be hosted on>
export MONITORING_NAMESPACE=monitoring
export MONITORING_TLS_SECRET=reach-tls-secret

Create a new namespace:

kubectl create namespace "${MONITORING_NAMESPACE}"
kubectl label ns "${MONITORING_NAMESPACE}" release=prometheus-stack

The commands below use the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine, or use a different package to generate the string, replacing the command inside the brackets:

$(pwgen 30 1 -s) → $(someOtherPackage --arg1 --arg2)

Create Secrets

Create a secret which will store Grafana credentials:

kubectl create secret generic grafana \
-n "${MONITORING_NAMESPACE}" \
--from-literal="user"=$(pwgen 30 1 -s) \
--from-literal="password"=$(pwgen 30 1 -s)

kubectl create secret generic elastic-credentials -n $MONITORING_NAMESPACE \
--from-literal=password=$(pwgen -c -n -s 30 1 | tr -d '\n') \
--from-literal=username=elastic

kubectl create secret generic kibana-credentials -n $MONITORING_NAMESPACE \
--from-literal=encryption-key=$(pwgen -c -n -s 32 1 | tr -d '\n')

Storage

Express

If you only want to modify the storage class and leave all other parameters, such as size, as default, export these variables:

export MONITORING_SC=<name of storage class>
export ELASTICSEARCH_SC_ARGS="--set volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export LOGSTASH_SC_ARGS="--set volumeClaimTemplate.storageClassName=${MONITORING_SC}"
export PROMETHEUS_SC_ARGS="--set alertmanager.alertmanagerSpec.storage.volumeClaimTemplate.spec.storageClassName=${MONITORING_SC} --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName=${MONITORING_SC} --set grafana.persistence.storageClassName=${MONITORING_SC}"

Custom Parameters

Here is a list of the different monitoring services and how to customize their storage.

Elasticsearch

Please refer to the volumeClaimTemplate: section found in the values.yaml file in the elasticsearch helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/elk/elasticsearch/values-prod.yaml
- k8s/misc/elk/elasticsearch/values-common.yaml

Logstash

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the logstash helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:
- k8s/misc/elk/logstash/values-prod.yaml
- k8s/misc/elk/logstash/values-common.yaml

Prometheus Stack

Please refer to the volumeClaimTemplate: sections found in the values.yaml file in the prometheus-stack helm chart repository for a list of available parameters to customize, such as size, access modes and so on. These values can be added/tweaked in the following files:

- k8s/misc/elk/prometheus/values-prod.yaml
- k8s/misc/elk/prometheus/values-common.yaml

Monitoring TLS

Manually create a TLS secret from a TLS key and cert, or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret

kubectl create secret tls -n "${MONITORING_NAMESPACE}" \
"${MONITORING_TLS_SECRET}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Export the following:

export KIBANA_INGRESS_ANNOTATIONS="--set ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export PROMETHEUS_INGRESS_ANNOTATIONS="--set prometheus.ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export GRAFANA_INGRESS_ANNOTATIONS="--set grafana.ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export ALERTMANAGER_INGRESS_ANNOTATIONS="--set alertmanager.ingress.annotations.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Installing Grafana, Alertmanager, and Prometheus

Add these repos to Helm and update:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo update

Export the following:

export ALERTMANAGER_INGRESS="--set alertmanager.ingress.hosts[0]=${MONITORING_DOMAIN} --set alertmanager.ingress.tls[0].secretName=$MONITORING_TLS_SECRET --set alertmanager.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export PROMETHEUS_INGRESS="--set prometheus.ingress.hosts[0]=${MONITORING_DOMAIN} --set prometheus.ingress.tls[0].secretName=$MONITORING_TLS_SECRET --set prometheus.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"
export GRAFANA_INGRESS="--set grafana.ingress.hosts[0]=${MONITORING_DOMAIN} --set grafana.ingress.tls[0].secretName=$MONITORING_TLS_SECRET --set grafana.ingress.tls[0].hosts={${MONITORING_DOMAIN}}"

Install:

helm install prometheus-stack \
--namespace "${MONITORING_NAMESPACE}" \
--set grafana.admin.existingSecret="grafana" \
--set grafana.admin.userKey="user" \
--set grafana.admin.passwordKey="password" \
--set grafana.'grafana\.ini'.server.root_url="https://${MONITORING_DOMAIN}/grafana" \
--set grafana.'grafana\.ini'.server.domain="${MONITORING_DOMAIN}" \
--set grafana.'grafana\.ini'.server.serve_from_sub_path='true' \
$ALERTMANAGER_INGRESS \
$PROMETHEUS_INGRESS \
$GRAFANA_INGRESS \
$PROMETHEUS_INGRESS_ANNOTATIONS \
$GRAFANA_INGRESS_ANNOTATIONS \
$ALERTMANAGER_INGRESS_ANNOTATIONS \
$PROMETHEUS_SC_ARGS \
-f misc/prometheus/values-common.yaml \
-f misc/prometheus/values-prod.yaml \
prometheus-community/kube-prometheus-stack

Check the status of deployed pods:

kubectl get pods -n "${MONITORING_NAMESPACE}"

Accessing the Grafana Frontend

Retrieve the Grafana admin user:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.user}" | base64 --decode; echo

Retrieve the Grafana admin password:

kubectl get secret --namespace "${MONITORING_NAMESPACE}" \
grafana -o jsonpath="{.data.password}" | base64 --decode; echo

Grafana can now be accessed at https://${MONITORING_DOMAIN}/grafana/ from a web browser using the admin user and admin password.
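If the ingress is not yet resolving, you can also reach Grafana temporarily with a port-forward. The service name below is an assumption based on the prometheus-stack release name used above, so confirm it first with kubectl get svc:

# Confirm the Grafana service name created by the release, then forward it locally
kubectl get svc -n "${MONITORING_NAMESPACE}" | grep grafana
kubectl port-forward -n "${MONITORING_NAMESPACE}" svc/prometheus-stack-grafana 3000:80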
Installing Elasticsearch, Kibana and Logstash

Add this helm repo and update:

helm repo add elastic https://helm.elastic.co
helm repo update

Export this variable:

export KIBANA_INGRESS="--set ingress.hosts[0]=$MONITORING_DOMAIN --set ingress.tls[0].secretName=$MONITORING_TLS_SECRET --set ingress.tls[0].hosts[0]=$MONITORING_DOMAIN"

Install Elasticsearch:

helm install elasticsearch \
--version 7.10 elastic/elasticsearch \
-f misc/elk/elasticsearch/values-common.yaml \
-f misc/elk/elasticsearch/values-prod.yaml \
$ELASTICSEARCH_SC_ARGS \
-n $MONITORING_NAMESPACE

Install Kibana:

helm install kibana \
--version 7.10 elastic/kibana \
-n $MONITORING_NAMESPACE \
$KIBANA_INGRESS_ANNOTATIONS \
$KIBANA_INGRESS \
-f misc/elk/kibana/values-common.yaml \
-f misc/elk/kibana/values-prod-first-time.yaml \
-f misc/elk/kibana/values-prod.yaml

Patch Kibana:

kubectl patch deploy kibana-kibana \
-n monitoring -p "$(cat misc/elk/kibana/probe-patch.yaml)"

Get the elasticsearch admin password:

kubectl get secret elastic-credentials \
-o jsonpath="{.data.password}" -n $MONITORING_NAMESPACE | \
base64 --decode; echo

Open up Kibana in a web browser, log in using the elasticsearch admin password with the username "elastic", and add any additional underprivileged users that you want to have access to the logging system:

https://$MONITORING_DOMAIN/kibana/app/management/security/users

Install Filebeat:

helm install filebeat \
--version 7.10 elastic/filebeat \
-n $MONITORING_NAMESPACE \
-f misc/elk/filebeat/values-common.yaml \
-f misc/elk/filebeat/values-prod.yaml

Install Logstash:

helm install logstash \
--version 7.10 elastic/logstash \
-n $MONITORING_NAMESPACE \
$LOGSTASH_SC_ARGS \
-f misc/elk/logstash/values-prod.yaml \
-f misc/elk/logstash/values-common.yaml

Clean-up Post Monitoring Installation

Unset environment variables:

unset MONITORING_DOMAIN && \
unset MONITORING_NAMESPACE

Clear bash history:

history -c

This will clean up any secrets exported in the system.

Virtalis Reach Automated Backup System

Overview

Virtalis Reach comes with an optional automated back-up system, allowing an administrator to restore to an earlier snapshot in the event of a disaster. We will install Velero to back up the state of your Kubernetes Cluster and use a custom-built solution which leverages Restic to back up the persistent data imported into Virtalis Reach.

Alternatively, you can consider using your own backup solution, for example PersistentVolumeSnapshot, which creates a snapshot of a persistent volume at a point in time. You should be aware, however, that these may only be supported on a limited number of platforms such as Azure and AWS.

If you decide to use a different solution to the one provided by Virtalis, you should be aware that not all databases used by Virtalis Reach support live backups. This means that the databases must be taken offline before backups are performed.

You should consider creating regular backups of the buckets which hold the backed-up data in case of failure; this can be done through your cloud provider, or manually if you host your own bucket.

Please note: The following databases used by Virtalis Reach can only be backed up while offline:
- Minio
- Neo4j

Variables and Commands

In this section, variables enclosed in <> arrows should be replaced with the appropriate values. For example:

docker login <my_id> <my_password> --verbosity debug

becomes

docker login admin admin --verbosity debug

Commands to execute in a shell are shown as code, and each block of code is designed to be a single command that can be copied and pasted:

These are commands to be entered in a shell in your cluster's administration console

This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console

Installation

Creating a Storage Location

Recommended

Follow the "Create S3 Bucket" and "Set permissions for Velero" sections from https://github.com/vmware-tanzu/velero-plugin-for-aws#create-s3-bucket and make sure that you create the following 2 buckets under your s3 bucket:
- reach-restic
- reach-velero

Export the address and port of the bucket you have created:

export S3_BUCKET_ADDRESS=<address>
#i.e. S3_BUCKET_ADDRESS=192.168.1.3, S3_BUCKET_ADDRESS=mydomain.com
export S3_BUCKET_PORT=<port>
export S3_BUCKET_PROTOCOL=<http or https>

Not Recommended - Create an S3 Bucket on the Same Cluster, Alongside Virtalis Reach

Customize persistence.size if the total size of your data exceeds 256GB and change the storage class REACH_SC if needed.

export REACH_SC=local-path

kubectl create ns reach-backup

#check if pwgen is installed for the next step
command -v pwgen

kubectl create secret generic reach-s3-backup -n reach-backup \
--from-literal='access-key'=$(pwgen 30 1 -s | tr -d '\n') \
--from-literal='secret-key'=$(pwgen 30 1 -s | tr -d '\n')

helm upgrade --install reach-s3-backup bitnami/minio \
-n reach-backup --version 3.6.1 \
--set persistence.storageClass=$REACH_SC \
--set persistence.size=256Gi \
--set mode=standalone \
--set resources.requests.memory='150Mi' \
--set resources.requests.cpu='250m' \
--set resources.limits.memory='500Mi' \
--set resources.limits.cpu='500m' \
--set disableWebUI=true \
--set useCredentialsFile=true \
--set volumePermissions.enabled=true \
--set defaultBuckets="reach-velero reach-restic" \
--set global.minio.existingSecret=reach-s3-backup
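If you deployed the on-cluster bucket above, you can optionally confirm that the MinIO chart has rolled out and that its credentials secret exists before creating the Velero credentials file in the next step; pod names will vary:

kubectl get pods -n reach-backup
kubectl get secret reach-s3-backup -n reach-backup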
Create the credentials file for Velero:

cat <<EOF > credentials-velero
[default]
aws_access_key_id=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode)
aws_secret_access_key=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)
EOF

Export the Address and Port of the Bucket You Have Created

export S3_BUCKET_ADDRESS=reach-s3-backup-minio.reach-backup.svc.cluster.local
export S3_BUCKET_PORT=9000
export S3_BUCKET_PROTOCOL=http
export S3_BUCKET_REGION=local

Set Up Variables

For the duration of this installation, you have to navigate to the k8s folder that is downloaded by following the Virtalis Reach Installation Guide.

Make scripts executable:

sudo chmod +x \
trigger-database-restore.sh \
trigger-database-backup.sh \
install-backup-restore.sh

Export the following variables:

export ACR_REGISTRY_NAME=virtaliscustomer

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Substitute the variable values and export them:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Optional Configuration Variables

export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export DISABLE_CRON=<set to true to install without an automated cronSchedule>

Velero Installation

The following steps will assume you named your Velero bucket "reach-velero".

Add the VMware helm repository and update:

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update

Install Velero:

helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud=./credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=reach-velero \
--set configuration.backupStorageLocation.bucket=reach-velero \
--set configuration.backupStorageLocation.config.region=$S3_BUCKET_REGION \
--set configuration.backupStorageLocation.config.s3Url=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.publicUrl=$S3_BUCKET_PROTOCOL://$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--set snapshotsEnabled=false \
--version 2.23.1 \
--set deployRestic=true

Install the Velero CLI Client:

wget https://github.com/vmware-tanzu/velero/releases/download/v1.5.3/velero-v1.5.3-linux-amd64.tar.gz
tar -xzvf velero-v1.5.3-linux-amd64.tar.gz
rm -f velero-v1.5.3-linux-amd64.tar.gz
sudo mv $(pwd)/velero-v1.5.3-linux-amd64/velero /usr/bin/
sudo chmod +x /usr/bin/velero

Manually create a single backup to verify that the connection to the aws bucket is working:

velero backup create test-backup-1 \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

Watch the status of the backup until it is finished; this should show up as complete if everything was set up correctly:

watch -n2 velero backup get

Create a scheduled backup:

velero create schedule cluster-backup --schedule="45 23 * * 6" \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

This schedule will run a backup at 23:45 every Saturday.
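To confirm the schedule was registered, you can list Velero's schedules and any backups created so far:

velero schedule get
velero backup get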
Restic Integration

The custom Restic integration uses Kubernetes jobs to mount the data, encrypt it, and send it to a bucket. Kubernetes CustomResourceDefinitions are used to store the information about the restic repositories as well as any created backups.

By default, the scheduled data backup runs at 23:45 every Friday. This can be modified by editing the cronSchedule field in all values.yaml files located in backup-restore/helmCharts/<release_name>/, with the exception of common-lib.

All the performed backups are offline backups; therefore Virtalis Reach will be unavailable for that period, as a number of databases have to be taken down.

Create an AWS bucket with the name "reach-restic" by following the same guide from the Velero section.

Replace the keys and create a secret containing the reach-restic bucket credentials:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'='<ACCESS_KEY>' \
--from-literal='AWS_SECRET_KEY'='<SECRET_KEY>'

If you instead opted to deploy an s3 bucket on the same cluster, run this instead:

kubectl create secret generic reach-restic-bucket-creds \
-n "$REACH_NAMESPACE" \
--from-literal='AWS_ACCESS_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.access-key}" | base64 --decode) \
--from-literal='AWS_SECRET_KEY'=$(kubectl get secret reach-s3-backup \
-n reach-backup -o jsonpath="{.data.secret-key}" | base64 --decode)

Export the address of the reach-restic bucket:

export REPO_URL=s3:$S3_BUCKET_PROTOCOL://\
$S3_BUCKET_ADDRESS:$S3_BUCKET_PORT/reach-restic

Run the installation:

./install-backup-restore.sh

Check if all the -init-repository- jobs have completed:

kubectl get pods -n $REACH_NAMESPACE | grep init-repository

Query the list of repositories:

kubectl get repository -n $REACH_NAMESPACE

The output should look something like this, with the status of all repositories showing as Initialized:

NAME                    STATUS        SIZE   CREATIONDATE
artifact-binary-store   Initialized   0B     2021-03-01T10:21:53Z
artifact-store          Initialized   0B     2021-03-01T10:21:57Z
job-db                  Initialized   0B     2021-03-01T10:21:58Z
keycloak-db             Initialized   0B     2021-03-01T10:21:58Z
vrdb-binary-store       Initialized   0B     2021-03-01T10:21:58Z
vrdb-store              Initialized   0B     2021-03-01T10:22:00Z

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-init-repository

Trigger a manual backup:

./trigger-database-backup.sh

After a while, all the -triggered-backup- jobs should show up as Completed:

kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-backup

Query the list of snapshots:

kubectl get snapshot -n "$REACH_NAMESPACE"

The output should look something like this, with the status of all snapshots showing as Completed:

NAME                       STATUS      ID       CREATIONDATE
artifact-binary-store...   Completed   62e...   2021...
artifact-store-neo4j-...   Completed   6ae...   2021...
job-db-mysql-master-1...   Completed   944...   2021...
keycloak-db-mysql-mas...   Completed   468...   2021...
vrdb-binary-store-min...   Completed   729...   2021...
vrdb-store-neo4j-core...   Completed   1c2...   2021...
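If you want to inspect the snapshots for a single repository, you can filter by the repository label, which is the same label referenced by the restore tooling later in this guide; artifact-store is used here purely as an example:

kubectl get snapshot -n "$REACH_NAMESPACE" -l repository=artifact-store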
Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE -l app=backup-restore-triggered-backup

Triggering a Manual Backup

Set Up Variables

Substitute the variable values and export them:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Run the Backup

Consider scheduling system downtime and scaling down the ingress to prevent people from accessing the server during the backup procedure.

Make a note of the replica count for nginx before scaling it down:

kubectl get deploy -n ingress-nginx

export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT>

Scale down the nginx ingress service:

kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \
-n ingress-nginx

Create a cluster resource level backup:

velero backup create cluster-backup-$(date +"%m-%d-%Y") \
--storage-location=reach-velero --include-namespaces $REACH_NAMESPACE

Check the status of the velero backup:

watch -n2 velero backup get

Create a database level backup:

./trigger-database-backup.sh

Check the status of the database backup:

watch -n2 kubectl get snapshot -n "$REACH_NAMESPACE"

Restoring Data

Set Up Variables

Substitute the variable values and export them:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Restoration Plan

Plan your restoration by gathering a list of the snapshot IDs you will be restoring from and export them.

Begin by querying the list of repositories:

kubectl get repo -n "$REACH_NAMESPACE"

NAME                    STATUS        SIZE   CREATIONDATE
artifact-binary-store   Initialized   12K    2021-07-02T12:03:26Z
artifact-store          Initialized   527M   2021-07-02T12:03:29Z
comment-db              Initialized   180M   2021-07-02T12:03:37Z
job-db                  Initialized   181M   2021-07-02T12:03:43Z
keycloak-db             Initialized   193M   2021-07-02T12:03:43Z
vrdb-binary-store       Initialized   12K    2021-07-02T12:03:46Z
vrdb-store              Initialized   527M   2021-07-02T12:02:44Z

Perform a dry run of the restore script to gather a list of the variables you have to export:

DRY_RUN=true ./trigger-database-restore.sh

Sample output:

Error: ARTIFACT_BINARY_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-binary-store' to see a list of available snapshots.
Error: ARTIFACT_STORE_RESTORE_ID has not been exported. Please run 'kubectl get snapshot -n develop -l repository=artifact-store'
...
...

Query available snapshots, or use the commands returned in the output above to query by specific repositories:

kubectl get snapshot -n "$REACH_NAMESPACE"

This should return a list of available snapshots:

NAME                          STATUS      ID       CREATIONDATE
artifact-binary-store-mini... Completed   4a2...   2021-07-0...
artifact-store-neo4j-core-... Completed   41d...   2021-07-0...
comment-db-mysql-master-16... Completed   e72...   2021-07-0...
job-db-mysql-master-162522... Completed   eb5...   2021-07-0...
keycloak-db-mysql-master-1... Completed   919...   2021-07-0...
vrdb-binary-store-minio-16... Completed   cf0...   2021-07-0...
vrdb-store-neo4j-core-1625... Completed   08d...   2021-07-0...
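Once you have chosen the snapshots to restore from, export one <REPOSITORY>_RESTORE_ID variable per repository using the values from the ID column; the variable names are the ones printed by the dry run above (repository name upper-cased, with hyphens replaced by underscores). A sketch:

export ARTIFACT_BINARY_STORE_RESTORE_ID=<ID of the chosen artifact-binary-store snapshot>
export ARTIFACT_STORE_RESTORE_ID=<ID of the chosen artifact-store snapshot>
# ...repeat for comment-db, job-db, keycloak-db, vrdb-binary-store and vrdb-store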
It is strongly advised to restore all the backed-up data using snapshots from the same day to avoid any missing/inaccessible data.

Note down the replica count for nginx before scaling it down:

kubectl get deploy -n ingress-nginx

export NGINX_REPLICAS=<CURRENT_REPLICA_COUNT>

Scale down the nginx ingress service to prevent people from accessing Virtalis Reach during the restoration process:

kubectl scale deploy --replicas=0 ingress-nginx-ingress-controller \
-n ingress-nginx

Run the Restore Script

./trigger-database-restore.sh

Unset the exported restore IDs:

charts=( $(ls backup-restore/helmCharts/) ); \
for chart in "${charts[@]}"; do if [ $chart == "common-lib" ]; \
then continue; fi; id_var="$(echo ${chart^^} | \
sed 's/-/_/g')_RESTORE_ID"; unset ${id_var}; done

After a while, all the -triggered-restore- jobs should show up as Completed:

kubectl get pods -n "$REACH_NAMESPACE" | grep triggered-restore

Once you are happy to move on, delete the completed job pods:

kubectl delete jobs -n $REACH_NAMESPACE \
-l app=backup-restore-triggered-restore

Watch and wait for all pods that are running to be Ready:

watch -n2 kubectl get pods -n "$REACH_NAMESPACE"

Scale back nginx:

kubectl scale deploy --replicas="$NGINX_REPLICAS" \
ingress-nginx-ingress-controller -n ingress-nginx

Verify that everything is working by testing standard functionality such as importing a file or viewing a visualisation.

Virtalis Reach User Management

Overview

This section describes how to manage Virtalis Reach user details, create user groups, and add selected users to those groups.

Virtalis Reach uses Keycloak for Identity and Access Management (IAM). This section assumes Keycloak has been installed on your system and that you have administration access rights.

Accessing the Keycloak Admin Panel

Navigate to https://<reach domain>/auth/admin/, replacing <reach domain> with the domain Virtalis Reach is hosted on.

Enter the Keycloak administrator credentials that were extracted during the Virtalis Reach deployment.

Ensure that the currently selected realm in the top left corner is Reach. If not, select it from the drop-down menu.

Managing Users

Go to Manage > Users and use this page to:
- View all users currently in the system
- Add users to the system
- Edit the details of a user
- Add users to groups

Please note: AAD users must log in at least once to become visible in the system.

Adding a User

To add a user:
1. Click Add user.
2. Enter the user details.
3. Click Save.

Setting User Credentials

To set the user credentials:
1. Click the Credentials tab and set a password for the user. Set Temporary to OFF if you do not want the user to have to enter a new password when they first log in.
2. Click Set Password.

Adding Users to Groups

To edit the groups a user is in:
1. Select the user you wish to edit.
2. Click the Groups tab.
3. Select a single group from the list that you wish to add/remove the user to/from.
4. Click Join.

You will see the groups that the user belongs to on the left-hand side of the page and the available groups that the user can be added to on the right-hand side.

Managing Groups

Go to Manage > Groups and use this page to:
- View all the groups currently in the system
- Create new groups for the purpose of access control on certain assets, projects, or visualisations

Virtalis Reach Specific Groups

Virtalis Reach has three main system groups:
- data-uploaders - access to /import; controls who can import assets into the system
- project-authors - access to /hub; controls who can create and publish projects
- reach_script_publishers - controls whether a user can enable scripts for their projects

Creating a New Group

To create a new group:
1. Click New to create a new group.
2. Enter a name for the group.
3. Click Save.

You will now be able to edit users individually in the system and assign them to the new group.

Installing Virtalis Reach Translator Plugins

Overview

Virtalis Reach supports numerous translator plugins which enable the end-user to import more file formats. This section describes how to install translator plugins into a live Virtalis Reach system.

Installation

Export the following:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Extract the plugin onto a machine with access to the Kubernetes cluster Virtalis Reach is running on.

Example: installing an OpenJT Reader plugin, where the OpenJTReader folder contains .dll files and libraries:

root@reach-azure-develop:~/TranslatorPlugins# ls -latr
total 12
drwx------ 18 root   root   4096 Aug 17 14:11 ..
drwxr-xr-x  2 ubuntu ubuntu 4096 Aug 17 14:11 OpenJTReader
drwxr-xr-x  3 ubuntu ubuntu 4096 Aug 17 14:27 .

Get the full name of a running translator pod:

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")

Copy the folder containing the plugins into the persistent plugins folder on the translator pod; this might take a while depending on your connection and the size of the plugin folder. The kubectl cp format when pushing a file is <source> <namespace>/<pod-name>:<pod-destination-path>:

kubectl cp OpenJTReader/ \
$REACH_NAMESPACE/$TRANSLATOR_POD_NAME:/app/Translators/

After the transfer is complete, restart the translator pod:

kubectl delete pods -n $REACH_NAMESPACE -l app=import-translator

Check the logs to verify that the plugin has been loaded:

kubectl logs -l app=import-translator -n $REACH_NAMESPACE

You should see a log message containing the name of the plugin:

[14:41:56 develop@5739eea INF] Adding translator OpenJTReader for extension .jt.
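If the log message does not appear, one way to confirm that the copy itself succeeded is to list the persistent Translators folder on the pod. This is only a quick check; the pod name is re-fetched because the pod was restarted above:

export TRANSLATOR_POD_NAME=$(kubectl get pod \
-l app=import-translator -n $REACH_NAMESPACE \
-o jsonpath="{.items[0].metadata.name}")
kubectl exec -n $REACH_NAMESPACE $TRANSLATOR_POD_NAME -- ls /app/Translators/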

Manually Erasing Data

Overview

Virtalis Reach does not provide a GUI mechanism to delete visualisations from the artifact store or models from the hub. This section describes how to achieve that outcome by showing how to access the Hub and Artifact databases using the existing tools for Neo4j and Minio, and also includes a Python script to automate typical tasks.

This section assumes that you have already installed Virtalis Reach and that your shell is in the directory containing the files that were downloaded during the installation. This is usually stored in the home directory, for example "/home/root/Reach/k8s".

Please note: The actions in this section directly modify the databases used by the Virtalis Reach services. No consideration is given to the current activity of the system, and system-wide transactions are not used. Before performing these actions, prevent access to users of the system by temporarily disabling the ingress server.

Pre-installation

Before continuing with the next section, please refer to Virtalis Reach Automated Backup System and perform a full backup of the system.

Installing the Service

export REACH_NAMESPACE=<namespace>

helm install reach-data-eraser -n $REACH_NAMESPACE data-eraser/chart/

Turn On the Service

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=1

List arguments:

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --help"

Output:

usage: del-obj.py [-h] [-t TYPE] [-d DELETE] [-l] [-s] [-T]

Deletes visualisation artifacts and vrmodels from Virtalis Reach

optional arguments:
  -h, --help            show this help message and exit
  -t TYPE, --type TYPE  Choose data type to erase, either 'artifact' or 'vrmodel' (default artifact)
  -d DELETE, --delete DELETE
                        Deletes artifact or vrmodel by ID
  -l, --list            List artifacts or vrmodels
  -s, --size            List total size of artifacts or vrmodels. This will increase the time to retrieve the list depending on how much data is currently stored.
  -T, --test            Dry run - test delete

Deleting a Visualisation Artifact

List Artifacts to Extract Artifact IDs

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --list"

Sample output:

Connecting to Neo4j bolt://localhost:7687
JustAPointList : ID 8f3885c5-03ec-492f-9fca-8119ad2f4962
assembled : ID 787eae34-5764-4105-a50f-c441c100f66e
light_test_plus_cube : ID 7ae36ec6-ea6b-4639-973f-8fd16179b262
template_torusknot : ID ebd7d8fe-a846-4b70-ac86-01c275e5f3b1
template_torusknot : ID 81894536-d0d8-454e-816e-3db87d1e58c8

The above list shows each revision separately. As you can see, there are 2 revisions of template_torusknot.
You can use the UUID to cross-reference which version this refers to, so that you can make sure you are deleting the right revision. In a web browser, navigate to the following URL, replacing <UUID> with the UUID of the artifact you want to check and replacing <YOUR_DOMAIN> with the domain of your Reach installation:

https://<YOUR_DOMAIN>/viewer/<UUID>

Once opened, you can click the "Show all versions" link to bring up a list of all versions along with the information about the current revision.

Erase an Artifact

Optional but recommended: use the -T switch to test the deletion procedure without affecting the database.

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --test --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Remove the -T switch to delete the data.

Input:

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --delete f4a356df-823f-424c-a6c9-2bc763ef9a41"

Sample output:

Checking cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK
Deleting from meshes
Deleting from textures
Deleting from cachedxml
Deleting cachedxml/3b3a3f44ec5977f17cb5239030945f6063f9cf91b433909b01e779a89d4830be
OK

Deleting VRModels

The process for deleting VRModels is the same as deleting visualisation artifacts, except that the object type should be changed from the default of artifact to vrmodel using the -t or --type parameter.

kubectl exec -it -n $REACH_NAMESPACE \
reach-data-eraser-0 -- /bin/bash -c \
"/del-obj.py --list --type vrmodel"

Sample output:

JustAPointList : ID a1e0544c-8985-4ca0-a50c-1856a81c7ca5
NX_Speedboat : ID 3232ae07-b0bd-4f3b-ac1d-c595126a8b20
SYSTEM_FILTER_BOX_WA_1_5T : ID 141d6136-3ba8-4a08-8462-8aa23e63ed5b
Solid Edge 853 : ID 3b3ca5ec-589a-4582-bf85-65603872985e
TwoModelsSameName : ID 86cbc92c-5159-4260-bd4a-22265debfa58

Turn Off the Service

Once done, scale down the service:

kubectl scale statefulset -n $REACH_NAMESPACE reach-data-eraser --replicas=0

On Data Reuse Between Data Stores

Binary data items may be referenced by multiple artifacts, for example when a model is reused in different projects or by revisions of a project. Only when the deletion of an artifact results in the related binary data items becoming unreferenced will they be deleted.

For example, the deletion of Visualisation A will not result in the deletion of the LOD Binary data if it is also referenced by Visualisation B. If A is deleted first, the LOD Binary data will be referenced only by B; then, when B is deleted, the LOD Binary data will also be deleted.

Updating the Virtalis Reach Licence Key

Overview

This section describes how to replace the currently installed licence key with a new one.

This section assumes that you have already installed Virtalis Reach and that your shell is in the directory containing the files that were downloaded during the installation. This is usually stored in the home directory, for example "/home/root/Reach/k8s".

Set Up Variables

Substitute and export the following variables:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Load the previous configuration:

. ./load-install-config.sh

Substitute and export the following variables:

export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Update Secrets

Run the script:

./create-secrets.sh

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__key="'\
$(echo -n $reach_licence__key | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

kubectl get secret reach-install-config \
-n $REACH_NAMESPACE -o json | jq -r '.data.reach_licence__signature="'\
$(echo -n $reach_licence__signature | base64 -w 0 | tr -d '\n')'"' \
| kubectl apply -f -

Gracefully restart any running pods for the two services below by doing a rolling restart:

kubectl rollout restart deploy artifact-access-api -n $REACH_NAMESPACE
kubectl rollout restart deploy project-management-api -n $REACH_NAMESPACE
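To confirm the new licence has been stored and the services have picked it up, you can decode the updated secret and watch the rollouts complete; this is only a verification step using the same secret and deployments referenced above:

kubectl get secret reach-install-config -n $REACH_NAMESPACE \
-o jsonpath="{.data.reach_licence__key}" | base64 --decode; echo
kubectl rollout status deploy artifact-access-api -n $REACH_NAMESPACE
kubectl rollout status deploy project-management-api -n $REACH_NAMESPACE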

Upgrading Virtalis Reach from Version 2021.3.1 to 2021.4.0

Introduction

This document is designed to help a systems administrator upgrade Virtalis Reach from version 2021.3.1 to 2021.4.0.

Pre-installation

Before continuing to the next section, please refer to the "Virtalis Reach Automated Backup System" document and do a full backup of the system.

Set Up Variables

Substitute and export the following variables:

export REACH_NAMESPACE=<name of kubernetes namespace Virtalis Reach is deployed in>

Export the following variables:

export REACH_VERSION=2021.4.0
export SKIP_MIGRATIONS=0
export ACR_REGISTRY_NAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_REGISTRY_NAME" -r | base64 -d)
export ACR_USERNAME=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_USERNAME" -r | base64 -d)
export ACR_PASSWORD=$(kubectl get secret reach-install-config -n $REACH_NAMESPACE -o json | jq ".data.ACR_PASSWORD" -r | base64 -d)

Download Installation Files

Log in with Oras:

oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Make a backup of the old installation files:

mv /home/root/Reach /home/root/.Reach

Make a directory to store installation files:

mkdir /home/root/Reach && cd /home/root/Reach

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:

oras pull "${ACR_REGISTRY_NAME}".azurecr.io/misc/k8s:$REACH_VERSION && \
tar -zxvf k8s.tar.gz

Make the installation scripts executable:

cd k8s && sudo chmod +x *.sh

Installation

Load the previous configuration:

. ./load-install-config.sh

Create secrets:

./create-secrets.sh

Deploy Reach:

./deploy.sh
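After deploy.sh finishes, you can watch the upgraded pods in the same way as during the initial installation and wait until they all show as Ready:

watch -n2 kubectl get pods -n $REACH_NAMESPACE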

Virtalis Reach Mesh Splitting Overview and Configuration

Introduction

Mesh Splitting in Virtalis Reach is the process of breaking apart high triangle-count meshes into smaller chunks. Currently, the implementation defaults to only splitting meshes that have more than 64 thousand triangles, and it aims to create balanced splits both in terms of triangle count and 3D dimensions.

This document is designed to help systems administrators enable and configure Mesh Splitting in a Virtalis Reach environment.

Level of Detail (LOD)

When viewing a visualisation, the renderer chooses the LOD for each mesh such that it can maintain a certain framerate. With large meshes, this means it must choose the LOD for the entire mesh, regardless of how much of it the viewer can see. This can result in poor detail in large meshes because the triangle count is too high for the hardware. When large meshes are broken down into smaller chunks, the renderer can choose an LOD level for each split individually. Because of this, instead of rendering a high LOD for the entire original mesh, it can instead choose high LODs only for the splits which are closest to the viewer, or only the splits that may be on screen.

Configuration

A Virtalis Reach systems administrator can configure Mesh Splitting in two ways (see the sketch at the end of this section for one way to apply these variables).

Enabled/Disabled

To enable/disable Mesh Splitting, the configuration variable in the TranslatorService can be set to true or false via the following env variable:

TranslatorServiceConfiguration__MeshSplittingEnabled

Adjusting Split Threshold (Advanced)

It is possible to adjust the threshold at which Mesh Splitting is performed. By default, it is set to 64000 triangles, and adjusting this value is not recommended. The threshold can, however, be adjusted via the following environment variable:

TranslatorServiceConfiguration__MeshSplitTriangleThreshold

Please note: There are no sanity checks on this value. For example, if an administrator sets this to 10, it will split up practically every single mesh in a scene and result in extremely poor performance of not only the rendering but also importing and publishing.

Known Issues

With Mesh Splitting enabled:
- Selecting a mesh that has been split will result in only a part of the mesh being highlighted
- Using the Fly-to button when selecting a mesh that was split will result in only a part of the mesh being fit to the view
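One way to apply these settings is to set the environment variable on the translator deployment with kubectl, which triggers a rolling restart. This is a sketch only: the deployment name import-translator is assumed from the app=import-translator label used elsewhere in this guide, and your deployment may instead be configured through its helm values:

# Assumed deployment name; verify with: kubectl get deploy -n <reach namespace>
kubectl set env deployment/import-translator -n <reach namespace> \
TranslatorServiceConfiguration__MeshSplittingEnabled=true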

Support & Feedback

If support is required, please visit the support portal and knowledgebase at https://support.virtalis.com or email Virtalis Support at support@virtalis.com.

Feedback is always welcome so that we can continue to develop and improve Virtalis Reach. Please speak to your Customer Success team.
