The steps described in this document are deprecated. Please use the following guide instead to install the Synergy/Forcepoint Classification product: https://getvisibility.atlassian.net/wiki/x/BoByBg
Introduction
This document outlines the steps to install and update K3s servers, and how to deploy and back up Synergy services.
K3s Installation – Client
Refer to the following page for the installation details: Synergy/Forcepoint Classification Installation Guide v2.0
Deploy Synergy – Reseller
1. Go to the Rancher dashboard and wait for the new cluster to become Active.
2. Select the cluster name, go to Apps > Charts, and install the GetVisibility Essentials Helm chart.
3. Go to Apps > Charts and install the GetVisibility Monitoring Helm chart, selecting Default under Install into Project.
4. Go to the global menu Continuous Delivery > Clusters and click Edit Config for the cluster.
5. Add the two labels product=synergy and environment=prod, then press Save. A quick verification sketch follows these steps.
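Once the cluster is labelled, you can confirm that the two charts deployed successfully. A minimal sketch, run from a shell with kubectl and helm access to the new cluster:

helm list --all-namespaces
kubectl get pods --all-namespaces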
Update – Client
Synergy backend services
Updates and custom settings are automatically applied to all Synergy backend services as long as the cluster has access to the public internet and can connect to the management server.
If there is no internet connection or the management server is down, the cluster agent will keep trying to reach the management server until a connection can be established.
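To check that the connection to the management server is healthy, you can inspect the Rancher cluster agent that maintains it. A minimal sketch, assuming the standard agent deployment in the cattle-system namespace:

kubectl -n cattle-system get pods
kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=20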
K3s cluster
To upgrade K3s from an older version to a specific version, you can run the following command:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
Then stop the old k3s binary (e.g. systemctl stop k3s) and start it again (e.g. systemctl start k3s). For more details, please refer to the official K3s documentation.
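Putting it together, a typical upgrade on a single-node cluster looks like the sketch below (the version tag is a placeholder; substitute the K3s release you want):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z sh -
systemctl stop k3s
systemctl start k3s
kubectl get nodes   # the VERSION column should show the new release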
Certificates
By default, certificates in K3s expire in 12 months. If the certificates are expired or have fewer than 90 days remaining before they expire, the certificates are rotated when K3s is restarted.
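To check how much validity the current certificates have left before restarting, you can inspect one of them with openssl. A minimal sketch, assuming the default K3s data directory under /var/lib/rancher/k3s:

openssl x509 -noout -enddate -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt
systemctl restart k3s   # rotates certificates that are expired or within 90 days of expiry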
Backup – Client
Consul
Find the server where Consul is running, in case you have a multi-node cluster (the command prints the node name):
kubectl get pod/gv-essentials-consul-server-0 -o jsonpath='{.spec.nodeName}'
Log into the server using SSH and execute the following command to take a snapshot of Consul:
kubectl exec -it gv-essentials-consul-server-0 -- consul snapshot save /consul/data/backup.snap
Find the path where the snapshot has been saved to:
kubectl get pvc/data-default-gv-essentials-consul-server-0 -o jsonpath='{.spec.volumeName}' | xargs -I{} kubectl get pv/{} -o jsonpath='{.spec.hostPath.path}'
Copy the snapshot file to a safe place, as in the sketch below.
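For example, you can sanity-check the snapshot and copy it off the node. A minimal sketch; user@backup-host and the destination directory are placeholders:

kubectl exec -it gv-essentials-consul-server-0 -- consul snapshot inspect /consul/data/backup.snap
SNAP_DIR=$(kubectl get pvc/data-default-gv-essentials-consul-server-0 -o jsonpath='{.spec.volumeName}' | xargs -I{} kubectl get pv/{} -o jsonpath='{.spec.hostPath.path}')
scp "$SNAP_DIR/backup.snap" user@backup-host:/backups/consul/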
PostgreSQL
Find the server where the PostgreSQL master is running, in case you have a multi-node cluster (the command prints the node name):
kubectl get pod/gv-postgresql-0 -o jsonpath='{.spec.nodeName}'
Log into the server using SSH and execute the following command to back up all databases:
kubectl exec -it gv-postgresql-0 -- bash -c "pg_dumpall -U gv | gzip > /home/postgres/pgdata/backup.sql.gz"
Find the path where the backup has been saved to:
kubectl get pvc/pgdata-gv-postgresql-0 -o jsonpath='{.spec.volumeName}' | xargs -I{} kubectl get pv/{} -o jsonpath='{.spec.hostPath.path}'
Copy the backup file to a safe place, as in the sketch below.
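As with the Consul snapshot, you can verify the dump and copy it off the node. A minimal sketch; user@backup-host and the destination directory are placeholders:

PG_DIR=$(kubectl get pvc/pgdata-gv-postgresql-0 -o jsonpath='{.spec.volumeName}' | xargs -I{} kubectl get pv/{} -o jsonpath='{.spec.hostPath.path}')
gzip -t "$PG_DIR/backup.sql.gz" && echo "backup archive OK"
scp "$PG_DIR/backup.sql.gz" user@backup-host:/backups/postgresql/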