You can install Synergy and Focus DSPM in an air-gapped environment that is not directly connected to the Internet.
...
Note: Make sure you have …
Info: The commands have been tested on Ubuntu Server 20.04 LTS, SUSE Linux Enterprise Server 15 SP4, and RHEL 8.6.
Note: For RHEL, K3s needs the following package to be installed: …
Other SUSE, CentOS, RedHat prerequisites: Prerequisites for k3s on RHEL/CentOS/Oracle Linux
Ubuntu prerequisites: Prerequisites for k3s on Ubuntu Linux
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
Extract the downloaded file:
Code Block
tar -xf gv-platform-$VERSION.tar   # replace $VERSION according to the downloaded file
Prepare K3s for air-gap installation:
Code Block
sudo su -
mkdir -p /var/lib/rancher/k3s/agent/images/
gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin
Before installation, it’s recommended to run the automatic checks (as root; PRODUCT_NAME is either “synergy” (Endpoint Agent only), “dspm” (DSPM without Endpoint Agent) or “ultimate” (DSPM + Endpoint Agent); if unsure, use “ultimate”):
Code Block
cat scripts/k3s.sh | PRODUCT_NAME=ultimate ONLY_PRECHECK=true bash -
Install K3s:
Code Block
cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" \
  SKIP_NETWORK_CHECKS=true sh -s - server --node-name=local-01
...
Wait for about 30 seconds and check that K3s is running with the commands kubectl get pods -A and systemctl status k3s.service.
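For reference, the same checks as commands, plus an optional kubectl wait that blocks until the node reports Ready (node name as set during the install above):
Code Block
kubectl get pods -A
systemctl status k3s.service
# optionally block until the node reports Ready
kubectl wait --for=condition=Ready node/local-01 --timeout=120s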
Import Docker images
The steps below manually deploy the necessary images to the cluster.
Import the Docker images locally:
Code Block
mkdir /tmp/import
for f in images/*.gz; do IMG=$(basename "${f}" .gz); gunzip -c "${f}" > /tmp/import/"${IMG}"; done
for f in /tmp/import/*.tar; do ctr -n=k8s.io images import "${f}"; done
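To confirm the import succeeded, you can list the images now known to containerd (an optional sanity check):
Code Block
ctr -n=k8s.io images ls | head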
Install Helm charts
The following steps guide you through the installation of the dependencies required by DSPM and Synergy (Endpoint Agent).
Info: Replace IPADDRESS/DNS/FQDN with the IP address, DNS name, or FQDN to be used for Keycloak, in a format such as https://192.168.10.1, https://gv.domain.local, or https://gv.getvisibility.com.
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
Code Block
helm upgrade --install gv-essentials charts/gv-essentials-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set backup.hour=1 \
  --set eck-operator.enabled=true \
  --set updateclusterid.enabled=false \
  --set keycloak.url=https://(IPADDRESS|DNS|FQDN)
Install Monitoring CRD:
Code Block
helm upgrade --install rancher-monitoring-crd charts/rancher-monitoring-crd-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --namespace=cattle-monitoring-system \
  --create-namespace
Install Monitoring:
Code Block
helm upgrade --install rancher-monitoring charts/rancher-monitoring-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --namespace=cattle-monitoring-system \
  --set k3sServer.enabled=true \
  --set k3sControllerManager.enabled=true \
  --set k3sScheduler.enabled=true \
  --set k3sProxy.enabled=true \
  --set prometheus.retention=5
Info: To expose Grafana via an ingress on the path …
Code Block
--set global.grafana_ingress.enabled=true
Check that all pods are Running with the command: kubectl get pods -A
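If anything looks stuck, a quick way to list only the pods that are not in the Running phase (note that completed Jobs also show up in this filter):
Code Block
kubectl get pods -A --field-selector=status.phase!=Running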
Install DSPM/Synergy (Endpoint Agent) Helm Chart
Replace the following variables:
$VERSION with the version present in the downloaded bundle
$RESELLER with the reseller code (either getvisibility or forcepoint)
$PRODUCT with the product being installed (synergy, dspm, enterprise or ultimate)
Code Block
helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set-string clusterLabels.environment=prod \
  --set-string clusterLabels.cluster_reseller=$RESELLER \
  --set-string clusterLabels.cluster_name=mycluster \
  --set-string clusterLabels.product=$PRODUCT
Info: If you experience a 404 error when accessing Keycloak or the UI and are using the default K3s version 1.26, ensure that the traefik patch is applied.
Install custom artifact bundles
Models and other artifacts, such as custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. These bundles are Docker images that contain the artifacts to be deployed, alongside scripts to deploy them. To create a new bundle or modify an existing one, follow this guide first: https://getvisibility.atlassian.net/wiki/spaces/GS/pages/65372391/Model+deployment+guide#1.-Create-a-new-model-bundle-or-modify-an-existing-one. The list of all available bundles is inside the bundles/ directory of the models-ci project on GitHub.
After the model bundle is published, for example images.master.k3s.getvisibility.com/models:company-1.0.1, you’ll have to generate a public link to this image by running the k3s-air-gap Publish ML models GitHub CI task. The task will ask you for the Docker image URL.
Info: We are still using the images.master.k3s.getvisibility.com/models repo because the bundles were originally only used to deploy custom models.
Once the task is complete, you’ll get a public URL to download the artifact in the task summary. After that, execute the following commands.
Replace the following variables:
$URL with the URL to the model bundle provided by the task
$BUNDLE with the name of the artifact, in this case company-1.0.1
Code Block
mkdir custom
wget -O custom/$BUNDLE.tar.gz $URL
gunzip custom/$BUNDLE.tar.gz
ctr -n=k8s.io images import custom/$BUNDLE.tar
Now you’ll need to execute the artifact deployment job. This job unpacks the artifacts from the Docker image into a MinIO bucket inside the on-premises cluster and restarts any services that use them.
Replace the following variables:
$GV_DEPLOYER_VERSION with the version of the model deployer available under charts/
$BUNDLE_VERSION with the version of the artifact, in this case company-1.0.1
Code Block
helm upgrade \
  --install gv-model-deployer charts/gv-model-deployer-$GV_DEPLOYER_VERSION.tgz \
  --wait --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set models.version="$BUNDLE_VERSION"
You can verify that everything went well by locating the ml-model job that was launched. Its logs should look like this:
Code Block
root@ip-172-31-9-140:~# kubectl logs -f ml-model-0jvaycku9prx-84nbf
Uploading models
Added `myminio` successfully.
`/models/AIP-1.0.0.zip` -> `myminio/models-data/AIP-1.0.0.zip`
`/models/Commercial-1.0.0.zip` -> `myminio/models-data/Commercial-1.0.0.zip`
`/models/Default-1.0.0.zip` -> `myminio/models-data/Default-1.0.0.zip`
`/models/classifier-6.1.2.zip` -> `myminio/models-data/classifier-6.1.2.zip`
`/models/lm-full-en-2.1.2.zip` -> `myminio/models-data/lm-full-en-2.1.2.zip`
`/models/sec-mapped-1.0.0.zip` -> `myminio/models-data/sec-mapped-1.0.0.zip`
Total: 0 B, Transferred: 297.38 MiB, Speed: 684.36 MiB/s
Restart classifier
deployment.apps/classifier-focus restarted
root@ip-172-31-9-140:~#
In addition, you can enter the different services that consume these artifacts to check that they have been deployed correctly. For example, for the models you can open a shell inside the classifier containers and check the /models directory, or check the models-data bucket inside MinIO. Both should contain the expected models.
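For instance, assuming the classifier deployment is named classifier-focus (as in the log output above), the deployed models can be listed directly:
Code Block
kubectl exec -it deploy/classifier-focus -- ls /models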
Multiple Node Installation (High Availability)
Prerequisites
Firewall Rules for Internal Communication
Info: We recommend running the K3s nodes in a 10 Gb low-latency private network for maximum security and performance.
K3s needs the following ports to be accessible (Inbound and Outbound) by all other nodes running in the same cluster:
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API Server |
⚠️ UDP | 8472 | Required for Flannel VXLAN |
TCP | 2379-2380 | embedded etcd |
TCP | 10250 | metrics-server for HPA |
TCP | 9796 | Prometheus node exporter |
TCP | 80 | Private Docker Registry |
Note: The ports above should not be publicly exposed, as that would open up your cluster to be accessed by anyone. Make sure to always run your nodes behind a firewall/security group/private network that disables external access to the ports mentioned above.
All nodes in the cluster must have:
Domain Name Service (DNS) configured
Network Time Protocol (NTP) configured
Fixed private IPv4 address
Globally unique node name (use --node-name when installing K3s in a VM to set a static node name)
Firewall Rules for External Communication
The following port must be publicly exposed in order to allow users to access the DSPM or Synergy (Endpoint Agent) product:
Protocol | Port | Description |
---|---|---|
TCP | 443 | DSPM/Synergy (Endpoint Agent) backend |
Users must not access the K3s nodes directly; instead, a load balancer should sit between the end users and all the K3s nodes (master and worker nodes):
...
At least 4 machines are required to provide high availability of the Getvisibility platform. The HA setup supports a single-node failure.
Install K3s
Note: Make sure you have …
Info: The commands have been tested on Ubuntu Server 20.04 LTS, SUSE Linux Enterprise Server 15 SP4, and RHEL 8.6.
Note: For RHEL, K3s needs the following package to be installed: …
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
...
Create at least 4 VMs with the same specs.
Extract the downloaded file on all the VMs: tar -xf gv-platform-$VERSION.tar
Create a local DNS entry private-docker-registry.local across all the nodes, resolving to the master1 node:
Code Block
cat >> /etc/hosts << EOF
<Master1_node_VM_IP> private-docker-registry.local
EOF
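You can optionally confirm that the entry resolves on each node:
Code Block
getent hosts private-docker-registry.local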
...
Prepare the K3s air-gap installation files:
Code Block
$ mkdir -p /var/lib/rancher/k3s/agent/images/
$ gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
$ cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
$ tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz && cp linux-amd64/helm /usr/local/bin
...
Update the registries.yaml file across all the nodes:
Code Block
$ mkdir -p /etc/rancher/k3s
$ cp assets/registries.yaml /etc/rancher/k3s/
Install K3s in the 1st master node:
To get started, launch a server node using the cluster-init flag:
Code Block
cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init
Check your first master node’s status; it should be in the Ready state:
Code Block
kubectl get nodes
Use the following command to copy the TOKEN from this node; it will be used to join the other nodes to the cluster:
Code Block
cat /var/lib/rancher/k3s/server/node-token
Also, copy the IP address of the 1st master node which will be used by the other nodes to join the cluster.
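One way to make the token available to the join commands below is to export it on each joining node (the commands below reference $K3S_TOKEN; the placeholder is the token copied from master1):
Code Block
# on master1: print the join token
cat /var/lib/rancher/k3s/server/node-token
# on each joining node: make the token available to the install command
export K3S_TOKEN=<token copied from master1>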
Install K3s in the 2nd master node:
Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.
Set --node-name to “master2”.
Set --server to the IP address of the 1st master node.
Code Block
cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of any master node>:6443
Check the node status:
Code Block kubectl get nodes
Install K3s in the 3rd master node:
Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.
Set --node-name to “master3”.
Set --server to the IP address of the 1st master node.
Code Block
cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of any master node>:6443
Check the node status:
Code Block kubectl get nodes
Install K3s in the 1st worker node:
Use the same approach to install K3s and to connect the worker node to the cluster group.
The installation parameters are different in this case. Run the following command:
Set --node-name to “worker1” (use “workerN”, where N is the number of the worker node).
Code Block
cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443
Check the node status:
Code Block kubectl get nodes
Deploy Private Docker Registry and Import Docker images
Note: Perform the following steps in the master1 node.
Extract and import the Docker images locally on the master1 node:
Code Block
$ mkdir /tmp/import
$ for f in images/*.gz; do IMG=$(basename "${f}" .gz); gunzip -c "${f}" > /tmp/import/"${IMG}"; done
$ for f in /tmp/import/*.tar; do ctr -n=k8s.io images import "${f}"; done
Install the gv-private-registry helm chart in the master1 node:
Replace $VERSION with the version present in the downloaded bundle. To check all the charts that have been downloaded, run ls charts.
Code Block
$ helm upgrade --install gv-private-registry charts/gv-private-registry-$VERSION.tgz --wait \
  --timeout=10m0s \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
Tag and push the Docker images to the local private Docker registry deployed in the master1 node:
Code Block
$ sh scripts/push-docker-images.sh
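If you want to confirm that the images landed in the private registry, the standard registry catalog endpoint can be queried from any node (assuming the registry is reachable on port 80, as per the firewall table above):
Code Block
curl http://private-docker-registry.local/v2/_catalog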
Install Helm charts
The following steps guide you through the installation of the dependencies required by DSPM and Synergy (Endpoint Agent).
Note: Perform the following steps in the master1 node.
Info: Replace …
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
Code Block
$ helm upgrade --install gv-essentials charts/gv-essentials-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set global.high_available=true \
  --set eck-operator.enabled=true \
  --set minio.replicas=4 \
  --set minio.mode=distributed \
  --set consul.server.replicas=3 \
  --set updateclusterid.enabled=false \
  --set backup.hour=1
Install Monitoring CRD:
Code Block
$ helm upgrade --install rancher-monitoring-crd charts/rancher-monitoring-crd-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --namespace=cattle-monitoring-system \
  --create-namespace
Install Monitoring:
Code Block
$ helm upgrade --install rancher-monitoring charts/rancher-monitoring-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --namespace=cattle-monitoring-system \
  --set global.high_available=true \
  --set loki-stack.loki.replicas=2 \
  --set prometheus.prometheusSpec.replicas=2 \
  --set prometheus.retention=5
Info: To expose Grafana via an ingress on the path …
Code Block
--set global.grafana_ingress.enabled=true
Check that all pods are Running with the command: kubectl get pods -A
Install DSPM/Synergy (Endpoint Agent) Helm Chart
Replace the following variables:
$VERSION with the version present in the downloaded bundle
$RESELLER with the reseller code (either getvisibility or forcepoint)
$PRODUCT with the product being installed (synergy or dspm or ultimate)
Code Block
$ helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set global.high_available=true \
  --set-string clusterLabels.environment=prod \
  --set-string clusterLabels.cluster_reseller=$RESELLER \
  --set-string clusterLabels.cluster_name=mycluster \
  --set-string clusterLabels.product=$PRODUCT
Install Kube-fledged
Note: Perform the following steps in the master1 node.
Install the gv-kube-fledged helm chart.
Replace $VERSION with the version present in the downloaded bundle. To check all the charts that have been downloaded, run ls charts.
Code Block
$ helm upgrade --install gv-kube-fledged charts/gv-kube-fledged-$VERSION.tgz -n kube-fledged \
  --timeout=10m0s \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --create-namespace
Create and deploy imagecache.yaml:
Code Block
$ sh scripts/create-imagecache-file.sh
$ kubectl apply -f scripts/imagecache.yaml
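To check the status of the image cache you just applied (optional):
Code Block
kubectl describe -f scripts/imagecache.yaml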
Install custom artifacts
Models and other artifacts, such as custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. The procedure to install custom artifact bundles on an HA cluster is the same as for a single-node cluster; see the single-node guide above.
Upgrade
View current values in config file for each chart
Before upgrading each chart, you can check the settings used in the current installation with helm get values <chartname>.
If the current values differ from the defaults, you will need to change the parameters of the helm upgrade command for the chart in question.
For example, if the backup is currently set to run at 2 AM instead of the 1 AM default, change --set backup.hour=1 to --set backup.hour=2.
Below is a mostly default config.
...
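For example, to inspect the values currently applied to the gv-essentials release:
Code Block
helm get values gv-essentials --kubeconfig /etc/rancher/k3s/k3s.yaml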
DSPM/Synergy/Ultimate Helm Chart
To upgrade DSPM/Synergy/Ultimate you must:
Download the new bundle
Import Docker images
Install the DSPM/Synergy/Ultimate Helm Chart
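The chart upgrade itself re-runs the same helm upgrade --install command used during installation, pointed at the chart from the new bundle, for example (single-node flags shown; adjust to the values of your current installation):
Code Block
helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set-string clusterLabels.environment=prod \
  --set-string clusterLabels.cluster_reseller=$RESELLER \
  --set-string clusterLabels.cluster_name=mycluster \
  --set-string clusterLabels.product=$PRODUCT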
GetVisibility Essentials Helm Chart
...
Install custom artifacts
...