Air Gap Installation
You can install Synergy and DSPM in an air-gapped environment that is not directly connected to the Internet.
- 1 Single Node Installation
- 2 Multiple Node Installation (High Availability)
- 3 Upgrade
Single Node Installation
Install K3s
Make sure you have /usr/local/bin configured in your PATH (export PATH=$PATH:/usr/local/bin).
All the commands must be executed as the root user.
For RHEL, K3s needs the following package to be installed: k3s-selinux
(repo rancher-k3s-common-stable) and its dependencies container-selinux
(repo rhel-8-appstream-rhui-rpms) and policycoreutils-python-utils
(repo rhel-8-baseos-rhui-rpms). On systems without access to online repositories, the corresponding *.rpm package for each of the above dependencies should be copied to the server first and installed locally.
Other SUSE, CentOS, RedHat prerequisites: Prerequisites for k3s on RHEL/CentOS/Oracle Linux
Ubuntu prerequisites: Prerequisites for k3s on Ubuntu Linux
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
Extract the downloaded file:
tar -xf gv-platform-$VERSION.tar   # replace $VERSION according to the downloaded file
Prepare K3s for air-gap installation:
sudo su -
mkdir -p /var/lib/rancher/k3s/agent/images/
gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin
Before installation, it’s recommended to run the automatic checks (as root). PRODUCT_NAME is either “synergy” (Endpoint Agent), “dspm” (DSPM without the Endpoint Agent), or “ultimate” (DSPM plus the Endpoint Agent); if unsure, use “ultimate”:

cat scripts/k3s.sh | PRODUCT_NAME=ultimate ONLY_PRECHECK=true bash -
Install K3s:
A few more arguments can be used to customize the execution of the k3s script:
- SKIP_PRECHECK=true to skip the execution of the precheck script while installing the k3s service
- SKIP_SYSTEM_CHECKS=true to skip the system hardware checks during precheck
- SKIP_NETWORK_CHECKS=true to skip the network connectivity checks during precheck
Example:
cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true SKIP_PRECHECK=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=local-01
Wait about 30 seconds and check that K3s is running with the commands:

kubectl get pods -A
systemctl status k3s.service
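Instead of a fixed wait, you can poll until the cluster answers. A minimal sketch (the helper is generic; the commented usage assumes kubectl is on the PATH):

```shell
# Poll a command every 2 seconds until it succeeds or the timeout (seconds) elapses.
wait_for() {
  timeout=$1; shift
  elapsed=0
  until "$@" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 0
}

# Example usage:
# wait_for 60 kubectl get nodes
```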
Import Docker images
The steps below will manually deploy the necessary images to the cluster.
Import Docker images locally:
Install Helm charts
The following steps guide you through the installation of the dependencies required by DSPM and Synergy (Endpoint Agent).
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
Install Monitoring CRD:
Install Monitoring:
Check that all pods are Running with the command:

kubectl get pods -A
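To quickly spot pods that are not yet healthy, you can filter kubectl's output. A sketch, assuming kubectl's default columns where STATUS is the fourth field:

```shell
# Print pods whose STATUS is neither Running nor Completed.
# Reads `kubectl get pods -A` output on stdin; skips the header row.
not_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed"'
}

# Usage (assumes kubectl on PATH):
# kubectl get pods -A | not_ready
```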
Install DSPM/Synergy (Endpoint Agent) Helm Chart
Replace the following variables:
- $VERSION with the version that is present in the bundle that has been downloaded
- $RESELLER with the reseller code (either getvisibility or forcepoint)
- $PRODUCT with the product being installed (synergy, dspm, enterprise or ultimate)
Install custom artifact bundles
Models and other artifacts, like custom agent versions or custom consul configuration, can be shipped inside auto-deployable bundles. These bundles are Docker images that contain the artifacts to be deployed alongside scripts to deploy them. To create a new bundle or modify an existing one, follow this guide first: https://getvisibility.atlassian.net/wiki/spaces/GS/pages/65372391/Model+deployment+guide#1.-Create-a-new-model-bundle-or-modify-an-existing-one. The list of all the available bundles is inside the bundles/ directory of the models-ci project on GitHub.
After the model bundle is published (for example, images.master.k3s.getvisibility.com/models:company-1.0.1), you’ll have to generate a public link to this image by running the k3s-air-gap Publish ML models GitHub CI task. The task will ask you for the Docker image URL.
Once the task is complete, you’ll get a public URL to download the artifact in the summary of the task. After that, execute the following commands.
Replace the following variables:
- $URL with the URL to the model bundle provided by the task
- $BUNDLE with the name of the artifact, in this case company-1.0.1
Now you’ll need to execute the artifact deployment job. This job will unpack the artifacts from the Docker image into a MinIO bucket inside the on-premise cluster and restart any services that use them.
Replace the following variables:
- $GV_DEPLOYER_VERSION with the version of the model deployer available under charts/
- $BUNDLE_VERSION with the version of the artifact, in this case company-1.0.1
You can verify that everything went well by locating the ml-model job that was launched. The logs should look like this:
In addition, you can inspect the services that consume these artifacts to check that they have been correctly deployed. For example, for the models, open a shell inside the classifier containers and check the /models directory, or check the models-data bucket inside MinIO. Both should contain the expected models.
Multiple Node Installation (High Availability)
Prerequisites
Firewall Rules for Internal Communication
K3s needs the following ports to be accessible (Inbound and Outbound) by all other nodes running in the same cluster:
| Protocol | Port | Description |
|---|---|---|
| TCP | 6443 | Kubernetes API Server |
| UDP | 8472 | Required for Flannel VXLAN |
| TCP | 2379-2380 | Embedded etcd |
| TCP | 10250 | metrics-server for HPA |
| TCP | 9796 | Prometheus node exporter |
| TCP | 80 | Private Docker Registry |
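On firewalld-based systems (e.g. RHEL), the ports in the table above can be opened with firewall-cmd. A sketch that only prints the commands for review before running them; the port list mirrors the table:

```shell
# Emit the firewall-cmd invocations for the K3s inter-node ports.
k3s_port_rules() {
  for p in 6443/tcp 8472/udp 2379-2380/tcp 10250/tcp 9796/tcp 80/tcp; do
    echo "firewall-cmd --permanent --add-port=$p"
  done
  echo "firewall-cmd --reload"
}

# Review the output, then execute it on every node:
# k3s_port_rules | sudo sh
```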
All nodes in the cluster must have:
- Domain Name Service (DNS) configured
- Network Time Protocol (NTP) configured
- A fixed private IPv4 address
- A globally unique node name (use --node-name when installing K3s in a VM to set a static node name)
Firewall Rules for External Communication
The following port must be publicly exposed in order to allow users to access the Synergy (Endpoint Agent) or DSPM product:

| Protocol | Port | Description |
|---|---|---|
| TCP | 443 | DSPM/Synergy (Endpoint Agent) backend |
The user must not access the K3s nodes directly; instead, there should be a load balancer sitting between the end user and all the K3s nodes (master and worker nodes):
The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.
The load balancer must have health checks enabled which are used to monitor the health of the registered targets (nodes in the cluster) so that the load balancer can send requests to healthy nodes only.
The recommended health check configuration is:
- Timeout: 10 seconds
- Healthy threshold: 3 consecutive health check successes
- Unhealthy threshold: 3 consecutive health check failures
- Interval: 30 seconds
- Balance mode: round-robin
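One way to realize such a Layer 4 balancer with these health check parameters is an HAProxy TCP frontend. A sketch only; the node addresses are hypothetical placeholders:

```
# haproxy.cfg (sketch) — replace the server addresses with your node IPs
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

frontend k3s_https
    bind *:443
    default_backend k3s_nodes

backend k3s_nodes
    balance roundrobin
    default-server check inter 30s fall 3 rise 3
    server master1 10.0.0.11:443
    server master2 10.0.0.12:443
    server master3 10.0.0.13:443
    server worker1 10.0.0.14:443
```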
VM Count
At least 4 machines are required to provide high availability of the Getvisibility platform. The HA setup tolerates the failure of a single node.
Install K3s
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
Create at least 4 VMs with the same specs.

Extract the downloaded file on all the VMs:

tar -xf gv-platform-$VERSION.tar

Create a local DNS entry private-docker-registry.local across all the nodes resolving to the master1 node:

Prepare the K3s air-gap installation files:
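The private-docker-registry.local entry can be as simple as a hosts-file line on every node; a sketch with a hypothetical master1 address:

```
# /etc/hosts on every node — replace 10.0.0.11 with the private IP of master1
10.0.0.11   private-docker-registry.local
```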
Update the registries.yaml file across all the nodes.

Install K3s in the 1st master node:
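For reference, a minimal registries.yaml sketch (K3s reads it from /etc/rancher/k3s/registries.yaml) pointing K3s at the private registry over plain HTTP; the exact contents shipped in your bundle may differ:

```yaml
mirrors:
  "private-docker-registry.local":
    endpoint:
      - "http://private-docker-registry.local:80"
```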
To get started, launch a server node using the --cluster-init flag:

Check your first master node's status; it should be in the Ready state:

Use the following command to copy the TOKEN from this node, which will be used to join the other nodes to the cluster:
Also, copy the IP address of the 1st master node, which will be used by the other nodes to join the cluster.
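The token must be copied exactly, without trailing whitespace. A small sketch of reading it into a variable (the token file path comes from the guide; the commented usage runs on master1):

```shell
# Read a K3s join token from a file, stripping any surrounding whitespace
# so it can be passed via the K3S_TOKEN variable.
read_node_token() {
  tr -d '[:space:]' < "$1"
}

# On master1:
# K3S_TOKEN=$(read_node_token /var/lib/rancher/k3s/server/node-token)
```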
Install K3s in the 2nd master node:
Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.

Set --node-name to “master2”.

Set --server to the IP address of the 1st master node.

Check the node status:
Install K3s in the 3rd master node:
Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.

Set --node-name to “master3”.

Set --server to the IP address of the 1st master node.

Check the node status:
Install K3s in the 1st worker node:
Use the same approach to install K3s and to connect the worker node to the cluster group.
The installation parameters are different in this case. Run the following command:

Set --node-name to “worker1” (where the n in “workern” is the number of the worker node).

Check the node status:
Deploy Private Docker Registry and Import Docker images
Extract and import the Docker images locally on the master1 node.

Install the gv-private-registry helm chart in the master1 node.

Replace $VERSION with the version that is present in the bundle that has been downloaded. To check all the charts that have been downloaded, run ls charts.

Tag and push the Docker images to the local private Docker registry deployed in the master1 node:
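The retag step maps each bundle image reference to the private registry deployed on master1. A sketch of the name mapping only; the registry host comes from the DNS entry created earlier, and the example image name is hypothetical:

```shell
# Given a source image reference, build the corresponding name in the
# private registry by replacing the registry host component.
retag_target() {
  echo "private-docker-registry.local/${1#*/}"
}

# Usage (assumes docker on PATH):
# img=docker.io/getvisibility/consul:1.11.2
# docker tag "$img" "$(retag_target "$img")"
# docker push "$(retag_target "$img")"
```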
Install Helm charts
The following steps guide you through the installation of the dependencies required by DSPM and Synergy (Endpoint Agent).
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
Install Monitoring CRD:
Install Monitoring:
Check that all pods are Running with the command:

kubectl get pods -A
Install DSPM/Synergy (Endpoint Agent) Helm Chart
Replace the following variables:
- $VERSION with the version that is present in the bundle that has been downloaded
- $RESELLER with the reseller code (either getvisibility or forcepoint)
- $PRODUCT with the product being installed (synergy, dspm or ultimate)
Install Kube-fledged
Install the gv-kube-fledged helm chart.

Replace $VERSION with the version that is present in the bundle that has been downloaded. To check all the charts that have been downloaded, run ls charts.

Create and deploy imagecache.yaml:
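A minimal ImageCache sketch for kube-fledged (API group kubefledged.io/v1alpha2); the namespace and image names below are hypothetical, and in an HA setup the images would be referenced from the private registry:

```yaml
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  name: imagecache
  namespace: kube-fledged
spec:
  cacheSpec:
    - images:
        - private-docker-registry.local/getvisibility/classifier:1.0.0
```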
Install custom artifacts
Models and other artifacts, like custom agent versions or custom consul configuration, can be shipped inside auto-deployable bundles. The procedure to install custom artifact bundles on an HA cluster is the same as in the single-node cluster case. Take a look at the guide for single-node clusters above.
Upgrade
View current values in the config file for each chart

Before upgrading each chart, you can check the settings used in the current installation with helm get values <chartname>.

If the current values are different from the defaults, you will need to change the parameters of the helm upgrade command for the chart in question. For example, if the backup is currently set to run at 2 AM instead of the 1 AM default, change --set backup.hour=1 to --set backup.hour=2.

Below is a mostly default config.
DSPM/Synergy/Ultimate Helm Chart
To upgrade DSPM/Synergy/Ultimate you must:
Download the new bundle
Import Docker images
Install DSPM/Synergy/Ultimate Helm Chart
GetVisibility Essentials Helm Chart
To upgrade the GV Essential chart you must:
Download the new bundle
Import Docker images
Run the command from Install Getvisibility Essentials under the Install Helm charts section
Install custom artifacts
Models and other artifacts, like custom agent versions or custom consul configuration, can be shipped inside auto-deployable bundles. The procedure to upgrade custom artifact bundles is the same as the installation procedure; take a look at the guides above for single-node and multi-node installations.
Classified as Getvisibility - Partner/Customer Confidential