This is Step 1 of the DSPM DRA Setup

Requirements

Info

We use Kubernetes, an open-source container orchestration system, to manage our applications. At the moment the only supported Kubernetes distribution is K3s by SUSE (see the official K3s documentation), for both on-premise and cloud deployments.

The minimum requirement for the Kubernetes cluster is a single node (1 virtual machine) with the following specs:

...

DSPM

  • CPU: 20 cores

    ⚠️ The CPU must support the SSE4.1, SSE4.2, AVX, AVX2 and FMA instruction sets (see the check below).

    Only the x86_64 architecture is supported. The minimum CPU speed is 2.2 GHz.

  • Memory: 80 GB

  • Storage: 700 GB

    Minimum available inodes for ext4: 39M
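
To quickly confirm the CPU and memory specs on the target node, you can run the following commands (a minimal sketch, assuming a standard Linux /proc filesystem and the usual coreutils/util-linux tools):

Code Block
# Number of CPU cores (should be at least 20)
nproc

# Required instruction sets: sse4_1, sse4_2, avx, avx2 and fma should all be printed
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -wE 'sse4_1|sse4_2|avx|avx2|fma' | sort -u

# Architecture (should be x86_64) and CPU speed
lscpu | grep -E 'Architecture|MHz'

# Total memory in GB (should be at least 80)
free -g | awk '/^Mem:/ {print $2 " GB"}'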

Storage details

  • only SSD storage is supported

  • SWAP must be disabled

...

  • / (root) requires at least 20 GB

  • /var requires at least 20GB

  • /var/lib/rancher requires at least 700GB

  1. If neither /var nor /var/lib/rancher is specifically assigned to a partition, you must assign the full 700 GB to root.

  2. If /var is specifically assigned to a partition but /var/lib/rancher is not, you must assign the 700 GB to /var.

  3. If /var/lib/rancher is specifically assigned to a partition but /var is not, you must assign the 700 GB to /var/lib/rancher (you can verify the current layout with the commands below).
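
To check how storage is currently partitioned and whether the size and inode requirements are met, the following commands can be used (a minimal sketch, assuming standard Linux tools):

Code Block
# Mount points and available space for /, /var and /var/lib/rancher
df -h / /var /var/lib/rancher

# Available inodes (ext4 should report at least 39M available)
df -i / /var /var/lib/rancher

# Confirm that swap is disabled (output should be empty)
swapon --show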

Operating System

Ubuntu 20.04 LTS Server is recommended; other supported operating systems include:

  • Ubuntu 22.04, 24.04

  • RHEL

...

  • 9.2 and 9.4

  • CentOS 7.9

  • SUSE Linux 15.3

Only Server editions are supported, with no desktop environment installed. No other Linux distributions are supported.

Note

Root access to the server is necessary during the deployment process and may also be required for support tickets and troubleshooting. Please ensure that the service user account accessing the server and deploying the K3s installer has sudo privileges so it can run the installer as root.
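
One quick way to confirm that the service account can run the installer as root (a minimal sketch using standard sudo commands):

Code Block
# List the commands the current user is allowed to run via sudo
sudo -l

# Switch to a root shell for the installation
sudo -i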

Firewall

  • Port 443/TCP must be open to allow clients to access the dashboard and API

  • To download application artifacts (Docker images and binaries), updates, and configuration files, the cluster requires a public internet connection with a minimum download speed of 40 Mbps and an upload speed of 8 Mbps. For a faster initial setup, a download speed of 100 Mbps or more is recommended (a simple way to spot-check the download speed is sketched below).
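
To spot-check the available download bandwidth from the node, you can time a download with curl (a rough sketch; the URL is a placeholder, substitute any large file you are allowed to download):

Code Block
# Prints the average download speed in bytes per second
# 40 Mbps is roughly 5,000,000 bytes per second
curl -o /dev/null -sS -w 'average download speed: %{speed_download} bytes/s\n' https://example.com/large-test-file.bin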

K3s version support

...

1.24, 1.26

Other requirements

  • Domain Name Service (DNS) with public name resolution enabled

  • Network Time Protocol (NTP) service configured

  • Internet access to a network-based repository for software update packages

  • Fixed private IPv4 address

  • Unique static hostname (the commands below can help confirm these settings)
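
A few standard commands for confirming these settings on the node (a minimal sketch, assuming a systemd-based distribution):

Code Block
# NTP synchronisation status
timedatectl

# Hostname (should be unique and static)
hostnamectl

# Private IPv4 address (should be fixed)
ip -4 addr show

# Public name resolution against one of the required endpoints
getent hosts assets.master.k3s.getvisibility.com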

For hardened systems, see: Deploying Product in CIS hardened OS or K3s

When deploying using RHEL / CentOS / SUSE:

...

When deploying using Ubuntu:

  • disable ufw, systemd-resolved and apparmor (see the example below)
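
A sketch of how these services can be disabled on Ubuntu with systemd (note that disabling systemd-resolved removes the stub resolver, so make sure /etc/resolv.conf points at a working DNS server afterwards):

Code Block
systemctl disable --now ufw
systemctl disable --now systemd-resolved
systemctl disable --now apparmor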

Network settings

Your network must be configured so that the following public URLs are accessible over port 443 (HTTPS) and HTTPS traffic to them is bypassed (NOT intercepted) by any SSL inspection:

Code Block
https://assets.master.k3s.getvisibility.com (Custom K3s installation files)
https://images.master.k3s.getvisibility.com (Private Docker registry)
https://charts.master.k3s.getvisibility.com (Private Helm registry)
https://prod-eu-west-1-starport-layer-bucket.s3.eu-west-1.amazonaws.com (Docker registry AWS CDN)
https://rpm.rancher.io (Rancher RPM repo for configuring SELinux packages on RHEL or CentOS)
https://api.master.k3s.getvisibility.com (Private API server)
https://rancher.master.k3s.getvisibility.com (Rancher management server)
https://rancher.$RESELLER_NAME.k3s.getvisibility.com (Rancher management server, where $RESELLER_NAME is Getvisibility for direct customers)

For Forcepoint these are:
https://rancher.forcepointus.k3s.getvisibility.com/
https://rancher.forcepointapac.k3s.getvisibility.com/
https://rancher.forcepointemea.k3s.getvisibility.com/
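
To verify that these endpoints are reachable from the node, a simple loop over the hostnames can be used (a minimal sketch; any HTTP status code, even 403 or 404, means the endpoint is reachable, while 000 or a TLS/certificate error suggests it is blocked or intercepted):

Code Block
for url in \
  https://assets.master.k3s.getvisibility.com \
  https://images.master.k3s.getvisibility.com \
  https://charts.master.k3s.getvisibility.com \
  https://api.master.k3s.getvisibility.com \
  https://rancher.master.k3s.getvisibility.com; do
    echo -n "$url -> "
    # -s silences progress, -o discards the body, -w prints only the HTTP status code
    curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 10 "$url"
done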

Installation

...

If using a proxy, run the following before using curl:

Code Block
export http_proxy="$PROXY_IP"
export https_proxy="$PROXY_IP"
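
For example (hypothetical values; use the address and port of your own proxy, and depending on your environment you may also need no_proxy for local addresses):

Code Block
export http_proxy="http://10.1.2.3:3128"
export https_proxy="http://10.1.2.3:3128"
export no_proxy="localhost,127.0.0.1"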

...

Before installation, run the following command to check whether the product requirements are met.

Code Block
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh  | PRODUCT_NAME=ultimate ONLY_PRECHECK=true bash -

Run the K3s installer as the root user using the following command:

...

Note

You need to be logged in as the root user to perform the installation.

Tip

Here is the syntax for the k3s.sh installer to perform a full prerequisites check and start the installation:

Code Block
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | \
INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" PRODUCT_NAME=ultimate sh -s - server --node-name=local-01
Info

...

We provide a number of optional switches to use with the k3s.sh installer; they are described below.

  • SKIP_PRECHECK=true: skip all built-in checks

  • SKIP_SYSTEM_CHECKS=true: skip hardware checks

  • SKIP_NETWORK_CHECKS=true: skip connectivity checks

  • ONLY_PRECHECK=true: run the precheck only and stop after that
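
For example, a hypothetical run that performs the installation but skips the connectivity checks (all other values as in the command above):

Code Block
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | \
SKIP_NETWORK_CHECKS=true INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" PRODUCT_NAME=ultimate sh -s - server --node-name=local-01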

...


Note

For Forcepoint customers, Forcepoint’s SE generates the DSPM license key, which is shared with the customer via email.

For all other partner customers, Getvisibility will provide the kubectl registration command.

Run the kubectl registration command:

Code Block
# The command below is just an example; it will not work during deployment!
kubectl apply -f https://....k3s.getvisibility.com/v3/import/dxslsxcf84....yaml
Warning

For security reasons, the registration command can be used only once; it becomes invalid after the first use. If you need to run it again, you must contact the support team for a new registration command.

Monitor the progress of the installation:  watch -c "kubectl get deployments -A" 

  • The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, traefik and cattle-cluster-agent) show at least "1" in the "AVAILABLE" column

  • In case of errors you can inspect the logs of a pod using  kubectl logs , e.g.  kubectl logs cattle-cluster-agent-d96d648d8-wjvl9 -n cattle-system (a few more inspection commands are sketched below)
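
A few additional kubectl commands that are often useful when a deployment does not become available (a generic sketch; the pod name above is just an example from one installation):

Code Block
# List all pods in all namespaces and their current status
kubectl get pods -A

# Show events and status details for a specific pod (replace name and namespace)
kubectl describe pod <pod-name> -n <namespace>

# Confirm the cluster node is Ready
kubectl get nodes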

Now, go to Step 2, which is available via this link –

...