Our K3s HA setup consists of 4 homogeneous nodes (3 master nodes + 1 worker node) and can withstand a single-node failure with a short failover disruption (between 3 and 6 minutes).
With our HA setup we can achieve a monthly uptime of 99.9% (a maximum of about 43 minutes of downtime per month).
Please refer to https://getvisibility.atlassian.net/wiki/spaces/GS/pages/88801305/K3s+Installation#Requirements for the node specs of the product you’ll be installing.
The minimum spec allowed for an HA node is 8 CPUs, 32 GB of RAM and 500 GB of free SSD disk space. All nodes should also have the same spec and OS.
We recommend running the K3s nodes on a 10 Gb low-latency private network for maximum security and performance.
K3s needs the following ports to be accessible by all other nodes running in the same cluster:
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API Server |
UDP | 8472 | Required for Flannel VXLAN |
TCP | 2379-2380 | Embedded etcd |
TCP | 10250 | metrics-server for HPA |
TCP | 9796 | Prometheus node exporter |
The ports above must not be publicly exposed, as doing so would open up your cluster to anyone. Always run your nodes behind a firewall/security group/private network that blocks external access to the ports listed above.
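For example, on nodes that use ufw you could restrict these ports to your private subnet as follows (a sketch only; 10.0.0.0/24 is a placeholder for your own private network range, and your environment may rely on firewalld or cloud security groups instead):
# Allow cluster-internal traffic only from the private subnet (10.0.0.0/24 is a placeholder)
sudo ufw allow from 10.0.0.0/24 to any port 6443 proto tcp       # Kubernetes API Server
sudo ufw allow from 10.0.0.0/24 to any port 8472 proto udp       # Flannel VXLAN
sudo ufw allow from 10.0.0.0/24 to any port 2379:2380 proto tcp  # embedded etcd
sudo ufw allow from 10.0.0.0/24 to any port 10250 proto tcp      # metrics-server
sudo ufw allow from 10.0.0.0/24 to any port 9796 proto tcp       # Prometheus node exporter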
All nodes in the cluster must have:
Domain Name Service (DNS) configured
Network Time Protocol (NTP) configured
Software Update Service - access to a network-based repository for software update packages
Fixed private IPv4 address
Globally unique node name (use --node-name when installing K3s in a VM to set a static node name)
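For example, on a systemd-based Linux distribution you can quickly verify these prerequisites on each node with commands such as the following (the exact commands depend on your OS):
timedatectl status     # "System clock synchronized: yes" indicates NTP is working
cat /etc/resolv.conf   # shows the configured DNS servers
hostnamectl            # shows the hostname, which must be unique across the cluster
ip -4 addr show        # shows the node's fixed private IPv4 address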
The following port must be publicly exposed to allow users to access the Synergy or Focus product:
Protocol | Port | Description |
---|---|---|
TCP | 443 | Focus/Synergy backend |
Users must not access the K3s nodes directly; instead, a load balancer should sit between the end users and all the K3s nodes (master and worker nodes):
The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.
The load balancer must have health checks enabled, which monitor the health of the registered targets (the nodes in the cluster) so that the load balancer sends requests only to healthy nodes.
The recommended health check configuration is as follows (a sample load balancer configuration is sketched after this list):
Timeout: 10 seconds
Healthy threshold: 3 consecutive health check successes
Unhealthy threshold: 3 consecutive health check failures
Interval: 30 seconds
Balance mode: round robin
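As an illustration only, a Layer 4 load balancer matching the settings above could be configured with HAProxy roughly as follows (a minimal sketch, not a requirement to use HAProxy; the frontend/backend names and node IP placeholders are ours and must be replaced with your own values):
# /etc/haproxy/haproxy.cfg (excerpt) -- illustrative sketch only
defaults
    mode tcp                                  # Layer 4 (TCP) load balancing
    timeout connect 10s
    timeout client  1m
    timeout server  1m
    timeout check   10s                       # health check timeout: 10 seconds
frontend k3s_https
    bind *:443                                # listen for user connections on port 443
    default_backend k3s_nodes
backend k3s_nodes
    balance roundrobin                        # round robin balance mode
    # inter 30s = check interval, rise 3 / fall 3 = healthy/unhealthy thresholds
    server master1 <master1-private-ip>:443 check inter 30s rise 3 fall 3
    server master2 <master2-private-ip>:443 check inter 30s rise 3 fall 3
    server master3 <master3-private-ip>:443 check inter 30s rise 3 fall 3
    server worker1 <worker1-private-ip>:443 check inter 30s rise 3 fall 3
Any equivalent TCP (Layer 4) load balancer, such as a cloud network load balancer, can be used instead as long as it implements the health check settings above.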
Please refer to https://getvisibility.atlassian.net/wiki/spaces/GS/pages/88801305/K3s+Installation#Proxy-settings for the list of URLs you need to allow in your corporate proxy in order to connect to our private registries.
The nodes must always have public internet access. In the event of a single-node failure, the remaining nodes will take over and reach out to our private registries to download the Docker images necessary to run our services.
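As a quick sanity check (not an exhaustive test of every required URL), you can verify from each node that the installer host used in the commands below is reachable through your proxy/firewall:
# Should print 200 if the host is reachable over HTTPS
curl -sfL -o /dev/null -w '%{http_code}\n' https://assets.master.k3s.getvisibility.com/k3s/k3s.sh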
We need 3 master nodes and at least 1 worker node to run K3s in HA mode: with 3 master nodes the embedded etcd cluster can keep its quorum (2 out of 3) when a single node fails.
The nodes must be homogeneous, with the same number of CPUs and the same amount of RAM and disk space.
To get started, launch a server node using the --cluster-init flag:
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.24.9+k3s2" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init
Check the status of your first master node; it should be in the Ready state:
kubectl get nodes
Use the following command to copy the TOKEN that will be used to join the other nodes to the cluster:
cat /var/lib/rancher/k3s/server/node-token
Also, don’t forget to copy the private IP address of the 1st master node, which will be used by the other nodes to join the cluster.
SSH into the 2nd server to join it to the cluster:
Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
Set --node-name to master2.
Set --server to the private static IP address of the 1st master node.
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.24.9+k3s2" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of master1>:6443
Check the node status:
kubectl get nodes
SSH into the 3rd server to join it to the cluster:
Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
Set --node-name to master3.
Set --server to the private static IP address of the 1st master node.
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.24.9+k3s2" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of master1>:6443
Check the node status:
kubectl get nodes
SSH into the 4th server to join it to the cluster:
Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
Set --node-name to worker1.
Set --server to the private static IP address of any master node (e.g. the 1st master node).
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.24.9+k3s2" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443
You may create as many additional worker nodes as you want.
SSH into the server to join it to the cluster:
Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
Set --node-name to your worker node name (e.g. worker2, worker3, etc.).
Set --server to the private static IP address of any master node.
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.24.9+k3s2" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=workerX --server https://<ip or hostname of any master node>:6443
Check the node status:
kubectl get nodes
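At this point all four nodes should be listed as Ready. The output should look roughly like the following (names, ages and versions will differ; the three server nodes should carry the control-plane,etcd,master roles):
NAME      STATUS   ROLES                       AGE   VERSION
master1   Ready    control-plane,etcd,master   25m   v1.24.9+k3s2
master2   Ready    control-plane,etcd,master   18m   v1.24.9+k3s2
master3   Ready    control-plane,etcd,master   12m   v1.24.9+k3s2
worker1   Ready    <none>                      5m    v1.24.9+k3s2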
You may run the registration command that you generated through the Rancher UI or through the license manager. You should then see all master and worker nodes in your cluster under Machine Pools on the Rancher dashboard:
Go to Apps > Charts and install the GetVisibility Essentials Helm chart:
If you are installing Focus or Enterprise, click on Enable ElasticSearch.
Configure the UTC hour (0-23) at which backups should be performed:
Click on High Available and set:
MinIO Replicas to 4
MinIO Mode to distributed
Consul Server replicas to 3
Go to Apps > Charts and install the GetVisibility Monitoring Helm chart:
Install into Project: Default
Click on High Available and set:
Prometheus replicas to 2
Loki replicas to 2
Go to the global menu Continuous Delivery > Clusters and click on Edit config for the cluster:
For Synergy: add the 3 labels product=synergy, environment=prod, high_available=true and press Save.
For Focus: add the 3 labels product=focus, environment=prod, high_available=true and press Save.
For Enterprise: add the 3 labels product=enterprise, environment=prod, high_available=true and press Save.
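If you prefer the command line over the Rancher UI, the same labels can usually be applied to the cluster object used by Continuous Delivery (Fleet) with kubectl; the namespace and cluster name below are assumptions and must be adjusted to your environment (the example uses the Focus labels):
# Run against the Rancher management cluster; fleet-default and <cluster-name> are assumptions
kubectl label clusters.fleet.cattle.io -n fleet-default <cluster-name> product=focus environment=prod high_available=true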