Runbook: Configuring a HA K3s cluster

Our K3s HA setup consists of 4 homogeneous nodes (3 master nodes + 1 worker node) and can withstand a single-node failure with a short failover disruption (roughly 3 to 6 minutes).

With this HA setup we can achieve a monthly uptime of 99.9%, i.e. a maximum of roughly 43 minutes of downtime per month (0.1% of a 30-day month is about 43 minutes).

Prerequisites

Please refer to https://getvisibility.atlassian.net/wiki/spaces/GS/pages/88801305/K3s+Installation#Requirements for the node specs of the product you’ll be installing.

The minimum spec allowed for a HA node is 8 CPUs, 32GB of RAM and 500GB of free SSD disk space. All nodes should also have the same spec and OS.

Networking

Internal

We recommend running the K3s nodes in a 10 Gb, low-latency private network for maximum security and performance.

K3s needs the following ports to be accessible by all other nodes running in the same cluster:

Protocol    Port        Description
TCP         6443        Kubernetes API Server
UDP         8472        Required for Flannel VXLAN
TCP         2379-2380   Embedded etcd
TCP         10250       metrics-server for HPA
TCP         9796        Prometheus node exporter

The ports above must not be publicly exposed, as doing so would allow anyone to access your cluster. Always run your nodes behind a firewall, security group or private network that blocks external access to these ports.
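
As an illustration only, the following rules restrict these ports to cluster-internal traffic, assuming Ubuntu nodes with ufw and a cluster subnet of 10.0.0.0/24 (both are placeholders; adjust to your environment and firewall tooling):

# Allow the K3s ports only from the private cluster subnet
sudo ufw allow from 10.0.0.0/24 to any port 6443 proto tcp        # Kubernetes API Server
sudo ufw allow from 10.0.0.0/24 to any port 8472 proto udp        # Flannel VXLAN
sudo ufw allow from 10.0.0.0/24 to any port 2379:2380 proto tcp   # embedded etcd
sudo ufw allow from 10.0.0.0/24 to any port 10250 proto tcp       # metrics-server for HPA
sudo ufw allow from 10.0.0.0/24 to any port 9796 proto tcp        # Prometheus node exporter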

All nodes in the cluster must have:

  1. Domain Name Service (DNS) configured

  2. Network Time Protocol (NTP) configured

  3. Software Update Service - access to a network-based repository for software update packages

  4. Fixed private IPv4 address

  5. Globally unique node name (use --node-name when installing K3s in a VM to set a static node name)
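
These prerequisites can be spot-checked on each node with standard Linux commands, for example (assuming a systemd-based distribution):

timedatectl                                  # NTP: "System clock synchronized" should be yes
resolvectl status || cat /etc/resolv.conf    # DNS servers configured
hostname                                     # must be unique per node (or set --node-name on install)
ip -4 addr show                              # confirm the fixed private IPv4 address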

External

The following port must be publicly exposed in order to allow users to access the Synergy or Focus product:

Protocol    Port    Description
TCP         443     Focus/Synergy backend

Users must not access the K3s nodes directly; instead, a load balancer should sit between the end user and all the K3s nodes (master and worker nodes).

The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.

The load balancer must have health checks enabled to monitor the health of the registered targets (nodes in the cluster), so that it sends requests to healthy nodes only (an example configuration is sketched after the list below).

The recommended health check configuration is:

  • Timeout: 10 seconds

  • Healthy threshold: 3 consecutive health check successes

  • Unhealthy threshold: 3 consecutive health check failures

  • Interval: 30 seconds

  • Balance mode: round robin
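
The exact load balancer depends on your environment (a cloud Layer 4 load balancer, HAProxy, NGINX stream, etc.). As an illustration only, a minimal HAProxy configuration matching the settings above could look like the sketch below; HAProxy itself and the node IP addresses are assumptions, not requirements:

cat <<'EOF' | sudo tee /etc/haproxy/haproxy.cfg
defaults
    mode tcp                    # Layer 4 (TCP) load balancing
    timeout connect 10s
    timeout client  1m
    timeout server  1m
    timeout check   10s         # health check timeout: 10 seconds

frontend k3s_https
    bind *:443                  # listen for connections on port 443
    default_backend k3s_nodes

backend k3s_nodes
    balance roundrobin          # balance mode: round robin
    # interval 30s, 3 consecutive failures = unhealthy, 3 consecutive successes = healthy
    default-server check inter 30s fall 3 rise 3
    server master1 <MASTER1_PRIVATE_IP>:443
    server master2 <MASTER2_PRIVATE_IP>:443
    server master3 <MASTER3_PRIVATE_IP>:443
    server worker1 <WORKER1_PRIVATE_IP>:443
EOF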

Public

Please refer to https://getvisibility.atlassian.net/wiki/spaces/GS/pages/88801305/K3s+Installation#Proxy-settings for the list of URLs you need to allow in your corporate proxy in order to connect to our private registries.

Configuring K3s nodes

We need 3 master nodes and at least 1 worker node to run K3s in HA mode.

1st master node

To get started, launch a server node using the --cluster-init flag:

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init

Check your first master node's status; it should be in the Ready state:

kubectl get nodes

Use the following command to copy the TOKEN that will be used to join the other nodes to the cluster:

cat /var/lib/rancher/k3s/server/node-token

Also remember to copy the private IP address of the 1st master node, which will be used by the other nodes to join the cluster.

2nd master node

SSH into the 2nd server to join it to the cluster (a command sketch follows this list):

  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.

  2. Set --node-name to master2

  3. Set --server to the private static IP address of the 1st master node.
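
The exact command is not shown on this page; the following is a sketch based on the 1st master node command above and the standard K3s installer arguments (replace the placeholders with your token and the 1st master node's private IP):

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN="<TOKEN_FROM_MASTER1>" sh -s - server --node-name=master2 --server https://<MASTER1_PRIVATE_IP>:6443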

Check the node status:
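
kubectl get nodes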

3rd master node

SSH into the 3rd server to join it to the cluster (a command sketch follows this list):

  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.

  2. Set --node-name to master3

  3. Set --server to the private static IP address of the 1st master node.
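
As with the 2nd master node, a sketch of the join command (replace the placeholders):

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN="<TOKEN_FROM_MASTER1>" sh -s - server --node-name=master3 --server https://<MASTER1_PRIVATE_IP>:6443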

Check the node status:
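
kubectl get nodes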

1st worker node

SSH into the 4th server to join it to the cluster (a command sketch follows this list):

  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.

  2. Set --node-name to worker1

  3. Set --server to the private static IP address of the 1st master node.
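
Worker nodes join the cluster as agents. Assuming the GetVisibility install script accepts the standard K3s agent arguments, a sketch of the command is (replace the placeholders):

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_TOKEN="<TOKEN_FROM_MASTER1>" sh -s - agent --node-name=worker1 --server https://<MASTER1_PRIVATE_IP>:6443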

Joining additional worker nodes

You may create as many additional worker nodes as you want.

SSH into the server to join it to the cluster (a command sketch follows this list):

  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.

  2. Set --node-name to your worker node name (e.g. worker2, worker3, etc.)

  3. Set --server to the private static IP address of the 1st master node.
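
The command is the same sketch as for the 1st worker node, changing only the node name, e.g.:

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_TOKEN="<TOKEN_FROM_MASTER1>" sh -s - agent --node-name=worker2 --server https://<MASTER1_PRIVATE_IP>:6443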

Check the node status:
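
kubectl get nodes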

Register HA K3s Cluster to Rancher

Run the registration command that you generated through the Rancher UI or the license manager. You should then see all master and worker nodes of your cluster under Machine Pools on the Rancher dashboard.
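
To double-check the registration from the command line, the Rancher agents run in the cattle-system namespace (standard Rancher behaviour, not specific to this setup); the pods there should be Running:

kubectl get pods -n cattle-system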

Install Helm charts

GV Essentials

  1. Go to Apps > Charts and install the GetVisibility Essentials Helm chart:

    1. If you are installing Focus or Enterprise, click on Enable ElasticSearch.

    2. Configure the UTC hour (0-23) at which backups should be performed.

  2. Click on High Available and set:

    1. MinIO Replicas to 4

    2. MinIO Mode to distributed

    3. Consul Server replicas to 3

GV Monitoring

  1. Go to Apps > Charts and install the GetVisibility Monitoring Helm chart:

    1. Install into Project: Default

  2. Click on High Available and set:

    1. Prometheus replicas to 2

    2. Loki replicas to 2

Configure Fleet labels

  1. Go to the global menu Continuous Delivery > Clusters and click on Edit config for the cluster:

  2. For Synergy: add 3 labels product=synergy environment=prod high_available=true and press Save.

  3. For Focus: add 3 labels product=focus environment=prod high_available=true and press Save.

  4. For Enterprise: add 3 labels product=enterprise environment=prod high_available=true and press Save.
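
If you prefer the command line over the Rancher UI, the same labels can typically be applied to the Fleet cluster object directly (the cluster name and the fleet-default namespace below are assumptions; adjust them to your environment). For example, for Synergy:

kubectl label clusters.fleet.cattle.io -n fleet-default <CLUSTER_NAME> product=synergy environment=prod high_available=true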




Classified as Getvisibility - Partner/Customer Confidential