K3s Installation

Key Requirements

We use Kubernetes, an open-source container orchestration system, to manage our applications. At the moment the only supported Kubernetes distribution is K3s by SUSE (see the official K3s documentation), for both on-premise and cloud deployments.

The minimum requirement for the Kubernetes cluster is a single node (1 virtual machine) with the following specs:

                EDC         DDC / DSPM    EDC + DDC + DSPM
  CPU           8 cores     16 cores      20 cores
  Memory        32GB        64GB          80GB
  Storage       500GB       600GB         700GB
  Min available
  ext4 inodes   32M         39M           45M

The CPU must support the SSE4.1, SSE4.2, AVX, AVX2 and FMA instruction sets. Only the x86_64 architecture is supported; the minimum CPU speed is 2.2 GHz.

Storage and partition details

  • only SSD storage is supported

  • SWAP must be disabled

  • / (root) requires at least 20GB

  • /var requires at least 20GB

  • /var/lib/rancher requires at least 500GB (500GB applies to EDC; for other deployment types use the corresponding disk space from the table above)

  • /tmp requires at least 75GB

  1. if none of /var, /var/lib/rancher or /tmp is specifically assigned to a partition, you must assign the full 500GB to root

  2. if /var is specifically assigned to a partition but /var/lib/rancher is not, then you must assign the 500GB to /var

  3. if /var/lib/rancher is specifically assigned to a partition but /var is not, then you must assign the 500GB to /var/lib/rancher
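The CPU, swap and storage requirements above can be verified on the target node with standard Linux tooling; a non-destructive sketch (adjust the mount points to your partition layout):

```shell
# Check for the required CPU instruction sets (SSE4.1, SSE4.2, AVX, AVX2, FMA)
grep -m1 -oE 'sse4_1|sse4_2|avx2?|fma' /proc/cpuinfo | sort -u || true

# SWAP must be disabled: this should print nothing
swapon --show 2>/dev/null || true

# Free space and inodes on the relevant mount points
df -h / /var /tmp 2>/dev/null || true
df -i /var/lib/rancher 2>/dev/null || true
```

If any required instruction set is missing from the first command's output, or swapon prints an active swap device, the node does not meet the requirements.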

Operating System

Ubuntu 20.04 LTS Server is recommended; other supported operating systems include:

  • Ubuntu 22.04, 24.04

  • RHEL 8.8, 8.10, 9.2 and 9.4

  • Suse Linux 15.5

Only Server edition versions are supported, with no desktop environment installed. No other Linux distributions are supported.

Root access to the server is necessary during the deployment process and may also be required for support tickets and troubleshooting. Please ensure that the service user account accessing the server and deploying the K3s installer has sudo privileges to access and run the installer as root.

Firewall

  • Port 443/TCP must be open to allow clients to access the dashboard and API

  • To download application artifacts (Docker images and binaries), updates, and configuration files, the cluster requires a public internet connection with a minimum download speed of 40 Mbps and an upload speed of 8 Mbps. For a faster initial setup, a download speed of 100 Mbps or more is recommended.

K3s version support

1.23, 1.24, 1.26

Other requirements

  • Domain Name Service (DNS) with public name resolution enabled

  • Network Time Protocol (NTP) service configured

  • Internet access to a network-based repository for software update packages

  • Fixed private IPv4 address

  • Unique static hostname
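The DNS, NTP and host-identity requirements above can be spot-checked with standard tools; a minimal sketch (the hostname probed is taken from the Network settings section below, and timedatectl assumes a systemd host):

```shell
# DNS with public name resolution
getent hosts rancher.master.k3s.getvisibility.com || echo "DNS resolution FAILED"

# NTP service configured and synchronised (systemd hosts only)
timedatectl show -p NTPSynchronized 2>/dev/null || echo "timedatectl unavailable"

# Unique static hostname and fixed private IPv4 address
uname -n
ip -4 -brief addr 2>/dev/null || true
```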

For hardened systems, see: Deploying Product in CIS hardened OS or K3s

When deploying using RHEL / CentOS / Suse, see Prerequisites for k3s on RHEL/CentOS/Oracle Linux.

When deploying using Ubuntu:

  • disable ufw, systemd-resolved, apparmor

  • /var partition should not have noexec flag
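The Ubuntu bullets above can be verified (without changing anything) before running the installer; a non-destructive sketch that only reports what still needs to be disabled:

```shell
# Report Ubuntu services that must be disabled before installing K3s
for svc in ufw systemd-resolved apparmor; do
  if systemctl is-enabled --quiet "$svc" 2>/dev/null; then
    echo "NOTE: $svc is enabled; run: systemctl disable --now $svc"
  fi
done

# /var must not be mounted with the noexec flag
if findmnt -no OPTIONS /var 2>/dev/null | grep -q noexec; then
  echo "WARNING: /var is mounted noexec"
fi
```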

Installation

The k3s.sh installer accepts the following optional arguments:

  Argument                    Description
  SKIP_PRECHECK=true          skip all built-in checks
  SKIP_SYSTEM_CHECKS=true     skip hardware checks
  SKIP_NETWORK_CHECKS=true    skip connectivity checks
  ONLY_PRECHECK=true          run the precheck only and stop after that

[Screenshot: sample output after running the k3s.sh installer; note that no issues are reported.]

Run the kubectl registration command:

kubectl apply -f https://....k3s.getvisibility.com/v3/import/dxslsxcf84....yaml

Monitor the progress of the installation:  watch -c "kubectl get deployments -A" 

  • The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, traefik and cattle-cluster-agent) show at least "1" as "AVAILABLE"

  • In case of errors you can inspect the logs of a pod using  kubectl logs , e.g.  kubectl logs cattle-cluster-agent-d96d648d8-wjvl9 -n cattle-system
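As an alternative to watching the deployments by hand, a reasonably recent kubectl can block until everything reports Available; a sketch (the guard keeps it harmless on machines without kubectl):

```shell
# Wait up to 5 minutes for every deployment in every namespace to become Available
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=Available deployment --all --all-namespaces --timeout=300s
else
  echo "kubectl not found on this machine"
fi
```

The command exits non-zero if any deployment is still unavailable at the timeout, which makes it convenient in provisioning scripts.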

K3s support matrix

Please note that we don’t use Docker as the container runtime; instead we use containerd.

Why K3s ?

Kubernetes has been widely adopted in modern software development as it offers a powerful, portable and open-source platform that automates the management of containerized applications.

When setting up a Kubernetes environment, it comes in two flavours: vanilla Kubernetes and managed Kubernetes. With vanilla Kubernetes, a software development team has to obtain the upstream Kubernetes binaries, configure each component, and build the environment on the machine. On the other hand, managed Kubernetes comes pre-compiled and pre-configured with tools that enhance a certain focus area, such as storage, security, deployment, monitoring, etc. Managed Kubernetes versions are also known as Kubernetes distributions. Some popular Kubernetes distributions are Rancher, Red Hat OpenShift, Mirantis, VMware Tanzu, EKS, GKE and AKS.

Kubernetes distributions can have different components, which may cause applications that work in one distribution to misbehave or even crash in another. Some of the most important components that differ between distributions are:

  • Container Runtime: The container runtime is the software that is responsible for running containers. Each Kubernetes distribution may offer support for different container runtimes. Some popular container runtimes include Docker, containerd, CRI-O, CoreOS rkt, Canonical LXC and frakti, among others.

  • Storage: Storage is important for Kubernetes applications as it offers a way to persist data. Kubernetes’ Container Storage Interface (CSI) allows third-party vendors to easily create storage solutions for containerized applications. Some Kubernetes distributions build their own storage solutions while others integrate with existing third-party solutions. Popular storage solutions for Kubernetes include Amazon Elastic Block Store (EBS), GlusterFS, Portworx, Rook and OpenEBS, among others.

  • Networking: Kubernetes applications are typically broken down into container-based microservices which are hosted in different Pods, running on different machines. Networking implementations allow for seamless communication between these containerized components. Networking in Kubernetes is a complex task, and each distribution may rely on a different networking solution to facilitate communication between pods, services and the internet. Popular networking implementations include Flannel, Weave Net, Calico and Canal, among others.

In order to offer our customers a better and more seamless experience while configuring, running, upgrading and troubleshooting our products while also avoiding compatibility issues between different distributions we decided to officially support ONLY ONE Kubernetes distribution: K3s. The main reasons for choosing K3s are:

  1. Costs — K3s is 100% open source and there’s no need to pay for any expensive licenses.

  2. Less setup overhead — a lot of time is saved when setting up a new environment because you don’t need to go through a lengthy process of acquiring extra licenses based on how many CPU cores you have. Also, K3s can be installed using only one command.

  3. It supports many Linux distros — K3s supports popular Linux distributions, including open source ones, and it can run both on-premise and in the cloud (AWS, Azure, GCP).

  4. It’s fast and lightweight — K3s is packaged as a single binary under 100MB, and its lightweight architecture makes it faster than stock Kubernetes for the workloads that it runs.

  5. Easy to update — Thanks to its reduced dependencies.

  6. Batteries included — CRI, CNI, service load balancer, and ingress controller are included.

  7. Smaller attack surface — Thanks to its small size and reduced amount of dependencies.

  8. Certified — K3s is an official CNCF project that delivers a powerful certified Kubernetes distribution.

  9. Flexible — you can run K3s using single-node or multi-node cluster setup.

Network settings

Your network should be configured to allow the following public URLs to be accessible over port 443 (HTTPS), with HTTPS traffic bypassed (NOT intercepted):

  • https://assets.master.k3s.getvisibility.com (Custom K3s installation files)

  • https://images.master.k3s.getvisibility.com (Private Docker registry)

  • https://charts.master.k3s.getvisibility.com (Private Helm registry)

  • https://api.master.k3s.getvisibility.com (Priva

  • https://rancher.master.k3s.getvisibility.com (Rancher management server)

  • https://rancher.$RESELLER_NAME.k3s.getvisibility.com (Rancher management server, where $RESELLER_NAME is Getvisibility for direct customers)

  • https://prod-eu-west-1-starport-layer-bucket.s3.eu-west-1.amazonaws.com (Docker registry AWS CDN)

  • https://rpm.rancher.io (Rancher RPM repo for configuring SELinux packages on RHEL or CentOS)
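Reachability of these endpoints (and the absence of a blocking firewall or proxy) can be sanity-checked from the node with curl; a minimal sketch over a subset of the URLs listed above:

```shell
# Probe each required endpoint over HTTPS; failures suggest firewall/proxy issues
for url in https://assets.master.k3s.getvisibility.com \
           https://images.master.k3s.getvisibility.com \
           https://charts.master.k3s.getvisibility.com \
           https://rancher.master.k3s.getvisibility.com \
           https://rpm.rancher.io; do
  if curl -sS --max-time 10 -o /dev/null "$url" 2>/dev/null; then
    echo "OK          $url"
  else
    echo "UNREACHABLE $url"
  fi
done
```

Note this only confirms basic reachability; confirming that TLS is not intercepted requires inspecting the certificate chain (for example with `openssl s_client`).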

For more details on how to configure Rancher behind a proxy, see Configuring Rancher and Fleet agent to run behind a HTTP proxy.

Related content

  • Prerequisites for k3s on RHEL/CentOS/Oracle Linux

  • DSPM DRA - K3s Installation

  • Prerequisites for k3s on Ubuntu Linux

  • Deploying Product in CIS hardened OS or K3s

  • Configuring Rancher and Fleet agent to run behind a HTTP proxy

  • Reseller Keycloak Quick Installation Guide

Classified as Getvisibility - Partner/Customer Confidential