K3s Installation
Key Requirements
We use Kubernetes, an open-source container orchestration system, to manage our applications. At the moment the only supported Kubernetes distribution is K3s (click here for the official documentation) by SUSE, for both on-premise and cloud deployments.
The minimum requirement for the Kubernetes cluster is a single node (one virtual machine) with the following specs:
| | EDC | DDC / DSPM | EDC + DDC + DSPM |
|---|---|---|---|
| CPU | 8 cores | 16 cores | 20 cores |
| Memory | 32 GB | 64 GB | 80 GB |
| Storage | 500 GB (min. available inodes for ext4: 32M) | 600 GB (min. available inodes for ext4: 39M) | 700 GB (min. available inodes for ext4: 45M) |

The following requirements apply to every edition:

CPU: the CPU must support the SSE4.1, SSE4.2, AVX, AVX2 and FMA instruction sets. Only the x86_64 architecture is supported. Minimum CPU speed is 2.2 GHz.
Operating System: Ubuntu 20.04 LTS Server is recommended; a small number of other operating systems are also supported. Only Server edition versions with no desktop environment installed are supported, and no other Linux distributions are supported. Root access to the server is necessary during the deployment process and may also be required for support tickets and troubleshooting. Please ensure that the service user account accessing the server and deploying the K3s installer has sudo privileges so it can run the installer as root.
Firewall: see the Network settings section below.
K3s version support: 1.23, 1.24, 1.26
Other requirements: for hardened systems, see Deploying Product in CIS hardened OS or K3s. Additional steps apply when deploying using RHEL / CentOS / SUSE and when deploying using Ubuntu.
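Before running the installer you may want to confirm that the node meets these requirements. The following is a minimal sketch using standard Linux tools (it is not part of the official installer); the mount point checked below (/) is an assumption and should be replaced with whichever filesystem will hold the product data.

```bash
# Check architecture, core count and CPU model/speed
lscpu | grep -E 'Architecture|^CPU\(s\)|Model name|MHz'

# Confirm the required instruction sets are present (expect all five to appear)
grep -o -E 'sse4_1|sse4_2|avx2|avx|fma' /proc/cpuinfo | sort -u

# Check total memory in GiB
free -g

# Check free disk space and available inodes on the data filesystem (assumed here to be /)
df -h /
df -i /
```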
Installation
The K3s installer accepts the following optional arguments:

| Argument | Description |
|---|---|
| SKIP_PRECHECK=true | Skip all built-in checks |
| SKIP_SYSTEM_CHECKS=true | Skip hardware checks |
| SKIP_NETWORK_CHECKS=true | Skip connectivity checks |
| ONLY_PRECHECK=true | Run the precheck only and stop after that |
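These arguments are typically passed as environment variables when launching the installer. The installer script itself is distributed with the custom installation files and its exact name is not shown in this document, so the file name below is only a placeholder; a minimal sketch:

```bash
# NOTE: "gv-k3s-install.sh" is a placeholder name for the installer script
# shipped with the custom installation files; substitute the real one.

# Run only the built-in prechecks and stop:
sudo ONLY_PRECHECK=true ./gv-k3s-install.sh

# Install while skipping the hardware checks (e.g. on a below-spec lab machine):
sudo SKIP_SYSTEM_CHECKS=true ./gv-k3s-install.sh
```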
Run the kubectl registration command:
kubectl apply -f https://....k3s.getvisibility.com/v3/import/dxslsxcf84....yaml
Monitor the progress of the installation: watch -c "kubectl get deployments -A"
The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, traefik and cattle-cluster-agent) show at least "1" under "AVAILABLE".
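For a non-interactive alternative to watch (useful in provisioning scripts), kubectl wait can block until each deployment reports Available. This is a sketch that assumes the default K3s namespaces, with the core add-ons in kube-system and the Rancher agent in cattle-system:

```bash
# Block until the core K3s deployments become Available (default namespaces assumed)
for d in coredns local-path-provisioner metrics-server traefik; do
  kubectl -n kube-system wait deployment/"$d" --for=condition=Available --timeout=600s
done

# The Rancher cluster agent lives in the cattle-system namespace
kubectl -n cattle-system wait deployment/cattle-cluster-agent --for=condition=Available --timeout=600s
```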
In case of errors you can inspect the logs of a pod using kubectl logs, e.g. kubectl logs cattle-cluster-agent-d96d648d8-wjvl9 -n cattle-system
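Pod names include generated suffixes, so list the pods first to find the exact name and namespace to use; the pod name below is just the example from above:

```bash
# List all pods in all namespaces, including their status and restart counts
kubectl get pods -A

# Follow the logs of a specific pod (replace the name and namespace with your own)
kubectl logs -f cattle-cluster-agent-d96d648d8-wjvl9 -n cattle-system
```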
K3s support matrix
Please note that we don’t use Docker as the container runtime; instead we use containerd.
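A practical consequence is that docker CLI commands will not show the product's containers on the node. K3s bundles crictl for talking to containerd directly; a small sketch, assuming a default K3s install:

```bash
# List running containers through the embedded CRI tooling (instead of `docker ps`)
sudo k3s crictl ps

# List images pulled into containerd (instead of `docker images`)
sudo k3s crictl images
```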
Why K3s?
Kubernetes has been widely adopted in modern software development as it offers a powerful, portable and open-source platform that automates the management of containerized applications.
When setting up a Kubernetes environment, there are two flavours to choose from: vanilla Kubernetes and managed Kubernetes. With vanilla Kubernetes, a software development team has to take the upstream Kubernetes release and then build and configure the environment on each machine themselves. Managed Kubernetes, on the other hand, comes pre-compiled and pre-configured with tooling aimed at a certain focus area, such as storage, security, deployment or monitoring. Managed Kubernetes versions are also known as Kubernetes distributions. Some popular Kubernetes distributions are Rancher, Red Hat OpenShift, Mirantis, VMware Tanzu, EKS, GKE and AKS.
Kubernetes distributions can have different components, which may cause applications that work in one distribution to misbehave or even crash in another. Some of the most important components that differ between distributions are:
Container Runtime: The container runtime is the software responsible for running containers. Each Kubernetes distribution may support different container runtimes. Some popular container runtimes include Docker, containerd, CRI-O, CoreOS rkt, Canonical LXC and frakti, among others.
Storage: Storage is important for Kubernetes applications as it offers a way to persist their data. Kubernetes’ Container Storage Interface (CSI) allows third-party vendors to easily create storage solutions for containerized applications. Some Kubernetes distributions build their own storage solutions while others integrate with existing third-party solutions. Popular storage solutions for Kubernetes include Amazon Elastic Block Store (EBS), GlusterFS, Portworx, Rook and OpenEBS, among others.
Networking: Kubernetes applications are typically broken down into container-based microservices hosted in different pods, potentially running on different machines. Networking implementations allow seamless communication between these containerized components. Networking in Kubernetes is complex, and each distribution may rely on a different networking solution to facilitate communication between pods, services and the internet. Popular networking implementations include Flannel, Weave Net, Calico and Canal, among others.
To offer our customers a better and more seamless experience when configuring, running, upgrading and troubleshooting our products, and to avoid compatibility issues between different distributions, we decided to officially support ONLY ONE Kubernetes distribution: K3s. The main reasons for choosing K3s are:
Costs — K3s is 100% open source and there’s no need to pay for any expensive licenses.
Less setup overhead — a lot of time is saved when setting up a new environment because you don’t need to go through a lengthy process of acquiring extra licenses based on how many CPU cores you have. Also, K3s can be installed using only one command.
It supports many Linux distros — K3s supports popular Linux distributions, including open-source ones, and it can run both on-premise and in the cloud (AWS, Azure, GCP).
It’s fast and lightweight — K3s is packaged as a single binary of less than 100 MB, and its lightweight architecture gives it lower overhead than stock Kubernetes for the workloads it runs.
Easy to update — Thanks to its reduced dependencies.
Batteries included — CRI, CNI, service load balancer, and ingress controller are included.
Smaller attack surface — Thanks to its small size and reduced amount of dependencies.
Certified — K3s is an official CNCF project that delivers a powerful certified Kubernetes distribution.
Flexible — you can run K3s in a single-node or multi-node cluster setup.
Network settings
Your network should be configured so that the following public URLs are accessible over port 443 (HTTPS) and HTTPS traffic to them is bypassed (NOT intercepted); a quick connectivity check is sketched after the list:
https://assets.master.k3s.getvisibility.com (Custom K3s installation files)
https://images.master.k3s.getvisibility.com (Private Docker registry)
https://charts.master.k3s.getvisibility.com (Private Helm registry)
https://api.master.k3s.getvisibility.com (Priva
https://rancher.master.k3s.getvisibility.com (Rancher management server)
https://rancher.$RESELLER_NAME.k3s.getvisibility.com (Rancher management server, where $RESELLER_NAME is Getvisibility for direct customers)
https://prod-eu-west-1-starport-layer-bucket.s3.eu-west-1.amazonaws.com (Docker registry AWS CDN)
https://rpm.rancher.io (Rancher RPM repo for configuring SELinux packages on RHEL or CentOS)
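As a quick, unofficial reachability check from the node, a curl loop over the endpoints above can confirm that HTTPS connections succeed; adjust the list to include your reseller-specific Rancher URL if applicable:

```bash
# Confirm each required endpoint answers over HTTPS from this node.
# A non-2xx response on the root path is fine; we only care that the
# TLS connection is established and not blocked.
for url in \
  https://assets.master.k3s.getvisibility.com \
  https://images.master.k3s.getvisibility.com \
  https://charts.master.k3s.getvisibility.com \
  https://api.master.k3s.getvisibility.com \
  https://rancher.master.k3s.getvisibility.com \
  https://prod-eu-west-1-starport-layer-bucket.s3.eu-west-1.amazonaws.com \
  https://rpm.rancher.io; do
  if curl -sS -o /dev/null --connect-timeout 10 "$url"; then
    echo "OK      $url"
  else
    echo "FAILED  $url"
  fi
done
```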
For more details on how to configure Rancher behind a proxy click here.