

Always try the precheck first

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | PRODUCT_NAME=enterprise ONLY_PRECHECK=true bash -
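
If the precheck output needs to be shared with support, a simple option is to capture it to a file as well (plain shell, no assumptions beyond the command above):

curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | PRODUCT_NAME=enterprise ONLY_PRECHECK=true bash - 2>&1 | tee precheck.log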

Check the Linux version

cat /etc/os-release

Storage details

Make sure there is no noexec flag on /var

Always disable swap! (see the non-interactive sketch after the commands below)

df -h
df -h /var
df -h /var/lib/rancher
cat /etc/fstab
sudo swapoff -a
sudo vi /etc/fstab      <- comment out the swap line, e.g. # UUID=xxxx-xxxx-xxxx-xxxx none swap sw 0 0
sudo reboot
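
A non-interactive equivalent for the noexec check and the fstab edit (a sketch; the sed pattern assumes a standard one-entry-per-line fstab, so review the result before rebooting):

findmnt -no OPTIONS /var | grep -qw noexec && echo "noexec is set on /var"      <- no output means the flag is absent (or /var is not a separate mount)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab                                  <- comments out swap entries; backup kept in /etc/fstab.bak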

Check memory

Check CPUs

grep MemTotal /proc/meminfo
nproc
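
To print the memory as a round number (plain awk over the same file; MemTotal is reported in kB):

awk '/MemTotal/ {printf "Memory: %.1f GiB\n", $2/1048576}' /proc/meminfo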

On Ubuntu, disable: apparmor and systemd-resolved

On RHEL/CentOS/SUSE, disable: firewalld, fapolicyd, and nm-cloud-setup (service and timer); also check FIPS mode (crypto.fips_enabled)

Restart k3s afterwards. Note the systemctl status commands below only check state - see the disable sketch after them.

systemctl status apparmor
systemctl status systemd-resolved
systemctl status firewalld
systemctl status fapolicyd
systemctl status nm-cloud-setup.service
systemctl status nm-cloud-setup.timer
sysctl crypto.fips_enabled
systemctl restart k3s.service
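
Actually disabling a service looks like this (a sketch using firewalld as the example; the same pattern applies to the other services listed above):

sudo systemctl disable --now firewalld
sudo systemctl restart k3s.service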

Check name resolution

Make sure no more than 2 nameservers are used - CoreDNS has issues with more

Verify SSL inspection bypass (see the curl check below)

cat /etc/resolv.conf
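
To count the configured nameservers directly (plain grep, no assumptions):

grep -c '^nameserver' /etc/resolv.conf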

Verify the entries with nslookup or dig, for example:

nslookup assets.master.k3s.getvisibility.com 8.8.8.8      <- replace 8.8.8.8 / 1.1.1.1 with customer-provided DNS IPs
dig @1.1.1.1 charts.master.k3s.getvisibility.com
curl -vL https://rancher.master.k3s.getvisibility.com/ping  <- if you don't see a Let's Encrypt cert, SSL inspection is NOT bypassed
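
Another way to confirm the bypass is to read the certificate issuer directly (standard openssl, same hostname as above):

echo | openssl s_client -connect rancher.master.k3s.getvisibility.com:443 -servername rancher.master.k3s.getvisibility.com 2>/dev/null | openssl x509 -noout -issuer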

Proxy settings

Check if a proxy is configured in the environment and in the k3s service env file

env | grep proxy
curl -I assets.master.k3s.getvisibility.com               <- to see if going via proxy
cat /etc/systemd/system/k3s.service.env

Should contain:

http_proxy="$PROXY_IP"
https_proxy="$PROXY_IP"
no_proxy="$NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"

If it doesn't contain them, add the entries and:

systemctl restart k3s.service
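
To append the entries non-interactively (a sketch; http://10.0.0.5:3128 and 10.0.0.10 are hypothetical placeholders for the proxy address and node IP - substitute the real values):

sudo tee -a /etc/systemd/system/k3s.service.env >/dev/null <<'EOF'
http_proxy="http://10.0.0.5:3128"
https_proxy="http://10.0.0.5:3128"
no_proxy="10.0.0.10,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
EOF
sudo systemctl restart k3s.service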

Check if the proxy is also set in Rancher

Dashboard proxy settings:

(Screenshot: image-20240628-124246.png)

Uninstall k3s

/usr/local/bin/k3s-uninstall.sh

Get cluster name

kubectl get secret/fleet-agent -n cattle-fleet-system --template='{{.data.clusterName}}' | base64 -d

Check product

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm get values platform-prod-platform 
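
If the release is not named platform-prod-platform, list all releases first to find it (standard helm, same KUBECONFIG):

helm list -A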

Grab logs

kubectl get events -A
journalctl -u k3s
journalctl -u k3s --since "7 days ago" --no-pager
journalctl -u k3s -f               <- follow
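
To collect these into files for a support ticket (plain shell redirection; the file names are arbitrary):

kubectl get events -A --sort-by=.lastTimestamp > events.log
journalctl -u k3s --since "7 days ago" --no-pager > k3s.log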

Get logs from failing pod (here: connector)

ubuntu@marek-ultimate:~$ kubectl get pods | grep connector
connector-generic-57b6fcb79c-zbxlz                                1/1     Running     2 (140m ago)    7d
ubuntu@marek-ultimate:~$ kubectl logs connector-generic-57b6fcb79c-zbxlz
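
If the pod has restarted (see the RESTARTS column above), the previous container's logs and the pod events are often more telling (standard kubectl flags, same pod name):

kubectl logs connector-generic-57b6fcb79c-zbxlz --previous
kubectl describe pod connector-generic-57b6fcb79c-zbxlz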

Check if the user has proper rights when SMB fails

smbclient //10.2.1.20/<share> -U username -W workgroup      <- <share> is the share name
ls                                                          <- run at the smb: prompt to list the share

In a second terminal window, capture the SMB traffic:

tcpdump -i any port 445

If smbclient is not available on the host, run it from a support-tools pod:

kubectl run -it --rm --image=images.master.k3s.getvisibility.com/gv-support-tools:0.2.0 -- bash
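
To save the capture for offline analysis (standard tcpdump; the file name is arbitrary):

tcpdump -i any port 445 -w smb-debug.pcap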
