Installing Kubernetes with K3s

Introduction

K3s is a lightweight, CNCF-certified (Cloud Native Computing Foundation) Kubernetes distribution. It is designed for production workloads in resource-constrained environments, edge computing, IoT, and CI/CD pipelines.

What is K3s?

K3s is "five less than K8s" - a lightweight version of Kubernetes (K8s) that offers:

  • 📦 Single Binary: Less than 100 MB
  • 🚀 Fast: Set up in seconds
  • 💾 Low Memory: As little as 512 MB of RAM
  • 🔧 Easy: Simple installation and management
  • ✅ Production Ready: Full Kubernetes features
  • 🔄 Auto-Updates: Built-in update management

Differences Between K3s and K8s

Feature           | K8s (Standard)     | K3s
Binary Size       | ~1.5GB             | ~100MB
Memory Usage      | 2GB+               | 512MB+
Installation      | Complex            | Single command
Storage           | External           | SQLite built-in
Container Runtime | containerd/Docker  | containerd built-in
Load Balancer     | External           | ServiceLB built-in

Advantages of K3s

  • ✅ Lightweight: Perfect for development and testing
  • ✅ Edge Computing: Ideal for IoT and edge devices
  • ✅ Resource Efficient: Minimal CPU and RAM usage
  • ✅ Quick Setup: A production cluster in minutes
  • ✅ Full K8s API: 100% compatible with Kubernetes
  • ✅ Single Binary: Easy backup and restore
  • ✅ Auto-Updates: Built-in upgrade mechanism

System Requirements

Minimum Requirements

Single Node (Server):

  • CPU: 1 core
  • RAM: 512 MB
  • Storage: 5 GB
  • OS: Linux (Ubuntu 20.04+, Debian 11+, CentOS 8+)

Multi-Node Cluster:

  • Server Node: 2 cores, 2GB RAM
  • Agent Node: 1 core, 1GB RAM
  • Network: Stable connectivity between nodes

Recommended (Production):

  • Server Node: 4 cores, 4GB RAM, 50GB storage
  • Agent Node: 2 cores, 2GB RAM, 20GB storage
  • High Availability: 3+ server nodes (odd number)
  • Backup Storage: External storage for etcd backups
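
A quick way to check whether a host meets these minimums is to inspect CPU, memory, and disk from the shell. A minimal sketch using standard Linux tools; adjust the path if your data directory lives elsewhere:

# Check CPU cores, RAM, free disk space, and OS version before installing
nproc                  # number of CPU cores
free -h                # total and available RAM
df -h /                # free space on the root filesystem
cat /etc/os-release    # confirm the OS and version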

Port Requirements

Server Node:

  • 6443: Kubernetes API Server
  • 10250: Kubelet metrics
  • 2379-2380: etcd (HA setup)

Agent Node:

  • 10250: Kubelet metrics
  • 30000-32767: NodePort Services
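
Before joining nodes, it can help to confirm that these ports are reachable between machines. A minimal check with standard tools, where server-ip is a placeholder for your server node's address:

# On the server: confirm the API server and kubelet ports are listening
sudo ss -tlnp | grep -E '6443|10250'

# From an agent: test TCP connectivity to the server's API port
nc -zv server-ip 6443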

Installing K3s

Deployment Architecture

1. Single Node (Development)

┌─────────────────────────┐
│    K3s Server Node      │
│   (Control Plane +      │
│      Worker Node)       │
└─────────────────────────┘

2. Multi-Node (Production)

┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│ K3s Server 1 │  │ K3s Server 2 │  │ K3s Server 3 │
│   (Control   │  │   (Control   │  │   (Control   │
│    Plane)    │  │    Plane)    │  │    Plane)    │
└──────┬───────┘  └──────┬───────┘  └──────┬───────┘
       │                 │                 │
       └─────────────────┴─────────────────┘
                         │
       ┌─────────────────┼─────────────────┐
       │                 │                 │
┌──────┴───────┐  ┌──────┴───────┐  ┌──────┴───────┐
│ K3s Agent 1  │  │ K3s Agent 2  │  │ K3s Agent 3  │
│   (Worker)   │  │   (Worker)   │  │   (Worker)   │
└──────────────┘  └──────────────┘  └──────────────┘

Method 1: Single Server Installation

Quick Install

This is the fastest way to set up a K3s development environment:

# Install K3s server
curl -sfL https://get.k3s.io | sh -

# Check installation
sudo systemctl status k3s

# Verify nodes
sudo k3s kubectl get nodes

Detailed Installation Steps

1. Prepare the System

# Update system
sudo apt update && sudo apt upgrade -y

# Install dependencies
sudo apt install -y curl wget git

# Disable swap (required for Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Enable IP forwarding
sudo tee /etc/sysctl.d/k3s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

2. Install K3s Server

# Install with options
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
--write-kubeconfig-mode 644 \
--disable traefik \
--disable servicelb

# Explanation of the options:
# --write-kubeconfig-mode 644: Make kubeconfig readable
# --disable traefik: Disable default ingress (we'll use nginx)
# --disable servicelb: Disable default LB (for custom setup)
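
To confirm that Traefik and ServiceLB were really skipped, list the kube-system pods right after installation; with both components disabled, no traefik or svclb pods should appear. A quick check using the bundled kubectl:

# No traefik or svclb pods should be listed
sudo k3s kubectl get pods -n kube-system | grep -E 'traefik|svclb' || echo "traefik and servicelb are disabled"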

3. Verify Installation

# Check service status
sudo systemctl status k3s

# Check nodes
sudo k3s kubectl get nodes

# Expected output:
# NAME STATUS ROLES AGE VERSION
# server-1 Ready control-plane,master 30s v1.28.x+k3s1

# Check pods
sudo k3s kubectl get pods -A

4. Configure kubectl Access

# Copy kubeconfig for the current user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

# Set proper permissions
chmod 600 ~/.kube/config

# Test kubectl
kubectl get nodes
kubectl cluster-info

5. Install kubectl (Optional)

# Install kubectl binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Make executable
chmod +x kubectl

# Move to PATH
sudo mv kubectl /usr/local/bin/

# Verify
kubectl version --client

Method 2: High Availability (HA) Setup

Prerequisites

  • 3 server nodes (odd number for quorum)
  • External database (PostgreSQL or MySQL) or embedded etcd - see the sketch after this list
  • Load balancer (optional)
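
If you prefer an external database over embedded etcd, K3s can be pointed at MySQL or PostgreSQL with --datastore-endpoint instead of --cluster-init. A sketch for the MySQL case; the credentials, host, and database name below are placeholders:

# First server with an external MySQL datastore (placeholder credentials)
curl -sfL https://get.k3s.io | sh -s - server \
--datastore-endpoint="mysql://username:password@tcp(db-host:3306)/k3s" \
--write-kubeconfig-mode 644 \
--tls-san your-loadbalancer-ip-or-domain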

Architecture

                ┌─────────────┐
                │    Load     │
                │  Balancer   │
                └──────┬──────┘
                       │
          ┌────────────┼────────────┐
          │            │            │
     ┌────▼───┐   ┌────▼───┐   ┌────▼───┐
     │Server 1│   │Server 2│   │Server 3│
     │ (etcd) │   │ (etcd) │   │ (etcd) │
     └────────┘   └────────┘   └────────┘

Setup First Server

# On server-1
curl -sfL https://get.k3s.io | sh -s - server \
--cluster-init \
--write-kubeconfig-mode 644 \
--tls-san your-loadbalancer-ip-or-domain \
--disable traefik

# Save token for other servers
sudo cat /var/lib/rancher/k3s/server/node-token

Add Additional Servers

# On server-2 and server-3
curl -sfL https://get.k3s.io | sh -s - server \
--server https://server-1-ip:6443 \
--token YOUR_NODE_TOKEN \
--write-kubeconfig-mode 644 \
--tls-san your-loadbalancer-ip-or-domain

# Replace:
# - server-1-ip: IP of the first server
# - YOUR_NODE_TOKEN: Token from the first server

Verify HA Cluster

# Check nodes
kubectl get nodes

# Should show all 3 servers:
# NAME STATUS ROLES AGE VERSION
# server-1 Ready control-plane,master 5m v1.28.x+k3s1
# server-2 Ready control-plane,master 3m v1.28.x+k3s1
# server-3 Ready control-plane,master 2m v1.28.x+k3s1

Method 3: Multi-Node Cluster (Server + Agent)

Setup Server Node

# On server node
curl -sfL https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--node-taint CriticalAddonsOnly=true:NoExecute

# Get token
sudo cat /var/lib/rancher/k3s/server/node-token
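
The CriticalAddonsOnly taint keeps ordinary workloads off the server so that application pods land on the agents. Once the node is up, you can verify the taint was applied:

# The server node should report the CriticalAddonsOnly taint; agents should show <none>
kubectl describe nodes | grep -i taints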

Add Agent Nodes

# On agent nodes (worker nodes)
curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 \
K3S_TOKEN=YOUR_NODE_TOKEN sh -

# Replace:
# - server-ip: IP of the server node
# - YOUR_NODE_TOKEN: Token from the server

Verify Cluster

kubectl get nodes

# Expected output:
# NAME STATUS ROLES AGE VERSION
# server Ready control-plane,master 5m v1.28.x+k3s1
# agent-1 Ready <none> 3m v1.28.x+k3s1
# agent-2 Ready <none> 2m v1.28.x+k3s1

Configuration Options

Installation Environment Variables

# Custom installation examples:

# 1. Specify K3s version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -

# 2. Custom data directory
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--data-dir /opt/k3s" sh -

# 3. Disable components
curl -sfL https://get.k3s.io | sh -s - \
--disable traefik \
--disable servicelb \
--disable metrics-server

# 4. Custom cluster CIDR
curl -sfL https://get.k3s.io | sh -s - \
--cluster-cidr 10.42.0.0/16 \
--service-cidr 10.43.0.0/16

# 5. With Docker runtime
curl -sfL https://get.k3s.io | sh -s - --docker

# 6. Custom node labels
curl -sfL https://get.k3s.io | sh -s - \
--node-label environment=production \
--node-label region=us-east

Configuration File

Create /etc/rancher/k3s/config.yaml:

# K3s server configuration
write-kubeconfig-mode: "0644"
tls-san:
  - "k3s.example.com"
  - "192.168.1.100"

# Disable default components
disable:
  - traefik
  - servicelb

# Cluster networking
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
cluster-dns: "10.43.0.10"

# Node configuration
node-name: "k3s-server-01"
node-label:
  - "environment=production"
  - "zone=az1"

# Kubelet configuration
kubelet-arg:
  - "max-pods=150"
  - "eviction-hard=memory.available<200Mi"

Then install:

curl -sfL https://get.k3s.io | sh -
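
The installer reads /etc/rancher/k3s/config.yaml automatically, so no extra flags are needed. One way to sanity-check that the file was picked up is to look for the node name and labels it defines (these values come from the example config above):

# The node should appear with the custom name and labels from config.yaml
sudo k3s kubectl get nodes --show-labels | grep k3s-server-01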

Post-Installation Setup

1. Install Helm

Helm is the package manager for Kubernetes:

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify installation
helm version

# Add common repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

2. Install Nginx Ingress Controller

# Install via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install ingress-nginx
kubectl create namespace ingress-nginx

helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.service.type=LoadBalancer

# Verify installation
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx

3. Install Cert-Manager (SSL/TLS)

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# Verify installation
kubectl get pods -n cert-manager

# Create ClusterIssuer for Let's Encrypt
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
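
After applying the ClusterIssuer, cert-manager should register the ACME account and mark the issuer Ready. A quick check:

# READY should report True once the ACME account has been registered
kubectl get clusterissuer letsencrypt-prod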

4. Install Metrics Server

# K3s includes metrics-server by default
# To verify:
kubectl top nodes
kubectl top pods -A

5. Setup Storage Class

K3s includes local-path-provisioner by default:

# Check storage class
kubectl get storageclass

# Test PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF

# Verify
kubectl get pvc
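
Note that the local-path storage class typically uses WaitForFirstConsumer volume binding, so the PVC may stay Pending until a pod actually mounts it. A minimal test pod for the test-pvc created above:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# The PVC should move from Pending to Bound once the pod is scheduled
kubectl get pvc test-pvc
kubectl delete pod test-pvc-pod   # clean up when done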

kubectl Essentials

Basic Commands

# Cluster info
kubectl cluster-info
kubectl version

# Nodes
kubectl get nodes
kubectl describe node <node-name>

# Pods
kubectl get pods -A
kubectl get pods -n default
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> -f # Follow logs

# Deployments
kubectl get deployments
kubectl describe deployment <deployment-name>
kubectl scale deployment <name> --replicas=3

# Services
kubectl get services
kubectl describe service <service-name>

# Namespaces
kubectl get namespaces
kubectl create namespace my-namespace
kubectl delete namespace my-namespace

Deploy Test Application

# Create deployment
kubectl create deployment nginx --image=nginx:latest

# Expose as service
kubectl expose deployment nginx --port=80 --type=NodePort

# Get service details
kubectl get svc nginx

# Access application
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://localhost:$NODE_PORT
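
The complete example below also creates Deployment and Service objects named nginx, so remove this quick test first to avoid clashing with them:

# Clean up the test resources before applying the full manifest
kubectl delete service nginx
kubectl delete deployment nginx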

Complete Example: Deploy Nginx

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    secretName: nginx-tls
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Apply:

kubectl apply -f nginx-deployment.yaml
kubectl get all
kubectl get ingress

Maintenance & Management

Update K3s

# Check current version
k3s --version

# Update to latest
curl -sfL https://get.k3s.io | sh -

# Update to specific version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -

# Restart service
sudo systemctl restart k3s

Backup & Restore

Backup etcd

# For embedded etcd (default)
sudo k3s etcd-snapshot save --name backup-$(date +%Y%m%d-%H%M%S)

# List snapshots
sudo k3s etcd-snapshot ls

# Snapshots stored in: /var/lib/rancher/k3s/server/db/snapshots/
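
For regular backups you could schedule the snapshot command with cron; K3s can also schedule snapshots itself via the --etcd-snapshot-schedule-cron server flag. The daily 02:00 job below is only an example:

# Example: take a snapshot every day at 02:00 (adjust the schedule and prune old snapshots as needed)
echo '0 2 * * * root /usr/local/bin/k3s etcd-snapshot save --name scheduled' | sudo tee /etc/cron.d/k3s-snapshot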

Restore from Backup

# Stop K3s
sudo systemctl stop k3s

# Restore snapshot
sudo k3s server \
--cluster-reset \
--cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/backup-20240101-120000

# Restart K3s
sudo systemctl start k3s

Uninstall K3s

# Server node
/usr/local/bin/k3s-uninstall.sh

# Agent node
/usr/local/bin/k3s-agent-uninstall.sh

# Clean up (if needed)
sudo rm -rf /var/lib/rancher/k3s
sudo rm -rf /etc/rancher/k3s

Troubleshooting

Check Service Status

# Service status
sudo systemctl status k3s

# View logs
sudo journalctl -u k3s -f

# Or
sudo tail -f /var/log/syslog | grep k3s

Network Issues

# Check iptables
sudo iptables -L -n -v

# Check network connectivity
kubectl run test --image=busybox --rm -it -- /bin/sh
# Inside pod:
# ping 8.8.8.8
# nslookup kubernetes.default

# Check CoreDNS
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

Resource Issues

# Check node resources
kubectl top nodes
kubectl describe node <node-name>

# Check pod resources
kubectl top pods -A

# Check events
kubectl get events -A --sort-by='.lastTimestamp'

Pod Not Starting

# Describe pod
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>
kubectl logs <pod-name> --previous

# Check events
kubectl get events --field-selector involvedObject.name=<pod-name>

Reset Cluster

# Stop K3s
sudo systemctl stop k3s

# Remove data
sudo rm -rf /var/lib/rancher/k3s/server/db

# Restart
sudo systemctl start k3s

Security Best Practices

1. Network Policies

# default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
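
Because this default-deny policy also blocks egress, pods in the namespace can no longer resolve DNS names. A companion policy that re-allows DNS traffic to CoreDNS is usually needed; a sketch, applied the same way as the other manifests:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF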

2. RBAC

# Create service account
kubectl create serviceaccount my-app-sa

# Create role
kubectl create role pod-reader \
--verb=get,list,watch \
--resource=pods

# Bind role
kubectl create rolebinding my-app-binding \
--role=pod-reader \
--serviceaccount=default:my-app-sa
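
To confirm the binding behaves as intended, kubectl can impersonate the service account and check individual permissions:

# Should print "yes" for reads and "no" for writes
kubectl auth can-i list pods --as=system:serviceaccount:default:my-app-sa
kubectl auth can-i delete pods --as=system:serviceaccount:default:my-app-sa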

3. Pod Security

# pod-security.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
          - ALL

4. Firewall Configuration

# UFW (Ubuntu)
sudo ufw allow 6443/tcp # Kubernetes API
sudo ufw allow 10250/tcp # Kubelet
sudo ufw allow 80/tcp # HTTP
sudo ufw allow 443/tcp # HTTPS
sudo ufw enable

Monitoring & Logging

Install Prometheus & Grafana

# Add helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
kubectl create namespace monitoring

helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--set grafana.adminPassword=admin123

# Get Grafana URL
kubectl get svc -n monitoring prometheus-grafana

# Port forward to access
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80

# Access: http://localhost:3000
# Username: admin
# Password: admin123

Next Steps

Once K3s has been installed successfully:

  1. ✅ CI/CD Integration - Integrate with Gitea
  2. ✅ Workflow Implementation - Create pipelines
  3. ✅ Best Practices - Production guidelines
  4. ✅ Case Study - Real-world examples

Congratulations! Your K3s cluster is ready to use! 🚀