Kubernetes Homelab

2024-06-20

   I’ve been using my old laptop as a home server for about a year. Initially, I relied on Docker Compose to manage the different services, and for a small number of users it’s a perfectly good way to run a homelab. But I wanted to move to Kubernetes: it handles several concurrent users better, and setting it up is fun. So I’m upgrading to Kubernetes to deploy a media server along with a monitoring solution. I’m also exposing these services through a domain name, making them accessible from anywhere on the internet.

Why choose Kubernetes over docker-compose?#

  1. Automatic Scaling: Kubernetes can scale workloads up or down based on demand. When more users access a service, Kubernetes rolls out additional pods (instances) as needed and scales back down when demand decreases.

  2. High Reliability: Kubernetes ReplicaSets ensure that a specified number of pod replicas are running at any given time, thus maintaining the desired level of redundancy. With multiple replicas/instances running, user traffic is efficiently distributed, ensuring smooth performance.

  3. Self-Healing: Kubernetes can automatically detect and replace failing containers, rescheduling them as needed to ensure minimal disruption to your services.

  4. Extensive Ecosystem: Kubernetes offers a vast range of plugins and tools, making it easy to integrate with other technologies and services, enhancing the functionality and efficiency of your homelab server.

This makes Kubernetes not just a powerful tool but an essential upgrade for anyone looking to manage a homelab server effectively, especially with multiple users in mind.

That said, a system with decent specs is recommended; on severely underpowered hardware it’s a lost cause.

Setup requirements#

A Linux Server/Desktop#

My system specs are a 4-core CPU, 6GB RAM, and a 1TB disk. These are below the recommended specs for a media server, but workable to a certain extent.

k3s Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB#

I’m installing k3s on my machine, a lightweight Kubernetes distribution. By default the machine acts as both the master (control-plane) node and a worker node at once, but other worker nodes can be added easily.

curl -sfL https://get.k3s.io | sh -       # Installation command 

K3s runs as a daemon on the server. The server’s IP address will be used later to map domain names in Cloudflare Tunnels.
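To confirm the install worked and to grab the node’s IP for the tunnel mapping later, a quick check; in the optional worker-join command below, 192.168.1.10 is a placeholder for your own server’s address:

```shell
# Show the node and its INTERNAL-IP (the IP used for the tunnel mapping)
sudo k3s kubectl get nodes -o wide

# Optional: join another machine as a worker node.
# The token lives at /var/lib/rancher/k3s/server/node-token on the server;
# replace 192.168.1.10 with your server's actual IP.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -
```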

Best practices:

  • The initial kubeconfig is at /etc/rancher/k3s/k3s.yaml

You can set it as the default config too:

mkdir ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 600 ~/.kube/config
  • Put these lines in your shell config file so they persist across sessions:
alias k='sudo k3s kubectl'
alias kubectl='sudo k3s kubectl'
export KUBECONFIG=/home/pawan/.kube/config

A domain name#

Either buy a cheap one or apply for a free one, then set the domain’s nameservers to Cloudflare’s.

Setting cloudflare tunnel#

This allows us to map a socket (an exposed port on our local machine’s IP) to a domain name.

There are multiple ways to set this up.

I’ve installed cloudflared using the Docker deployment option.

The tunnel status should be HEALTHY.
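As a sketch, running the connector with Docker looks like this; the token is the one the Zero Trust dashboard prints when you create the tunnel, and jellyfin.example.com is a stand-in for your own domain:

```shell
# Run the tunnel connector as a container. --network host lets it
# reach NodePort services on this machine directly.
docker run -d --name cloudflared --network host --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <TUNNEL_TOKEN>

# Then, in the dashboard, add a public hostname for the tunnel, e.g.
#   jellyfin.example.com -> http://localhost:30096   (jellyfin's NodePort)
```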

I’ll be deploying services for these two domains.

Kubernetes deployment#

I’m setting up jellyfin (The Free Software Media System) to host my downloaded movies, TV shows, music, books, audiobooks, podcasts, and images.

k8s manifest#

I’m using a single manifest file for the namespace, deployment, HPA, and service for ease of deployment.

❯ cat jellyfin.yml

# Creating a jellyfin namespace

apiVersion: v1
kind: Namespace
metadata:
  name: jellyfin

---

# Creating a Deployment with resource allocation

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: jellyfin
  labels:
    app: jellyfin
spec:
  replicas: 1      # I'm the only user using it
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
      - name: jellyfin
        image: jellyfin/jellyfin
        ports:
        - containerPort: 8096
        resources:
          requests:
            memory: "512Mi"  # Requesting 512Mi of RAM
            cpu: "500m"      # Requesting 0.5 CPU core
          limits:
            memory: "1Gi"    # Limiting to 1Gi of RAM
            cpu: "1000m"     # Limiting to 1 CPU core
        volumeMounts:
        - name: config-volume
          mountPath: /config
        - name: cache-volume
          mountPath: /cache
        - name: media1
          mountPath: /media1
        - name: media2
          mountPath: /media2
        - name: media3
          mountPath: /media3
        - name: media4
          mountPath: /media4
        - name: media5
          mountPath: /media5
        - name: media6
          mountPath: /media6
      volumes:
      - name: config-volume
        hostPath:
          path: /home/pawan/k8s_homelab/jellyfin/scratch/config
      - name: cache-volume
        hostPath:
          path: /home/pawan/k8s_homelab/jellyfin/scratch/cache
      - name: media1
        hostPath:
          path: /home/pawan/Storage/Movies/
      - name: media2
        hostPath:
          path: /home/pawan/Storage/tv_shows/
      - name: media3
        hostPath:
          path: /home/pawan/Storage/Books/
      - name: media4
        hostPath:
          path: /home/pawan/Storage/music/
      - name: media5
        hostPath:
          path: /home/pawan/Storage/AudioBooks/
      - name: media6
        hostPath:
          path: /home/pawan/Storage/podcast/


---
# autoscaling pods when load increases/decreases
# omit this part if you are a single user and don't have much resource on your system

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jellyfin-hpa
  namespace: jellyfin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jellyfin
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50  # Target average CPU utilization percentage

---
# Exposing jellyfin through my local_ip:30096

apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: jellyfin
  labels:
    app: jellyfin
spec:
  type: NodePort
  ports:
  - port: 8096
    targetPort: 8096
    nodePort: 30096  # This is optional; Kubernetes will automatically assign a port if you omit this line
    protocol: TCP
  selector:
    app: jellyfin

❯ k apply -f jellyfin.yml#
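After the apply, a quick sanity check (using the k alias set up earlier) confirms everything came up:

```shell
# Pod should be Running, service should show NodePort 30096
k get pods -n jellyfin
k get svc -n jellyfin
k get hpa -n jellyfin

# The UI should answer locally on the NodePort
curl -I http://localhost:30096
```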

Since we have mapped the ip:port to the domain, hitting the domain name will open up the jellyfin signup/login page. GUI configuration is straightforward.

                                jellyfin UI

grafana-prometheus helm#

Grafana-Prometheus is still one of the most lightweight solutions for monitoring k8s resources, running on Kubernetes itself.

I’m using helm to deploy this stack, since it’s the most straightforward way to install Grafana and Prometheus on Kubernetes. I’m deploying it in the default namespace.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
helm install my-prometheus-grafana prometheus-community/kube-prometheus-stack

# helm upgrade [RELEASE_NAME] prometheus-community/kube-prometheus-stack

# Changing default values if needed
helm show values prometheus-community/kube-prometheus-stack > values.yaml
helm upgrade my-prometheus-grafana prometheus-community/kube-prometheus-stack -f values.yaml


# helm uninstall [RELEASE_NAME]

Change the grafana svc to NodePort to access it from outside the cluster and to map it to a domain name. Setting a NodePort svc for prometheus is optional, since all the targets are up by default and its integration as a Grafana data source is also done automatically.
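A sketch of the switch, assuming the kube-prometheus-stack chart’s usual &lt;release&gt;-grafana naming for the service and secret (substitute your own release name):

```shell
# Change the Grafana service type from ClusterIP to NodePort
k patch svc <release-name>-grafana -p '{"spec": {"type": "NodePort"}}'

# See which port was assigned
k get svc <release-name>-grafana

# The default login is admin; the generated password is in a secret
k get secret <release-name>-grafana -o jsonpath='{.data.admin-password}' | base64 -d
```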

A lot of dashboards are loaded by default, which might be enough for k8s monitoring. But we can also create dashboards manually, or grab an ID of our choice from grafana_dashboards and import it.

Host monitoring - Node Exporter Full (ID 1860)

K8s resource monitoring - kube-state-metrics-v2 (ID 13332)

So any service that can be deployed with docker/docker-compose can also be moved to Kubernetes, either by deploying manifest files or helm charts. How much complexity you want is your choice!