Kubernetes has become the industry standard for container orchestration at scale — but full Kubernetes is operationally complex and resource-heavy for small teams and individual VPS deployments. K3s solves this: a CNCF-certified, production-grade Kubernetes distribution from Rancher that runs comfortably on a 2 GB RAM VPS, with the same kubectl API and YAML manifests as full Kubernetes.
Running K3s on a Hong Kong VPS gives your containerised workloads CN2 GIA routing to mainland China, a familiar Kubernetes interface for teams already running Kubernetes elsewhere, and the ability to graduate workloads from Docker Compose to proper Kubernetes orchestration without changing the Asia-Pacific infrastructure provider.
K3s vs Full Kubernetes: What’s Different
| Feature | Full Kubernetes (kubeadm) | K3s |
|---|---|---|
| Minimum RAM (control plane) | 2 GB+ | 512 MB |
| Binary size | Multiple binaries, ~1 GB | Single binary, ~60 MB |
| etcd | External etcd required | SQLite or embedded etcd |
| Install time | 30–60 minutes | Under 5 minutes |
| kubectl compatibility | Full | Full |
| Production suitability | Yes | Yes (CNCF certified) |
| Built-in ingress | No (requires add-on) | Yes (Traefik) |
| Built-in load balancer | No | Yes (ServiceLB) |
K3s removes optional but rarely used features (legacy APIs, in-tree cloud providers, alpha features) and replaces heavy components with lighter alternatives — delivering a fully Kubernetes-compatible cluster that fits on a Hong Kong VPS from the 2 GB RAM tier.
Step 1: Prerequisites and VPS Preparation
```bash
apt update && apt upgrade -y

# K3s requires these kernel modules
modprobe overlay
modprobe br_netfilter

# Set kernel parameters for Kubernetes networking
cat >> /etc/sysctl.conf << 'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p

# Disable swap (Kubernetes requirement)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

Step 2: Install K3s (Single Node)
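Instead of passing everything as command-line flags, K3s also reads a declarative config file at /etc/rancher/k3s/config.yaml, where keys mirror the flag names. A sketch equivalent to the flags used in this step — the IP, domain, and node name are placeholders to replace with your own:

```yaml
# /etc/rancher/k3s/config.yaml — read automatically by the K3s server on start.
# Values are placeholders; match them to your VPS.
disable:
  - traefik
tls-san:
  - "YOUR_VPS_IP"
  - "yourdomain.com"
node-name: "hk-node-01"
```

This keeps the install command minimal and makes the node configuration easy to version-control.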
```bash
# Install K3s server (control plane + worker in one node).
# --disable traefik : we'll install our own ingress controller in Step 4
# --tls-san         : include the VPS IP (and domain, if using DNS) in the
#                     TLS cert so remote kubectl connections are trusted
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --tls-san YOUR_VPS_IP \
  --tls-san yourdomain.com \
  --node-name hk-node-01

# Check installation status
systemctl status k3s
kubectl get nodes
```

Expected output after a few minutes:
```
NAME         STATUS   ROLES                  AGE   VERSION
hk-node-01   Ready    control-plane,master   2m    v1.31.x+k3s1
```

Step 3: Configure Remote kubectl Access
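One caveat before copying the kubeconfig anywhere: it embeds cluster-admin credentials, so treat it like an SSH private key and keep permissions restrictive on every machine that holds a copy. A minimal sketch, assuming the ~/.kube/config-hk-vps path used in this step:

```shell
# Stand-in for the copied kubeconfig — restrict it to the owner only
mkdir -p ~/.kube
touch ~/.kube/config-hk-vps
chmod 600 ~/.kube/config-hk-vps
stat -c '%a' ~/.kube/config-hk-vps   # prints 600
```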
```bash
# On your VPS — view the kubeconfig
cat /etc/rancher/k3s/k3s.yaml

# On your local machine — copy the kubeconfig, then replace the
# server address 127.0.0.1 with your VPS IP
mkdir -p ~/.kube
scp -P 2277 deploy@YOUR_VPS_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/config-hk-vps
sed -i 's/127.0.0.1/YOUR_VPS_IP/g' ~/.kube/config-hk-vps

# Set as active kubeconfig
export KUBECONFIG=~/.kube/config-hk-vps

# Test remote access
kubectl get nodes
kubectl get pods --all-namespaces
```

Step 4: Install Nginx Ingress Controller
K3s includes Traefik by default, but we disabled it above to use Nginx Ingress for consistency with common production setups:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.0/deploy/static/provider/cloud/deploy.yaml

# Wait for the ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

# Verify
kubectl get pods -n ingress-nginx
```

Step 5: Install cert-manager for Automatic SSL
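A practical note before wiring up production certificates: Let's Encrypt rate-limits its production endpoint, so it can be worth validating the full issuance flow against the staging server first. A sketch — identical to the production ClusterIssuer created in this step except for the name and server URL (certificates from staging are not browser-trusted):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: untrusted certs, but far higher rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
```

Once issuance succeeds against staging, switch the ingress annotation to the production issuer.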
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml

# Wait for cert-manager to be ready
kubectl wait --namespace cert-manager \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/instance=cert-manager \
  --timeout=120s

# Create ClusterIssuer for Let's Encrypt
cat << 'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
```

Step 6: Deploy Your First Application
```bash
cat << 'EOF' | kubectl apply -f -
# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "200m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
---
# Ingress with automatic SSL
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: myapp-tls
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc
            port:
              number: 80
EOF

# Check deployment status
kubectl get pods -n myapp
kubectl get ingress -n myapp
kubectl describe certificate myapp-tls -n myapp
```

Step 7: Add a Worker Node (Multi-Node Cluster)
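Agents register with the control plane over TCP 6443, and Flannel (K3s's default CNI) carries pod-to-pod traffic over UDP 8472, so open those ports between nodes before joining. A sketch assuming ufw; adapt to whatever firewall you run:

```shell
ufw allow 6443/tcp    # Kubernetes API server / K3s supervisor
ufw allow 8472/udp    # Flannel VXLAN overlay (pod-to-pod traffic)
ufw allow 10250/tcp   # kubelet metrics (kubectl top, kubectl logs)
```

On a public network, restrict these rules to your nodes' source IPs rather than opening them globally.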
```bash
# Get the node join token from your master VPS
cat /var/lib/rancher/k3s/server/node-token

# On a second Hong Kong VPS (or any VPS):
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_VPS_IP:6443 \
  K3S_TOKEN=YOUR_NODE_TOKEN sh -

# Back on the master — verify the new node joined
kubectl get nodes
```

Useful K3s Operations
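Alongside the standard kubectl operations below, a few K3s-specific commands are worth knowing — the uninstall scripts are placed on the node by the get.k3s.io installer itself:

```shell
k3s check-config                        # verify kernel/cgroup prerequisites
k3s kubectl get nodes                   # bundled kubectl, no separate install needed
/usr/local/bin/k3s-uninstall.sh         # remove K3s from a server node
/usr/local/bin/k3s-agent-uninstall.sh   # remove K3s from an agent node
```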
```bash
# Scale a deployment
kubectl scale deployment myapp --replicas=3 -n myapp

# Rolling update (zero downtime)
kubectl set image deployment/myapp myapp=nginx:1.27 -n myapp
kubectl rollout status deployment/myapp -n myapp

# Roll back if needed
kubectl rollout undo deployment/myapp -n myapp

# View resource usage
kubectl top nodes
kubectl top pods --all-namespaces

# View logs
kubectl logs -f deployment/myapp -n myapp

# Open a shell in a pod
kubectl exec -it deployment/myapp -n myapp -- sh
```

Conclusion
K3s on a Hong Kong VPS delivers production-grade Kubernetes orchestration at a fraction of the operational complexity of full Kubernetes — with the same API, tooling, and YAML manifests. CN2 GIA routing ensures your containerised workloads serve mainland Chinese users at optimal latency.
Deploy your K3s cluster on Server.HK’s Hong Kong VPS plans — KVM virtualisation with full kernel module support (required for K3s) and NVMe SSD storage for fast container image pulls and persistent volume I/O.
Frequently Asked Questions
What is the minimum VPS size for running K3s?
K3s runs on as little as 512 MB RAM for a single-node cluster with minimal workloads. For practical production use with several deployments and an ingress controller, 2 GB RAM and 2 vCPU is the comfortable minimum. For multi-node clusters with stateful workloads (databases, message queues), 4 GB RAM per node is recommended.
Can I use K3s with persistent storage on a Hong Kong VPS?
Yes. K3s includes a built-in local path provisioner that creates PersistentVolumes backed by the VPS’s NVMe SSD. For production stateful workloads, use the local path provisioner for single-node clusters, or consider Longhorn (distributed block storage for multi-node clusters) for replication across nodes.
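As an illustration, a PersistentVolumeClaim against the built-in provisioner only needs to name its StorageClass — local-path is the default class K3s ships (the claim name and size here are example values):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: myapp          # namespace from this guide's example app
spec:
  accessModes:
    - ReadWriteOnce         # local-path volumes are bound to a single node
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```

Because the volume lives on one node's disk, pods using it are pinned to that node — one more reason to reach for Longhorn once you go multi-node.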
Is K3s suitable for production use?
Yes — K3s is CNCF-certified Kubernetes and is widely used in production for edge computing, IoT, and resource-constrained environments. Rancher (now part of SUSE) maintains K3s actively. For high-availability requirements, K3s supports embedded etcd mode with 3+ control plane nodes, providing the same HA guarantees as full Kubernetes.
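A sketch of the HA bootstrap, assuming three server nodes and placeholder addresses — --cluster-init starts embedded etcd on the first server, and the others join it with the shared token:

```shell
# On the first server: initialise embedded etcd
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# On the second and third servers: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_NODE_TOKEN sh -s - server \
  --server https://FIRST_SERVER_IP:6443
```

An odd number of servers (3, 5, …) is required so etcd can maintain quorum if a node fails.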