Running Docker on a Hong Kong VPS gives you a portable, reproducible deployment environment that works identically across development and production — with the added advantage of CN2 GIA network routing for low-latency container services reaching users across mainland China and Asia-Pacific.
Docker requires a KVM-based VPS with full kernel access. OpenVZ containers cannot run Docker due to shared kernel restrictions. All Server.HK Hong Kong VPS plans use KVM virtualisation, making them fully compatible with Docker out of the box.
This guide covers Docker Engine installation, docker-compose setup, deploying common production stacks (Nginx, MySQL, Redis, a Node.js application), and production hardening practices for container workloads.
Prerequisites
- A Hong Kong KVM VPS running Ubuntu 22.04 LTS (recommended)
- Minimum 1 vCPU and 2 GB RAM (4 GB recommended for multi-container stacks)
- Root or sudo SSH access
- Basic familiarity with Linux command line
Step 1: Initial Server Preparation
Connect to your VPS and update the system:
ssh root@YOUR_VPS_IP
apt update && apt upgrade -y
Install prerequisite packages:
apt install -y \
ca-certificates \
curl \
gnupg \
lsb-release \
ufw
Configure basic firewall rules before proceeding:
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
ufw status
Step 2: Install Docker Engine
Always install Docker from the official Docker repository rather than Ubuntu’s default repositories — the docker.io package in Ubuntu’s archive is typically several versions behind the current release.
Add Docker’s official GPG key:
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
Add the Docker repository:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine, CLI, containerd, and the Compose plugin:
apt update
apt install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin
Verify the installation:
docker --version
docker compose version
You should see output similar to Docker version 26.x.x and Docker Compose version v2.x.x.
Enable Docker to start on boot and verify it is running:
systemctl enable docker
systemctl start docker
systemctl status docker
Allow a non-root user to run Docker (optional but recommended)
adduser deploy
usermod -aG docker deploy
Log out and back in as the deploy user for the group membership to take effect. Verify with:
docker run hello-world
A successful Hello from Docker! message confirms Docker is working correctly under the non-root user.
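One firewall caveat before you start publishing container ports: Docker writes its own iptables rules, so a port published with -p is reachable from the internet even if ufw never allowed it. For services that should stay private, such as databases and caches, bind the published port to the loopback interface. A minimal sketch of the pattern:

```yaml
services:
  db:
    image: mysql:8.0
    ports:
      # Bound to 127.0.0.1: reachable from the VPS itself, not the public internet
      - "127.0.0.1:3306:3306"
```

Containers on the same Docker network can still reach the database by service name, so most backing services need no ports: entry at all.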
Step 3: Understand docker-compose Fundamentals
Modern Docker deployments use docker-compose (now integrated as docker compose) to define multi-container applications in a single docker-compose.yml file. This declarative approach makes deployments reproducible and easy to version-control.
A typical docker-compose.yml structure:
version: '3.8'
services:
app:
image: node:20-alpine
container_name: myapp
restart: unless-stopped
ports:
- "3000:3000"
environment:
- NODE_ENV=production
volumes:
- ./app:/usr/src/app
networks:
- appnet
depends_on:
- db
- redis
db:
image: mysql:8.0
container_name: mysql
restart: unless-stopped
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
volumes:
- mysql_data:/var/lib/mysql
networks:
- appnet
redis:
image: redis:7-alpine
container_name: redis
restart: unless-stopped
command: redis-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis_data:/data
networks:
- appnet
volumes:
mysql_data:
redis_data:
networks:
appnet:
    driver: bridge
Key concepts to understand:
- services — each entry is a separate container
- restart: unless-stopped — container restarts automatically on crash or reboot
- volumes — persistent data stored on the host, survives container restarts
- networks — containers on the same network communicate by service name (e.g. db, redis)
- ${VARIABLE} — reads from a .env file in the same directory, keeping secrets out of version control
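Before deploying a Compose file like this, you can validate it and inspect the interpolated ${VARIABLE} values without starting any containers (assuming the Compose plugin installed in Step 2):

```shell
# Run from the directory containing docker-compose.yml and .env
docker compose config            # prints the merged file with .env values substituted
docker compose config --quiet    # validates only; non-zero exit status on errors
```

This catches indentation mistakes and missing environment variables before anything is deployed.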
Step 4: Deploy a Production Stack with Nginx, MySQL, and Redis
Create a project directory:
mkdir -p /home/deploy/myapp
cd /home/deploy/myapp
Create the environment file with your credentials:
nano .env
MYSQL_ROOT_PASSWORD=choose_a_strong_root_password
MYSQL_DATABASE=myapp_db
MYSQL_USER=myapp_user
MYSQL_PASSWORD=choose_a_strong_db_password
REDIS_PASSWORD=choose_a_strong_redis_password
DOMAIN=yourdomain.com
Restrict permissions on the .env file:
chmod 600 .env
Create the docker-compose.yml:
nano docker-compose.yml
version: '3.8'
services:
nginx:
image: nginx:alpine
container_name: nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- certbot_www:/var/www/certbot:ro
- certbot_conf:/etc/letsencrypt:ro
networks:
- appnet
depends_on:
- app
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot_www:/var/www/certbot
- certbot_conf:/etc/letsencrypt
entrypoint: >
sh -c "trap exit TERM;
while :; do
certbot renew --webroot -w /var/www/certbot --quiet;
sleep 12h & wait $${!};
done"
app:
image: node:20-alpine
container_name: myapp
restart: unless-stopped
working_dir: /usr/src/app
volumes:
- ./app:/usr/src/app
command: node index.js
environment:
- NODE_ENV=production
- PORT=3000
- DB_HOST=db
- DB_PORT=3306
- DB_NAME=${MYSQL_DATABASE}
- DB_USER=${MYSQL_USER}
- DB_PASS=${MYSQL_PASSWORD}
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASS=${REDIS_PASSWORD}
networks:
- appnet
depends_on:
- db
- redis
db:
image: mysql:8.0
container_name: mysql
restart: unless-stopped
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
volumes:
- mysql_data:/var/lib/mysql
networks:
- appnet
redis:
image: redis:7-alpine
container_name: redis
restart: unless-stopped
command: redis-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis_data:/data
networks:
- appnet
volumes:
mysql_data:
redis_data:
certbot_www:
certbot_conf:
networks:
appnet:
    driver: bridge
Step 5: Configure Nginx Inside Docker
Create the Nginx configuration directory and virtual host file:
mkdir -p /home/deploy/myapp/nginx/conf.d
nano /home/deploy/myapp/nginx/conf.d/default.conf
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
# Let's Encrypt webroot challenge
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
# Redirect everything else to HTTPS
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Strict-Transport-Security "max-age=31536000" always;
# Proxy to Node.js app container
location / {
proxy_pass http://app:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
# Gzip
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1024;
}
Note: Nginx references the app container by its service name (app:3000) rather than an IP address — Docker’s internal DNS resolves service names automatically within the same network.
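After editing the configuration, you can syntax-check and reload Nginx in place rather than restarting the container (assuming the stack from Step 4 is running):

```shell
docker compose exec nginx nginx -t          # validate the mounted configuration
docker compose exec nginx nginx -s reload   # apply changes without downtime
```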
Step 6: Obtain SSL Certificate
The HTTPS server block above references certificates that do not exist yet, so Nginx will fail to start if you bring up the stack cold. Temporarily comment out the 443 server block so the config is HTTP-only, start Nginx and certbot, obtain the certificate, then restore the HTTPS block:
# Start containers
docker compose up -d nginx certbot
# Obtain initial certificate
docker compose run --rm certbot certonly \
--webroot \
-w /var/www/certbot \
-d yourdomain.com \
-d www.yourdomain.com \
--email your@email.com \
--agree-tos \
--no-eff-email
Once the certificate is obtained, start the full stack:
docker compose up -d
The certbot container keeps running in the background: it checks every 12 hours and automatically renews the certificate once it is within 30 days of expiry.
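One refinement worth considering for the stack: depends_on only orders container startup, it does not wait for MySQL to finish initialising, so the app can fail its first connection attempts on a cold boot. A healthcheck plus a service_healthy condition (a sketch to merge into the compose file above; mysqladmin ships in the mysql:8.0 image) makes Compose hold the app back until the database actually answers:

```yaml
services:
  db:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
```

Note that the long-form depends_on replaces the short list form, so every dependency (here redis as well) must be restated.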
Step 7: Essential Docker Management Commands
# Start all services in background
docker compose up -d
# View running containers
docker compose ps
# View logs for all services
docker compose logs -f
# View logs for a specific service
docker compose logs -f app
# Restart a specific service
docker compose restart app
# Stop all services
docker compose down
# Stop and remove volumes (WARNING: deletes data)
docker compose down -v
# Pull latest images and redeploy
docker compose pull && docker compose up -d
# Execute a command inside a running container
docker compose exec app sh
# Check MySQL from inside the db container
docker compose exec db mysql -u root -p
Step 8: Useful Single-Container Deployments
Beyond full application stacks, Docker excels at running individual services in isolation. Here are common single-container deployments for a Hong Kong VPS:
Nginx Proxy Manager (visual reverse proxy with GUI)
mkdir -p /home/deploy/npm
cd /home/deploy/npm
nano docker-compose.yml
version: '3.8'
services:
npm:
image: jc21/nginx-proxy-manager:latest
container_name: nginx-proxy-manager
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "81:81"
volumes:
- npm_data:/data
- npm_letsencrypt:/etc/letsencrypt
volumes:
npm_data:
  npm_letsencrypt:
docker compose up -d
Access the management UI at http://YOUR_VPS_IP:81. Default credentials: admin@example.com / changeme — change immediately after first login.
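Note that Nginx Proxy Manager claims ports 80 and 443 on the host, so it cannot run alongside the Nginx container from Step 4 on the same VPS. If you only want to trial the UI while something else holds those ports, remap the host side to free ports (the 8080/8443 values here are hypothetical placeholders):

```yaml
    ports:
      - "8080:80"    # hypothetical alternate HTTP port
      - "8443:443"   # hypothetical alternate HTTPS port
      - "81:81"      # management UI
```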
Portainer (Docker management GUI)
docker volume create portainer_data
docker run -d \
-p 9443:9443 \
--name portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
Access at https://YOUR_VPS_IP:9443. Portainer provides a full visual interface for managing containers, images, volumes, and networks — ideal if you prefer GUI management over CLI.
Uptime Kuma (self-hosted monitoring)
docker run -d \
--name uptime-kuma \
--restart=always \
-p 3001:3001 \
-v uptime_kuma_data:/app/data \
louislam/uptime-kuma:1
Access at http://YOUR_VPS_IP:3001. Monitor your other services, websites, and APIs from your own Hong Kong VPS with sub-minute check intervals.
Step 9: Production Best Practices
Never store secrets in docker-compose.yml
Always use a .env file (added to .gitignore) or Docker secrets for credentials. The docker-compose.yml file should be safe to commit to version control — secrets should not be.
Always use named volumes for persistent data
Bind mounts (./data:/container/data) work for development but named volumes (mysql_data:/var/lib/mysql) are more reliable for production — they are managed by Docker, easier to back up, and survive container recreation without path dependency issues.
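The difference looks like this in practice (a sketch; the bind-mount line is shown commented out as the development-only alternative):

```yaml
services:
  db:
    image: mysql:8.0
    volumes:
      # Development: bind mount, tied to a specific host path
      # - ./mysql-data:/var/lib/mysql
      # Production: named volume, managed by Docker
      - mysql_data:/var/lib/mysql
volumes:
  mysql_data:
```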
Set resource limits to prevent runaway containers
services:
app:
image: node:20-alpine
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
          memory: 256M
Regularly prune unused images and volumes
# Remove unused images, stopped containers, unused networks
docker system prune -f
# Remove unused volumes (careful — verify before running)
docker volume prune -f
Add this to a weekly cron job to prevent disk space accumulation from old image layers:
crontab -e
# Add:
0 3 * * 0 docker system prune -f >> /var/log/docker-prune.log 2>&1
Back up named volumes regularly
# Backup MySQL data volume to a compressed archive
docker run --rm \
-v mysql_data:/data \
-v /home/deploy/backups:/backup \
alpine tar czf /backup/mysql_$(date +%Y%m%d).tar.gz -C /data .
Note that tar-ing a live MySQL data directory can capture an inconsistent state; for a guaranteed-consistent backup, stop the container first (docker compose stop db) or use mysqldump instead.
Conclusion
You now have a fully configured Docker environment on a Hong Kong VPS with:
- Docker Engine and Compose plugin installed from the official repository
- A production-ready multi-container stack (Nginx + Node.js + MySQL + Redis)
- Automated Let’s Encrypt SSL with containerised certbot renewal
- Portainer GUI for visual container management
- Production hardening: secrets management, resource limits, volume backups, and automated cleanup
Docker on a Hong Kong VPS combines the portability and reproducibility of containerisation with CN2 GIA low-latency network access — making it an excellent platform for deploying modern web applications, APIs, and microservices to Asia-Pacific users.
Need a KVM VPS for your Docker workloads? Server.HK’s Hong Kong VPS plans include full KVM virtualisation, NVMe SSD storage, and CN2 GIA routing — fully compatible with Docker, docker-compose, and all container workloads from the entry tier upwards.
Frequently Asked Questions
Does Docker work on all Hong Kong VPS plans?
Docker requires KVM (or similar full virtualisation) — it does not work on OpenVZ VPS plans due to shared kernel restrictions. Server.HK’s Hong Kong VPS plans all use KVM virtualisation, making them fully Docker-compatible. Always verify the virtualisation type before purchasing any VPS for Docker workloads.
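If you are unsure what virtualisation a server uses, you can usually check from a shell on the VPS itself; systemd-detect-virt ships with systemd-based distros such as Ubuntu 22.04, and the output is host-dependent:

```shell
systemd-detect-virt
# "kvm" or "qemu"   -> full virtualisation, Docker-compatible
# "openvz" or "lxc" -> shared-kernel container, Docker Engine will not run
```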
How much RAM do I need to run Docker on a Hong Kong VPS?
A single lightweight container (Nginx, Redis, a small Node.js app) runs comfortably in 512 MB of RAM. A full stack with Nginx, Node.js, MySQL, and Redis requires a minimum of 2 GB RAM, with 4 GB recommended for production workloads under real traffic.
Can I run Docker and non-Docker services on the same Hong Kong VPS?
Yes, but be careful about port conflicts. If you run a host-level Nginx alongside a Docker Nginx container, both will compete for port 80/443. The cleanest approach is to choose one — either manage all services through Docker, or run everything directly on the host. Mixing both is possible but requires careful port management.
How do I update a running Docker container to a new image version?
Pull the new image and recreate the container with docker-compose: docker compose pull app && docker compose up -d app. Docker Compose will stop the old container, pull the new image, and start a fresh container with the same volume mounts and environment variables — data in named volumes is preserved.
Is Docker on a Hong Kong VPS suitable for production workloads?
Yes, for most small to mid-size applications. Large-scale production deployments with high availability requirements typically graduate to Kubernetes or Docker Swarm, but single-node Docker with proper volume backups, restart policies, and resource limits is a robust and widely used production setup for applications serving up to several thousand concurrent users.