A reverse proxy is the traffic controller in front of your applications — receiving all inbound requests and forwarding them to the appropriate backend service based on domain name, path, or other rules. On a Hong Kong VPS running multiple applications or services, a well-configured Nginx reverse proxy provides SSL termination, request routing, rate limiting, compression, and security headers for all applications from a single entry point.
This guide covers two approaches: manual Nginx reverse proxy configuration for full control, and Nginx Proxy Manager for a web-based GUI that makes reverse proxy management accessible without deep Nginx expertise.
Why a Reverse Proxy Matters on a Hong Kong VPS
Without a reverse proxy, each application on your VPS must handle SSL termination independently, manage its own port configuration, and implement security headers separately. A reverse proxy centralises these concerns:
- Single SSL management point: Certbot renews certificates once; all applications benefit automatically
- Port standardisation: All applications appear on ports 80/443 regardless of their internal port
- Security headers in one place: HSTS, X-Frame-Options, CSP applied once for all services
- Rate limiting and DDoS mitigation: Centralised protection before requests reach application code
- Logging: Unified access logs across all applications
- Static file serving: Nginx serves static assets directly, bypassing application processing
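To illustrate the last point, here is a hedged sketch of a location block that serves static assets straight from disk while everything else is proxied; the /static/ path and /var/www/myapp root are hypothetical:

```nginx
# Static assets are read directly from disk, never touching the backend
location /static/ {
    root /var/www/myapp;    # hypothetical document root
    expires 7d;             # let browsers cache assets for a week
    add_header Cache-Control "public" always;
}
```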
Approach A: Manual Nginx Reverse Proxy Configuration
Step 1: Install Nginx and Certbot
apt update
apt install -y nginx certbot python3-certbot-nginx

Step 2: Create upstream definitions
Define your backend applications in an upstream configuration file:
nano /etc/nginx/conf.d/upstreams.conf

# Node.js API backend
upstream nodejs_api {
    server 127.0.0.1:3000;
    keepalive 32;
}

# Python/Django application
upstream django_app {
    server unix:/run/myapp.sock;
    keepalive 8;
}

# Go application
upstream go_service {
    server 127.0.0.1:8080;
    keepalive 16;
}

# Portainer (Docker management)
# Note: Portainer serves HTTPS on 9443, so the matching proxy_pass
# must use https:// (or expose its HTTP port 9000 instead)
upstream portainer {
    server 127.0.0.1:9443;
}

# Uptime Kuma
upstream uptime_kuma {
    server 127.0.0.1:3001;
}

Step 3: Create virtual host configurations per domain
nano /etc/nginx/sites-available/api.yourdomain.com

server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    # Rate limiting (zones are defined in nginx.conf, Step 4)
    limit_req zone=api burst=50 nodelay;
    limit_conn addr 20;

    # API proxy
    location / {
        proxy_pass http://nodejs_api;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 30s;
        proxy_connect_timeout 10s;
    }
    # Health check endpoint: nginx has no "limit_req off" directive,
    # so override the strict "api" zone with a permissive one
    # (define it in nginx.conf alongside the others:
    # limit_req_zone $binary_remote_addr zone=health:1m rate=100r/s;)
    location /health {
        limit_req zone=health burst=20 nodelay;
        proxy_pass http://nodejs_api;
    }
    # Gzip
    gzip on;
    gzip_types application/json text/plain;

    access_log /var/log/nginx/api.yourdomain.com.access.log;
    error_log /var/log/nginx/api.yourdomain.com.error.log;
}

Step 4: Define rate limiting zones in nginx.conf
nano /etc/nginx/nginx.conf

Inside the http {} block, add:
# Rate limiting zones (a 10m zone holds roughly 160,000 client IP states)
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=web:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=health:1m rate=100r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
# Real IP from Cloudflare (if using Cloudflare proxy)
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
# Keep the list above in sync with https://www.cloudflare.com/ips/
real_ip_header CF-Connecting-IP;

Step 5: Obtain SSL certificates for all domains at once
certbot --nginx \
  -d api.yourdomain.com \
  -d app.yourdomain.com \
  -d monitor.yourdomain.com \
  --email your@email.com \
  --agree-tos \
  --no-eff-email

Step 6: Enable sites and test
ln -s /etc/nginx/sites-available/api.yourdomain.com /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx

Approach B: Nginx Proxy Manager (Web GUI)
Nginx Proxy Manager provides a clean web interface for managing proxy hosts, SSL certificates, and access lists — ideal for teams that manage multiple services without deep Nginx configuration expertise.
mkdir -p /home/deploy/nginx-proxy-manager
nano /home/deploy/nginx-proxy-manager/docker-compose.yml

version: '3.8'
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"   # Admin UI
    extra_hosts:
      # On Linux, host.docker.internal is not defined by default;
      # this mapping lets the container reach services on the VPS host
      - "host.docker.internal:host-gateway"
    volumes:
      - npm_data:/data
      - npm_letsencrypt:/etc/letsencrypt
    environment:
      DISABLE_IPV6: 'false'
volumes:
  npm_data:
  npm_letsencrypt:

cd /home/deploy/nginx-proxy-manager
docker compose up -d

Access the admin UI at http://YOUR_VPS_IP:81
- Default email: admin@example.com
- Default password: changeme
- Change both immediately after first login
Adding a proxy host in Nginx Proxy Manager
- Hosts → Proxy Hosts → Add Proxy Host
- Domain Names: enter your domain
- Forward Hostname / IP: host.docker.internal (to reach services on the VPS host) or the service container name
- Forward Port: your application port
- SSL tab: Request new SSL certificate via Let’s Encrypt, enable Force SSL and HTTP/2
- Advanced tab: Add custom Nginx directives if needed
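As an example of the last step, the Advanced tab accepts raw Nginx directives that are merged into the generated server block; the values below are purely illustrative:

```nginx
# Allow larger uploads and slower backends for this proxy host
client_max_body_size 50m;
proxy_read_timeout 120s;
```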
WebSocket Proxying
For applications using WebSockets (Socket.io, real-time dashboards, live streaming controls):
location /socket.io/ {
    proxy_pass http://nodejs_api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_read_timeout 86400s;  # Keep long-lived WebSocket connections open
}

Conclusion
A well-configured Nginx reverse proxy on a Hong Kong VPS centralises SSL management, request routing, security headers, and rate limiting for all applications from a single configuration layer. Whether you prefer manual Nginx configuration for maximum control or Nginx Proxy Manager’s GUI for operational convenience, the result is a clean, maintainable multi-application architecture on a single VPS.
Run your reverse proxy infrastructure on Server.HK’s Hong Kong VPS plans — KVM virtualisation with Docker support and CN2 GIA routing for optimal Asia-Pacific performance.
Frequently Asked Questions
Can Nginx Proxy Manager manage SSL certificates for non-Docker applications?
Yes. Nginx Proxy Manager can proxy to any host:port combination, including applications running directly on the VPS host (not in Docker). Use host.docker.internal as the forward hostname to reach services on the Docker host machine, or use the host’s actual IP address. Note that on Linux, host.docker.internal is not defined by default and must be mapped via Docker’s host-gateway feature.
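On Linux, a minimal compose addition (assuming the service is named app, as in the setup above) that maps host.docker.internal to the host gateway:

```yaml
services:
  app:
    extra_hosts:
      # Resolve host.docker.internal to the Docker host's gateway IP
      - "host.docker.internal:host-gateway"
```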
How do I proxy a service that requires WebSocket upgrades with Nginx Proxy Manager?
Enable the “WebSockets Support” toggle in the proxy host configuration — Nginx Proxy Manager automatically adds the required Upgrade and Connection headers when this option is enabled.
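For reference, the toggle injects roughly the same upgrade directives used in the manual WebSocket example earlier:

```nginx
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```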
What is the performance overhead of adding a reverse proxy layer?
Nginx’s reverse proxy overhead is minimal — typically 0.5–2 ms per request for the proxy layer itself. For most applications, this is negligible compared to application processing time. Nginx’s memory efficiency (a few MB per worker process) and high concurrency (thousands of simultaneous connections per worker) make it effectively transparent in production workloads.