Managing a single Hong Kong VPS manually is straightforward. Managing five, ten, or twenty VPS instances — across Hong Kong, US, Singapore, and dedicated servers — with consistent configuration, security patches, and application deployments becomes error-prone and time-consuming without automation.
Ansible is the industry standard for idempotent configuration management: describe the desired state of your servers in YAML playbooks, run them against your fleet, and Ansible ensures every server matches the specification — regardless of its current state. No agents required on managed nodes, just SSH access.
Prerequisites
- A control node (your local machine or a management VPS) with Ansible installed
- SSH key access to all managed Hong Kong VPS instances
- Python 3 on managed nodes (pre-installed on Ubuntu 22.04)
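Key-based SSH access can be set up along these lines (a sketch; the key path is an assumption, and the example host and custom port 2277 are the ones used throughout this guide):

```shell
# Create a dedicated key pair for the Ansible control node (assumed path)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/ansible_ed25519 -N "" -C "ansible-control"
# Then copy the public key to each managed node, for example:
# ssh-copy-id -i ~/.ssh/ansible_ed25519.pub -p 2277 deploy@103.x.x.1
```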
# Install Ansible on control node (Ubuntu/Debian)
apt update && apt install -y ansible
# Or via pip
pip install ansible --break-system-packages
ansible --version
Step 1: Configure the Inventory
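A project-level ansible.cfg is optional but saves repeating flags on every run. A minimal sketch, assuming the ~/ansible project layout used in this guide:

```ini
# ~/ansible/ansible.cfg — project defaults (adjust paths to your layout)
[defaults]
inventory = ./inventory/hosts.yml
host_key_checking = False
forks = 20

[ssh_connection]
pipelining = True
```

With this file in place, the -i inventory/hosts.yml argument shown in the commands below can be dropped.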
Ansible’s inventory defines the servers you manage. Create a structured inventory for a typical Server.HK multi-server deployment:
mkdir -p ~/ansible/inventory
nano ~/ansible/inventory/hosts.yml
---
all:
  vars:
    ansible_user: deploy
    ansible_port: 2277            # Your custom SSH port
    ansible_python_interpreter: /usr/bin/python3
  children:
    hong_kong_vps:
      vars:
        datacenter: hong_kong
        network: cn2_gia
      hosts:
        hk-web-01:
          ansible_host: 103.x.x.1
          role: web
        hk-web-02:
          ansible_host: 103.x.x.2
          role: web
        hk-db-01:
          ansible_host: 103.x.x.3
          role: database
    us_vps:
      vars:
        datacenter: us_west
      hosts:
        us-web-01:
          ansible_host: 45.x.x.1
          role: web
    dedicated_servers:
      hosts:
        hk-dedicated-01:
          ansible_host: 103.x.x.10
          ansible_port: 22
          role: high_performance
    web_servers:
      children:
        hong_kong_vps:
        us_vps:
    database_servers:
      hosts:
        hk-db-01:
# Test connectivity to all servers
ansible all -i inventory/hosts.yml -m ping
Step 2: Security Hardening Playbook
Apply consistent security configuration across your entire fleet:
nano ~/ansible/playbooks/security-hardening.yml
---
- name: Security hardening for all VPS instances
  hosts: all
  become: true
  vars:
    ssh_port: 2277
    allowed_ssh_users:
      - deploy
    fail2ban_bantime: 86400
    fail2ban_maxretry: 3
  tasks:
    - name: Update all packages
      apt:
        update_cache: yes
        upgrade: dist
        cache_valid_time: 3600

    - name: Install security packages
      apt:
        name:
          - fail2ban
          - ufw
          - unattended-upgrades
          - rkhunter
        state: present

    - name: Configure SSH hardening
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        backup: yes
      loop:
        - { regexp: '^#?Port ', line: 'Port {{ ssh_port }}' }
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
      notify: restart sshd

    - name: Configure UFW defaults
      ufw:
        direction: "{{ item.direction }}"
        policy: "{{ item.policy }}"
      loop:
        - { direction: incoming, policy: deny }
        - { direction: outgoing, policy: allow }

    - name: Allow SSH on custom port
      ufw:
        rule: allow
        port: "{{ ssh_port }}"
        proto: tcp

    - name: Allow HTTP and HTTPS
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop:
        - "80"
        - "443"

    - name: Enable UFW
      ufw:
        state: enabled

    - name: Configure Fail2ban for SSH
      copy:
        dest: /etc/fail2ban/jail.local
        content: |
          [DEFAULT]
          bantime = {{ fail2ban_bantime }}
          findtime = 600
          maxretry = {{ fail2ban_maxretry }}
          backend = systemd

          [sshd]
          enabled = true
          port = {{ ssh_port }}
          maxretry = {{ fail2ban_maxretry }}
      notify: restart fail2ban

    - name: Enable automatic security updates
      copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";
          APT::Periodic::AutocleanInterval "7";

    - name: Set kernel hardening parameters
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      loop:
        - { name: net.ipv4.tcp_syncookies, value: '1' }
        - { name: net.ipv4.conf.all.rp_filter, value: '1' }
        - { name: net.ipv4.conf.all.accept_redirects, value: '0' }
        - { name: net.ipv4.icmp_echo_ignore_broadcasts, value: '1' }

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted

    - name: restart fail2ban
      service:
        name: fail2ban
        state: restarted
# Dry run first (check mode)
ansible-playbook -i inventory/hosts.yml playbooks/security-hardening.yml --check
# Then run against all servers
ansible-playbook -i inventory/hosts.yml playbooks/security-hardening.yml
Step 3: Nginx Web Server Playbook
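The playbook below renders templates/nginx.conf.j2 (resolved relative to the playbook directory), which the article does not include. Create it first; here is a minimal sketch using the two vars defined in the play, not a production-tuned configuration:

```nginx
# templates/nginx.conf.j2 — minimal sketch
user www-data;
worker_processes {{ nginx_worker_processes }};
pid /run/nginx.pid;

events {
    worker_connections {{ nginx_worker_connections }};
}

http {
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    gzip on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```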
nano ~/ansible/playbooks/nginx-setup.yml
---
- name: Install and configure Nginx
  hosts: web_servers
  become: true
  vars:
    nginx_worker_processes: auto
    nginx_worker_connections: 4096
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Configure Nginx main settings
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        backup: yes
      notify: reload nginx

    - name: Remove default site
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent
      notify: reload nginx

    - name: Ensure Nginx is started and enabled
      service:
        name: nginx
        state: started
        enabled: yes

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
Step 4: Application Deployment Playbook
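The deployment playbook below assumes Node.js, npm, and PM2 are already present on the target hosts; it does not install them. Tasks along these lines (a sketch using the standard apt and npm modules) could provision them beforehand:

```yaml
# Fragment: provision the Node.js runtime before the first deployment
- name: Install Node.js and npm from the Ubuntu repositories
  apt:
    name:
      - nodejs
      - npm
    state: present
  become: true

- name: Install PM2 globally
  npm:
    name: pm2
    global: yes
  become: true
```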
nano ~/ansible/playbooks/deploy-app.yml
---
- name: Deploy application to Hong Kong VPS
  hosts: hong_kong_vps
  become: false
  vars:
    app_dir: /home/deploy/apps/myapp
    app_repo: https://github.com/yourusername/your-repo.git
    app_branch: main
    node_version: 20
  tasks:
    - name: Ensure app directory exists
      file:
        path: "{{ app_dir }}"
        state: directory
        mode: '0755'

    - name: Clone or update repository
      git:
        repo: "{{ app_repo }}"
        dest: "{{ app_dir }}"
        version: "{{ app_branch }}"
        force: yes

    - name: Install npm dependencies
      npm:
        path: "{{ app_dir }}"
        production: yes

    - name: Restart application via PM2
      command: pm2 reload myapp --update-env
      changed_when: true

    - name: Verify application is running
      uri:
        url: http://127.0.0.1:3000/health
        status_code: 200
      register: health_check
      until: health_check.status == 200
      retries: 5
      delay: 3
Step 5: Fleet-Wide Operations
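Ad-hoc commands like those below hit every matched host at once. When a change should not land on the whole fleet simultaneously, a play with the serial keyword rolls through hosts in batches; a sketch, assuming the web_servers group defined in Step 1:

```yaml
---
- name: Rolling package updates, two hosts at a time
  hosts: web_servers
  become: true
  serial: 2                  # update at most two servers at once
  max_fail_percentage: 25    # abort the run if more than a quarter of a batch fails
  tasks:
    - name: Apply pending updates
      apt:
        upgrade: dist
        update_cache: yes
```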
# Apply security updates to all servers simultaneously
ansible all -i inventory/hosts.yml -m apt -a "upgrade=dist update_cache=yes" --become
# Restart Nginx on all web servers
ansible web_servers -i inventory/hosts.yml -m service -a "name=nginx state=reloaded" --become
# Check disk usage across all servers
ansible all -i inventory/hosts.yml -m command -a "df -h /"
# Check which servers need a reboot after a kernel update
ansible all -i inventory/hosts.yml -m stat -a "path=/var/run/reboot-required"
# Deploy to Hong Kong servers only (not US)
ansible-playbook -i inventory/hosts.yml playbooks/deploy-app.yml --limit hong_kong_vps
# Run an ad-hoc command on a single server
ansible hk-web-01 -i inventory/hosts.yml -m command -a "pm2 list"
Conclusion
Ansible transforms multi-server Hong Kong VPS management from a series of manual SSH sessions into a codified, repeatable, auditable process. Your security hardening, application deployments, and operational tasks become version-controlled playbooks that execute consistently across your entire fleet — whether that is two servers or twenty.
Build your Ansible-managed fleet on Server.HK’s Hong Kong, US, Singapore, and dedicated server plans — a consistent infrastructure provider simplifies inventory management and ensures predictable SSH access patterns across your fleet.
Frequently Asked Questions
Does Ansible require any agent software on managed Hong Kong VPS instances?
No. Ansible connects via standard SSH and executes Python scripts temporarily on the managed node. No persistent agent is installed. This is one of Ansible’s key advantages over agent-based tools like Puppet or Chef — your VPS only needs SSH access and Python 3 (pre-installed on Ubuntu 22.04) to be managed by Ansible.
How do I store sensitive variables (passwords, API keys) in Ansible?
Use Ansible Vault to encrypt sensitive variables: ansible-vault encrypt_string 'your_password' --name 'db_password'. Store the encrypted value in your playbook or variables file — it is safe to commit to version control. Run playbooks with --ask-vault-pass or a vault password file to decrypt at runtime.
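The encrypted output can live directly in a vars file alongside plaintext variables. A sketch (the ciphertext line is a placeholder; paste the real output of the encrypt_string command above):

```yaml
# group_vars/database_servers.yml
db_user: app
db_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          ...placeholder ciphertext from ansible-vault encrypt_string...
```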
Can Ansible manage both Linux VPS and Windows VPS?
Yes. Ansible uses WinRM (Windows Remote Management) instead of SSH for Windows nodes. Windows playbooks use Windows-specific modules (win_service, win_package, win_file) instead of Linux equivalents. A mixed inventory with both Linux and Windows hosts is supported — use host groups to target the correct module set per OS type.
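A Windows group slots into the same YAML inventory; a sketch with hypothetical host name and address, assuming WinRM is already enabled on the node:

```yaml
windows_vps:
  vars:
    ansible_connection: winrm
    ansible_port: 5986
    ansible_winrm_transport: ntlm
    ansible_winrm_server_cert_validation: ignore
  hosts:
    hk-win-01:
      ansible_host: 103.x.x.20
```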