
Ubuntu in Virtual Machines and Containers: Configuration and Optimization

February 13, 2026

Running Ubuntu inside virtual machines (VMs) and containers introduces a set of specific behaviors, performance characteristics, and best practices that differ significantly from bare-metal deployments. The differences stem from:

  • Paravirtualized or emulated hardware interfaces
  • Shared resource scheduling by the hypervisor
  • Missing or limited direct hardware access
  • Different I/O paths and interrupt delivery models
  • Container-specific namespace and cgroup isolation

This article focuses on the most common and production-relevant setups in 2026: Ubuntu 24.04 LTS and 25.04+ guests under KVM/QEMU (libvirt), VMware ESXi/vSphere, Hyper-V, VirtualBox, and the two dominant container runtimes — Docker and containerd (Kubernetes/CRI-O).

1. Key Differences Between Bare Metal, VM, and Container Environments

| Aspect | Bare Metal | KVM/QEMU VM | Docker / containerd Container | Implications for Optimization |
|---|---|---|---|---|
| CPU scheduling | Direct | Hypervisor scheduler + steal time | CFS cgroup quota / shares | Watch steal time in VMs; respect CPU limits in containers |
| Memory ballooning | None | Yes (virtio-balloon) | No (cgroup memory limit) | Enable ballooning in VMs; avoid overcommit in containers |
| I/O path | Native | virtio-scsi / virtio-blk | OverlayFS / overlay2 + host storage | Prefer virtio; tune I/O scheduler inside guest |
| Network stack | Direct NIC | virtio-net / vhost-net | veth bridge / macvlan / host | Enable vhost-net/multiqueue in VMs; use host network when safe |
| Clock & timekeeping | HPET / TSC | kvm-clock / hyperv-clock | Host clock (shared) | Use kvm-clock in KVM guests |
| Entropy availability | Hardware RNG | virtio-rng | Host /dev/random (limited) | Install haveged or enable virtio-rng |
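Before applying any of the tuning below, it helps to confirm which of these environments the system is actually running in. A minimal sketch using systemd-detect-virt (shipped with systemd on Ubuntu), with a /proc-based fallback as an assumption for systems where it is unavailable:

```shell
#!/bin/sh
# Report whether this system is bare metal, a VM, or a container.
detect_env() {
    if command -v systemd-detect-virt >/dev/null 2>&1; then
        # Prints e.g. "kvm", "vmware", "docker", "lxc", or "none" on bare metal
        # (the tool exits non-zero for "none", so swallow the status)
        systemd-detect-virt || true
    elif [ -f /.dockerenv ]; then
        echo "docker"
    elif grep -qa 'container=' /proc/1/environ 2>/dev/null; then
        echo "container"
    else
        echo "unknown (systemd-detect-virt not available)"
    fi
}
detect_env
```

The result determines which of the following sections applies: VM guests get the virtio/clocksource treatment, containers get the cgroup and storage-driver treatment.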

2. Virtual Machine Specific Optimizations (KVM/QEMU – Most Common)

A. Hypervisor-side best practices (host tuning)

  • Use virtio for all devices (disk, network, serial, balloon, rng)
  • Enable multi-queue virtio-net (queues = vCPUs)
  • Enable vhost-net for network devices (packet processing moves from QEMU userspace into the kernel, lowering latency)
  • CPU topology: match guest vCPU count to physical cores when possible; use host-passthrough model
  • Enable nested virtualization if running Kubernetes inside VM
  • Use hugepages (2 MB or 1 GB) for guest memory when running memory-intensive workloads (databases, Java)
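The host-side checklist above maps onto a libvirt domain definition roughly as follows. This is a sketch of the relevant fragments only, not a complete domain XML; the disk path, vCPU count, and queue count are illustrative:

```xml
<!-- CPU: pass the host model through; vCPU count matched to physical cores -->
<vcpu>4</vcpu>
<cpu mode='host-passthrough'/>

<!-- Back guest RAM with hugepages for memory-intensive workloads -->
<memoryBacking>
  <hugepages/>
</memoryBacking>

<devices>
  <!-- virtio disk -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none' io='native'/>
    <source file='/var/lib/libvirt/images/ubuntu.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>
  <!-- virtio network with vhost backend and multi-queue (queues = vCPUs) -->
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <driver name='vhost' queues='4'/>
  </interface>
  <!-- paravirtualized RNG and balloon -->
  <rng model='virtio'>
    <backend model='random'>/dev/urandom</backend>
  </rng>
  <memballoon model='virtio'/>
</devices>
```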

B. Guest-side Ubuntu tuning

  • Install qemu-guest-agent so the host can cleanly shut down the guest, report its IP address, and quiesce filesystems for snapshots: sudo apt install qemu-guest-agent
  • Use the correct clock source:

    ```bash
    # Should show kvm-clock on KVM guests
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    ```
  • Disable CPU vulnerability mitigations only if the workload is trusted and performance-critical (this is a security trade-off): set GRUB_CMDLINE_LINUX_DEFAULT="… mitigations=off" in /etc/default/grub, then run sudo update-grub
  • Tune I/O scheduler inside guest:
    • For virtio-blk / virtio-scsi → mq-deadline or none
    • echo none > /sys/block/vda/queue/scheduler
  • Increase the entropy pool if the RNG is slow (historically common in VMs, though kernels 5.6+ rarely starve): sudo apt install haveged, or preferably enable virtio-rng on the host
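The guest-side checks above can be bundled into one small inspection script. The sysfs paths are the standard locations; which block devices appear (vda, sda, …) depends on the bus in use:

```shell
#!/bin/sh
# Inspect guest tuning: clock source and per-disk I/O scheduler.
check_guest_tuning() {
    cs=/sys/devices/system/clocksource/clocksource0/current_clocksource
    if [ -r "$cs" ]; then
        echo "clocksource: $(cat "$cs")"   # expect kvm-clock on KVM guests
    else
        echo "clocksource: not readable"
    fi
    found=0
    for f in /sys/block/*/queue/scheduler; do
        [ -r "$f" ] || continue
        dev=${f#/sys/block/}
        dev=${dev%/queue/scheduler}
        # The active scheduler is shown in brackets, e.g. [mq-deadline]
        echo "scheduler ($dev): $(cat "$f")"
        found=1
    done
    if [ "$found" -eq 0 ]; then
        echo "scheduler: no block devices visible"
    fi
}
check_guest_tuning
```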

3. Container-Specific Optimizations (Docker & containerd)

A. Runtime and storage driver choices

  • Preferred storage driver in 2026: overlay2 (stable, performant). Avoid devicemapper (thin-provisioning issues) and aufs (deprecated).
  • Use containerd directly (or CRI-O) for Kubernetes instead of dockerd when possible — lower overhead
  • Mount propagation and host bind mounts:
    • Prefer read-only binds (:ro); on SELinux hosts add :z/:Z so content is relabeled (Ubuntu defaults to AppArmor, which needs no relabel flag)
    • Avoid --privileged unless absolutely required

B. Resource control & performance tuning

  • Always set CPU and memory limits (even if generous) to prevent noisy-neighbor issues. Example (docker run): --cpus=2.0 --memory=4g --memory-swap=4g
  • CPU shares vs quota: quota is hard limit (better predictability); shares are relative
  • Network mode choices:
    • bridge (default) → good isolation
    • host → lowest latency (no NAT, direct host NIC)
    • macvlan → when container needs own MAC/IP
  • Disable unnecessary features:
    • --security-opt no-new-privileges
    • --cap-drop=ALL, then --cap-add only the capabilities actually needed
  • For high-I/O workloads, keep hot data off the overlay filesystem: use volumes or bind mounts backed by fast host storage (e.g. dedicated LVM or btrfs devices)
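Putting the resource-control and hardening flags above together, a bounded container launch might look like the following sketch. The image name, limits, capability, and mount path are placeholders, and running it requires a Docker daemon:

```shell
# Hard CPU quota (2 cores), 4 GB RAM with swap disabled
# (memory-swap == memory), all capabilities dropped except the one
# the workload needs, privilege escalation blocked, read-only config bind.
docker run -d --name app \
  --cpus=2.0 \
  --memory=4g --memory-swap=4g \
  --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --mount type=bind,source=/etc/app,target=/etc/app,readonly \
  myorg/app:latest
```

Under cgroup v2, --cpus=2.0 becomes a hard cpu.max quota, which is why it gives predictable behavior where relative shares do not.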

4. Common Patterns & Anti-Patterns

| Pattern / Anti-Pattern | Recommendation | Reason |
|---|---|---|
| Running Ubuntu Desktop in a VM | Prefer the Ubuntu Server image | Less overhead, no unnecessary graphical stack |
| Nested Docker inside a VM | Use --privileged or the sysbox runtime | Avoids the overlayfs-on-overlayfs performance hit |
| Running databases in containers without volumes | Always use named volumes or host binds | Container removal → data loss |
| Using the default bridge network for prod | Prefer host or macvlan | NAT adds latency and connection-tracking overhead |
| Disabling swap in containers | Set memory-swap = memory limit | Prevents OOM-killer surprises |
| Running Ubuntu 22.04+ on old hypervisors | Use the HWE kernel or cloud images | Older virtio drivers cause poor performance |

5. Monitoring & Observability Differences

VMs:

  • Monitor steal time (steal% in top / vmstat) — high values indicate CPU overcommitment on host
  • Watch balloon inflation (free memory decreases while host pressure increases)
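Cumulative steal time can be read directly from /proc/stat: it is the eighth value after the "cpu" label on the aggregate line, in kernel ticks (USER_HZ). A snapshot sketch:

```shell
#!/bin/sh
# Print cumulative steal ticks from the aggregate "cpu" line of /proc/stat.
# Values after "cpu": user nice system idle iowait irq softirq steal ...
steal_ticks() {
    awk '/^cpu / {print $9+0}' /proc/stat
}
steal_ticks
```

Sample it twice and divide the delta by total elapsed ticks to get steal%; vmstat and top do this computation for you.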

Containers:

  • cgroup v2 metrics are critical (nr_periods vs nr_throttled in cpu.stat)
  • memory.high / memory.max pressure → use PSI (pressure stall information)
  • Network namespace → host netstat/ss won’t show container connections unless using host network
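As a concrete example of the throttling signal, the nr_periods/nr_throttled counters in cpu.stat can be turned into a throttle ratio. The default path assumes the unified cgroup v2 hierarchy mounted at /sys/fs/cgroup (the Ubuntu default since 21.10); the counters only appear once a cpu.max limit is active:

```shell
#!/bin/sh
# Fraction of CFS scheduling periods in which this cgroup was throttled.
throttle_ratio() {
    stat_file=${1:-/sys/fs/cgroup/cpu.stat}
    periods=$(awk '/^nr_periods/ {print $2}' "$stat_file" 2>/dev/null)
    throttled=$(awk '/^nr_throttled/ {print $2}' "$stat_file" 2>/dev/null)
    if [ -z "$periods" ] || [ "$periods" -eq 0 ]; then
        echo "no CFS periods recorded (no cpu.max limit active?)"
    else
        awk -v t="$throttled" -v p="$periods" \
            'BEGIN { printf "throttled in %.1f%% of %d periods\n", 100*t/p, p }'
    fi
}
throttle_ratio
```

Sustained throttling above a few percent usually means the container's CPU quota is too tight for its workload.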

Summary – Mental Model for Ubuntu in Virtualized Environments

  • VMs: Treat them as almost-bare-metal but with paravirtualized drivers and hypervisor scheduling overhead. Optimize around virtio, clocksource, ballooning, and steal time awareness.
  • Containers: Treat them as heavily constrained processes sharing the host kernel. Optimize around cgroup limits, storage driver choice, network mode, and capability reduction.

The single most important principle in both cases is predictability over raw throughput:

  • Set explicit resource limits
  • Use paravirtualized drivers everywhere possible
  • Avoid layers of indirection (NAT, overlayfs-on-overlayfs, unnecessary security opt-outs)
  • Monitor the new signals that virtualization introduces (steal time, PSI, cgroup throttling)

When these invariants are respected, Ubuntu performs exceptionally well in virtualized and containerized environments: often within 5–15% of bare-metal throughput for most workloads, with large gains in density, isolation, and operational agility.

© 2026 Server.HK | Hosting Limited, Hong Kong | Company Registration No. 77008912